Autonomic hardware assist for patching code

Information

  • Patent Grant
  • 8782664
  • Patent Number
    8,782,664
  • Date Filed
    Wednesday, January 11, 2012
  • Date Issued
    Tuesday, July 15, 2014
Abstract
Hardware assist to autonomically patch code. The present invention provides hardware microcode supporting a new type of metadata to selectively identify instructions to be patched for specific performance optimization functions. The present invention also provides a new flag in the machine status register (MSR) to enable or disable a performance monitoring application or process to perform code-patching functions. If the code patching function is enabled, the application or process may patch code at run time by associating the metadata with the selected instructions. The metadata includes pointers to the patch code block. The program code may be patched autonomically without modifying the original code.
Description

The present invention is related to the following applications entitled “Method and Apparatus for Counting Instruction Execution and Data Accesses”, Ser. No. 10/675,777, filed on Sep. 30, 2003, now U.S. Pat. No. 7,395,527 issued Jul. 1, 2008; “Method and Apparatus for Selectively Counting Instructions and Data Accesses”, Ser. No. 10/674,604, filed on Sep. 30, 2003; “Method and Apparatus for Generating Interrupts Upon Execution of Marked Instructions and Upon Access to Marked Memory Locations”, Ser. No. 10/675,831, filed on Sep. 30, 2003; “Method and Apparatus for Counting Data Accesses and Instruction Executions that Exceed a Threshold”, Ser. No. 10/675, filed on Sep. 30, 2003; “Method and Apparatus for Counting Execution of Specific Instructions and Accesses to Specific Data Locations”, Ser. No. 10/675,776, filed on Sep. 30, 2003, now U.S. Pat. No. 7,937,691 issued May 3, 2011; “Method and Apparatus for Debug Support for Individual Instructions and Memory Locations”, Ser. No. 10/675,751, filed on Sep. 30, 2003; “Method and Apparatus to Autonomically Select Instructions for Selective Counting”, Ser. No. 10/675,721, filed on Sep. 30, 2003; “Method and Apparatus to Autonomically Count Instruction Execution for Applications”, Ser. No. 10/674,642, filed on Sep. 30, 2003; “Method and Apparatus to Autonomically Take an Exception on Specified Instructions”, Ser. No. 10/674,606, filed on Sep. 30, 2003; “Method and Apparatus to Autonomically Profile Applications”, Ser. No. 10/675,783, filed on Sep. 30, 2003; “Method and Apparatus for Counting Instruction and Memory Location Ranges”, Ser. No. 10/675,872, filed on Sep. 30, 2003, now U.S. Pat. No. 7,373,637 issued May 13, 2008; “Method and Apparatus For Maintaining Performance Monitoring Structure in a Page Table For Use in Monitoring Performance of a Computer Program”, Ser. No. 10/757,250, filed on Jan. 14, 2004, now U.S. Pat. No. 7,526,757 issued Apr. 28, 2009; “Autonomic Method and Apparatus for Counting Branch Instructions to Improve Branch Predictions”, Ser. No. 10/757,237, filed on Jan. 14, 2004, now U.S. Pat. No. 7,293,164 issued Nov. 6, 2007; and “Autonomic Method and Apparatus for Local Program Code Reorganization Using Branch Count Per Instruction Hardware”, Ser. No. 10/757,156, filed on Jan. 14, 2004, now U.S. Pat. No. 7,290,255 issued Oct. 30, 2007. All of the above related applications are assigned to the same assignee, and incorporated herein by reference.


BACKGROUND OF THE INVENTION

1. Technical Field


The present invention relates generally to an improved data processing system and, in particular, to a method and system for improving performance of a program in a data processing system. Still more particularly, the present invention relates to a method, apparatus, and computer instructions for hardware assist for autonomically patching code.


2. Description of Related Art


In a conventional computer system, the processor fetches and executes program instructions stored in a high-speed memory known as cache memory. Instructions fetched from cache memory are normally executed without much delay. However, if the program instruction code requires access to data or instructions located in a memory location other than the high-speed cache memory, a decrease in system performance may result, particularly in a pipelined processor system where multiple instructions are executed at the same time.


Such accesses to data and/or instructions located in a memory location other than the high-speed cache memory may occur when the code of the computer program being executed is not organized to provide contiguous execution of the computer program as much as possible, that is, for example, when basic blocks of code are not laid out in memory in the same sequence in which they are executed. One common approach to reducing the negative impact on system performance is to reorganize the program code such that data or instructions accessed or executed by the computer program are grouped together as closely as possible.


Various approaches are known in the art to better organize program code. One approach is proposed by Heisch in “PROFILE-BASED OPTIMIZING POSTPROCESSORS FOR DATA REFERENCES” (U.S. Pat. No. 5,689,712). Heisch teaches optimization of programs by creating an instrumented program to capture effective address trace data for each of the memory references, and then analyzing the access patterns in the effective address trace data in order to reorder the memory references to create an optimized program. The instrumented program generates an improved memory address allocation reorder list that indicates an optimal ordering for the data items in the program based upon how they are referenced during program execution.


Another approach to optimize program code is suggested by Pettis et al. in “METHOD FOR OPTIMIZING COMPUTER CODE TO PROVIDE MORE EFFICIENT EXECUTION ON COMPUTERS HAVING CACHE MEMORIES” (U.S. Pat. No. 5,212,794). Pettis teaches running program code with test data to produce statistics in order to determine a new ordering for the code blocks. The new order places code blocks that are often executed after one another close to one another in the memory. However, the above approaches require modification of the original code. That is, the above approaches require that the code itself be modified by overwriting the code.


Moreover, when a portion of code is determined to be in need of patching, the code is typically modified so that the original code is shifted downward in the instruction stream, with the reorganized code being inserted above it in the instruction stream. Thus, the original code is again modified from its original form.


Code patching may apply to various types of performance optimization functions. For example, the program may determine to reorganize code at run time. In addition, when a computer system is running slowly, code patching may be used to switch program execution to an instrumented interrupt service routine that determines how much time the system is spending in interrupts. Furthermore, when a performance monitoring program wants to build a targeted instruction trace for specific instructions, code patching may also be used to hook each instruction block to produce a trace.


It would be advantageous to have an improved method, apparatus, and computer instructions for autonomically patching code by selectively identifying branch instructions or other types of instructions to optimize performance, and providing a pointer indicating where to branch without modifying the original program code.


SUMMARY OF THE INVENTION

The present invention provides an improved method, apparatus, and computer instructions for providing and making use of hardware assistance to autonomically patch code. The terms “patch” or “patching” as they are used in the present application refer to a process by which the execution of the code is modified without the original code itself being modified, as opposed to the prior art “patching” which involves modification of the original code. This process may involve branching the execution to a set of instructions that are not present in the original code in the same form. This set of instructions may be, for example, a reorganized copy of a set of instructions within the original code, an alternative set of instructions that are not based on the original code, or the like.


In the context of the present invention, the hardware assistance used by the present invention may include providing hardware microcode that supports a new type of metadata, so that patch code may be executed easily at run time for a specific performance optimization function, such as, for example, obtaining more contiguous execution of the code by reorganizing the series of instructions in the original code. The metadata takes the form of a memory word, which is stored in the performance instrumentation segment of the application.


For example, the code may be overridden at run time to change the order in which instructions are executed by patching the code. The present invention performs patching of code by constructing a new order of program execution or by providing alternative instrumented code in an allocated memory location. The present invention also provides metadata that identifies the allocated memory location from which the patch instructions are executed. Thus, the original code of the computer program is not modified; only the execution of the computer program is modified.


In addition, the present invention provides a new flag to the machine status register (MSR) in the processor for enabling or disabling the functionality of patching code using metadata. When the functionality is enabled, a performance monitoring application may patch code at run time for a specific performance optimization function. One example of patching code is to reorganize portions of code in accordance with the present invention. If a performance monitoring application determines that a block of code should be reorganized, the performance monitoring application may copy the portion of code that needs to be reorganized to a dedicated memory region and then reorganize it in a manner designated by the performance monitoring application. The performance monitoring application may then generate and associate metadata with the original portion of code.


As the program instructions are executed, the processor reads the metadata generated during the program execution. The program loads the metadata into the allocated workspace, such as a performance shadow cache, and associates the metadata with the instructions.


In one embodiment, the metadata may be associated with a branch instruction. The metadata includes a ‘branch to’ pointer pointing to the starting address of the patch instructions in an allocated memory location. The starting address may be an absolute or offset address. During program execution, if the branch is not taken, the metadata is ignored. If the branch is taken, this ‘branch to’ pointer is read by the processor which then executes an unconditional branch to the starting address indicated by the ‘branch to’ pointer of the metadata.
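
As a small illustration of the absolute-versus-offset distinction for the ‘branch to’ address, the value held in the metadata might be resolved as sketched below; the structure, flag field, and function names are hypothetical and introduced only for this example.

    #include <stdint.h>

    /* Hypothetical metadata word: a 'branch to' value plus a flag saying
     * whether it is an absolute address or an offset from the text segment. */
    struct branch_to_word {
        uintptr_t value;        /* absolute address, or offset into the segment */
        int       is_offset;    /* nonzero: value is relative to text_base      */
    };

    /* Resolve the starting address of the patch instructions. */
    static uintptr_t resolve_branch_to(const struct branch_to_word *w,
                                       uintptr_t text_base)
    {
        return w->is_offset ? text_base + w->value   /* offset address   */
                            : w->value;              /* absolute address */
    }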


At the end of the patch instructions, an instruction may redirect the execution of the computer program back to the original code, at the place where execution would have continued had the original code been executed when the branch was taken. The return point may also be some other place in the code. For example, if a number of original instructions are duplicated in the patch instructions to perform certain functionality, the appropriate place in the code to return to is the instruction at which that functionality is complete.


In an alternative embodiment, the metadata may be associated with both branch and non-branch instructions. The metadata includes a pointer pointing to the starting address of the patch instructions in the allocated memory location. The starting address may be an absolute or offset address. During execution of the computer program, the original program instruction associated with the metadata is ignored. Instead, the processor branches unconditionally to the starting address identified by the pointer of the metadata.


These and other features and advantages of the present invention will be described in, or will become apparent to those of ordinary skill in the art in view of, the following detailed description of the preferred embodiments.





BRIEF DESCRIPTION OF THE DRAWINGS

The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as a preferred mode of use, further objectives and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawings, wherein:



FIG. 1 is an exemplary block diagram of a data processing system in which the present invention may be implemented;



FIG. 2 is an exemplary block diagram of a processor system for processing information in accordance with a preferred embodiment of the present invention;



FIG. 3 is an exemplary diagram illustrating an example of metadata in accordance with a preferred embodiment of the present invention;



FIG. 4A is a flowchart outlining an exemplary process for enabling or disabling the functionality of a performance monitoring application or process for patching code using metadata in accordance with a preferred embodiment of the present invention;



FIG. 4B is a flowchart outlining an exemplary process for providing and using hardware assistance in patching code in accordance with a preferred embodiment of the present invention;



FIG. 5 is a flowchart outlining an exemplary process of handling metadata associated with instructions from the processor's perspective when code patching functionality is enabled with a value of ‘01’ in accordance with a preferred embodiment of the present invention; and



FIG. 6 is a flowchart outlining an exemplary process of handling metadata associated with instructions from the processor's perspective when code patching functionality is enabled with a value of ‘10’ in accordance with a preferred embodiment of the present invention.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

The present invention provides a method, apparatus and computer instructions to autonomically patch code using hardware assistance without modifying the original code. The terms “patch”, “patching”, or other forms of the word “patch”, as they are used in the present application refer to a process by which the execution of the code is modified without the original code itself being modified, as opposed to the prior art “patching” which involves modification of the original code.


As described in the related U.S. patent applications listed and incorporated above, the association of metadata with program code may be implemented in three ways: by directly associating the metadata with the program instructions to which it applies; by associating metadata with program instructions using a performance shadow cache, wherein the performance shadow cache is a separated area of storage, which may be any storage device, such as for example, a system memory, a flash memory, a cache, or a disk; and by associating metadata with page table entries. While any of these three ways may be utilized with the present invention, the latter two ways of association are used in the present description of the preferred embodiments of the present invention for illustrative purposes.


The present invention uses a new type of metadata, associated with program code in one of the three ways as described above, to selectively identify instructions of a program. The metadata takes the form of a new memory word. This new memory word is stored in a performance instrumentation segment of the program, which is linked to the text segment of the program code. The performance instrumentation segment is described in the above applications incorporated by reference.


The present invention also uses a new flag in the machine status register (MSR) to enable or disable a performance monitoring application's or process's ability to patch code using metadata. The MSR is described in the applications incorporated by reference above. Many existing processors include an MSR, which contains a set of flags that describe the context of the processor during execution. The new flag of the present invention is added to this set of flags to describe the functionality desired for each process.


For example, the new flag may be used to describe three states: a value of ‘00’ indicates disabling the process's or application's functionality for patching code; a value of ‘01’ indicates enabling the process's or performance monitoring application's functionality for patching code by using metadata to jump to patch code indicated by the ‘branch to’ pointer if a branch is taken; and a value of ‘10’ indicates enabling the process's or performance monitoring application's functionality for patching code by using metadata to jump to the patch code unconditionally, which allows the performance monitoring application or process to execute the patch code and ignore the original program instructions.
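
As a rough illustration of how these flag values might be modeled, the following C sketch defines the three states and a helper that extracts the two-bit field from an MSR value; the flag name, bit position, and accessor are assumptions made for this example and do not reflect any existing processor's MSR layout.

    #include <stdint.h>

    /* Hypothetical model of the two-bit code-patching flag described above.
     * The field name, bit position, and accessor are illustrative only. */
    enum patch_mode {
        PATCH_DISABLED      = 0x0,  /* '00': code patching disabled               */
        PATCH_ON_BRANCH     = 0x1,  /* '01': follow metadata only if branch taken */
        PATCH_UNCONDITIONAL = 0x2   /* '10': always execute the patch code        */
    };

    #define MSR_PATCH_SHIFT 12                        /* assumed bit position */
    #define MSR_PATCH_MASK  (0x3u << MSR_PATCH_SHIFT)

    static enum patch_mode get_patch_mode(uint64_t msr)
    {
        return (enum patch_mode)((msr & MSR_PATCH_MASK) >> MSR_PATCH_SHIFT);
    }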


When the functionality of patching code using metadata is enabled and the performance monitoring application determines at run time that the code should be patched, the performance monitoring application may allocate an alternative memory location and generate a patched version of the original code for use in subsequent executions of the computer program. This code may be a copy of the original portion of code or an instrumented portion of code, such as an interrupt service routine that tracks the amount of time spent on interrupts or the like. The patched code may then be linked to the original portion of code by metadata generated by the performance monitoring application and stored in association with the original code.
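
The run-time steps just described (allocating an alternative memory region, generating the patched code, and linking it back to the original code through metadata) could look roughly like the following sketch; the structure, the function names, and the placeholder reorganize_block() are invented for illustration and are not part of any published interface.

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* One metadata entry: the offset of the original instruction within the
     * text segment plus a 'branch to' pointer to the patch code (illustrative). */
    struct patch_metadata {
        uint32_t  text_offset;   /* displacement of the instruction being patched */
        uintptr_t branch_to;     /* starting address of the patch instructions    */
    };

    /* Sketch: copy a block of original code into a freshly allocated region,
     * reorganize or instrument it there, and emit metadata pointing at the copy. */
    static struct patch_metadata make_patch(const uint8_t *text_base,
                                            uint32_t block_offset,
                                            size_t block_len)
    {
        struct patch_metadata md = { .text_offset = block_offset, .branch_to = 0 };

        uint8_t *patch_area = malloc(block_len);      /* alternative memory location */
        if (patch_area == NULL)
            return md;                                /* no patch generated          */

        memcpy(patch_area, text_base + block_offset, block_len);
        /* reorganize_block(patch_area, block_len);      placeholder for whatever
         * reordering or instrumentation the monitoring application applies       */

        md.branch_to = (uintptr_t)patch_area;         /* link patch to original code */
        return md;   /* stored in the performance instrumentation segment           */
    }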


The metadata includes a ‘branch to’ pointer pointing to the patched code. In one embodiment, when the processor encounters a branch instruction that has metadata associated with it, execution is redirected to a patched portion of code if the branch is taken. The metadata is read in by the processor, which then loads and executes the instructions of the patched portion of code starting at the address identified by the ‘branch to’ pointer in the metadata. Once the patched code has been executed, the processor returns to the original code as indicated at the end of the patch instructions. If the branch is not taken, the metadata is ignored by the processor. In an alternative embodiment, the ‘branch to’ execution could start at the ‘branch to’ address identified in the metadata only when the branch is not taken.


In an alternative embodiment, instead of checking if the branch is taken, the branch instruction or any other type of instruction with metadata associated is ignored. Execution is redirected to a patched code unconditionally. The metadata is read in by the processor, which then loads and executes the instructions of the patched code starting at the address identified by the ‘branch to’ pointer of the metadata. In this way, the metadata generated by the performance monitoring application permits patching of the original code by overriding the execution of the original code, without modifying the original program code.


The present invention may be implemented in a computer system. The computer system may be a client or a server in a client-server environment that is interconnected over a network. Therefore, the following FIGS. 1-3 are provided in order to give an environmental context in which the operations of the present invention may be implemented. FIGS. 1-3 are only exemplary and no limitation on the computing environment or computing devices in which the present invention may be implemented is intended or implied by the depictions in FIGS. 1-3.


With reference now to FIG. 1, an exemplary block diagram of a data processing system is shown in which the present invention may be implemented. Client 100 is an example of a computer, in which code or instructions implementing the processes of the present invention may be located. Client 100 employs a peripheral component interconnect (PCI) local bus architecture. Although the depicted example employs a PCI bus, other bus architectures such as Accelerated Graphics Port (AGP) and Industry Standard Architecture (ISA) may be used. Processor 102 and main memory 104 connect to PCI local bus 106 through PCI bridge 108. PCI bridge 108 also may include an integrated memory controller and cache memory for processor 102. Additional connections to PCI local bus 106 may be made through direct component interconnection or through add-in boards.


In the depicted example, local area network (LAN) adapter 110, small computer system interface (SCSI) host bus adapter 112, and expansion bus interface 114 are connected to PCI local bus 106 by direct component connection. In contrast, audio adapter 116, graphics adapter 118, and audio/video adapter 119 are connected to PCI local bus 106 by add-in boards inserted into expansion slots. Expansion bus interface 114 provides a connection for a keyboard and mouse adapter 120, modem 122, and additional memory 124. SCSI host bus adapter 112 provides a connection for hard disk drive 126, tape drive 128, and CD-ROM drive 130. Typical PCI local bus implementations will support three or four PCI expansion slots or add-in connectors.


An operating system runs on processor 102 and coordinates and provides control of various components within data processing system 100 in FIG. 1. The operating system may be a commercially available operating system such as Windows XP, which is available from Microsoft Corporation. An object oriented programming system such as Java may run in conjunction with the operating system and provides calls to the operating system from Java programs or applications executing on client 100. “Java” is a trademark of Sun Microsystems, Inc. Instructions for the operating system, the object-oriented programming system, and applications or programs are located on storage devices, such as hard disk drive 126, and may be loaded into main memory 104 for execution by processor 102.


Those of ordinary skill in the art will appreciate that the hardware in FIG. 1 may vary depending on the implementation. Other internal hardware or peripheral devices, such as flash read-only memory (ROM), equivalent nonvolatile memory, or optical disk drives and the like, may be used in addition to or in place of the hardware depicted in FIG. 1. Also, the processes of the present invention may be applied to a multiprocessor data processing system.


For example, client 100, if optionally configured as a network computer, may not include SCSI host bus adapter 112, hard disk drive 126, tape drive 128, and CD-ROM 130. In that case, the computer, to be properly called a client computer, includes some type of network communication interface, such as LAN adapter 110, modem 122, or the like. As another example, client 100 may be a stand-alone system configured to be bootable without relying on some type of network communication interface, whether or not client 100 comprises some type of network communication interface. As a further example, client 100 may be a personal digital assistant (PDA), which is configured with ROM and/or flash ROM to provide non-volatile memory for storing operating system files and/or user-generated data. The depicted example in FIG. 1 and above-described examples are not meant to imply architectural limitations.


The processes of the present invention are performed by processor 102 using computer implemented instructions, which may be located in a memory such as, for example, main memory 104, memory 124, or in one or more peripheral devices 126-130.


Turning next to FIG. 2, an exemplary block diagram of a processor system for processing information is depicted in accordance with a preferred embodiment of the present invention. Processor 210 may be implemented as processor 102 in FIG. 1.


In a preferred embodiment, processor 210 is a single integrated circuit superscalar microprocessor. Accordingly, as discussed further herein below, processor 210 includes various units, registers, buffers, memories, and other sections, all of which are formed by integrated circuitry. Also, in the preferred embodiment, processor 210 operates according to reduced instruction set computer (“RISC”) techniques. As shown in FIG. 2, system bus 211 connects to a bus interface unit (“BIU”) 212 of processor 210. BIU 212 controls the transfer of information between processor 210 and system bus 211.


BIU 212 connects to an instruction cache 214 and to data cache 216 of processor 210. Instruction cache 214 outputs instructions to sequencer unit 218. In response to such instructions from instruction cache 214, sequencer unit 218 selectively outputs instructions to other execution circuitry of processor 210.


In addition to sequencer unit 218, in the preferred embodiment, the execution circuitry of processor 210 includes multiple execution units, namely a branch unit 220, a fixed-point unit A (“FXUA”) 222, a fixed-point unit B (“FXUB”) 224, a complex fixed-point unit (“CFXU”) 226, a load/store unit (“LSU”) 228, and a floating-point unit (“FPU”) 230. FXUA 222, FXUB 224, CFXU 226, and LSU 228 input their source operand information from general-purpose architectural registers (“GPRs”) 232 and fixed-point rename buffers 234. Moreover, FXUA 222 and FXUB 224 input a “carry bit” from a carry bit (“CA”) register 239. FXUA 222, FXUB 224, CFXU 226, and LSU 228 output results (destination operand information) of their operations for storage at selected entries in fixed-point rename buffers 234. Also, CFXU 226 inputs and outputs source operand information and destination operand information to and from special-purpose register processing unit (“SPR unit”) 237.


FPU 230 inputs its source operand information from floating-point architectural registers (“FPRs”) 236 and floating-point rename buffers 238. FPU 230 outputs results (destination operand information) of its operation for storage at selected entries in floating-point rename buffers 238.


In response to a Load instruction, LSU 228 inputs information from data cache 216 and copies such information to selected ones of rename buffers 234 and 238. If such information is not stored in data cache 216, then data cache 216 inputs (through BIU 212 and system bus 211) such information from a system memory 239 connected to system bus 211. Moreover, data cache 216 is able to output (through BIU 212 and system bus 211) information from data cache 216 to system memory 239 connected to system bus 211. In response to a Store instruction, LSU 228 inputs information from a selected one of GPRs 232 and FPRs 236 and copies such information to data cache 216.


Sequencer unit 218 inputs and outputs information to and from GPRs 232 and FPRs 236. From sequencer unit 218, branch unit 220 inputs instructions and signals indicating a present state of processor 210. In response to such instructions and signals, branch unit 220 outputs (to sequencer unit 218) signals indicating suitable memory addresses storing a sequence of instructions for execution by processor 210. In response to such signals from branch unit 220, sequencer unit 218 inputs the indicated sequence of instructions from instruction cache 214. If one or more of the sequence of instructions is not stored in instruction cache 214, then instruction cache 214 inputs (through BIU 212 and system bus 211) such instructions from system memory 239 connected to system bus 211.


In response to the instructions input from instruction cache 214, sequencer unit 218 selectively dispatches the instructions to selected ones of execution units 220, 222, 224, 226, 228, and 230. Each execution unit executes one or more instructions of a particular class of instructions. For example, FXUA 222 and FXUB 224 execute a first class of fixed-point mathematical operations on source operands, such as addition, subtraction, ANDing, ORing and XORing. CFXU 226 executes a second class of fixed-point operations on source operands, such as fixed-point multiplication and division. FPU 230 executes floating-point operations on source operands, such as floating-point multiplication and division.


As information is stored at a selected one of rename buffers 234, such information is associated with a storage location (e.g. one of GPRs 232 or carry bit (CA) register 242) as specified by the instruction for which the selected rename buffer is allocated. Information stored at a selected one of rename buffers 234 is copied to its associated one of GPRs 232 (or CA register 242) in response to signals from sequencer unit 218. Sequencer unit 218 directs such copying of information stored at a selected one of rename buffers 234 in response to “completing” the instruction that generated the information. Such copying is called “writeback.”


As information is stored at a selected one of rename buffers 238, such information is associated with one of FPRs 236. Information stored at a selected one of rename buffers 238 is copied to its associated one of FPRs 236 in response to signals from sequencer unit 218. Sequencer unit 218 directs such copying of information stored at a selected one of rename buffers 238 in response to “completing” the instruction that generated the information.


Processor 210 achieves high performance by processing multiple instructions simultaneously at various ones of execution units 220, 222, 224, 226, 228, and 230. Accordingly, each instruction is processed as a sequence of stages, each being executable in parallel with stages of other instructions. Such a technique is called “pipelining.” In a significant aspect of the illustrative embodiment, an instruction is normally processed as six stages, namely fetch, decode, dispatch, execute, completion, and writeback.


In the fetch stage, sequencer unit 218 selectively inputs (from instruction cache 214) one or more instructions from one or more memory addresses storing the sequence of instructions discussed further hereinabove in connection with branch unit 220, and sequencer unit 218. In the decode stage, sequencer unit 218 decodes up to four fetched instructions.


In the dispatch stage, sequencer unit 218 selectively dispatches up to four decoded instructions to selected (in response to the decoding in the decode stage) ones of execution units 220, 222, 224, 226, 228, and 230 after reserving rename buffer entries for the dispatched instructions' results (destination operand information). In the dispatch stage, operand information is supplied to the selected execution units for dispatched instructions. Processor 210 dispatches instructions in order of their programmed sequence.


In the execute stage, execution units execute their dispatched instructions and output results (destination operand information) of their operations for storage at selected entries in rename buffers 234 and rename buffers 238 as discussed further hereinabove. In this manner, processor 210 is able to execute instructions out-of-order relative to their programmed sequence.


In the completion stage, sequencer unit 218 indicates an instruction is “complete.” Processor 210 “completes” instructions in order of their programmed sequence.


In the writeback stage, sequencer 218 directs the copying of information from rename buffers 234 and 238 to GPRs 232 and FPRs 236, respectively. Sequencer unit 218 directs such copying of information stored at a selected rename buffer. Likewise, in the writeback stage of a particular instruction, processor 210 updates its architectural states in response to the particular instruction. Processor 210 processes the respective “writeback” stages of instructions in order of their programmed sequence. Processor 210 advantageously merges an instruction's completion stage and writeback stage in specified situations.


In the illustrative embodiment, each instruction requires one machine cycle to complete each of the stages of instruction processing. Nevertheless, some instructions (e.g., complex fixed-point instructions executed by CFXU 226) may require more than one cycle. Accordingly, a variable delay may occur between a particular instruction's execution and completion stages in response to the variation in time required for completion of preceding instructions.


Completion buffer 248 is provided within sequencer 218 to track the completion of the multiple instructions which are being executed within the execution units. Upon an indication that an instruction or a group of instructions have been completed successfully, in an application specified sequential order, completion buffer 248 may be utilized to initiate the transfer of the results of those completed instructions to the associated general-purpose registers.


In addition, processor 210 also includes performance monitor unit 240, which is connected to instruction cache 214 as well as other units in processor 210. Operation of processor 210 can be monitored utilizing performance monitor unit 240, which in this illustrative embodiment is a software-accessible mechanism capable of providing detailed information descriptive of the utilization of instruction execution resources and storage control. Although not illustrated in FIG. 2, performance monitor unit 240 is coupled to each functional unit of processor 210 to permit the monitoring of all aspects of the operation of processor 210, including, for example, reconstructing the relationship between events, identifying false triggering, identifying performance bottlenecks, monitoring pipeline stalls, monitoring idle processor cycles, determining dispatch efficiency, determining branch efficiency, determining the performance penalty of misaligned data accesses, identifying the frequency of execution of serialization instructions, identifying inhibited interrupts, and determining performance efficiency. The events of interest also may include, for example, time for instruction decode, execution of instructions, branch events, cache misses, and cache hits.


Performance monitor unit 240 includes an implementation-dependent number (e.g., 2-8) of counters 241-242, labeled PMC1 and PMC2, which are utilized to count occurrences of selected events. Performance monitor unit 240 further includes at least one monitor mode control register (MMCR). In this example, two control registers, MMCRs 243 and 244 are present that specify the function of counters 241-242. Counters 241-242 and MMCRs 243-244 are preferably implemented as SPRs that are accessible for read or write via MFSPR (move from SPR) and MTSPR (move to SPR) instructions executable by CFXU 226. However, in one alternative embodiment, counters 241-242 and MMCRs 243-244 may be implemented simply as addresses in I/O space. In another alternative embodiment, the control registers and counters may be accessed indirectly via an index register. This embodiment is implemented in the IA-64 architecture in processors from Intel Corporation. Counters 241-242 may also be used to collect branch statistics per instruction when a program is executed.


As mentioned above, the present invention provides an improved method, apparatus, and computer instructions for providing and using hardware assistance in autonomically patching code. The present invention makes use of hardware microcode that supports a new type of metadata to selectively identify portions of code that require patching, or for which patching is desired, in order to provide more efficient execution, or even alternative execution, of the computer program or to perform specific performance optimization functions. The metadata takes the form of a new memory word, which is stored in a performance instrumentation segment of the program. The performance monitoring application links the performance instrumentation segment to the text segment of the program code by adding a reference in the text segment. This performance instrumentation segment includes a table listing program metadata.


Patching code may include reorganizing the identified portions of code or replacing identified portions of code with alternative instrumented code. Metadata may then be associated with the original portion of code that directs the processor to the reorganized or alternative instrumented portion of code.


During execution of instructions, a performance monitoring application identifies a portion of code that is in need of optimization. Examples of optimization include reorganizing instructions to increase efficiency, switching execution to instrumented interrupt service routines to determine time spent in interrupts, providing hooks to instructions to build an instruction trace, and the like. Alternatively, the performance monitoring application may identify a portion of code for which it is desirable to modify the execution of the portion of code, whether that be for optimization purposes or to obtain a different execution result. For example, the execution of the original code may be modified such that a new functionality is added to the execution of the code that was not present in the original code. This new functionality may be added without modifying the original code itself, but only modifying the execution of the original code. For purposes of the following description, however, it will be assumed that the present invention is being used to optimize the execution of the original code through non-invasive patching of the execution of the original code to execute a reorganized portion of code according to the present invention. However, it should be appreciated that the present invention is not limited to such applications, and many other uses may be made without departing from the spirit and scope of the present invention.


For example, the performance monitoring application may reorganize code autonomically by analyzing the access patterns of branch instructions. If the performance monitoring application determines that, at a branch instruction in a portion of code, the branch is frequently taken, it reorganizes the sequence of instructions such that the instructions within the branch appear prior to the non-branch instructions in the sequence of instructions. In this way, the instructions within the branch, which are more likely to be executed during execution of the computer program, are executed in a more contiguous manner than in the original code.


Similarly, if the performance monitoring application determines that at a branch instruction, the branch is seldom taken, the performance monitoring application may perform the reorganization itself, such that the non-branch instructions appear in the sequence of instructions prior to the instructions in the branch. In either case, metadata pointing to this dedicated memory area storing the reorganized code is generated at run time by the performance monitoring application and associated with the original code so that the reorganized code may be executed instead.
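
As a purely illustrative picture of this kind of reorganization, the following C fragment shows a seldom-taken branch whose cold body originally sits in line ahead of the hot path, and a behavior-preserving rearrangement of the same logic with the hot path laid out first; the function names are hypothetical placeholders.

    void handle_rare_case(void);   /* cold-path work (declaration only, illustrative) */
    void do_common_work(void);     /* hot-path work (declaration only, illustrative)  */

    /* Original layout: the seldom-taken branch body sits in line, ahead of the
     * commonly executed code, so the hot path is not contiguous in memory. */
    void original_layout(int rare_condition)
    {
        if (!rare_condition)
            goto common;            /* branch usually taken over the cold body */
        handle_rare_case();         /* cold code, placed in line               */
    common:
        do_common_work();           /* hot code, placed after the cold body    */
    }

    /* Reorganized copy placed in the allocated patch region: the hot code comes
     * first and executes contiguously; the cold body is moved out of line.
     * Behavior is unchanged, only the layout (and cache behavior) differs. */
    void reorganized_layout(int rare_condition)
    {
        if (rare_condition)
            goto rare;              /* seldom-taken branch jumps out of line */
    common:
        do_common_work();
        return;
    rare:
        handle_rare_case();
        goto common;
    }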


In a preferred embodiment, if a branch instruction is associated with metadata and the branch is taken as a result of executing the branch instruction, the processor reads the metadata, which includes a ‘branch to’ pointer that points to the starting address of the reorganized code to which the processor branches the execution. Thus, the address in the original branch instruction is ignored. Alternatively, if the branch is not taken as a result of executing the branch instruction, the metadata is ignored by the processor.


In an alternative embodiment, when the branch instruction, or any other type of instruction, is executed, if the instruction is associated with metadata, the processor reads the metadata and ignores the address in the original instruction. That is, the processor reads the metadata, which includes a pointer pointing to the starting address of the reorganized code, and executes the reorganized code.


When execution of the reorganized portion of code in the allocated memory location is complete, the execution of the computer program may be redirected back to some place in the original code. This place in the original code may be the instruction after the ignored original instruction or the instruction after the original instructions that were duplicated.


Turning now to FIG. 3, an exemplary diagram illustrating an example of metadata is depicted in accordance with a preferred embodiment of the present invention. In this example implementation, the metadata is in the form of a new memory word, which is stored in the performance instrumentation segment of the program. Metadata 300 includes three entries: entries 302, 304, and 306. Each of these entries includes an offset and data describing the ‘branch to’ pointer pointing to the patch code.


In this example, entry 1 offset 310 is the displacement from the beginning of the text segment to the instruction to which the metadata word applies. This offset identifies the instruction of the program with which the metadata is associated. Entry 1 data 312 is the metadata word that indicates the ‘branch to’ pointer pointing to the starting address of the patch code.


The processor may utilize this metadata in any of the three ways described earlier, for example, via a ‘shadow cache’. The processor detects the performance instrumentation segment linked to the text segment at the time that instructions are loaded into the instruction cache. At instruction load time, the processor also loads the corresponding performance metadata into its shadow cache. Then, as an instruction is executed out of the instruction cache, the processor may detect the existence of a metadata word in the shadow cache, mapped to the instruction it is executing. The format of the data in the shadow cache is very similar to the format of the data in FIG. 3 with a series of entries correlating the metadata word 312 with the instruction in the instruction cache. The preferred means of associating the metadata with the instruction using a performance instrumentation shadow cache are described in related U.S. patent application “Method and Apparatus for Counting Execution of Specific Instructions and Accesses to Specific Data Locations”, Ser. No. 10/675,776, filed on Sep. 30, 2003, which is incorporated above.
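
A minimal model of this association might look like the following C sketch, in which each shadow-cache entry mirrors FIG. 3 (an offset into the text segment plus a metadata word holding the ‘branch to’ pointer); the structure and lookup routine are assumptions made for illustration, not the actual hardware implementation.

    #include <stddef.h>
    #include <stdint.h>

    /* One entry of the performance instrumentation shadow cache, mirroring
     * FIG. 3: a displacement within the text segment and a metadata word
     * holding the 'branch to' pointer to the patch code block. */
    struct shadow_entry {
        uint32_t  offset;      /* e.g. 0x120: displacement of the instruction  */
        uintptr_t branch_to;   /* e.g. 0x80001024: start of the patch code     */
    };

    /* Sketch of the lookup performed as each instruction is executed out of
     * the instruction cache: find the metadata word mapped to the instruction. */
    static const struct shadow_entry *
    find_metadata(const struct shadow_entry *table, size_t n, uint32_t insn_offset)
    {
        for (size_t i = 0; i < n; i++)
            if (table[i].offset == insn_offset)
                return &table[i];   /* metadata exists for this instruction     */
        return NULL;                /* no metadata: execute the original code   */
    }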


In one embodiment, if a branch is taken as a result of executing a branch instruction, the processor executes the patch code block at starting address 0x80001024, indicated by the ‘branch to’ pointer in entry 1 data 312 in the shadow cache. If the branch is not taken, entry 1 data 312 is ignored by the processor. Once the execution of patch code is complete, the processor returns to the original instructions as directed at the end of the patch code block.


In an alternative embodiment, entry 1 data 312 may be associated with an instruction other than a branch instruction. The processor examines entry 1 data 312 in entry 1 302 and executes the patch code block at the starting address indicated by the entry 1 data 312 unconditionally. Thus, the original instruction, at offset address 0x120 as described by entry 1 offset 310, is ignored by the processor.


Turning next to FIG. 4A, a flowchart outlining an exemplary process for enabling or disabling the functionality of a performance monitoring application or process for patching code using metadata is depicted in accordance with a preferred embodiment of the present invention. The process begins when the user runs a specific performance monitoring application or process (step 412). The processor, such as processor 210 in FIG. 2, checks the new flag in the machine status register (MSR) (step 414). A determination is then made by the processor as to the value of the new flag (step 416). If the value is ‘00’, the performance monitoring application or process is disabled from performing code patching functions; therefore, the processor starts executing the program instructions immediately (step 418), with the process terminating thereafter.


Turning back to step 416, if the flag value is ‘01’, the performance monitoring application or process is enabled to perform the code patching function by using metadata to jump to the ‘branch to’ pointer only if a branch is taken, in order to execute the patch code (step 422). A branch is taken as a result of executing a branch instruction. If the branch is not taken, the metadata is ignored. Next, the processor starts executing the program instructions immediately (step 418), with the process terminating thereafter.


Turning back to step 416, if the flag value is ‘10’, the performance monitoring application or process is enabled to perform the code patching function unconditionally. Thus, the performance monitoring application or process uses the ‘branch to’ pointer in the metadata to jump to the starting address of the patch code unconditionally (step 420), and the processor ignores the original instruction of the program when the metadata is encountered. Once the performance monitoring application or process is enabled to use metadata to perform the code patching function, the processor starts executing the program instructions (step 418), the process terminating thereafter.


Turning next to FIG. 4B, a flowchart outlining an exemplary process for providing and using hardware assistance in patching code is depicted in accordance with a preferred embodiment of the present invention. The process begins when the processor executes program instructions (step 402) after the process steps of FIG. 4A are complete. If the code patching functionality is enabled using the process steps in FIG. 4A, a determination is made by the performance monitoring application at run time as to whether one or more portions of code should be patched for a specific performance optimization function (step 404). For example, the performance monitoring application determines whether to reorganize code by examining the access patterns of the branch instructions. If the code does not need to be patched, the operation terminates.


If the performance monitoring application determines that the code should be patched in step 404, the performance monitoring application patches the code (step 406) and associates metadata with the original code instructions (step 408), with the process terminating thereafter.


Turning next to FIG. 5, a flowchart outlining an exemplary process of handling metadata associated with instructions from the processor's perspective when code patching functionality is enabled with a value of ‘01’ is depicted in accordance with a preferred embodiment of the present invention. The process begins when the processor sees a branch instruction or another type of instruction during program execution (step 500). This step is performed after the process steps of FIG. 4A are complete. The processor determines if metadata is associated with the instruction (step 502). If no metadata is associated with the instruction, the processor continues to execute code instructions (step 514), the process terminating thereafter.


Turning back to step 502, if metadata is associated with the instruction, a determination is made by the processor as to whether the instruction is a branch instruction (step 504). In a preferred embodiment, if the instruction is a branch instruction, the processor executes the branch instruction (step 506).


After the branch instruction is executed, a determination is made as to whether the branch is taken (step 508). If the branch is taken as a result of executing the branch instruction, the processor looks up the address of the patch code indicated by the ‘branch to’ pointer of the metadata (step 510). If the branch is not taken as a result of executing the branch instruction, the metadata is ignored and the processor continues to execute original code instructions (step 514), the process terminating thereafter.


Turning back to step 504, if the instruction is not a branch instruction, the processor continues to execute original code instructions (step 514), the process terminating thereafter.


Continuing from step 510, the processor executes the patch code (step 512) at the starting address obtained in step 510 and returns to execute the original code instructions (step 514) as indicated at the end of the patch code, the process terminating thereafter.
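
The decision flow of FIG. 5 can be summarized by the following sketch of the ‘01’ behavior, where md_branch_to is the ‘branch to’ pointer taken from the metadata word (zero when no metadata is associated with the instruction); the helper name and parameters are illustrative assumptions, not processor microcode.

    #include <stdint.h>

    /* Sketch of the '01' mode (FIG. 5): metadata is honored only when a branch
     * instruction is executed and the branch is taken.  All names illustrative. */
    static uintptr_t next_address_01(int is_branch, int branch_taken,
                                     uintptr_t md_branch_to,
                                     uintptr_t fallthrough)
    {
        if (md_branch_to == 0 || !is_branch || !branch_taken)
            return fallthrough;    /* steps 502/504/508: continue original code    */

        return md_branch_to;       /* steps 510/512: execute the patch code, which
                                      returns to the original code at its end      */
    }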


Turning next to FIG. 6, a flowchart outlining an exemplary process of handling metadata associated with instructions from the processor's perspective when code patching functionality is enabled with a value of ‘10’ is depicted in accordance with a preferred embodiment of the present invention. The process begins when the processor sees a branch instruction or another type of instruction during program execution (step 600). This step is performed after the process steps of FIG. 4A are complete.


The processor then determines if metadata is associated with the instruction (step 602). If no metadata is associated with the instruction, the processor continues to execute original code instructions (step 608), the process terminating thereafter. If metadata is associated with the instruction, the processor looks up the address of the patch code indicated by the ‘branch to’ pointer of the metadata (step 604). The processor executes the patch instructions unconditionally and ignores the original program instruction (step 606). The processor then continues to execute the original program instructions (step 608), the process terminating thereafter.
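
For contrast, the ‘10’ behavior of FIG. 6 differs only in that the redirect is unconditional whenever metadata is present; a correspondingly brief sketch, with the same illustrative naming as above:

    #include <stdint.h>

    /* Sketch of the '10' mode (FIG. 6): whenever metadata is present, the
     * original instruction is ignored and execution is redirected to the
     * patch code unconditionally.  Names are illustrative only. */
    static uintptr_t next_address_10(uintptr_t md_branch_to, uintptr_t fallthrough)
    {
        return md_branch_to != 0 ? md_branch_to   /* steps 604/606: patch code   */
                                 : fallthrough;   /* step 608: original code     */
    }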


Thus, the present invention allows a user to enable or disable the functionality of code patching performed by a performance monitoring application or process. The present invention provides a new flag in the machine status register (MSR) for enabling or disabling the functionality. When the functionality is enabled, the present invention allows the performance monitoring application or process to use metadata to selectively identify portions of code to patch. This allows an alternative or optimized execution of computer program code.


The metadata takes the form of a memory word, which is stored in the performance instrumentation segment of the application. The present invention does not require that the original code itself be modified; instead, it makes use of the metadata to autonomically determine what instructions are executed at run time. In this way, the original code is not modified, only the execution of the code is modified.


The metadata includes a ‘branch to’ pointer pointing to the starting address of the patch code that is to be executed. Thus, using the innovative features of the present invention, the program may patch code autonomically by selectively identifying the branch instruction or other types of instruction and associating metadata comprising pointers to the patch code.


It is important to note that while the present invention has been described in the context of a fully functioning data processing system, those of ordinary skill in the art will appreciate that the processes of the present invention are capable of being distributed in the form of a computer readable storage medium of instructions and a variety of forms and that the present invention applies equally regardless of the particular type of signal bearing media actually used to carry out the distribution. Examples of computer readable storage media include recordable-type media, such as a floppy disk, a hard disk drive, a RAM, CD-ROMs, DVD-ROMs, and transmission-type media, such as digital and analog communications links, wired or wireless communications links using transmission forms, such as, for example, radio frequency and light wave transmissions. The computer readable storage media may take the form of coded formats that are decoded for actual use in a particular data processing system.


The description of the present invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiment was chosen and described in order to best explain the principles of the invention, the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.

Claims
  • 1. A computer system having a processor configured to autonomically patch computer program code, comprising: checking a flag in a machine status register to determine whether code patching functionality is to be enabled; responsive to determining that code patching functionality is to be enabled, enabling the code patching functionality; executing a computer program instruction, wherein the computer program instruction is located at a start of a block of code of an execution sequence of original code instructions; determining whether metadata is associated with the computer program instruction, wherein the metadata identifies the computer program instruction as a computer program instruction having associated patch instructions, and indicates an address of the patch instructions, wherein the patch instructions are created by: copying instructions from the block of code to a new memory location; modifying the order of the instructions of the block of code; and populating metadata with a pointer to the patch instructions; responsive to determining that the metadata is associated with the computer program instruction, redirecting execution to the patch instructions at the address indicated by the metadata; executing the patch instructions; returning to an instruction of the execution sequence of original code instructions in the computer program; and storing a result of executing the execution sequence.
  • 2. The computer system of claim 1, wherein the patch instructions are created during execution of the computer program.
  • 3. The computer system of claim 1, wherein the metadata is in a form of a memory word.
  • 4. The computer system of claim 1, wherein the metadata includes a pointer to the patch instructions for indicating the address of the patch instructions.
  • 5. The computer system of claim 4, wherein the pointer to the patch instructions includes a starting address of the patch instructions in an allocated memory location.
  • 6. The computer system of claim 5, wherein the starting address includes at least one of an absolute or offset address.
  • 7. The computer system of claim 1, wherein the patch instructions includes at least one of reorganized instructions, instrumented alternative instructions, and hooks to build an instruction trace.
  • 8. A computer to autonomically patch computer program code, comprising: a bus system; a memory connected to the bus system, wherein the memory includes a computer usable program code; and a processing unit connected to the bus system, wherein the processing unit executes the computer usable program code to execute program code: to check a flag in a machine status register to determine whether code patching functionality is enabled; to enable the code patching functionality in response to determining that code patching functionality is enabled; to execute a computer program instruction, wherein the computer program instruction is located at a start of a block of code of an execution sequence of original code instructions; determine whether metadata is associated with the computer program instruction, wherein the metadata identifies the computer program instruction as a computer program instruction having associated patch instructions, and indicates an address of the patch instructions, wherein the patch instructions are created by: copying instructions from the block of code to a new memory location; modifying the order of the instructions of the block of code; and populating metadata with a pointer to the patch instructions; redirect execution to the patch instructions at the address indicated by the metadata in response to determining that metadata is associated with the computer program instruction; execute the patch instructions; return to an instruction of the execution sequence of original code instructions in the computer program; and store a result of executing the execution sequence.
  • 9. The computer of claim 8, wherein the patch instructions are created during execution of the computer program.
  • 10. The computer of claim 8, wherein the metadata is in a form of a memory word.
  • 11. The computer of claim 8, wherein the metadata includes a pointer to the patch instructions for indicating the address of the patch instructions.
  • 12. The computer of claim 11, wherein the pointer to the patch instructions includes a starting address of the patch instructions in an allocated memory location.
  • 13. The computer of claim 12, wherein the starting address includes at least one of an absolute address or an offset address.
  • 14. The computer of claim 8, wherein the patch instructions include at least one of reorganized instructions, instrumented alternative instructions, and hooks to build an instruction trace.
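
Claims 1 and 8 recite a control flow in which hardware checks a machine status register flag, consults metadata associated with the instruction at the start of a block, and, when patching is enabled and metadata exists, redirects execution to separately stored patch instructions before resuming the original sequence and storing the result. The C sketch below is purely illustrative and models that flow in software under stated assumptions; it is not the patented hardware or microcode, and every identifier in it (MSR_PATCH_ENABLE, patch_metadata, metadata_lookup, execute_block, and the sample blocks) is a hypothetical name introduced only for this example.

```c
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

/* Hypothetical flag bit standing in for the machine status register (MSR)
 * bit that enables or disables code-patching functionality. */
#define MSR_PATCH_ENABLE 0x1u

/* A block of code is modeled as a function taking an input and returning a
 * result to be stored. */
typedef int (*code_block_fn)(int input);

/* Hypothetical metadata associated with the instruction at the start of a
 * block: it marks that instruction as having patch instructions and holds a
 * pointer to them. */
typedef struct {
    code_block_fn original_block; /* first instruction of the original block */
    code_block_fn patch_block;    /* address of the associated patch instructions */
} patch_metadata;

/* Original instruction sequence (left unmodified throughout). */
static int original_block(int input) { return input * 3 + 1; }

/* Patch instructions: a copy of the original block with a trace hook,
 * standing in for reorganized or instrumented alternative instructions. */
static int patched_block(int input) {
    printf("trace: patch block entered\n");
    return input * 3 + 1;
}

/* Metadata table populated when the patch instructions are created. */
static const patch_metadata metadata_table[] = {
    { original_block, patched_block },
};

/* Return the metadata entry for the given block start, or NULL if none. */
static const patch_metadata *metadata_lookup(code_block_fn block) {
    for (size_t i = 0; i < sizeof metadata_table / sizeof metadata_table[0]; i++) {
        if (metadata_table[i].original_block == block) {
            return &metadata_table[i];
        }
    }
    return NULL;
}

/* Model of the hardware decision: if the MSR flag enables patching and the
 * block's first instruction has associated metadata, execution is redirected
 * to the patch instructions; otherwise the original block executes. Either
 * way, control returns to the caller, which stores the result. */
static int execute_block(uint32_t msr, code_block_fn block, int input) {
    if (msr & MSR_PATCH_ENABLE) {
        const patch_metadata *md = metadata_lookup(block);
        if (md != NULL) {
            return md->patch_block(input); /* redirected execution */
        }
    }
    return block(input); /* unpatched execution */
}

int main(void) {
    int without_patching = execute_block(0, original_block, 7);
    int with_patching = execute_block(MSR_PATCH_ENABLE, original_block, 7);
    printf("without patching: %d, with patching: %d\n",
           without_patching, with_patching);
    return 0;
}
```

In this model the original block is never rewritten; only the metadata table carries the pointer to the patch copy, mirroring the recited approach of patching at run time without modifying the original code.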
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a continuation of application Ser. No. 12/122,558, filed May 16, 2008, now U.S. Pat. No. 8,141,099, which is a continuation of application Ser. No. 10/757,171, filed Jan. 14, 2004, now U.S. Pat. No. 7,415,705 issued Aug. 19, 2008.

US Referenced Citations (407)
Number Name Date Kind
2112794 Stickney Mar 1938 A
3707725 Dellheim Dec 1972 A
4034353 Denny et al. Jul 1977 A
4145735 Soga Mar 1979 A
4291371 Holtey Sep 1981 A
4316245 Luu et al. Feb 1982 A
4374409 Bienvenu et al. Feb 1983 A
4395757 Bienvenu et al. Jul 1983 A
4558413 Schmidt et al. Dec 1985 A
4590555 Bourrez May 1986 A
4598364 Gum et al. Jul 1986 A
4682283 Robb Jul 1987 A
4794472 Doyama Dec 1988 A
4821178 Levin et al. Apr 1989 A
4825359 Ohkami et al. Apr 1989 A
4912623 Rantala et al. Mar 1990 A
4928222 Vriezen et al. May 1990 A
5032982 Dalrymple et al. Jul 1991 A
5051944 Fetterolf et al. Sep 1991 A
5103394 Blasciak Apr 1992 A
5113507 Jaeckel May 1992 A
5142634 Fite et al. Aug 1992 A
5142635 Saini Aug 1992 A
5150349 Takai et al. Sep 1992 A
5151981 Westcott et al. Sep 1992 A
5206584 Nishimori Apr 1993 A
5212794 Pettis et al. May 1993 A
5256775 Froehler Oct 1993 A
5257358 Cohen Oct 1993 A
5276833 Auvinen et al. Jan 1994 A
5287481 Lin Feb 1994 A
5339426 Aoshima Aug 1994 A
5339435 Lubkin et al. Aug 1994 A
5355487 Keller et al. Oct 1994 A
5394529 Brown, III et al. Feb 1995 A
5404500 Legvold et al. Apr 1995 A
5438670 Baror et al. Aug 1995 A
5450349 Brown, III et al. Sep 1995 A
5463775 DeWitt et al. Oct 1995 A
5479633 Wells et al. Dec 1995 A
5537541 Wibecan Jul 1996 A
5537572 Michelsen et al. Jul 1996 A
5544342 Dean Aug 1996 A
5548762 Creedon et al. Aug 1996 A
5555432 Hinton et al. Sep 1996 A
5557548 Gover et al. Sep 1996 A
5564015 Bunnell Oct 1996 A
5574872 Rotem et al. Nov 1996 A
5581482 Wiedenman et al. Dec 1996 A
5581778 Chin et al. Dec 1996 A
5581981 Fulkerson et al. Dec 1996 A
5590352 Zuraski, Jr. et al. Dec 1996 A
5594864 Trauben Jan 1997 A
5603004 Kurpanek et al. Feb 1997 A
5628018 Matsuzaki et al. May 1997 A
5644692 Eick Jul 1997 A
5652858 Okada et al. Jul 1997 A
5657253 Dreyer et al. Aug 1997 A
5659679 Alpert et al. Aug 1997 A
5666507 Flora Sep 1997 A
5671920 Acquaviva et al. Sep 1997 A
5675802 Allen et al. Oct 1997 A
5684030 Elokdah et al. Nov 1997 A
5689712 Heisch Nov 1997 A
5691920 Levine et al. Nov 1997 A
5694540 Humelsine et al. Dec 1997 A
5708803 Ishimi et al. Jan 1998 A
5710881 Gupta et al. Jan 1998 A
5727167 Dwyer, III et al. Mar 1998 A
5740413 Alpert et al. Apr 1998 A
5745770 Thangadurai et al. Apr 1998 A
5748878 Rees et al. May 1998 A
5751942 Christensen et al. May 1998 A
5752062 Gover et al. May 1998 A
5754839 Pardo et al. May 1998 A
5758061 Plum May 1998 A
5758168 Mealey et al. May 1998 A
5758187 Young May 1998 A
5761103 Oakland et al. Jun 1998 A
5768500 Agrawal et al. Jun 1998 A
5772322 Burns et al. Jun 1998 A
5774724 Heisch Jun 1998 A
5775825 Hong et al. Jul 1998 A
5787280 Joseph et al. Jul 1998 A
5787286 Hooker Jul 1998 A
5794028 Tran Aug 1998 A
5794052 Harding Aug 1998 A
5796939 Berc et al. Aug 1998 A
5797019 Levine et al. Aug 1998 A
5802378 Arndt et al. Sep 1998 A
5802678 Puente Sep 1998 A
5805879 Hervin et al. Sep 1998 A
5815707 Krause et al. Sep 1998 A
5822578 Frank et al. Oct 1998 A
5822763 Baylor et al. Oct 1998 A
5822790 Mehrotra Oct 1998 A
5835702 Levine et al. Nov 1998 A
5839050 Baehr et al. Nov 1998 A
5855578 Guglielmi et al. Jan 1999 A
5857097 Henzinger et al. Jan 1999 A
5862381 Advani et al. Jan 1999 A
5872913 Berry et al. Feb 1999 A
5875294 Roth et al. Feb 1999 A
5875334 Chow et al. Feb 1999 A
5887159 Burrows Mar 1999 A
5889947 Starke Mar 1999 A
5896538 Blandy et al. Apr 1999 A
5909573 Sheaffer Jun 1999 A
5913925 Kahle et al. Jun 1999 A
5920689 Berry et al. Jul 1999 A
5920721 Hunter et al. Jul 1999 A
5923863 Adler et al. Jul 1999 A
5926640 Mason et al. Jul 1999 A
5928334 Mandyam et al. Jul 1999 A
5930508 Faraboschi et al. Jul 1999 A
5937437 Roth et al. Aug 1999 A
5938760 Levine et al. Aug 1999 A
5938778 John, Jr. et al. Aug 1999 A
5940618 Blandy et al. Aug 1999 A
5949971 Levine et al. Sep 1999 A
5950003 Kaneshiro et al. Sep 1999 A
5950009 Bortnikov et al. Sep 1999 A
5966537 Ravichandran Oct 1999 A
5966538 Granston et al. Oct 1999 A
5966539 Srivastava Oct 1999 A
5970439 Levine et al. Oct 1999 A
5973417 Goetz et al. Oct 1999 A
5973542 Okayasu et al. Oct 1999 A
5978907 Tran et al. Nov 1999 A
5987250 Subrahmanyam Nov 1999 A
5987598 Levine et al. Nov 1999 A
5991708 Levine et al. Nov 1999 A
5991908 Baxter et al. Nov 1999 A
5996069 Yasoshima et al. Nov 1999 A
6006033 Heisch Dec 1999 A
6009514 Henzinger et al. Dec 1999 A
6026235 Shaughnessy Feb 2000 A
6063134 Peters et al. May 2000 A
6067644 Levine et al. May 2000 A
6070009 Dean et al. May 2000 A
6073109 Flores et al. Jun 2000 A
6073215 Snyder Jun 2000 A
6094709 Baylor et al. Jul 2000 A
6098169 Ranganathan Aug 2000 A
6101524 Choi et al. Aug 2000 A
6105051 Borkenhagen et al. Aug 2000 A
6105129 Meier et al. Aug 2000 A
6112317 Berc et al. Aug 2000 A
6118448 McMillan et al. Sep 2000 A
6119075 Dean et al. Sep 2000 A
6128721 Yung et al. Oct 2000 A
6134676 VanHuben et al. Oct 2000 A
6145077 Sidwell et al. Nov 2000 A
6145123 Torrey et al. Nov 2000 A
6147318 Marhic Nov 2000 A
6148321 Hammond Nov 2000 A
6149318 Chase et al. Nov 2000 A
6161187 Mason et al. Dec 2000 A
6163840 Chrysos et al. Dec 2000 A
6182210 Akkary et al. Jan 2001 B1
6185652 Shek et al. Feb 2001 B1
6185671 Pentovski et al. Feb 2001 B1
6189072 Levine et al. Feb 2001 B1
6189141 Benitez et al. Feb 2001 B1
6189142 Johnston et al. Feb 2001 B1
6192513 Subrahmanyam Feb 2001 B1
6195765 Kislanko et al. Feb 2001 B1
6199204 Donohue Mar 2001 B1
6202199 Wygodny et al. Mar 2001 B1
6202207 Donohue Mar 2001 B1
6206235 Green Mar 2001 B1
6206584 Hastings Mar 2001 B1
6212675 Johnston et al. Apr 2001 B1
6223338 Smolders Apr 2001 B1
6233679 Holmberg May 2001 B1
6237019 Ault et al. May 2001 B1
6237141 Holzle et al. May 2001 B1
6240510 Yeh et al. May 2001 B1
6243804 Cheng Jun 2001 B1
6247113 Jaggar Jun 2001 B1
6253338 Smolders Jun 2001 B1
6256771 O'Neil et al. Jul 2001 B1
6256775 Flynn Jul 2001 B1
6275893 Bonola Aug 2001 B1
6278064 Hinkley et al. Aug 2001 B1
6285974 Mandyam et al. Sep 2001 B1
6286132 Tanaka et al. Sep 2001 B1
6286584 Frields Sep 2001 B1
6298521 Butterfield Oct 2001 B1
6311327 O'Brien et al. Oct 2001 B1
6324689 Lowney et al. Nov 2001 B1
6330662 Patel et al. Dec 2001 B1
6339818 Olszewski et al. Jan 2002 B1
6349406 Levine et al. Feb 2002 B1
6351844 Bala Feb 2002 B1
6353877 Duncan et al. Mar 2002 B1
6374364 McElroy et al. Apr 2002 B1
6378064 Edwards et al. Apr 2002 B1
6381679 Matsubara et al. Apr 2002 B1
6404500 Schneider et al. Jun 2002 B1
6406135 Watanabe et al. Jun 2002 B1
6408386 Hammond et al. Jun 2002 B1
6425118 Molloy et al. Jul 2002 B1
6430741 Mattson, Jr. et al. Aug 2002 B1
6430938 Royal et al. Aug 2002 B1
6438743 Boehm et al. Aug 2002 B1
6442585 Dean et al. Aug 2002 B1
6446019 Kynett et al. Sep 2002 B1
6446029 Davidson et al. Sep 2002 B1
6453468 D'Souza Sep 2002 B1
6457170 Boehm et al. Sep 2002 B1
6459998 Hoffman Oct 2002 B1
6460135 Suganuma Oct 2002 B1
6460693 Harrold Oct 2002 B1
6477703 Smith et al. Nov 2002 B1
6480938 Vondran, Jr. Nov 2002 B2
6480966 Rawson, III Nov 2002 B1
6484315 Ziese Nov 2002 B1
6501995 Kinney et al. Dec 2002 B1
6505292 Witt Jan 2003 B1
6513045 Casey et al. Jan 2003 B1
6519310 Chapple Feb 2003 B2
6526571 Aizikowitz et al. Feb 2003 B1
6530042 Davidson et al. Mar 2003 B1
6539458 Holmberg Mar 2003 B2
6542985 Johnson et al. Apr 2003 B1
6549930 Chrysos et al. Apr 2003 B1
6549959 Yates et al. Apr 2003 B1
6549998 Pekarich et al. Apr 2003 B1
6550002 Davidson et al. Apr 2003 B1
6559959 Miura et al. May 2003 B2
6560693 Puzak et al. May 2003 B1
6562858 Oxenkrug May 2003 B2
6569679 Barber et al. May 2003 B1
6594820 Ungar Jul 2003 B1
6598153 Flachs et al. Jul 2003 B1
6601233 Underwood Jul 2003 B1
6631514 Le Oct 2003 B1
6636950 Mithal et al. Oct 2003 B1
6647301 Sederlund et al. Nov 2003 B1
6654781 Browning Nov 2003 B1
6658416 Hussain et al. Dec 2003 B1
6658651 O'Brien et al. Dec 2003 B2
6662295 Yamaura Dec 2003 B2
6665776 Jouppi et al. Dec 2003 B2
6678755 Peterson et al. Jan 2004 B1
6681387 Hwu et al. Jan 2004 B1
6681388 Sato et al. Jan 2004 B1
6687794 Malik Feb 2004 B2
6687807 Damron Feb 2004 B1
6687811 Yamada Feb 2004 B1
6721875 McCormick, Jr. et al. Apr 2004 B1
6725457 Priem et al. Apr 2004 B1
6725458 Shimotani et al. Apr 2004 B2
6732354 Ebeling et al. May 2004 B2
6735666 Koning May 2004 B1
6735757 Kroening et al. May 2004 B1
6742179 Megiddo et al. May 2004 B2
6757771 Christie Jun 2004 B2
6758168 Koskinen et al. Jul 2004 B2
6772322 Merchant et al. Aug 2004 B1
6772412 Baba et al. Aug 2004 B2
6774724 Krvavac Aug 2004 B2
6775728 Zimmer et al. Aug 2004 B2
6775825 Grumann et al. Aug 2004 B1
6782454 Damron Aug 2004 B1
6785844 Wong et al. Aug 2004 B2
6801961 Chu et al. Oct 2004 B2
6820155 Ito Nov 2004 B1
6826749 Patel et al. Nov 2004 B2
6832296 Hooker Dec 2004 B2
6842850 Ganapathy et al. Jan 2005 B2
6848029 Coldewey Jan 2005 B2
6848030 Tokar et al. Jan 2005 B2
6857083 Floyd et al. Feb 2005 B2
6865663 Barry Mar 2005 B2
6865666 Yoshida et al. Mar 2005 B2
6871298 Cavanaugh et al. Mar 2005 B1
6918106 Burridge et al. Jul 2005 B1
6918606 Petrishe Jul 2005 B2
6925424 Jones et al. Aug 2005 B2
6928521 Burton et al. Aug 2005 B1
6928582 Adl-Tabatabai et al. Aug 2005 B2
6930508 Kim et al. Aug 2005 B2
6944720 Sperber et al. Sep 2005 B2
6944722 Cantrill Sep 2005 B2
6944734 Anzai et al. Sep 2005 B2
6948032 Kadambi et al. Sep 2005 B2
6948059 Sprecher et al. Sep 2005 B1
6951018 Long et al. Sep 2005 B2
6961681 Choquier et al. Nov 2005 B1
6961925 Callahan, II et al. Nov 2005 B2
6966057 Lueh Nov 2005 B2
6970999 Kurihara et al. Nov 2005 B2
6971091 Arnold et al. Nov 2005 B1
6972417 Suganuma et al. Dec 2005 B2
6972541 Matsushiro et al. Dec 2005 B2
6973417 Maxwell, III et al. Dec 2005 B1
6973542 Schmuck et al. Dec 2005 B1
6981128 Fluhr et al. Dec 2005 B2
6988186 Eickemeyer et al. Jan 2006 B2
7020808 Sato et al. Mar 2006 B2
7024668 Shiomi et al. Apr 2006 B2
7035996 Woodall et al. Apr 2006 B2
7065634 Lewis et al. Jun 2006 B2
7069541 Dougherty et al. Jun 2006 B2
7086035 Mericas Aug 2006 B1
7089535 Bates et al. Aug 2006 B2
7093081 DeWitt, Jr. et al. Aug 2006 B2
7093154 Bartfai et al. Aug 2006 B2
7093236 Swaine et al. Aug 2006 B2
7114036 DeWitt, Jr. et al. Sep 2006 B2
7114150 Dimpsey et al. Sep 2006 B2
7131115 Hundt et al. Oct 2006 B2
7155575 Krishnaiyer et al. Dec 2006 B2
7162594 Bungo Jan 2007 B2
7168067 Betker et al. Jan 2007 B2
7181723 Luk et al. Feb 2007 B2
7194732 Fisher et al. Mar 2007 B2
7197586 DeWitt, Jr. et al. Mar 2007 B2
7207043 Blythe et al. Apr 2007 B2
7210126 Ghobrial et al. Apr 2007 B2
7225309 DeWitt, Jr. et al. May 2007 B2
7237242 Blythe et al. Jun 2007 B2
7257657 DeWitt, Jr. et al. Aug 2007 B2
7290254 Comp et al. Oct 2007 B2
7293164 DeWitt, Jr. et al. Nov 2007 B2
7296130 Dimpsey et al. Nov 2007 B2
7296259 Betker et al. Nov 2007 B2
7299319 Dimpsey et al. Nov 2007 B2
7313655 Hsu Dec 2007 B2
7373637 DeWitt, Jr. et al. May 2008 B2
7392370 DeWitt, Jr. et al. Jun 2008 B2
7395527 DeWitt, Jr. et al. Jul 2008 B2
7415699 Gouriou et al. Aug 2008 B2
7415705 DeWitt, Jr. et al. Aug 2008 B2
7421681 DeWitt, Jr. et al. Sep 2008 B2
7421684 Dimpsey et al. Sep 2008 B2
7448025 Kalafatis et al. Nov 2008 B2
7458078 DeWitt, Jr. et al. Nov 2008 B2
7469407 Burky et al. Dec 2008 B2
7480899 Dimpsey et al. Jan 2009 B2
7487301 Mutz et al. Feb 2009 B2
7496908 DeWitt, Jr. et al. Feb 2009 B2
7496909 Kuch et al. Feb 2009 B2
7526616 Dimpsey et al. Apr 2009 B2
7526757 Levine et al. Apr 2009 B2
7574587 DeWitt, Jr. et al. Aug 2009 B2
7577951 Partamian et al. Aug 2009 B2
7581218 Johnson Aug 2009 B2
7594219 Ramachandran et al. Sep 2009 B2
7620777 Dimpsey et al. Nov 2009 B2
7779394 Homing et al. Aug 2010 B2
7783886 Walmsley Aug 2010 B2
7895382 DeWitt, Jr. et al. Feb 2011 B2
7895473 Alexander, III et al. Feb 2011 B2
7902986 Takei Mar 2011 B2
7926041 Dimpsey et al. Apr 2011 B2
7937685 Weil et al. May 2011 B2
7937691 Dewitt, Jr. et al. May 2011 B2
7987453 Dewitt, Jr. et al. Jul 2011 B2
8042102 Dewitt, Jr. et al. Oct 2011 B2
8070009 McKenzie et al. Dec 2011 B2
8135915 Dimpsey et al. Mar 2012 B2
8141099 Dewitt, Jr. et al. Mar 2012 B2
8171457 Dimpsey et al. May 2012 B2
8191049 Levine et al. May 2012 B2
8255880 Dewitt, Jr. et al. Aug 2012 B2
8381037 DeWitt, Jr. et al. Feb 2013 B2
20010014905 Onodera Aug 2001 A1
20020073406 Gove Jun 2002 A1
20020124161 Moyer et al. Sep 2002 A1
20020199179 Lavery et al. Dec 2002 A1
20030005422 Kosche et al. Jan 2003 A1
20030040955 Anaya et al. Feb 2003 A1
20030061471 Matsuo Mar 2003 A1
20030066055 Spivey Apr 2003 A1
20030115580 Arai et al. Jun 2003 A1
20030126590 Burrows et al. Jul 2003 A1
20030131343 French et al. Jul 2003 A1
20030135719 DeWitt, Jr. et al. Jul 2003 A1
20030135720 DeWitt, Jr. et al. Jul 2003 A1
20040003381 Suzuki et al. Jan 2004 A1
20040006546 Wedlake et al. Jan 2004 A1
20040030870 Buser Feb 2004 A1
20040128651 Lau Jul 2004 A1
20040139246 Arimilli et al. Jul 2004 A1
20040236993 Adkisson et al. Nov 2004 A1
20050071822 DeWitt, Jr. et al. Mar 2005 A1
20050081019 DeWitt, Jr. et al. Apr 2005 A1
20050081107 DeWitt, Jr. et al. Apr 2005 A1
20050091456 Huck Apr 2005 A1
20050102493 DeWitt, Jr. et al. May 2005 A1
20050155020 DeWitt, Jr. et al. Jul 2005 A1
20050155021 DeWitt, Jr. et al. Jul 2005 A1
20050155025 DeWitt, Jr. et al. Jul 2005 A1
20050155026 DeWitt, Jr. et al. Jul 2005 A1
20050155030 DeWitt, Jr. et al. Jul 2005 A1
20050210450 Dimpsey et al. Sep 2005 A1
20060090063 Theis Apr 2006 A1
20080088609 Chou et al. Apr 2008 A1
20080141005 Dewitt, Jr. et al. Jun 2008 A1
20080235495 DeWitt et al. Sep 2008 A1
20090287729 Chen et al. Nov 2009 A1
20090300587 Zheng et al. Dec 2009 A1
20110105970 Gainer, Jr. May 2011 A1
20110106994 DeWitt, Jr. et al. May 2011 A1
Foreign Referenced Citations (7)
Number Date Country
1164475 Dec 2001 EP
10083284 Mar 1998 JP
10260820 Sep 1998 JP
2000029731 Jan 2000 JP
2000347863 Dec 2000 JP
406239 Sep 2000 TW
457432 Oct 2001 TW
Non-Patent Literature Citations (168)
Entry
Sriram Vajapeyam, Improving Superscalar Instruction Dispatch and Issue by Exploiting Dynamic Code Sequences, 1997.
USPTO Final Office Action dated Jun. 10, 2013 regarding U.S. Appl. No. 13/004,153, 7 pages.
USPTO non-final office action dated Jun. 17, 2013 regarding U.S. Appl. No. 12/021,425, 46 pages.
Office Action, dated May 27, 2011, regarding U.S. Appl. No. 13/004,153, 23 pages.
Final Office Action, dated Oct. 20, 2011, regarding U.S. Appl. No. 13/004,153, 10 pages.
Office Action, dated Feb. 25, 2013, regarding U.S. Appl. No. 13/004,153, 45 pages.
USPTO Office Action dated Jun. 5, 2006 regarding U.S. Appl. No. 10/675,872, 16 pages.
USPTO Final Office Action dated Nov. 3, 2006 regarding U.S. Appl. No. 10/675,872, 19 pages.
USPTO Office Action dated Jul. 13, 2007 regarding U.S. Appl. No. 10/675,872, 22 pages.
USPTO Notice of Allowance dated Jan. 2, 2008 regarding U.S. Appl. No. 10/675,872, 6 pages.
USPTO Supplemental Notice of Allowance dated Mar. 19, 2008 regarding U.S. Appl. No. 10/675,872, 6 pages.
USPTO Office Action dated Apr. 20, 2007 regarding U.S. Appl. No. 10/806,917, 45 pages.
USPTO Final Office Action dated Oct. 4, 2007 regarding U.S. Appl. No. 10/806,917, 18 pages.
USPTO Notice of Allowance dated May 1, 2008 regarding U.S. Appl. No. 10/806,917, 9 pages.
Aho et al., “Compilers: Principles, Techniques, and Tools”, published by Addison-Wesley, Mar. 1988, pp. 488-497.
Ammons et al., “Exploiting Hardware Performance Counters with Flow and Context Sensitive Profiling”, ACM SIGPLAN Notices, vol. 32, Iss.5, May 1997, pp. 85-96.
Armand et al., “Multi-threaded Processes in Chorus/MIX,” Proceedings of EEUG Spring 1990 Conference, Apr. 1990, pp. 1-16.
Briggs et al., “Synchronization, Coherence, and Event Ordering in Multiprocessors,” Computer, vol. 21, Issue 2, Feb. 1988, pp. 9-21.
“Cache Miss Director—A Means of Prefetching Cache Missed Lines”, IBM Technical Disclosure Bulletin, vol. 25, Iss.3A, Aug. 1982, p. 1286.
Cai, “Architectural and Multiprocessor Design Verification of the PowerPC 604 Data Cache,” Conference Proceedings of the 1995 IEEE Fourteenth Annual International Phoenix Conference on Computers and Communications, Mar. 1995, pp. 383-388.
Carey et al., “The Architecture of the EXODUS Extensible DBMS”, 1986 IEEE, ACM Digital Library, pp. 52-65.
Chang et al., “Using Profile Information to assist Classic Code Optimizations”, Software—Practice and Experience, vol. 21, Iss.12, Dec. 1991, pp. 1301-1321.
Cohen et al., "Hardware-Assisted Characterization of NAS Benchmarks", Cluster Computing, vol. 4, Iss.3, Jul. 2001, pp. 189-196.
Conte et al., “Accurate and Practical Profile-Driven Compilation Using the Profile Buffer”, Proceedings of the 29th annual ACM/IEEE international symposium on Microarchitecture, Dec. 1996, pp. 36-45.
Conte et al., “Using Branch Handling Hardware to Support Profile-Driven Optimization”, Proceedings of the 27th annual international symposium on Microarchitecture, Nov./Dec. 1994, pp. 12-21.
"CPU cache," Wikipedia definition, article dated Oct. 2006, 14 pages, accessed Nov. 1, 2006 http://en.wikipedia.org/wiki/CPU_cache.
Schulz, “EDG Testbed Experience,” Oct. 2002, pp. 1-27, accessed Aug. 27, 2012 http://conferences.fnal.gov/lccws/papers2/tue/edgtestbmarkusfinal.pdf.
"Enable debuggers as an objective performance measurement tool for software development cost reduction", IBM Research Disclosure 444188, Apr. 2001, pp. 686-688.
Fisher, “Trace Scheduling: A Technique for Global Microcode Compaction”, IEEE Transactions on Computers, vol. C-30, No. 7, Jul. 1981, pp. 478-490.
Grunwald et al., “Whole-Program Optimization for Time and Space Efficient Threads”, ASPLOS-VII Proceedings—Seventh International Conference on Architectural Support for Programming Languages and Operating Systems, Cambridge, MA, Oct. 1-5, 1996, 10 pages.
"Hardware Cycle Based Memory Residency," IBM, May 2003, ip.com, IPCOM000012728D, 3 pages.
Hyde “4.5 Decoding and Executing Instructions: Random Logic Versus Microcode”, The Art of Assembly Language, Copyright 2001, pp. 247-248.
Inoue “Digital mobile communication system designed for nationwide police activities—WIDE system”, 30th Annual 1995, International Carnahan Conference on Security Technology, Oct. 1996, pp. 33-36 (Abstract).
“Intel IA-64 Architecture Software Developer's Manual vol. 4: Itanium Processor Programmer's Guide”, Intel, Document No. 245320-002, Jul. 2000, 110 pages.
Iwasawa, "Parallelization Method of Fortran DO Loops by Parallelizing Assist System", Joho Shori Gakkai Ronbunshi (Transactions of Information Processing Society of Japan), vol. 36, Iss.8, Aug. 1995, pp. 1995-2006.
"JavaServer Pages", Wikipedia, 7 pages, accessed Jan. 24, 2006, http://en.wikipedia.org/wiki/JavaServer_Pages.
Jya, “Software Design of A UNIX-like Kernel”, eThesys, accessed Jun. 7, 2010, 4 pages.
Kikuchi, “Parallelization Assist System”, Joho Shori, vol. 34, Iss.9, Sep. 1993, pp. 1158-1169.
Kistler et al., “Continuous Program Optimization: A Case Study,” ACM Transactions on Programming Languages and Systems, vol. 25, No. 4, Jul. 2003, pp. 500-548.
Mano, “Ch.11 Input-Output Organization”, Computer System Architecture, Copyright 1982, pp. 434-443.
Merten et al., “A Hardware-Driven Profiling Scheme for Identifying Program Hot Spots to Support Runtime Optimization”, Proceedings of the 26th International Symposium on Computer Architecture, May 1999, pp. 136-147.
"Method for the dynamic prediction of nonsequential memory accesses", ip.com, IPCOM000009888D, Sep. 2002, 4 pages.
Ramirez et al., “The Effect of Code Reordering on Branch Prediction”, Proceedings of the 2000 International Conference on Parallel Architectures and Compilation Techniques, Oct. 2000, pp. 189-198.
Rothman “Analysis of Shared Memory Misses and Reference Patterns”, Proceedings of the 2000 IEEE International Conference on Computer Design, Sep. 2000, pp. 187-198.
Santhanam et al., “Software Verification Tools Assessment Study”, Department of Transportation: Federal Aviation Administration Technical Report, Report No. DOT/FAA/AR-06/54, Jun. 2007, 139 pages.
Schmidt et al., “Profile-directed restructuring of operating system code”, IBM Systems Journal, vol. 37, No. 2, Apr. 1998, pp. 270-297.
Yang et al., “Improving Performance by Branch Reordering”, Proceedings of the ACM SIGPLAN 1998 Conference on Programming Language Design and Implementation, Jun. 1998, pp. 130-141.
Shye et al., “Code Coverage Testing Using Hardware Performance Monitoring Support”, Proceedings of the Sixth International Symposium on Automated Analysis-Driven Debugging, Sep. 2005, 5 pages.
Soffa et al., “Exploiting Hardware Advances for Software Testing and Debugging (NIER Track)”, Proceedings of the 33rd International Conference on Software Engineering, May 2011, 4 pages.
Stolicny et al., "Alpha 21164 Manufacturing Test Development and Coverage Analysis," IEEE Design & Test of Computers, vol. 15, Issue 3, Jul./Sep. 1998, pp. 98-104.
Talla et al., “Evaluating Signal Processing and Multimedia Applications on SIMD, VLIW, and Super Scalar Architectures”, Proceedings of the International Conference on Computer Design, Sep. 2000, pp. 163-172.
Talla et al., “Execution Characteristics of Multimedia Applications on a Pentium II Processor”, Proceedings of the 19th IEEE International Performance, Computing, and Communications Conference, Feb. 2000, pp. 516-524.
Tanenbaum, “1.4 Hardware, Software, and Multilevel Machines”, Structured Computer Organization, Copyright 1984, pp. 10-12.
"Interrupt," Wikipedia definition, article undated, last modified Aug. 8, 2012, 7 pages, accessed Aug. 27, 2012 http://en.wikipedia.org/wiki/Interrupt.
Torrellas, “False Sharing and Spatial Locality in Multiprocessor Caches,” IEEE Transaction on Computers, vol. 43, No. 6, Jun. 1994, pp. 651-662.
“Intel Architecture Software Developer's Manual,” vol. 3, System Programming, Appendix A: Performance-Monitoring Events, Jan. 1999, 25 pages.
USPTO notice of allowance dated Sep. 4, 2012 regarding U.S. Appl. No. 12/021,425, 5 Pages.
Saltz et al., "Run-Time Scheduling and Execution of Loops on Message Passing Machines," Journal of Parallel and Distributed Computing, vol. 8, Issue 4, Apr. 1990, pp. 303-312.
"Tool to Facilitate Testing of Software to Insure Compatibility," IBM Technical Disclosure Bulletin, vol. 30, Issue 11, Apr. 1988, pp. 162,165.
Tran et al., "Student Paper: A Hardware-Assisted Tool for Fast, Full Code Coverage Analysis," 19th International Symposium on Software Reliability Engineering, Nov. 2008, pp. 321,322.
Zhou, “Using Coverage Information to Guide Test Case Selection in Adaptive Random Testing,” 2010 IEEE 34th Annual Computer Software and Applications Conference Workshops, Jul. 2010, pp. 208-213.
Short, “Embedded Microprocessor Systems Design: An Introduction Using the Intel 80C188EB,” Prentice Hall, Inc.: 1998, p. 761.
USPTO notice of allowance dated Oct. 2, 2012 regarding U.S. Appl. No. 10/682,385, 30 Pages.
USPTO Final Office Action dated Aug. 23, 2007 regarding U.S. Appl. No. 10/674,642, 15 pages.
USPTO Office Action dated Mar. 27, 2008 regarding U.S. Appl. No. 10/674,642, 16 pages.
USPTO Office Action dated May 4, 2007 regarding U.S. Appl. No. 10/803,663, 36 pages.
USPTO Office Action dated Oct. 18, 2007 regarding U.S. Appl. No. 10/803,663, 24 pages.
USPTO Office Action dated May 2, 2008 regarding U.S. Appl. No. 10/803,663, 23 pages.
USPTO Notice of Allowance dated Mar. 18, 2011 regarding U.S. Appl. No. 10/803,663, 10 pages.
USPTO Office Action dated Feb. 8, 2007 regarding U.S. Appl. No. 10/808,716, 37 pages.
USPTO Final Office Action dated Jul. 24, 2007 regarding U.S. Appl. No. 10/808,716, 19 pages.
USPTO Final Office Action dated Nov. 16, 2007 regarding U.S. Appl. No. 10/808,716, 10 pages.
USPTO Final Office Action dated Apr. 29, 2008 regarding U.S. Appl. No. 10/808,716, 10 pages.
USPTO Notice of Allowance dated Sep. 12, 2008 regarding U.S. Appl. No. 10/808,716, 19 pages.
USPTO Office Action dated Feb. 26, 2007 regarding U.S. Appl. No. 10/675,721, 40 pages.
USPTO Final Office Action dated Sep. 25, 2007 regarding U.S. Appl. No. 10/675,721, 16 pages.
USPTO Office Action dated Apr. 9, 2008 regarding U.S. Appl. No. 10/675,721, 8 pages.
USPTO Final Office Action dated Oct. 3, 2008 regarding U.S. Appl. No. 10/675,721, 8 pages.
USPTO Office Action dated Jan. 30, 2006 regarding U.S. Appl. No. 10/675,751, 17 pages.
USPTO Final Office Action dated May 31, 2006 regarding U.S. Appl. No. 10/675,751, 15 pages.
USPTO Final Office Action dated Aug. 1, 2007 regarding U.S. Appl. No. 10/675,776, 23 pages.
USPTO Office Action dated Dec. 21, 2007 regarding U.S. Appl. No. 10/675,776, 17 pages.
USPTO Final Office Action dated Jun. 17, 2008 regarding U.S. Appl. No. 10/675,776, 17 pages.
USPTO Office Action dated Dec. 18, 2008 regarding U.S. Appl. No. 10/675,776, 22 pages.
USPTO Final Office Action dated Jun. 30, 2009 regarding U.S. Appl. No. 10/675,776, 22 pages.
USPTO Office Action dated Jan. 21, 2010 regarding U.S. Appl. No. 10/675,776, 13 pages.
USPTO Final Office Action dated Jun. 15, 2010 regarding U.S. Appl. No. 10/675,776, 9 pages.
USPTO Notice of Allowance dated Dec. 29, 2010 regarding U.S. Appl. No. 10/675,776, 10 pages.
USPTO Office Action dated Oct. 2, 2006 regarding U.S. Appl. No. 10/675,777, 25 pages.
USPTO Final Office Action dated Mar. 21, 2007 regarding U.S. Appl. No. 10/675,777, 20 pages.
USPTO Notice of Allowance dated Sep. 25, 2007 regarding U.S. Appl. No. 10/675,777, 14 pages.
USPTO Supplemental Notice of Allowance dated Jan. 9, 2008 regarding U.S. Appl. No. 10/675,777, 2 pages.
USPTO Supplemental Notice of Allowance dated May 8, 2008 regarding U.S. Appl. No. 10/675,777, 4 pages.
USPTO Office Action dated Feb. 3, 2006 regarding U.S. Appl. No. 10/675,778, 19 pages.
USPTO Final Office Action dated May 31, 2006 regarding U.S. Appl. No. 10/675,778, 19 pages.
USPTO Office Action dated May 14, 2007 regarding U.S. Appl. No. 10/675,783, 42 pages.
USPTO Final Office Action dated Oct. 26, 2007 regarding U.S. Appl. No. 10/675,783, 14 pages.
USPTO Office Action dated Apr. 17, 2008 regarding U.S. Appl. No. 10/675,783, 16 pages.
USPTO Final Office Action dated Oct. 30, 2008 regarding U.S. Appl. No. 10/675,783, 22 pages.
USPTO Office Action dated Jan. 27, 2006 regarding U.S. Appl. No. 10/675,831, 17 pages.
USPTO Final Office Action dated Jun. 2, 2006 regarding U.S. Appl. No. 10/675,831, 18 pages.
USPTO Office Action dated Oct. 5, 2006 regarding U.S. Appl. No. 10/806,866, 32 pages.
USPTO Final Office Action dated Mar. 29, 2007 regarding U.S. Appl. No. 10/806,866, 22 pages.
USPTO Office Action dated Aug. 24, 2007 regarding U.S. Appl. No. 10/806,866, 23 pages.
USPTO Final Office Action dated Feb. 11, 2008 regarding U.S. Appl. No. 10/806,866, 17 pages.
USPTO Office Action dated Jun. 25, 2008 regarding U.S. Appl. No. 10/806,866, 6 pages.
USPTO Notice of Allowance dated Dec. 16, 2008 regarding U.S. Appl. No. 10/806,866, 8 pages.
USPTO Office Action dated Sep. 19, 2006 regarding U.S. Appl. No. 10/806,871, 35 pages.
USPTO Final Office Action dated Mar. 22, 2007 regarding U.S. Appl. No. 10/806,871, 23 pages.
USPTO Office Action dated Aug. 27, 2007 regarding U.S. Appl. No. 10/806,871, 26 pages.
USPTO Final Office Action dated Apr. 9, 2008 regarding U.S. Appl. No. 10/806,871, 20 pages.
USPTO Examiner's Answer to Appeal Brief dated Oct. 31, 2008 regarding U.S. Appl. No. 10/806,871, 21 pages.
USPTO Notice of Allowance dated Sep. 7, 2011 regarding U.S. Appl. No. 10/806,871, 9 pages.
USPTO Office Action dated Aug. 24, 2007 regarding U.S. Appl. No. 10/757,171, 30 pages.
Response to Office Action dated Nov. 20, 2007 regarding U.S. Appl. No. 10/757,171, 17 pages.
USPTO Notice of Allowance dated Jan. 14, 2008 regarding U.S. Appl. No. 10/757,171, 7 pages.
Supplemental Response to Office Action dated Jan. 14, 2008 regarding U.S. Appl. No. 10/757,171, 7 pages.
Amendment after Notice of Allowance dated Feb. 11, 2008 regarding U.S. Appl. No. 10/757,171, 4 pages.
USPTO Office Action dated Nov. 28, 2005 regarding U.S. Appl. No. 10/757,186, 17 pages.
USPTO Final Office Action dated Mar. 12, 2007 regarding U.S. Appl. No. 10/757,186, 28 pages.
USPTO Office Action dated Aug. 1, 2007 regarding U.S. Appl. No. 10/757,186, 34 pages.
USPTO Final Office Action dated Nov. 29, 2007 regarding U.S. Appl. No. 10/757,186, 30 pages.
USPTO Notice of Allowance dated Oct. 20, 2010 regarding U.S. Appl. No. 10/757,186, 8 pages.
USPTO Office Action dated Dec. 8, 2005 regarding U.S. Appl. No. 10/757,192, 12 pages.
USPTO Final Office Action dated Jun. 16, 2006 regarding U.S. Appl. No. 10/757,192, 11 pages.
USPTO Notice of Allowance dated Nov. 3, 2006 regarding U.S. Appl. No. 10/757,192, 15 pages.
USPTO Office Action dated Sep. 13, 2006 regarding U.S. Appl. No. 10/757,197, 22 pages.
USPTO Final Office Action dated Mar. 2, 2007 regarding U.S. Appl. No. 10/757,197, 19 pages.
USPTO Office Action dated Aug. 17, 2007 regarding U.S. Appl. No. 10/757,197, 17 pages.
USPTO Final Office Action dated Jan. 30, 2008 regarding U.S. Appl. No. 10/757,197, 11 pages.
USPTO Notice of Allowance dated Oct. 15, 2008 regarding U.S. Appl. No. 10/757,197, 6 pages.
USPTO Office Action dated Dec. 2, 2005 regarding U.S. Appl. No. 10/757,198, 13 pages.
USPTO Office Action dated Mar. 14, 2006 regarding U.S. Appl. No. 10/757,198, 7 pages.
USPTO Office Action dated Feb. 28, 2006 regarding U.S. Appl. No. 10/757,227, 13 pages.
USPTO Notice of Allowance dated Jul. 28, 2006 regarding U.S. Appl. No. 10/757,227, 11 pages.
USPTO Supplemental Notice of Allowance dated Aug. 10, 2006 regarding U.S. Appl. No. 10/757,227, 4 pages.
USPTO Office Action dated Jan. 6, 2006 regarding U.S. Appl. No. 10/687,248, 24 pages.
USPTO Final Office Action dated May 8, 2006 regarding U.S. Appl. No. 10/687,248, 23 pages.
USPTO Office Action dated Oct. 6, 2006 regarding U.S. Appl. No. 10/757,250, 10 pages.
USPTO Final Office Action dated Jul. 5, 2007 regarding U.S. Appl. No. 10/757,250, 11 pages.
USPTO Notice of Allowance dated Nov. 20, 2007 regarding U.S. Appl. No. 10/757,250, 6 pages.
USPTO Office Action dated Sep. 2, 2011 regarding U.S. Appl. No. 12/185,254, 21 pages.
USPTO Notice of Allowance dated Dec. 27, 2011 regarding U.S. Appl. No. 12/185,254, 10 pages.
USPTO Notice of Allowance dated Jul. 1, 2009 regarding U.S. Appl. No. 12/431,389, 31 pages.
USPTO Office Action dated Apr. 13, 2012 regarding U.S. Appl. No. 12/021,425, 30 pages.
USPTO Office Action dated Feb. 26, 2007 regarding U.S. Appl. No. 10/682,437, 18 pages.
USPTO Final Office Action dated Aug. 15, 2007 regarding U.S. Appl. No. 10/757,250, 15 pages.
USPTO Notice of Allowance dated Apr. 30, 2008 regarding U.S. Appl. No. 10/757,250, 6 pages.
USPTO Office Action dated Jun. 15, 2011 regarding U.S. Appl. No. 12/122,558, 18 pages.
Response to Office Action dated Aug. 23, 2011 regarding U.S. Appl. No. 12/122,558, 19 pages.
USPTO Notice of Allowance dated Nov. 22, 2011 regarding U.S. Appl. No. 12/122,558, 13 pages.
USPTO Office Action dated May 4, 2006 regarding U.S. Appl. No. 10/806,576, 16 pages.
USPTO Final Office Action dated Oct. 24, 2006 regarding U.S. Appl. No. 10/806,576, 25 pages.
USPTO Office Action dated Jan. 17, 2006 regarding U.S. Appl. No. 10/674,604, 22 pages.
USPTO Final Office Action dated May 9, 2006 regarding U.S. Appl. No. 10/674,604, 21 pages.
USPTO Office Action dated Jan. 12, 2006 regarding U.S. Appl. No. 10/674,606, 12 pages.
USPTO Final Office Action dated Jun. 23, 2006 regarding U.S. Appl. No. 10/674,606, 17 pages.
USPTO Office Action dated May 15, 2006 regarding U.S. Appl. No. 10/806,633, 19 pages.
USPTO Final Office Action dated Dec. 20, 2006 regarding U.S. Appl. No. 10/806,633, 21 pages.
USPTO Notice of Allowance dated Apr. 20, 2007 regarding U.S. Appl. No. 10/806,633, 4 pages.
USPTO Supplemental Notice of Allowance dated Oct. 5, 2007 regarding U.S. Appl. No. 10/806,633, 7 pages.
USPTO Office Action dated Aug. 24, 2006 regarding U.S. Appl. No. 10/674,642, 24 pages.
USPTO Office Action dated Feb. 27, 2007 regarding U.S. Appl. No. 10/674,642, 16 pages.
Jeong et al., "Cost Sensitive Cache Replacement Algorithm," Second Workshop on Caching, Coherence and Consistency, Jun. 2002, 11 pages.
TW search report dated Apr. 19, 2010 regarding Taiwan invention 094100082A, filing date Jan. 3, 2005, 2 Pages.
TW search report dated Jun. 30, 2010 regarding Taiwan invention 094107739A, filing date Mar. 14, 2005, 2 Pages.
Notice of allowance dated Aug. 15, 2013 regarding U.S. Appl. No. 13/004,153, 8 pages.
Notice of allowance dated Oct. 3, 2013 regarding U.S. Appl. No. 13/004,153, 30 pages.
Related Publications (1)
Number Date Country
20120151465 A1 Jun 2012 US
Continuations (2)
Number Date Country
Parent 12122558 May 2008 US
Child 13347876 US
Parent 10757171 Jan 2004 US
Child 12122558 US