Methods and apparatus to dynamically insert prefetch instructions based on compiler and garbage collector analysis

Information

  • Patent Application
  • Publication Number
    20050138294
  • Date Filed
    December 19, 2003
  • Date Published
    June 23, 2005
Abstract
Methods and apparatus to insert prefetch instructions based on garbage collector analysis and compiler analysis are disclosed. In an example method, one or more batches of samples associated with cache misses from a performance monitoring unit in a processor system are received. One or more samples from the one or more batches of samples are selected based on delinquent information. A performance impact indicator associated with the one or more samples is generated. Based on the performance impact indicator, at least one of a garbage collector analysis and a compiler analysis is initiated to identify one or more delinquent paths. Based on the at least one of the garbage collector analysis and the compiler analysis, one or more prefetch points to insert prefetch instructions are identified.
Description
TECHNICAL FIELD

The present disclosure relates generally to compilers, and more particularly, to methods and apparatus to dynamically insert prefetch instructions based on compiler and garbage collector analysis.


BACKGROUND

In an effort to improve and optimize performance of processor systems, many different prefetching techniques (i.e., anticipating the need for data input requests) are used to remove or “hide” latency (i.e., delay) of processor systems. In particular, prefetch algorithms (i.e., pre-execution or pre-computation) are used to prefetch data for cache misses associated with data addresses that are difficult to predict at compile time. That is, a compiler first identifies the instructions needed to generate the data addresses of the cache misses, and then speculatively pre-executes those instructions.


In general, linked data structures (LDSs) are collections of software objects that are traversed using pointers found in the preceding object(s). Traversing an LDS may result in a high latency cache miss on each object in the LDS. Because an address of an object in the LDS is loaded from the preceding object before the object itself may be loaded, cache misses may be unavoidable. On the other hand, when accessing a data array structure where the address of subsequent objects may be calculated from the base of the data array structure, loops may be unrolled and techniques such as stride prefetching may be performed to avoid cache misses while iterating through the data array structure. These techniques assume that the address of subsequent objects may be calculated using the base of the data array structure. However, most LDSs do not have layout properties that may be exploited by stride prefetching techniques. Further, the gap between processor and memory speeds continues to increase. As a result, managed runtime environments (MRTEs) may encounter difficulties when attempting to insert prefetch instructions properly to reduce latencies while traversing LDSs.
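For illustration (this example is not part of the original disclosure), the following Java sketch contrasts the two access patterns; the type and method names are invented. In the array loop, the address of every element is computable from the array base, so techniques such as stride prefetching can run ahead of the loop; in the list loop, each address is loaded from the preceding node, so the cache misses serialize:

```java
// Array traversal vs. linked-data-structure (LDS) traversal.
final class TraversalExample {
    static final class Node {
        int payload;
        Node next;
        Node(int payload) { this.payload = payload; }
    }

    static long sumArray(int[] data) {
        long sum = 0;
        for (int i = 0; i < data.length; i++) {
            sum += data[i]; // address = base + 4 * i; stride-prefetchable
        }
        return sum;
    }

    static long sumList(Node head) {
        long sum = 0;
        for (Node n = head; n != null; n = n.next) {
            sum += n.payload; // address of n.next is known only after n is loaded
        }
        return sum;
    }
}
```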




BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram representation of an example prefetch instruction insertion system.



FIG. 2 is a flow diagram representation of example machine readable instructions that may be executed to cause the example prefetch instruction insertion system shown in FIG. 1 to insert prefetch instructions.



FIG. 3 is a flow diagram representation of example machine readable instructions that may be executed to cause the example prefetch instruction insertion system shown in FIG. 1 to perform post-processing.



FIG. 4 is a block diagram representation of an example delinquent type graph.



FIG. 5 is a block diagram representation of an example path graph.



FIG. 6 is a block diagram representation of an example allocation sequence of the path graph shown in FIG. 5.



FIG. 7 is a block diagram representation of an example processor system that may be used to implement the example prefetch instruction insertion system shown in FIG. 1.




DETAILED DESCRIPTION

Although the following discloses example systems including, among other components, software or firmware executed on hardware, it should be noted that such systems are merely illustrative and should not be considered as limiting. For example, it is contemplated that any or all of the disclosed hardware, software, and/or firmware components could be embodied exclusively in hardware, exclusively in software, exclusively in firmware or in some combination of hardware, software, and/or firmware.


In the example of FIG. 1, the illustrated prefetch instruction insertion system 100 includes a running application 110, a virtual machine (VM) class loader 115, a performance monitoring unit (PMU) 120, a VM 130, a garbage collector (GC) 140, and a compiler 150. The VM 130 includes a software statistical filtering unit (SSFU) 132 and an initial profitability analysis unit (IPAU) 134.


The running application 110 (also known as a mutator) includes one or more methods (i.e., functions, routines, or subroutines for manipulating data) compiled into instructions that a processor (e.g., the processor 1020 of FIG. 7) can execute. Persons of ordinary skill in the art will readily recognize that the VM class loader 115 is configured to maintain metadata structures such as a VTable (i.e., a virtual table), which includes information that uniquely identifies a class or type. A class provides a template for objects and describes how the objects are structured internally. An instance is an object created or instantiated from a class.


The PMU 120 is configured to identify samples associated with cache misses when the running application 110 is executed under the control of a managed runtime environment (MRTE), and to provide those samples to the VM 130. Each sample includes delinquent information such as an effective address causing a cache miss, an instruction pointer (IP) of an instruction (e.g., a load instruction) causing the cache miss, a thread that executed the instruction, and latency information. The effective address includes an address of data accessible by the load instruction. The IP includes an address of the instruction causing the cache miss. The latency information includes a number of cycle(s) required to service the cache miss.
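For illustration, a minimal Java sketch of a container for one such sample might resemble the following; the class and field names are hypothetical, as the disclosure names the contents of a sample but not a concrete layout:

```java
// One PMU sample, mirroring the delinquent information described above.
final class PmuSample {
    final long effectiveAddress;   // address of the data access that missed
    final long instructionPointer; // IP of the load instruction causing the miss
    final long threadId;           // thread that executed the instruction
    final int latencyCycles;       // cycles required to service the miss

    PmuSample(long effectiveAddress, long instructionPointer,
              long threadId, int latencyCycles) {
        this.effectiveAddress = effectiveAddress;
        this.instructionPointer = instructionPointer;
        this.threadId = threadId;
        this.latencyCycles = latencyCycles;
    }
}
```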


Based on the delinquent information, the SSFU 132 selects one or more of the samples provided by the PMU 120 that are likely to be useful in optimizing cache performance. Samples not selected by the SSFU 132 are no longer processed. Samples are selected based on (1) their impact on performance, and (2) whether the delinquent indicators associated with the samples are concise. For example, the SSFU 132 may identify a sample associated with high-latency misses such as addresses that miss in all cache levels because the impact on performance is higher. In another example, the SSFU 132 may take samples from a selected set of (1) load instructions causing a majority of cache misses (i.e., delinquent loads), (2) types of objects where a majority of cache misses occurs (i.e., delinquent types), and/or (3) threads that contribute to a majority of cache misses (i.e., delinquent threads). The SSFU 132 may also identify and reject one or more of the samples that have been incorrectly generated and passed to the VM 130 by performing statistical validation of the samples from the PMU 120.


Further, the SSFU 132 provides a sample rate to the PMU 120 at which the PMU 120 identifies samples associated with cache misses. That is, the SSFU 132 may initially provide a high sampling rate and gradually reduce the sampling rate to optimize the overhead of the prefetch instruction insertion system 100. In a particular example, the SSFU 132 may exponentially reduce the sampling rate. The SSFU 132 may also control the sampling rate based on the results of the prefetch instruction insertion system 100. For example, the SSFU 132 may increase the sampling rate if the PMU 120 fails to identify and provide samples within a pre-determined time period. Alternatively, the SSFU 132 may decrease the sampling rate if the PMU 120 identifies and provides a number of samples greater than a threshold.
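A minimal sketch of this control loop follows; the initial rate, decay factor, and thresholds are assumptions for illustration, not values from the disclosure:

```java
// Sampling-rate control: start high, decay exponentially, and adjust
// based on how many samples the PMU delivers and how quickly.
final class SamplingRateController {
    private double samplesPerSecond = 10_000.0; // assumed initial high rate
    private static final double DECAY = 0.5;    // assumed exponential decay factor
    private static final int TOO_MANY = 50_000; // assumed upper threshold per batch
    private static final double MIN_RATE = 10.0, MAX_RATE = 100_000.0;

    double nextRate(int samplesReceived, boolean receivedWithinDeadline) {
        if (!receivedWithinDeadline) {
            samplesPerSecond *= 2.0;   // no samples in time: raise the rate
        } else if (samplesReceived > TOO_MANY) {
            samplesPerSecond *= 0.25;  // too many samples: cut the rate sharply
        } else {
            samplesPerSecond *= DECAY; // normal case: gradual exponential reduction
        }
        samplesPerSecond = Math.max(MIN_RATE, Math.min(MAX_RATE, samplesPerSecond));
        return samplesPerSecond;
    }
}
```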


The IPAU 134 processes the filtered samples from the SSFU 132. In particular, the IPAU 134 compares the filtered samples to historical information and determines whether further analysis is profitable (i.e., whether data access can be further optimized). If data access may be further optimized by inserting prefetch instructions and/or improving data layout, the IPAU 134 determines the effect of such optimizations.


The IPAU 134 determines whether to perform further analysis if optimization has not been performed or if comparison of performance against existing optimization has already been performed. The IPAU 134 determines a performance impact (PI) indicator of data cache misses from a batch of samples. For example, the PI indicator is proportional to a constant and a number of samples contained in a particular batch (NS), and inversely proportional to an average number of delinquent threads in the batch of samples (NT), the sampling rate (SR), and time to collect the batch of samples (TIME) (i.e., PI=(constant*NS)/(NT*SR*TIME)). The average number of delinquent threads is an approximation of the contribution of a group of threads to the total number of cache misses. The IPAU 134 initiates further analysis in response to detecting a phase change. Phase change is detected if the number of threads generating samples is changed, if the number and/or composition of delinquent loads are changed, if the number and/or composition of delinquent types are changed, and/or if the PI indicator of a current batch of samples is greater than the PI indicator of a previous batch. Further analysis may not be necessary if optimization is performed, the number of delinquent types and delinquent loads is similar or reduced, and the current PI indicator is reduced relative to the previous PI indicator. The IPAU 134 is also configured to determine whether to change the sampling rate of the PMU 120 to reduce overhead or to get a sufficient number of samples.
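The PI formula above transcribes directly into code. A minimal sketch follows, with the constant left as a parameter because its value is unspecified in the text:

```java
// PI = (constant * NS) / (NT * SR * TIME), where NS is the number of samples
// in the batch, NT the average number of delinquent threads, SR the sampling
// rate, and TIME the time taken to collect the batch.
final class PerformanceImpact {
    static double indicator(double constant, int ns, double nt,
                            double sr, double timeSeconds) {
        return (constant * ns) / (nt * sr * timeSeconds);
    }

    // One of the phase-change triggers described above: the PI indicator of
    // the current batch exceeds the PI indicator of the previous batch.
    static boolean phaseChangeByPi(double currentPi, double previousPi) {
        return currentPi > previousPi;
    }
}
```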


The filtered samples are organized in delinquent regions (i.e., one or more memory regions with concentrated cache misses) and are provided to the GC 140. The GC 140 generates interesting paths of dependent objects by performing a heap traversal on the delinquent regions. The GC 140 identifies a set of delinquent types based on the filtered samples. The GC 140 also generates delinquent paths based on the delinquent types and information from the heap traversal. Based on the IP information associated with the filtered samples, the GC 140 generates a list of IPs associated with each delinquent path. The list of IPs may be used by the compiler 150 to identify points to insert prefetch instructions.
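For illustration, a simplified sketch of the path-building step is shown below. The object model is an assumed abstraction; a real garbage collector would walk its own heap representation rather than an interface:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Starting from objects of delinquent types, follow object references and
// record the type sequences (delinquent paths) that connect delinquent types.
final class DelinquentPathBuilder {
    interface HeapObject {
        String typeName();
        List<HeapObject> references();
    }

    static List<List<String>> buildPaths(List<HeapObject> roots,
                                         Set<String> delinquentTypes,
                                         int maxDepth) {
        List<List<String>> paths = new ArrayList<>();
        for (HeapObject root : roots) {
            if (delinquentTypes.contains(root.typeName())) {
                walk(root, delinquentTypes, new ArrayList<>(), paths, maxDepth);
            }
        }
        return paths;
    }

    private static void walk(HeapObject obj, Set<String> delinquentTypes,
                             List<String> path, List<List<String>> out, int depth) {
        path.add(obj.typeName());
        if (path.size() > 1) {
            out.add(new ArrayList<>(path)); // record each multi-type path
        }
        if (depth > 0) {
            for (HeapObject child : obj.references()) {
                if (delinquentTypes.contains(child.typeName())) {
                    walk(child, delinquentTypes, path, out, depth - 1);
                }
            }
        }
        path.remove(path.size() - 1);
    }
}
```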


The GC 140 is also configured to identify delta information between a base delinquent type and other delinquent types in a delinquent path based on the filtered samples and the delinquent paths. In particular, the GC 140 identifies a delta for each delinquent load associated with the base delinquent type of each delinquent path. The delta information includes offset information (i.e., a location within an object where the cache miss occurred). The compiler 150 may eliminate the need for pointer chasing across objects by inserting prefetch instructions at a relative offset from the base object in a path to hide the latency of the cache misses of the delinquent children further downstream on the path. Based on the delinquent paths and delta information, the GC 140 generates a delinquent path graph.
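A sketch of the delta extraction follows; addressing is assumed to be available as plain longs, and selecting a dominant delta from a histogram is an assumed concretization of the analysis described above:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.OptionalLong;

// For each delinquent load on a path, record the offset between the base
// object's address and the effective address that missed. Clustered deltas
// suggest a stable layout that a prefetch at (base + delta) can exploit,
// hiding the miss latency of children further down the path.
final class DeltaAnalysis {
    private final Map<Long, Integer> histogram = new HashMap<>(); // delta -> count

    void record(long baseObjectAddress, long missEffectiveAddress) {
        histogram.merge(missEffectiveAddress - baseObjectAddress, 1, Integer::sum);
    }

    // Most frequently observed delta; conceptually, the compiler would emit
    // prefetch(base + delta) at the point where the base object is available.
    OptionalLong dominantDelta() {
        return histogram.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(e -> OptionalLong.of(e.getKey()))
                .orElse(OptionalLong.empty());
    }
}
```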


The compiler 150 generates a recompilation plan based on the delinquent path graph generated by the GC 140 and/or the analysis performed by the IPAU 134. If the GC 140 fails to identify interesting paths in the delinquent regions, the compiler 150 may identify locations to insert prefetch instructions. For example, the compiler 150 may identify locations to insert prefetch instructions if a delinquent type has only a few delinquent loads.


A flow diagram 200 representing machine readable instructions that may be executed by a processor to insert prefetch instructions is illustrated in FIG. 2. Persons of ordinary skill in the art will appreciate that the instructions may be implemented in any of many different ways utilizing any of many different programming codes stored on any of many computer- or machine-readable media such as a volatile or non-volatile memory or other mass storage device (e.g., a floppy disk, a CD, and a DVD). For example, the machine readable instructions may be embodied in a machine-readable medium such as an erasable programmable read only memory (EPROM), a read only memory (ROM), a random access memory (RAM), magnetic media, optical media, and/or any other suitable type of medium. Alternatively, the machine readable instructions may be embodied in a programmable gate array and/or an application specific integrated circuit (ASIC). Further, although a particular order of actions is illustrated in FIG. 2, persons of ordinary skill in the art will appreciate that these actions can be performed in other temporal sequences. Again, the flow diagram 200 is merely provided as an example of one way to insert prefetch instructions.


The flow diagram 200 begins with the PMU 120 identifying and providing a batch of samples associated with cache misses of an MRTE system to the VM 130 at a sample rate determined by the SSFU 132 (block 210). The VM 130 identifies delinquent objects from the batch of samples (block 220). The VM 130 also identifies delinquent information associated with each sample from the batch of samples (block 230). As noted above, each batch of samples includes delinquent information such as delinquent loads, delinquent types, delinquent regions, and/or delinquent threads. The delinquent loads include instruction addresses where most cache misses occur. Delinquent types include types of objects where most cache misses occur. Delinquent regions include memory region buckets where most cache misses occur. Each memory region bucket includes a number of contiguous memory addresses (e.g., one megabyte of memory (1 MB)). Delinquent threads include threads that generate the most cache misses.
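As a sketch of this identification step, the batch may be bucketed by region (and analogously by IP, type, or thread), keeping the buckets that account for a majority of the misses. The 1 MB region size comes from the text; the coverage fraction and the reuse of the PmuSample class from the earlier sketch are assumptions:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Bucket samples into 1 MB memory regions and select the delinquent regions,
// i.e., the smallest set of buckets covering a given fraction of all misses.
final class DelinquentInfo {
    static final long REGION_SIZE = 1L << 20; // 1 MB buckets, per the text

    static Map<Long, Integer> missesByRegion(List<PmuSample> batch) {
        Map<Long, Integer> counts = new HashMap<>();
        for (PmuSample s : batch) {
            counts.merge(s.effectiveAddress / REGION_SIZE, 1, Integer::sum);
        }
        return counts;
    }

    static Set<Long> delinquentBuckets(Map<Long, Integer> counts, double fraction) {
        int total = counts.values().stream().mapToInt(Integer::intValue).sum();
        List<Map.Entry<Long, Integer>> sorted = new ArrayList<>(counts.entrySet());
        sorted.sort(Map.Entry.<Long, Integer>comparingByValue().reversed());
        Set<Long> selected = new HashSet<>();
        int covered = 0;
        for (Map.Entry<Long, Integer> e : sorted) {
            if (covered >= total * fraction) break;
            selected.add(e.getKey());
            covered += e.getValue();
        }
        return selected;
    }
}
```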


The prefetch instruction insertion system 100 provides dynamic recompilation of the running application 110. Some delinquent loads may appear in methods that are recompiled versions of older methods. Thus, the VM 130 remaps the delinquent information (block 240). In particular, the older versions of delinquent loads are remapped to the latest version of the delinquent loads.


To reduce errors in identifying delinquent objects and to concentrate on samples with greater delinquent indicators, the SSFU 132 filters the batch of samples from the PMU 120 (block 250). For example, the SSFU 132 may discard samples with IPs that are not associated with any delinquent loads to concentrate on delinquent loads. To identify objects contributing to a significant number of cache misses, the SSFU 132 may discard samples with types that are not delinquent. The SSFU 132 may discard samples with types that are not important in the delinquent loads. The SSFU 132 may also set aside samples with addresses that are not included in a delinquent region but that are associated with dominant loads and/or dominant types; such samples are analyzed by the compiler 150. These samples are not analyzed by the GC 140 because the GC 140 may not be configured to identify connections between objects in a low-cache-miss region. Further, the SSFU 132 may discard samples of threads that do not generate a significant number of important cache misses and/or samples that do not point to areas in the heap.
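A sketch of the keep/discard rules follows, reusing the PmuSample and DelinquentInfo sketches above; the routing of out-of-region samples with dominant loads or types to the compiler 150 is omitted for brevity:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Keep only samples whose IP is a delinquent load, whose address falls in a
// delinquent region, and whose thread is delinquent; discard the rest.
final class SampleFilter {
    static List<PmuSample> filter(List<PmuSample> batch,
                                  Set<Long> delinquentLoadIps,
                                  Set<Long> delinquentRegions,
                                  Set<Long> delinquentThreads) {
        List<PmuSample> kept = new ArrayList<>();
        for (PmuSample s : batch) {
            if (!delinquentLoadIps.contains(s.instructionPointer)) continue;
            if (!delinquentRegions.contains(
                    s.effectiveAddress / DelinquentInfo.REGION_SIZE)) continue;
            if (!delinquentThreads.contains(s.threadId)) continue;
            kept.add(s);
        }
        return kept;
    }
}
```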


The IPAU 134 determines whether further analysis is necessary as described in detail above (block 255). For example, the IPAU 134 may generate the PI indicator associated with the batch of samples from the PMU 120. If further analysis is not necessary, the PMU 120 continues to identify and provide samples to the VM 130. Otherwise, if further analysis is necessary (i.e., a phase change), the IPAU 134 determines whether the cache misses are in the delinquent regions (block 257). If the cache misses are in the delinquent regions, the IPAU 134 identifies cache misses in the delinquent regions (block 260) and initiates the GC 140 to identify delinquent paths and deltas associated with the delinquent regions as described in detail above (block 280). Referring back to block 257, if the cache misses are outside the delinquent regions, the IPAU 134 identifies types outside of the delinquent regions (block 270) and initiates the compiler 150 to identify delinquent paths as described in detail above (block 290). Accordingly, the compiler 150 performs post-processing 300 (i.e., generates a recompilation plan to insert prefetch instructions) of the batch of samples from the PMU 120 as shown in FIG. 3.


To determine whether prefetch instructions will be effective, the compiler 150 confirms paths (block 310). Each path generated by the GC 140 includes a sequence of types, and each type includes IPs. However, the type sequences generated by the GC 140 may not contain information indicating when the types were accessed. Typically, the types are accessed in rapid succession. If a prefetch instruction is issued when the base type is retrieved, on behalf of the child objects along a path, the cache lines brought in by the early prefetch instructions may not yet be discarded by other cache misses by the time the child objects are accessed.


The compiler 150 matches the IPs for the types in a path against a dynamic call graph, and estimates execution counts along the path. That is, the compiler 150 monitors for high-trip-count loops or sections of code that are likely to discard cache lines brought in by prefetch instructions. In particular, the compiler 150 identifies the methods associated with the IPs of a path, and uses the execution counts of the code along all the paths of those methods. If the execution counts are even (i.e., no high-trip-count loops are detected), the compiler 150 confirms the path. For example, assume that type A points to type B, both types A and B cause significant cache misses, two arrays (i.e., array A and array B) point to objects of types A and B, respectively, and two separate loops access objects of types A and B through arrays A and B, respectively. The GC 140 may generate a path of type A pointing to type B because the cache misses associated with that particular path are more important than the cache misses produced in the two arrays. However, the first loop exhibits cache misses of objects of type A while the second loop exhibits misses of objects of type B. Therefore, a prefetch instruction inserted in the first loop from type A on behalf of type B may be ineffective. Thus, the compiler 150 may not confirm this particular path because two high-trip-count loops with high execution frequency independently access objects of type A and type B.


The compiler 150 may also confirm the path if the IPs are all contained in a loop because a prefetch instruction may be effective. Further, the compiler 150 may use the batch of samples from the PMU 120 to match against a dynamic call graph obtained by other means, for example, using the MRTE system services as persons of ordinary skill in the art will readily recognize. A dynamic call graph is a graph that expresses the methods called by a particular method; the graph is continuously updated at runtime (i.e., dynamic). The IP of each sample provided by the PMU 120 is searched in the dynamic call graph to identify the basic block of the method where the IP belongs. Basic block profile information obtained by other means is used to estimate the execution counts between two IP points. The profile information is analyzed along a path of two or more IP points, and the distribution of the profile information along the path may be used to estimate the benefit of a prefetch instruction. If the distribution is even, the compiler 150 confirms the path. However, if the distribution is uneven (i.e., high execution counts intermixed with low execution counts), a high-trip-count loop is being executed, which may throw off the prefetch instruction; thus, the compiler 150 rejects the path.
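A sketch of this evenness test follows; representing “even” as a bound on the max/min ratio of the per-segment execution counts is an assumed concretization, and the bound itself is illustrative:

```java
import java.util.List;

// Confirm a path only if the execution counts between consecutive IP points
// are evenly distributed (no high-trip-count loop between the points).
final class PathConfirmation {
    static boolean confirm(List<Long> countsBetweenIpPoints,
                           double maxUnevennessRatio) {
        if (countsBetweenIpPoints.isEmpty()) {
            return false; // no profile data: cannot confirm
        }
        long min = Long.MAX_VALUE, max = Long.MIN_VALUE;
        for (long c : countsBetweenIpPoints) {
            min = Math.min(min, c);
            max = Math.max(max, c);
        }
        if (min == 0) {
            return false; // a segment never executed: treat as uneven
        }
        // Even: confirm the path. Uneven (high counts intermixed with low
        // counts): a high-trip-count loop is likely, so reject the path.
        return (double) max / (double) min <= maxUnevennessRatio;
    }
}
```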


The analysis from the GC 140 and/or the compiler 150 (i.e., blocks 280 and/or 290, respectively, in FIG. 2) generates a series of delinquent paths and delta information. Accordingly, the compiler 150 generates a delinquent type graph to determine whether the series of delinquent paths share a common base type (i.e., to combine delinquent paths) (block 320). If the series of delinquent paths share a common base type and a prefetch point, the deltas associated with the series of delinquent paths may be combined to reduce redundant prefetching. In the example of FIG. 4, the analysis from the GC 140 and/or the compiler 150 generated three paths, generally shown as 410, 420, and 430. In particular, the first path 410 is from node 440 to array of long 442 via the field “keys” 452, the second path 420 is from node 440 to array of objects 444 via the field “branch” 454, and the third path 430 is from node 440 to array of objects 446 via the field “vals” 456. The compiler 150 combines the three paths 410, 420, and 430 to issue a prefetch instruction from node 440 for the three objects 442, 444, and 446 pointed to by the fields 452, 454, and 456. To combine deltas, the compiler 150 searches for matching base types that share similar delinquent loads. The matching base types are combined into a single type with several child types. The compiler 150 combines the deltas of the child types if a match is found. In particular, if the deltas indicate that the three child types are arranged consecutively, the compiler 150 may inject a single prefetch instruction for the shared cache lines. For example, the compiler 150 may match the IP associated with each of the base types in the three paths. If two base types in two paths share very similar delinquent loads, the compiler 150 may join or combine the two base types because the two base types are from the same code.
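A sketch of the combining step follows; the 64-byte cache line size is an assumption, and grouping paths by the name of their base type is a simplification of the IP-based matching described above:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Merge the delta sets of paths sharing a base type, then coalesce deltas
// that land on the same cache line so one prefetch covers all of them.
final class PathCombiner {
    static final long CACHE_LINE = 64; // assumed cache line size in bytes

    // Each entry pairs a path's base type with one child delta.
    static Map<String, Set<Long>> combine(List<Map.Entry<String, Long>> pathDeltas) {
        Map<String, Set<Long>> merged = new HashMap<>();
        for (Map.Entry<String, Long> e : pathDeltas) {
            merged.computeIfAbsent(e.getKey(), k -> new TreeSet<>()).add(e.getValue());
        }
        return merged;
    }

    // Deltas on the same cache line need only one prefetch instruction.
    static Set<Long> linesToPrefetch(Set<Long> deltas) {
        Set<Long> lines = new TreeSet<>();
        for (long d : deltas) {
            lines.add(Math.floorDiv(d, CACHE_LINE));
        }
        return lines;
    }
}
```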


Referring back to FIG. 3, the compiler 150 generates confidence estimators to provide greater accuracy in locating prefetch instructions because delta values may not always be accurate. For example, alignment of two objects may result in the child object being on different cache lines even if the two objects are separated by a fixed distance. In another example, delta values may be inaccurate with objects of variable size in between objects connected by a path. Confidence estimators may reduce cache pollution and provide more precise prefetches.


The compiler 150 combines independent confidence estimators to generate a global confidence estimator for cache lines including objects adjacent in a path graph. In the example of FIG. 5, the illustrated path graph 500 includes type “Item” 510, type “String” 520, 530, and type “Char array” 540, 550. Based on the types, the path graph 500 includes two common paths. In a first path 512, type “Item” 510 points to an instance of type “String” 520 via field “name” 560, which in turn, points to an instance of type “Char array” 540 via field “vals” 580. In a second path 514, type “Item” 510 points to another instance of type “String” 530 via field “brandInfo” 570, which in turn, points to another instance of type “Char array” 550 via field “vals” 590. However, “Char array” 540 and “Char array” 550 may be different sizes.


In the example of FIG. 6, the illustrated allocation sequence 600 is based on the path graph 500. As illustrated in the allocation sequence 600 of FIG. 6, there is relatively little confidence in the delta distance between the base path (i.e., type “Item” 510) and the objects along either the first path 512 or the second path 514 of FIG. 5. However, if the compiler 150 combines the confidence estimator of the first and second paths 512, 514, the compiler 150 may capture objects of either the first or second paths 512, 514 by prefetching selected cache lines that otherwise would have a lower probability of containing an object of either the first or second paths 512, 514 individually. As noted above, “Char array” 540 and “Char array” 550 may vary in size. If the first path 512 and the second path 514 are considered independently, the varying size of the “Char array” 540 reduces the accuracy of the deltas between the type “Item” 510, “String” 530, and “Char array” 550. However, if the first and second paths 512, 514 are combined as shown in FIG. 6 and the deltas are combined, the resulting prefetch may help along one of the first and second paths 512, 514.


The compiler 150 combines the first and second paths 512, 514 by analyzing the corresponding delta histograms and assigning one or more confidence values to each of the cache lines derived from the individual deltas. The compiler 150 sums the confidence values for each of the cache lines to generate the global confidence estimator. If the global confidence estimator for a cache line is greater than a threshold, the compiler 150 prefetches that cache line.
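A sketch of this summation follows; the per-path confidence maps are assumed to have been derived from the delta histograms, and the threshold is illustrative:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Sum per-path confidence values per cache line to form the global
// confidence estimator, then prefetch the lines exceeding a threshold.
final class ConfidenceEstimator {
    static Set<Long> linesWorthPrefetching(List<Map<Long, Double>> perPathConfidence,
                                           double threshold) {
        Map<Long, Double> global = new HashMap<>();
        for (Map<Long, Double> path : perPathConfidence) {
            for (Map.Entry<Long, Double> e : path.entrySet()) {
                global.merge(e.getKey(), e.getValue(), Double::sum);
            }
        }
        Set<Long> lines = new TreeSet<>();
        for (Map.Entry<Long, Double> e : global.entrySet()) {
            if (e.getValue() > threshold) {
                lines.add(e.getKey());
            }
        }
        return lines;
    }
}
```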



FIG. 7 is a block diagram of an example processor system 1000 adapted to implement the methods and apparatus disclosed herein. The processor system 1000 may be a desktop computer, a laptop computer, a notebook computer, a personal digital assistant (PDA), a server, an Internet appliance or any other type of computing device.


The processor system 1000 illustrated in FIG. 7 provides memory and I/O management functions, as well as a plurality of general purpose and/or special purpose registers, timers, etc. that are accessible or used by a processor 1020. The processor 1020 is implemented using one or more processors. For example, the processor 1020 may be implemented using one or more of the Intel® Pentium® technology, the Intel® Itanium® technology, Intel® Centrino™ technology, and/or the Intel® XScale® technology. In the alternative, other processing technology may be used to implement the processor 1020. The processor 1020 includes a cache 1022, which may be implemented using a first-level unified cache (L1), a second-level unified cache (L2), a third-level unified cache (L3), and/or any other suitable structures to store data as persons of ordinary skill in the art will readily recognize.


As is conventional, the volatile memory controller 1036 and the non-volatile memory controller 1038 perform functions that enable the processor 1020 to access and communicate with a main memory 1030 including a volatile memory 1032 and a non-volatile memory 1034 via a bus 1040. The volatile memory 1032 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM), and/or any other type of random access memory device. The non-volatile memory 1034 may be implemented using flash memory, Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), and/or any other desired type of memory device.


The processor system 1000 also includes an interface circuit 1050 that is coupled to the bus 1040. The interface circuit 1050 may be implemented using any type of well known interface standard such as an Ethernet interface, a universal serial bus (USB), a third generation input/output interface (3GIO) interface, and/or any other suitable type of interface.


One or more input devices 1060 are connected to the interface circuit 1050. The input device(s) 1060 permit a user to enter data and commands into the processor 1020. For example, the input device(s) 1060 may be implemented by a keyboard, a mouse, a touch-sensitive display, a track pad, a track ball, an isopoint, and/or a voice recognition system.


One or more output devices 1070 are also connected to the interface circuit 1050. For example, the output device(s) 1070 may be implemented by display devices (e.g., a light emitting diode (LED) display, a liquid crystal display (LCD), or a cathode ray tube (CRT) display), a printer, and/or speakers. The interface circuit 1050, thus, typically includes, among other things, a graphics driver card.


The processor system 1000 also includes one or more mass storage devices 1080 to store software and data. Examples of such mass storage device(s) 1080 include floppy disks and drives, hard disk drives, compact disks and drives, and digital versatile disks (DVD) and drives.


The interface circuit 1050 also includes a communication device such as a modem or a network interface card to facilitate exchange of data with external computers via a network. The communication link between the processor system 1000 and the network may be any type of network connection such as an Ethernet connection, a digital subscriber line (DSL), a telephone line, a cellular telephone system, a coaxial cable, etc.


Access to the input device(s) 1060, the output device(s) 1070, the mass storage device(s) 1080 and/or the network is typically controlled by the I/O controller 1014 in a conventional manner. In particular, the I/O controller 1014 performs functions that enable the processor 1020 to communicate with the input device(s) 1060, the output device(s) 1070, the mass storage device(s) 1080 and/or the network via the bus 1040 and the interface circuit 1050.


While the components shown in FIG. 7 are depicted as separate blocks within the processor system 1000, the functions performed by some of these blocks may be integrated within a single semiconductor circuit or may be implemented using two or more separate integrated circuits. For example, although the I/O controller 1014, the volatile memory controller 1036, and the non-volatile memory controller 1038 are depicted as separate blocks, persons of ordinary skill in the art will readily appreciate that the I/O controller 1014, the volatile memory controller 1036, and the non-volatile memory controller 1038 may be integrated within a single semiconductor circuit.


Although certain example methods, apparatus, and articles of manufacture have been described herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus, and articles of manufacture fairly falling within the scope of the appended claims either literally or under the doctrine of equivalents.

Claims
  • 1. A method comprising: receiving one or more batches of samples associated with cache misses from a performance monitoring unit in a processor system; selecting one or more samples from the one or more batches of samples based on delinquent information; generating a performance impact indicator associated with the one or more samples; initiating at least one of a garbage collector analysis and a compiler analysis based on the performance impact indicator to identify one or more delinquent paths; and identifying one or more prefetch points to insert prefetch instructions based on the at least one of the garbage collector analysis and the compiler analysis.
  • 2. A method as defined in claim 1, wherein selecting the at least one of the one or more samples based on delinquent information comprises selecting the at least one of the one or more samples based on at least one of a delinquent region, a delinquent type, a delinquent load, and a delinquent thread.
  • 3. A method as defined in claim 1, wherein initiating the at least one of garbage collector analysis and compiler analysis based on the performance impact indicator comprises causing a garbage collector to identify delinquent paths and deltas associated with one or more delinquent regions in response to identifying cache misses in the one or more delinquent regions.
  • 4. A method as defined in claim 1, wherein initiating the at least one of garbage collector analysis and compiler analysis based on the performance impact indicator comprises causing a compiler to identify delinquent paths in response to identifying one or more types outside of one or more delinquent regions.
  • 5. A method as defined in claim 1, wherein identifying the one or more prefetch points to insert prefetch instructions based on the at least one of the garbage collector analysis and the compiler analysis comprises generating a global confidence estimator associated with the one or more prefetch points.
  • 6. A method as defined in claim 1, wherein identifying the one or more prefetch points to insert prefetch instructions based on the at least one of the garbage collector analysis and the compiler analysis comprises confirming the one or more delinquent paths.
  • 7. A method as defined in claim 1 further comprising selecting a sampling rate for receiving the one or more batches of samples from the performance monitoring unit in a processor system.
  • 8. A machine readable medium storing instructions, which when executed, cause a machine to: receive one or more batches of samples associated with cache misses from a performance monitoring unit in a processor system; select one or more samples from the one or more batches of samples based on delinquent information; generate a performance impact indicator associated with the one or more samples; initiate at least one of a garbage collector analysis and a compiler analysis based on the performance impact indicator to identify one or more delinquent paths; and identify one or more prefetch points to insert prefetch instructions based on the at least one of the garbage collector analysis and the compiler analysis.
  • 9. A machine readable medium as defined in claim 8, wherein the instructions, when executed, cause the machine to select the at least one of the one or more samples based on delinquent information by selecting the at least one of the one or more samples based on at least one of a delinquent region, a delinquent type, a delinquent load, and a delinquent thread.
  • 10. A machine readable medium as defined in claim 8, wherein the instructions, when executed, cause the machine to initiate the at least one of garbage collector analysis and compiler analysis based on the performance impact indicator by initiating a garbage collector to identify delinquent paths and deltas associated with one or more delinquent regions in response to identifying cache misses in the one or more delinquent regions.
  • 11. A machine readable medium as defined in claim 8, wherein the instructions, when executed, cause the machine to initiate the at least one of garbage collector analysis and compiler analysis based on the performance impact indicator by causing a compiler to identify delinquent paths in response to identifying one or more types outside of one or more delinquent regions.
  • 12. A machine readable medium as defined in claim 8, wherein the instructions, when executed, cause the machine to identify the one or more prefetch points to insert prefetch instructions based on the at least one of the garbage collector analysis and the compiler analysis by generating a global confidence estimator associated with the one or more prefetch points.
  • 13. A machine readable medium as defined in claim 8, wherein the instructions, when executed, cause the machine to identify the one or more prefetch points to insert prefetch instructions based on the at least one of the garbage collector analysis and the compiler analysis by confirming the one or more delinquent paths.
  • 14. A machine readable medium as defined in claim 8, wherein the instructions, when executed, cause the machine to select a sampling rate for receiving the one or more batches of samples from the performance monitoring unit in a processor system.
  • 15. A machine readable medium as defined in claim 8, wherein the machine readable medium comprises one of a programmable gate array, application specific integrated circuit, erasable programmable read only memory, read only memory, random access memory, magnetic media, and optical media.
  • 16. An apparatus comprising: a performance monitoring unit configured to provide one or more batches of samples associated with cache misses; a statistical filtering unit configured to identify and select one or more samples from the one or more batches of samples based on delinquent information; a garbage collector configured to identify one or more delinquent paths associated with one or more delinquent regions; a compiler configured to identify one or more delinquent paths associated with one or more types outside the one or more delinquent regions; and an initial profitability analyzing unit configured to initiate at least one of the garbage collector and the compiler based on a performance impact indicator to identify one or more delinquent paths, wherein the compiler identifies one or more prefetch points to insert prefetch instructions based on the at least one of the garbage collector analysis and the compiler analysis.
  • 17. An apparatus as defined in claim 16, wherein the delinquent information comprises at least one of a delinquent region, a delinquent type, a delinquent load, and a delinquent thread.
  • 18. An apparatus as defined in claim 16, wherein the garbage collector identifies delinquent paths and deltas associated with the one or more delinquent regions in response to identifying cache misses in the one or more delinquent regions.
  • 19. An apparatus as defined in claim 16, wherein the compiler identifies delinquent paths in response to identifying one or more types outside of one or more delinquent regions.
  • 20. An apparatus as defined in claim 16, wherein the compiler generates a global confidence estimator associated with the one or more prefetch points.
  • 21. An apparatus as defined in claim 16, wherein the compiler confirms the one or more delinquent paths.
  • 22. An apparatus as defined in claim 16, wherein the statistical filtering unit selects a sample rate for the performance monitoring unit to provide the one or more batches of samples associated with cache misses.
  • 23. A processor system comprising: a dynamic random access memory; and a processor operatively coupled to the dynamic random access memory, the processor being programmed to receive one or more batches of samples associated with cache misses from a performance monitoring unit in a processor system, to select one or more samples from the one or more batches of samples based on delinquent information, to generate a performance impact indicator associated with the one or more samples, to initiate at least one of a garbage collector analysis and a compiler analysis based on the performance impact indicator to identify one or more delinquent paths, and to identify one or more prefetch points to insert prefetch instructions based on the at least one of the garbage collector analysis and the compiler analysis.
  • 24. A processor system as defined in claim 23, wherein the processor is programmed to select the at least one of the one or more samples based on at least one of a delinquent region, a delinquent type, a delinquent load, and a delinquent thread.
  • 25. A processor system as defined in claim 23, wherein the processor is programmed to cause a garbage collector to identify delinquent paths and deltas associated with one or more delinquent regions in response to identifying cache misses in the one or more delinquent regions.
  • 26. A processor system as defined in claim 23, wherein the processor is programmed to cause a compiler to identify delinquent paths in response to identifying one or more types outside of one or more delinquent regions.
  • 27. A processor system as defined in claim 23, wherein the processor is programmed to generate a global confidence estimator associated with the one or more prefetch points.
  • 28. A processor system as defined in claim 23, wherein the processor is programmed to confirm the one or more delinquent paths.
  • 29. A processor system as defined in claim 23, wherein the processor is programmed to select a sampling rate to receive the one or more batches of samples from the performance monitoring unit in a processor system.