1. Field of the Invention
The present disclosure relates generally to an improved data processing system and in particular to a method and apparatus for processing data. Still more particularly, the present disclosure relates to a computer implemented method, apparatus, and computer program code for managing garbage collection in a data processing system.
2. Description of the Related Art
Garbage collection is a form of automatic memory management. A garbage collector or other process attempts to reclaim memory occupied by objects that will never again be accessed or modified by an application or other process.
Garbage collection frees a programmer from having to worry about releasing objects that are no longer needed when writing applications. Further, garbage collection may aid programmers in making programs more stable because it may prevent various runtime errors. Examples of these errors include dangling pointer bugs and double free bugs.
A dangling pointer bug may occur when a piece of memory is freed while pointers still point to that memory and one of those pointers is then used. A double free bug may occur when a region of memory has already been freed and a program attempts to free that region again. Memory leaks, which occur when a program fails to free memory that is no longer accessed, also may be reduced and/or eliminated through garbage collection.
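Both bug classes can be seen in a short example from a language with manual memory management. The following C fragment is a minimal sketch (the pointer names are illustrative only); the offending statements are left commented out because executing them is undefined behavior:

```c
#include <stdlib.h>

int main(void)
{
    int *data = malloc(sizeof *data);   /* manually allocated region */
    if (data == NULL)
        return 1;
    *data = 42;

    int *alias = data;                  /* second pointer to the same region */
    free(data);                         /* the region is released here */

    /* Dangling pointer bug: alias still points at the freed memory.       */
    /* printf("%d\n", *alias);             undefined behavior if executed  */

    /* Double free bug: the same region would be freed a second time.      */
    /* free(alias);                         undefined behavior if executed */

    (void)alias;
    return 0;
}
```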
Garbage collection may be used in various environments including in a Java Virtual Machine (JVM). Garbage collection also may be available with other environments including, for example, C and C++.
The different illustrative embodiments provide a computer implemented method, apparatus, and computer program product for managing garbage collection. Monitoring is performed for a garbage collection state in a virtual machine. Responsive to detecting the garbage collection state, a priority for a set of garbage collection threads is increased.
In another illustrative embodiment, a computer comprises a bus; a storage device connected to the bus, wherein program code is stored on the storage device; and a processor unit connected to the bus. The processor unit executes the program code to monitor for a garbage collection state within an execution environment and to increase a priority of a set of garbage collection threads in response to detecting the garbage collection state.
In still another illustrative embodiment, a computer program product for managing garbage collection comprises a computer recordable storage medium and program code stored on the computer recordable storage medium. Program code is present for monitoring for a garbage collection state within an execution environment. Program code is also present, responsive to detecting the garbage collection state, for increasing a priority of a set of garbage collection threads.
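As a rough, user-space sketch of this idea, the following C program polls a flag standing in for the garbage collection state and adjusts the scheduling of a set of worker threads through POSIX interfaces. The flag, the thread set, and the choice of SCHED_RR are illustrative assumptions rather than the mechanism of any particular virtual machine, and raising a thread into a real-time scheduling class typically requires privileges:

```c
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

#define GC_THREADS 2

static atomic_int gc_state;               /* nonzero while a garbage collection state is detected */
static pthread_t gc_threads[GC_THREADS];  /* the set of garbage collection threads */

/* Raise (or restore) the scheduling priority of every garbage collection thread.
 * Moving a thread to SCHED_RR typically requires privileges, so the return value
 * is checked and the sketch degrades gracefully when permission is denied. */
static void set_gc_priority(int boosted)
{
    struct sched_param param;
    int policy = boosted ? SCHED_RR : SCHED_OTHER;

    param.sched_priority = boosted ? sched_get_priority_min(SCHED_RR) : 0;
    for (int i = 0; i < GC_THREADS; i++)
        if (pthread_setschedparam(gc_threads[i], policy, &param) != 0)
            fprintf(stderr, "priority change skipped (insufficient privileges?)\n");
}

/* Monitoring loop: watch for a transition into or out of the garbage collection
 * state and adjust the priorities of the garbage collection threads accordingly. */
static void *monitor(void *arg)
{
    int previous = 0;
    (void)arg;
    for (;;) {
        int current = atomic_load(&gc_state);
        if (current != previous) {
            set_gc_priority(current);     /* boost on entry, restore on exit */
            previous = current;
        }
        sched_yield();                    /* placeholder for a real wait/notification */
    }
    return NULL;
}

static void *gc_worker(void *arg) { (void)arg; for (;;) sched_yield(); return NULL; }

int main(void)
{
    pthread_t mon;
    for (int i = 0; i < GC_THREADS; i++)
        pthread_create(&gc_threads[i], NULL, gc_worker, NULL);
    pthread_create(&mon, NULL, monitor, NULL);

    atomic_store(&gc_state, 1);           /* simulate detecting the garbage collection state */
    sleep(1);
    atomic_store(&gc_state, 0);           /* simulate the end of garbage collection */
    sleep(1);
    return 0;
}
```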
As will be appreciated by one skilled in the art, the present invention may be embodied as a system, method, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.), or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module,” or “system.” Furthermore, the present invention may take the form of a computer program product embodied in any tangible medium of expression having computer usable program code embodied in the medium.
Any combination of one or more computer usable or computer readable medium(s) may be utilized. The computer usable or computer readable medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a non-exhaustive list) of the computer readable medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CDROM), an optical storage device, transmission media such as those supporting the Internet or an intranet, or a magnetic storage device.
Note that the computer usable or computer readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory. In the context of this document, a computer usable or computer readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer usable medium may include a propagated data signal with the computer usable program code embodied therewith, either in baseband or as part of a carrier wave. The computer usable program code may be transmitted using any appropriate medium, including, but not limited to, wireless, wireline, optical fiber cable, RF, etc.
Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as, for example, Java, Smalltalk, C++, or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
The present invention is described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions.
These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer program instructions may also be stored in a computer readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
Turning now to FIG. 1, a diagram of a data processing system is depicted in accordance with an illustrative embodiment. In this illustrative example, data processing system 100 includes communications fabric 102, which provides communications between processor unit 104, memory 106, persistent storage 108, communications unit 110, input/output unit 112, and display 114.
Processor unit 104 serves to execute instructions for software that may be loaded into memory 106. Processor unit 104 may be a number of processors, depending on the particular implementation. A number of items, as used herein, refers to one or more items. For example, a number of processors is one or more processors. These processors may be separate chips or may be cores within a multi-core processor. In other words, a processor may be a processor such as a central processing unit and/or a core within a multi-core central processing unit. Further, processor unit 104 may be implemented using one or more heterogeneous processor systems in which a main processor is present with secondary processors on a single chip. As another illustrative example, processor unit 104 may be a symmetric multi-processor system containing multiple processors of the same type.
Memory 106 and persistent storage 108 are examples of storage devices. A storage device is any piece of hardware that is capable of storing information either on a temporary basis and/or a permanent basis. Memory 106, in these examples, may be, for example, a random access memory or any other suitable volatile or non-volatile storage device. Persistent storage 108 may take various forms depending on the particular implementation. For example, persistent storage 108 may contain one or more components or devices. For example, persistent storage 108 may be a hard drive, a flash memory, a rewritable optical disk, a rewritable magnetic tape, or some combination of the above. The media used by persistent storage 108 also may be removable. For example, a removable hard drive may be used for persistent storage 108.
Communications unit 110, in these examples, provides for communications with other data processing systems or devices. In these examples, communications unit 110 is a network interface card. Communications unit 110 may provide communications through the use of either or both physical and wireless communications links.
Input/output unit 112 allows for input and output of data with other devices that may be connected to data processing system 100. For example, input/output unit 112 may provide a connection for user input through a keyboard and mouse. Further, input/output unit 112 may send output to a printer. Display 114 provides a mechanism to display information to a user.
Instructions for the operating system and applications or programs are located on persistent storage 108. These instructions may be loaded into memory 106 for execution by processor unit 104. The processes of the different embodiments may be performed by processor unit 104 using computer implemented instructions, which may be located in a memory, such as memory 106. These instructions are referred to as program code, computer usable program code, or computer readable program code that may be read and executed by a processor in processor unit 104. The program code in the different embodiments may be embodied on different physical or tangible computer readable media, such as memory 106 or persistent storage 108.
Program code 116 is located in a functional form on computer readable media 118 that is selectively removable and may be loaded onto or transferred to data processing system 100 for execution by processor unit 104. Program code 116 and computer readable media 118 form computer program product 120 in these examples. In one example, computer readable media 118 may be in a tangible form, such as, for example, an optical or magnetic disc that is inserted or placed into a drive or other device that is part of persistent storage 108 for transfer onto a storage device, such as a hard drive that is part of persistent storage 108. In a tangible form, computer readable media 118 also may take the form of a persistent storage, such as a hard drive, a thumb drive, or a flash memory that is connected to data processing system 100. The tangible form of computer readable media 118 is also referred to as computer recordable storage media. In some instances, computer readable media 118 may not be removable.
Alternatively, program code 116 may be transferred to data processing system 100 from computer readable media 118 through a communications link to communications unit 110 and/or through a connection to input/output unit 112. The communications link and/or the connection may be physical or wireless in the illustrative examples. The computer readable media also may take the form of non-tangible media, such as communications links or wireless transmissions containing the program code.
The different components illustrated for data processing system 100 are not meant to provide architectural limitations to the manner in which different embodiments may be implemented. The different illustrative embodiments may be implemented in a data processing system including components in addition to or in place of those illustrated for data processing system 100. Other components shown in FIG. 1 can be varied from the illustrative examples shown.
As one example, a storage device in data processing system 100 is any hardware apparatus that may store data. Memory 106, persistent storage 108 and computer readable media 118 are examples of storage devices in a tangible form.
In another example, a bus system may be used to implement communications fabric 102 and may be comprised of one or more buses, such as a system bus or an input/output bus. Of course, the bus system may be implemented using any suitable type of architecture that provides for a transfer of data between different components or devices attached to the bus system. Additionally, a communications unit may include one or more devices used to transmit and receive data, such as a modem or a network adapter. Further, a memory may be, for example, memory 106 or a cache such as found in an interface and memory controller hub that may be present in communications fabric 102.
With reference to FIG. 2, a diagram of components used in managing garbage collection within a data processing system is depicted in accordance with an illustrative embodiment.
These components include processor unit 200, operating system 202, virtual machine 204, device driver 206, deferred procedure call handler 208, profiler 210, threads 212, sampling threads 214, device driver work area 216, and data area 218.
Processor unit 200 is similar to processor unit 104 in FIG. 1. In these examples, processor unit 200 may generate interrupts, such as interrupt 220 and interrupt 222.
In particular, interrupt 220 and interrupt 222 may be generated based on timed interrupts that may be initiated for all of the processors within processor unit 200. In these examples, this type of interrupt may be generated using an advanced programmable interrupt controller within each processor in processor unit 200. Processing of interrupt 220 and interrupt 222 may initiate garbage collection management processes in these illustrative embodiments.
Alternatively, virtual machine 204 initiates garbage collection by detecting that the number of objects in heap 223, a work area for objects 219, exceeds a threshold. In one embodiment, virtual machine 204 notifies profiler 210 that it has started garbage collection for heap 223. Profiler 210 notifies device driver 206 that virtual machine 204 has entered the garbage collection state. In this embodiment, virtual machine 204 also notifies profiler 210 when it has completed garbage collection. Profiler 210 then notifies device driver 206 that virtual machine 204 has completed the garbage collection and is no longer in a garbage collection state.
Profiler 210 may use interfaces to virtual machine 204 to identify garbage collection threads 221, in threads 212, and operating system interfaces for operating system 202 to change the priorities of garbage collection threads 221 instead of requiring support from device driver 206. In another embodiment, virtual machine 204 uses operating system interfaces to control the priorities of garbage collection threads 221.
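One existing interface of this kind is the JVM Tool Interface (JVMTI), through which a virtual machine can notify a profiling agent when collection starts and finishes. The sketch below registers for those events; notify_device_driver is a placeholder assumption for whatever action the profiler takes with the notification, and error handling is abbreviated:

```c
/* Sketch of a JVMTI profiling agent that is told when the virtual machine
 * enters and leaves the garbage collection state.  Build as a shared library
 * and load it with -agentpath. */
#include <jvmti.h>
#include <string.h>
#include <stdio.h>

static void notify_device_driver(int in_gc)
{
    /* Placeholder: forward the garbage collection state elsewhere. */
    fprintf(stderr, "garbage collection state: %d\n", in_gc);
}

static void JNICALL gc_start(jvmtiEnv *jvmti)  { (void)jvmti; notify_device_driver(1); }
static void JNICALL gc_finish(jvmtiEnv *jvmti) { (void)jvmti; notify_device_driver(0); }

JNIEXPORT jint JNICALL Agent_OnLoad(JavaVM *vm, char *options, void *reserved)
{
    jvmtiEnv *jvmti = NULL;
    jvmtiCapabilities caps;
    jvmtiEventCallbacks callbacks;
    (void)options; (void)reserved;

    if ((*vm)->GetEnv(vm, (void **)&jvmti, JVMTI_VERSION_1_0) != JNI_OK)
        return JNI_ERR;

    memset(&caps, 0, sizeof(caps));
    caps.can_generate_garbage_collection_events = 1;      /* required for the GC events */
    (*jvmti)->AddCapabilities(jvmti, &caps);

    memset(&callbacks, 0, sizeof(callbacks));
    callbacks.GarbageCollectionStart = gc_start;           /* collection is starting */
    callbacks.GarbageCollectionFinish = gc_finish;         /* collection has completed */
    (*jvmti)->SetEventCallbacks(jvmti, &callbacks, sizeof(callbacks));

    (*jvmti)->SetEventNotificationMode(jvmti, JVMTI_ENABLE,
                                       JVMTI_EVENT_GARBAGE_COLLECTION_START, NULL);
    (*jvmti)->SetEventNotificationMode(jvmti, JVMTI_ENABLE,
                                       JVMTI_EVENT_GARBAGE_COLLECTION_FINISH, NULL);
    return JNI_OK;
}
```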
The interrupts may be passed to device driver 206 in a number of different ways. For example, interrupt 220 is passed to device driver 206 through call 224. Alternatively, interrupt 222 is passed directly to device driver 206 via an Interrupt Vector Table (IVT). After receiving an interrupt, device driver 206 may process the interrupt using a deferred procedure call (DPC) to deferred procedure call handler 208 located within device driver 206. Of course, other routines or processes may be used to process these interrupts. The deferred procedure call initiated by device driver 206 is used to continue processing interrupt information from interrupt 222. Device driver 206 determines the interrupted process and thread and may use this information when applying policy 228.
The determination of whether a garbage collection state is present may be made using policy 228. Policy 228 may be a set of rules identifying what actions to take. The rules may include one or more garbage collection thread identifications used to recognize an interrupted thread as a garbage collection thread, or a check of an indication of garbage collection mode set by virtual machine 204 or profiler 210. In response to determining that a garbage collection state is present, other actions may be taken. These actions may include changing a priority of garbage collection threads. Whether changes to the priority of garbage collection threads occur may depend on what threads are currently executing, what memory ranges are being accessed, what processes are executing, and/or some other suitable criteria.
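A minimal sketch of such a policy check in C, assuming a shared work area already holds the registered garbage collection thread identifiers and a garbage-collection-mode flag (the structure layout and names are illustrative, not those of any particular device driver):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative stand-in for the device driver work area: registered garbage
 * collection thread identifiers plus a flag set when the virtual machine or
 * profiler reports that a garbage collection state is active. */
struct work_area {
    unsigned long gc_thread_ids[16];
    size_t gc_thread_count;
    bool gc_mode;
};

/* Policy check: a garbage collection state is detected if the interrupted
 * thread matches a registered garbage collection thread identification, or
 * if the garbage collection mode indication is set. */
static bool gc_state_detected(const struct work_area *wa, unsigned long interrupted_tid)
{
    if (wa->gc_mode)
        return true;
    for (size_t i = 0; i < wa->gc_thread_count; i++)
        if (wa->gc_thread_ids[i] == interrupted_tid)
            return true;
    return false;
}

int main(void)
{
    struct work_area wa = { .gc_thread_ids = { 1201, 1202 }, .gc_thread_count = 2, .gc_mode = false };
    printf("%d\n", gc_state_detected(&wa, 1202));   /* 1: registered GC thread */
    printf("%d\n", gc_state_detected(&wa, 7777));   /* 0: not a GC thread, no GC mode set */
    return 0;
}
```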
In these examples, interrupt handler 229 may identify the interrupted address or the data address being accessed at the time of interrupt 222. For example, a user may identify a set of routines of interest. Profiler 210 may identify the address ranges for the set of routines by obtaining loaded module information or by monitoring the addresses of just-in-time (JIT) compiled methods to form address ranges 227. Profiler 210 passes address ranges 227 to device driver 206, which places address ranges 227 into device driver work area 216.
In a similar manner, a user may specify a specific object class or object instance meeting specific criteria or a data area referenced by a lock or monitor using profiler 210. Profiler 210 may obtain the data information area from virtual machine 204 and pass this information to device driver 206. In turn, device driver 206 places this information into device driver work area 216 as address ranges 227. In this manner, the interrupt handler may compare the identified address with the set of address ranges stored in device driver work area 216.
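The comparison itself is a simple range test. The following sketch, again with illustrative types and values, checks an identified address against a set of stored address ranges:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* One address range of interest, for example the code of a JIT-compiled
 * method or a monitored data area; the work area would hold a set of these. */
struct addr_range {
    uintptr_t start;
    uintptr_t end;          /* exclusive upper bound */
};

/* Return true when the identified address falls inside any stored range. */
static bool address_in_ranges(const struct addr_range *ranges, size_t count, uintptr_t addr)
{
    for (size_t i = 0; i < count; i++)
        if (addr >= ranges[i].start && addr < ranges[i].end)
            return true;
    return false;
}

int main(void)
{
    struct addr_range ranges[] = {
        { 0x1000, 0x2000 },   /* illustrative JIT-compiled method */
        { 0x8000, 0xc000 },   /* illustrative data area */
    };
    printf("%d\n", address_in_ranges(ranges, 2, 0x1234));  /* 1: inside the first range */
    printf("%d\n", address_in_ranges(ranges, 2, 0x5000));  /* 0: in no stored range */
    return 0;
}
```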
Deferred procedure call handler 208 may decide that garbage collection management processes are to be executed using policy 228. For example, policy 228 may state that the priority of garbage collection threads should be increased if a garbage collection state is present when interrupt 222 is received. In other illustrative embodiments, policy 228 may include rules stating that garbage collection priority may change if a particular thread is being executed, a particular address range is being accessed, and/or when some other suitable criteria are present.
With reference now to FIG. 3, a diagram of components used to manage garbage collection within an execution environment is depicted in accordance with an illustrative embodiment. In this example, execution environment 301 includes virtual machine 302, which contains heap 304.
Heap 304 contains objects 306. These objects may be allocated during the execution of threads 308. Threads 308 may access objects 306. When a thread within threads 308 accesses an object within objects 306, a lock is obtained for that object from locks 310. This lock prevents other threads from accessing the same object. Once the thread releases the lock for the object, then that object may be accessed by another thread. Of course, in other illustrative embodiments, execution environment 301 may include environments other than virtual machines.
For example, execution environment 301 may be any environment in which threads execute and use memory that requires periodic garbage collection as part of the supported infrastructure. Garbage collection typically occurs when data structures are no longer being used or can be reallocated if needed. When garbage collection is supported, any threads within threads 308 that need to allocate objects must release their locks and wait for garbage collection threads 312 to acquire and release locks from locks 310.
During the phase in which garbage collection threads 312 acquire ownership of locks 310, it is advantageous for any of threads 308 currently owning a lock within locks 310 to complete processing as quickly as possible, allowing garbage collection threads 312 to acquire locks 310 and begin processing heap 304. Once garbage collection threads 312 own locks 310, it is advantageous to allow garbage collection threads 312 to execute as fast as possible without interference from threads 308. It is also desirable for threads 308 to stay inactive until garbage collection is completed by garbage collection threads 312.
Some of this type of processing is performed automatically by operating system 314 as part of normal lock handling. Garbage collection, however, may take longer and require more resources than other types of processing that use locks 310. For example, traversing heap 304 accesses more virtual storage. This situation is typical for large multi-gigabyte heaps. As a result, the illustrative embodiments recognize that garbage collection by garbage collection threads 312 may be made more effective through specialized handling.
In these different examples, operating system 314 has garbage collection interface 316. In this example, this garbage collection interface may support registering garbage collection threads in thread registration 318. As a result, when a garbage collection thread within garbage collection threads 312 obtains a lock from locks 310, thread registration 318 may be used to identify the lock as a garbage collection lock. In other words, a garbage collection thread registered in thread registration 318 may be identified when that thread obtains a lock from locks 310.
With this information, operating system 314 may identify a number of different phases for a garbage collection state. In these examples, these phases include starting garbage collection 320, entered garbage collection 322, and completed garbage collection 324. Starting garbage collection 320 may be identified when a garbage collection thread within garbage collection threads 312 obtains a lock from locks 310. Entered garbage collection 322 occurs when all of threads 308 have released any locks from locks 310. Completed garbage collection 324 occurs when garbage collection threads 312 release all of locks 310.
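These phases can be derived from lock events alone. The sketch below uses a simplified model in which two lock counts stand in for locks 310; the names and the small state machine are illustrative of one way an operating system component might track the phases:

```c
#include <stdio.h>

enum gc_phase { GC_IDLE, GC_STARTING, GC_ENTERED, GC_COMPLETED };

/* Simplified model: counts of locks currently held by garbage collection
 * threads and by application (non-garbage-collection) threads. */
struct lock_state {
    int gc_locks_held;
    int app_locks_held;
    enum gc_phase phase;
};

/* Re-evaluate the phase after a lock is acquired or released.
 * starting:  a registered garbage collection thread holds at least one lock.
 * entered:   all application threads have released their locks.
 * completed: the garbage collection threads have released all of their locks. */
static void update_phase(struct lock_state *s)
{
    if (s->gc_locks_held > 0 && s->app_locks_held > 0)
        s->phase = GC_STARTING;
    else if (s->gc_locks_held > 0 && s->app_locks_held == 0)
        s->phase = GC_ENTERED;
    else if (s->gc_locks_held == 0 && s->phase == GC_ENTERED)
        s->phase = GC_COMPLETED;
}

int main(void)
{
    struct lock_state s = { 0, 1, GC_IDLE };
    s.gc_locks_held = 1;  update_phase(&s);   /* GC thread obtains a lock -> starting */
    s.app_locks_held = 0; update_phase(&s);   /* application threads release -> entered */
    s.gc_locks_held = 0;  update_phase(&s);   /* GC threads release -> completed */
    printf("final phase: %d\n", s.phase);     /* 3 == GC_COMPLETED */
    return 0;
}
```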
In these examples, when operating system 314 detects starting garbage collection 320, operating system 314 may change the priority of garbage collection threads 312. In particular, the priority of garbage collection threads 312 may be increased. This priority may be increased until any locks obtained by garbage collection threads 312 are released. Once entered garbage collection 322 has occurred, or a lock has been released by a thread within threads 308, the priority of threads 308 may be reduced. In this manner, threads 308 do not contend with garbage collection threads 312 for processor resources. The priorities may be restored after the garbage collection state ends.
In these depicted examples, operating system 314 may change the priority of threads 308 and garbage collection threads 312 by sending priority change 326 to scheduler 328. Scheduler 328 schedules the execution of threads such as threads 308 and garbage collection threads 312.
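A priority change requested through an operating system interface might look like the following POSIX sketch, which lowers the nice value of a non-garbage-collection thread during collection and restores it afterwards. The thread chosen, the nice values, and the single-thread behavior of setpriority() with a Linux thread id are illustrative assumptions, and restoring a previously reduced nice value may itself require privileges:

```c
#define _GNU_SOURCE
#include <sys/resource.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>

/* Lower (or restore) the nice value of one thread.  On Linux, setpriority()
 * with PRIO_PROCESS and a thread id affects only that thread. */
static void set_thread_nice(pid_t tid, int nice_value)
{
    if (setpriority(PRIO_PROCESS, tid, nice_value) != 0)
        perror("setpriority");
}

int main(void)
{
    pid_t self = (pid_t)syscall(SYS_gettid);       /* use the calling thread as the example */
    int original = getpriority(PRIO_PROCESS, self);

    set_thread_nice(self, original + 5);            /* reduce priority during collection */
    printf("nice while collecting: %d\n", getpriority(PRIO_PROCESS, self));

    set_thread_nice(self, original);                /* restore when collection completes */
    printf("nice restored to: %d\n", getpriority(PRIO_PROCESS, self));
    return 0;
}
```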
Additionally, operating system 314 may perform other operations such as, for example, paging out non-garbage collection threads and paging in garbage collection threads, including expected data area accesses in this paging process. As another example, data areas previously used, or expected to be used, by garbage collection threads may be paged in for use. A processor among a number of processors in a multi-processor data processing system may be assigned to perform the garbage collection.
In an alternative embodiment, support for garbage collection processing may be provided using profiler 330. Virtual machine 302 may send notification 332 to profiler 330 when a garbage collection state occurs. In this example, virtual machine 302 is used to identify when a garbage collection process occurs as opposed to using operating system 314 as described above. When profiler 330 receives notification 332, profiler 330 may use garbage collection interface 316 to change the priority for garbage collection threads 312. In other examples, profiler 330 may use data collected during previous garbage collection processing to adjust thread priorities and to touch data areas to preload processor caches with heap data.
In these examples, the actions taken by operating system 314 to increase the performance of garbage collection may be performed by an operating system process, such as, for example, a device driver or other operating system process within operating system 314.
With this type of embodiment, profiler 330 may notify a device driver, such as, for example, device driver 206 in FIG. 2, that a garbage collection state is present.
In this manner, previously collected information may be used to adjust thread priorities and pre-fetch data in heap data areas. In particular, the priorities for threads 308 may be decreased and the priorities for garbage collection threads 312 increased while a garbage collection state is present. This thread information may be stored in history 334 for use by profiler 330.
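Touching a data area ahead of time amounts to a pass over its memory. In the sketch below, an allocated buffer stands in for a heap data area recorded in history 334, and the 64-byte stride is an assumed cache line size:

```c
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

/* Touch every cache-line-sized step of a data area so that it is resident in
 * the processor caches (and paged in) before garbage collection threads use it. */
static unsigned long touch_area(const unsigned char *area, size_t len)
{
    unsigned long sum = 0;
    for (size_t i = 0; i < len; i += 64)
        sum += area[i];          /* the read forces the line to be fetched */
    return sum;                  /* returned so the loop is not optimized away */
}

int main(void)
{
    size_t len = 1u << 20;                   /* 1 MiB stand-in for a heap data area */
    unsigned char *area = calloc(len, 1);
    if (area == NULL)
        return 1;
    printf("checksum: %lu\n", touch_area(area, len));
    free(area);
    return 0;
}
```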
The illustration of the components in FIG. 3 is not meant to imply physical or architectural limitations to the manner in which different illustrative embodiments may be implemented.
With reference now to FIG. 4, a flowchart of a process for changing thread priorities for garbage collection is depicted in accordance with an illustrative embodiment.
The process begins by identifying a set of garbage collection threads (step 400). Thereafter, the priority of the garbage collection threads is increased (step 402). The process then identifies a set of non-garbage collection threads (step 404). The priority of the set of non-garbage collection threads is decreased (step 406), with the process terminating thereafter.
Of course, other steps also may be performed. For example, data areas previously or expected to be used by garbage collection threads may be paged into memory for use. A processor within a multi-processor system may be assigned to perform garbage collection. The changing of the priority of threads in these examples may be performed by requesting thread priority changes via operating system interfaces.
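Read as code, the four steps amount to partitioning the known threads and adjusting both groups. The sketch below reuses the POSIX assumptions from earlier; how a thread is classified as a garbage collection thread is reduced to an illustrative flag, since that depends on the registration mechanism described above:

```c
#include <pthread.h>
#include <sched.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative record for a thread known to the monitoring component. */
struct tracked_thread {
    pthread_t handle;
    bool is_gc_thread;      /* set when the thread was registered as a GC thread */
};

/* Steps 400-406 in one pass: raise the scheduling priority of the set of
 * garbage collection threads and lower the priority of everything else.
 * The use of SCHED_RR for the boost is an assumption; it may require privileges. */
static void apply_gc_priorities(struct tracked_thread *threads, size_t count)
{
    struct sched_param boosted = { .sched_priority = sched_get_priority_min(SCHED_RR) };
    struct sched_param normal  = { .sched_priority = 0 };

    for (size_t i = 0; i < count; i++) {
        if (threads[i].is_gc_thread)
            pthread_setschedparam(threads[i].handle, SCHED_RR, &boosted);    /* steps 400-402 */
        else
            pthread_setschedparam(threads[i].handle, SCHED_OTHER, &normal);  /* steps 404-406 */
    }
}

int main(void)
{
    struct tracked_thread t = { pthread_self(), true };
    apply_gc_priorities(&t, 1);    /* attempts to boost the calling thread as a demonstration */
    return 0;
}
```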
Of course, various other actions may be performed depending on the condition identified within the operating system. The examples of different conditions and actions that may be initiated are provided for purposes of illustration and not meant to limit the conditions or actions that may be taken. The different illustrative embodiments may monitor for other conditions and perform other actions depending upon the rules within the policy.
With reference now to FIG. 5, a flowchart of a process for paging threads and data areas for garbage collection is depicted in accordance with an illustrative embodiment.
The process begins by identifying non-garbage collection threads and/or associated data areas located in primary memory (step 500). In these examples, the primary memory is a random access memory. The process then pages out the identified non-garbage collection threads and/or associated data areas to a secondary memory (step 502). This secondary memory may be, for example, a hard disk.
The process then identifies any garbage collection threads and/or associated data areas that are not in the primary memory (step 504). The associated data areas may be ones that are expected to be used or touched by the garbage collection threads. The process then pages in the identified garbage collection threads and/or associated data areas into primary memory from the secondary memory (step 506) with the process terminating thereafter.
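On a POSIX system, the paging hints themselves can be expressed with madvise(). In the sketch below, two anonymous mappings stand in for a data area used only by non-garbage-collection threads and a data area the garbage collection threads are expected to touch; MADV_PAGEOUT is used only where available, and these calls are hints rather than guarantees:

```c
#define _GNU_SOURCE
#include <sys/mman.h>
#include <stddef.h>
#include <stdio.h>

int main(void)
{
    size_t len = 4u << 20;   /* 4 MiB stand-ins for data areas */

    /* Anonymous mappings standing in for (a) a data area used only by
     * non-garbage-collection threads and (b) a data area the garbage
     * collection threads are expected to touch. */
    unsigned char *app_area = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    unsigned char *gc_area  = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (app_area == MAP_FAILED || gc_area == MAP_FAILED)
        return 1;

#ifdef MADV_PAGEOUT
    /* Hint that the non-garbage-collection area may be reclaimed (paged out). */
    if (madvise(app_area, len, MADV_PAGEOUT) != 0)
        perror("madvise(MADV_PAGEOUT)");
#endif

    /* Hint that the garbage collection data area will be needed soon, so the
     * operating system can page it in ahead of the collection threads. */
    if (madvise(gc_area, len, MADV_WILLNEED) != 0)
        perror("madvise(MADV_WILLNEED)");

    munmap(app_area, len);
    munmap(gc_area, len);
    return 0;
}
```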
The different steps illustrated in FIG. 5 may be performed when a garbage collection state is detected.
In this manner, the performance of garbage collection may be improved. This performance may be improved through the placement of garbage collection threads and data areas into the primary memory rather than having those threads and data areas accessed from a secondary memory. In these examples, an operating system may perform other processing, such as, for example, the steps described above, to enhance garbage collection processes.
Thus, the different illustrative embodiments provide a computer implemented method, apparatus, and computer program code for managing garbage collection. In the different illustrative embodiments, monitoring may be performed for a garbage collection state in a data processing system. If a garbage collection state is detected, the priority of garbage collection threads may be changed. The garbage collection threads may have their priority increased.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures.
For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed.
The invention can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment containing both hardware and software elements. In a preferred embodiment, the invention is implemented in software, which includes, but is not limited to, firmware, resident software, microcode, etc.
Furthermore, the invention can take the form of a computer program product accessible from a computer usable or computer readable medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer usable or computer readable medium can be any tangible apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk, and an optical disk. Current examples of optical disks include compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W), and DVD.
A data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
Input/output or I/O devices (including, but not limited to, keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers.
Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems, and Ethernet cards are just a few of the currently available types of network adapters.
The description of the present invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiment was chosen and described in order to best explain the principles of the invention, the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
This application is a continuation-in-part of patent application U.S. Ser. No. 12/173,107, filed Jul. 15, 2008, now U.S. Pat. No. 8,286,134 entitled: Call Stack Sampling for a Multi-Processor System, which is incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
5305454 | Record et al. | Apr 1994 | A |
5379432 | Orton et al. | Jan 1995 | A |
5404529 | Chernikoff et al. | Apr 1995 | A |
5465328 | Dievendorff et al. | Nov 1995 | A |
5473777 | Moeller et al. | Dec 1995 | A |
5475845 | Orton et al. | Dec 1995 | A |
5544318 | Schmitz et al. | Aug 1996 | A |
5682537 | Davies et al. | Oct 1997 | A |
5751789 | Farris et al. | May 1998 | A |
5764241 | Elliott et al. | Jun 1998 | A |
5768500 | Agrawal et al. | Jun 1998 | A |
5913213 | Wikstrom et al. | Jun 1999 | A |
5930516 | Watts, Jr. et al. | Jul 1999 | A |
6012094 | Leymann et al. | Jan 2000 | A |
6055492 | Alexander, III et al. | Apr 2000 | A |
6108654 | Chan et al. | Aug 2000 | A |
6112225 | Kraft et al. | Aug 2000 | A |
6125363 | Buzzeo et al. | Sep 2000 | A |
6128611 | Doan et al. | Oct 2000 | A |
6158024 | Mandal | Dec 2000 | A |
6178440 | Foster et al. | Jan 2001 | B1 |
6199075 | Ungar et al. | Mar 2001 | B1 |
6233585 | Gupta et al. | May 2001 | B1 |
6338159 | Alexander, III et al. | Jan 2002 | B1 |
6438512 | Miller | Aug 2002 | B1 |
6442572 | Leymann et al. | Aug 2002 | B2 |
6449614 | Marcotte | Sep 2002 | B1 |
6553564 | Alexander, III et al. | Apr 2003 | B1 |
6601233 | Underwood | Jul 2003 | B1 |
6625602 | Meredith et al. | Sep 2003 | B1 |
6633897 | Browning et al. | Oct 2003 | B1 |
6651243 | Berry et al. | Nov 2003 | B1 |
6654948 | Konuru et al. | Nov 2003 | B1 |
6658652 | Alexander, III et al. | Dec 2003 | B1 |
6662358 | Berry et al. | Dec 2003 | B1 |
6662359 | Berry et al. | Dec 2003 | B1 |
6681230 | Blott et al. | Jan 2004 | B1 |
6697802 | Ma et al. | Feb 2004 | B2 |
6697935 | Borkenhagen et al. | Feb 2004 | B1 |
6728955 | Berry et al. | Apr 2004 | B1 |
6728959 | Merkey | Apr 2004 | B1 |
6742016 | Bhoj et al. | May 2004 | B1 |
6751789 | Berry et al. | Jun 2004 | B1 |
6857120 | Arnold et al. | Feb 2005 | B1 |
6874074 | Burton et al. | Mar 2005 | B1 |
6880086 | Kidder et al. | Apr 2005 | B2 |
6904594 | Berry et al. | Jun 2005 | B1 |
6931354 | Jones et al. | Aug 2005 | B2 |
6941552 | Beadle et al. | Sep 2005 | B1 |
6954922 | Liang | Oct 2005 | B2 |
6976263 | Delaney | Dec 2005 | B2 |
6993246 | Pan et al. | Jan 2006 | B1 |
7000047 | Nguyen et al. | Feb 2006 | B2 |
7020696 | Perry et al. | Mar 2006 | B1 |
7028298 | Foote | Apr 2006 | B1 |
7047258 | Balogh et al. | May 2006 | B2 |
7093081 | DeWitt, Jr. et al. | Aug 2006 | B2 |
7114036 | DeWitt, Jr. et al. | Sep 2006 | B2 |
7114150 | Dimpsey et al. | Sep 2006 | B2 |
7162666 | Bono | Jan 2007 | B2 |
7178145 | Bono | Feb 2007 | B2 |
7206848 | Zara et al. | Apr 2007 | B1 |
7222119 | Ghemawat et al. | May 2007 | B1 |
7257657 | DeWitt, Jr. et al. | Aug 2007 | B2 |
7278141 | Accapadi et al. | Oct 2007 | B2 |
7296130 | Dimpsey et al. | Nov 2007 | B2 |
7321965 | Kissell | Jan 2008 | B2 |
7325108 | Tuel | Jan 2008 | B2 |
7398518 | Dichter | Jul 2008 | B2 |
7426730 | Mathews et al. | Sep 2008 | B2 |
7474991 | DeWitt, Jr. et al. | Jan 2009 | B2 |
7496918 | Dice et al. | Feb 2009 | B1 |
7526757 | Levine | Apr 2009 | B2 |
7529914 | Saha et al. | May 2009 | B2 |
7574587 | DeWitt, Jr. et al. | Aug 2009 | B2 |
7584332 | Kogge et al. | Sep 2009 | B2 |
7587364 | Crumbach et al. | Sep 2009 | B2 |
7610585 | Shpeisman et al. | Oct 2009 | B2 |
7624137 | Bacon et al. | Nov 2009 | B2 |
7653895 | James-Roxby et al. | Jan 2010 | B1 |
7688867 | Kizhepat | Mar 2010 | B1 |
7689867 | Rosenbluth et al. | Mar 2010 | B2 |
7716647 | Loh et al. | May 2010 | B2 |
7721268 | Loh et al. | May 2010 | B2 |
7779238 | Kosche et al. | Aug 2010 | B2 |
7788664 | Janakiraman et al. | Aug 2010 | B1 |
7921075 | Herness et al. | Apr 2011 | B2 |
7921875 | Moriiki et al. | Apr 2011 | B2 |
7925473 | DeWitt, Jr. et al. | Apr 2011 | B2 |
7962913 | Accapadi et al. | Jun 2011 | B2 |
7962924 | Kuiper et al. | Jun 2011 | B2 |
7996593 | Blackmore et al. | Aug 2011 | B2 |
7996629 | Wan et al. | Aug 2011 | B2 |
8018845 | Ruello et al. | Sep 2011 | B2 |
8024735 | Rudd et al. | Sep 2011 | B2 |
8117599 | Edmark et al. | Feb 2012 | B2 |
8117618 | Holloway et al. | Feb 2012 | B2 |
8132170 | Kuiper et al. | Mar 2012 | B2 |
8136124 | Kosche et al. | Mar 2012 | B2 |
8141053 | Levine | Mar 2012 | B2 |
8156495 | Chew et al. | Apr 2012 | B2 |
8191049 | Levine et al. | May 2012 | B2 |
8286134 | Jones et al. | Oct 2012 | B2 |
8381215 | Johnson et al. | Feb 2013 | B2 |
8566795 | Dewitt, Jr. et al. | Oct 2013 | B2 |
8799872 | Levine | Aug 2014 | B2 |
8843684 | Jones et al. | Sep 2014 | B2 |
9176783 | Kuiper et al. | Nov 2015 | B2 |
20020007363 | Vaitzblit | Jan 2002 | A1 |
20020016729 | Breitenbach et al. | Feb 2002 | A1 |
20020038332 | Alverson et al. | Mar 2002 | A1 |
20020073103 | Bottomley et al. | Jun 2002 | A1 |
20030004970 | Watts | Jan 2003 | A1 |
20030023655 | Sokolov et al. | Jan 2003 | A1 |
20030061256 | Mathews et al. | Mar 2003 | A1 |
20030083912 | Covington, III et al. | May 2003 | A1 |
20030233394 | Rudd et al. | Dec 2003 | A1 |
20040068501 | McGoveran | Apr 2004 | A1 |
20040093510 | Nurmeia | May 2004 | A1 |
20040142679 | Kearns et al. | Jul 2004 | A1 |
20040148594 | Williams | Jul 2004 | A1 |
20040162741 | Flaxer et al. | Aug 2004 | A1 |
20040163077 | Dimpsey et al. | Aug 2004 | A1 |
20040178454 | Kuroda et al. | Sep 2004 | A1 |
20040193510 | Catahan, Jr. et al. | Sep 2004 | A1 |
20040215614 | Doyle et al. | Oct 2004 | A1 |
20040215768 | Oulu et al. | Oct 2004 | A1 |
20040216112 | Accapadi et al. | Oct 2004 | A1 |
20040220931 | Guthridge et al. | Nov 2004 | A1 |
20040220932 | Seeger et al. | Nov 2004 | A1 |
20040220933 | Walker | Nov 2004 | A1 |
20040268316 | Fisher et al. | Dec 2004 | A1 |
20050021354 | Brendle et al. | Jan 2005 | A1 |
20050080806 | Doganata et al. | Apr 2005 | A1 |
20050086455 | DeWitt, Jr. et al. | Apr 2005 | A1 |
20050091663 | Bagsby | Apr 2005 | A1 |
20050102493 | DeWitt, Jr. et al. | May 2005 | A1 |
20050138443 | Cooper | Jun 2005 | A1 |
20050149585 | Bacon et al. | Jul 2005 | A1 |
20050155018 | DeWitt, Jr. et al. | Jul 2005 | A1 |
20050155019 | Levine et al. | Jul 2005 | A1 |
20050166187 | Das et al. | Jul 2005 | A1 |
20050204349 | Lewis et al. | Sep 2005 | A1 |
20050256961 | Alon et al. | Nov 2005 | A1 |
20050262130 | Mohan | Nov 2005 | A1 |
20050273757 | Anderson | Dec 2005 | A1 |
20050273782 | Shpeisman et al. | Dec 2005 | A1 |
20060004757 | Watts | Jan 2006 | A1 |
20060023642 | Roskowski et al. | Feb 2006 | A1 |
20060031837 | Theurer | Feb 2006 | A1 |
20060059486 | Loh et al. | Mar 2006 | A1 |
20060072563 | Regnier et al. | Apr 2006 | A1 |
20060080486 | Yan | Apr 2006 | A1 |
20060095571 | Gilgen et al. | May 2006 | A1 |
20060130001 | Beuch et al. | Jun 2006 | A1 |
20060136914 | Marascio et al. | Jun 2006 | A1 |
20060149877 | Pearson | Jul 2006 | A1 |
20060167955 | Vertes | Jul 2006 | A1 |
20060184769 | Floyd et al. | Aug 2006 | A1 |
20060212657 | Tuel | Sep 2006 | A1 |
20060218290 | Lin et al. | Sep 2006 | A1 |
20060259911 | Weinrich et al. | Nov 2006 | A1 |
20060282400 | Kalavacharia et al. | Dec 2006 | A1 |
20060282707 | Rosenbluth et al. | Dec 2006 | A1 |
20070006168 | Dimpsey et al. | Jan 2007 | A1 |
20070033589 | Nicholas et al. | Feb 2007 | A1 |
20070150904 | Kim et al. | Jun 2007 | A1 |
20070169003 | Branda et al. | Jul 2007 | A1 |
20070171824 | Ruello et al. | Jul 2007 | A1 |
20070220495 | Chen et al. | Sep 2007 | A1 |
20070220515 | DeWitt, Jr. et al. | Sep 2007 | A1 |
20070226139 | Crumbach et al. | Sep 2007 | A1 |
20080082761 | Herness et al. | Apr 2008 | A1 |
20080082796 | Merten et al. | Apr 2008 | A1 |
20080091679 | Herness et al. | Apr 2008 | A1 |
20080091712 | Daherkar et al. | Apr 2008 | A1 |
20080148240 | Jones et al. | Jun 2008 | A1 |
20080148241 | Jones et al. | Jun 2008 | A1 |
20080148299 | Daherkar et al. | Jun 2008 | A1 |
20080177756 | Kosche et al. | Jul 2008 | A1 |
20080189687 | Levine et al. | Aug 2008 | A1 |
20080196030 | Buros et al. | Aug 2008 | A1 |
20080263325 | Kudva et al. | Oct 2008 | A1 |
20080307441 | Kuiper et al. | Dec 2008 | A1 |
20090007075 | Edmark et al. | Jan 2009 | A1 |
20090044198 | Kuiper et al. | Feb 2009 | A1 |
20090083002 | DeWitt et al. | Mar 2009 | A1 |
20090100432 | Holloway et al. | Apr 2009 | A1 |
20090106762 | Accapadi et al. | Apr 2009 | A1 |
20090178036 | Levine | Jul 2009 | A1 |
20090187909 | Russell et al. | Jul 2009 | A1 |
20090187915 | Chew et al. | Jul 2009 | A1 |
20090204978 | Lee et al. | Aug 2009 | A1 |
20090210649 | Wan et al. | Aug 2009 | A1 |
20090235247 | Cho et al. | Sep 2009 | A1 |
20090235262 | Ceze et al. | Sep 2009 | A1 |
20090241095 | Jones et al. | Sep 2009 | A1 |
20090271549 | Blackmore et al. | Oct 2009 | A1 |
20090292846 | Park et al. | Nov 2009 | A1 |
20090300224 | Duffy et al. | Dec 2009 | A1 |
20100017581 | Clift et al. | Jan 2010 | A1 |
20100017583 | Kuiper et al. | Jan 2010 | A1 |
20100017584 | Jones et al. | Jan 2010 | A1 |
20100017789 | Dewitt, Jr. et al. | Jan 2010 | A1 |
20100017804 | Gupta et al. | Jan 2010 | A1 |
20100036981 | Ganesh et al. | Feb 2010 | A1 |
20100333071 | Kuiper et al. | Dec 2010 | A1 |
20110289361 | Kuiper et al. | Nov 2011 | A1 |
20110307640 | Jones et al. | Dec 2011 | A1 |
20110320173 | Levine | Dec 2011 | A1 |
20120191893 | Kuiper et al. | Jul 2012 | A1 |
Number | Date | Country |
---|---|---|
1614555 | May 2005 | CN |
0649084 | Apr 1995 | EP |
0689141 | Dec 1995 | EP |
1603307 | Dec 2005 | EP |
H11327951 | Nov 1999 | JP |
2002055848 | Feb 2002 | JP |
2004199330 | Jul 2004 | JP |
2005141392 | Jun 2005 | JP |
2008257287 | Oct 2008 | JP |
2009098500 | Sep 2009 | KR |
WO2009014868 | Jan 2009 | WO |
Entry |
---|
Sun Java Real-Time System 2.0—01, Garbage Collection Guide, Nov. 21, 2007, http://download.oracle.com/javase/realtime/doc—2.0—u1/release/JavaRTSGarbageCollection.html. |
AIX Versions 3.2 and 4 Performance Tuning Guide, Performance Overview of the Virtual Memory Manager (VMM), Apr. 1997, http://nfosolutions.com/doc—link/C/a—doc—lib/aixbman/prftungd/vmmov.htm. |
Froyd et al., “Low-Overhead Call Path Profiling of Unmodified, Optimized Code”, ACM, ICS'05 Cambridge, Massachusetts, pp. 81-90. |
Chanda et al., “Whodunit: Transactional Profiling for Multi-Tier Applications”, ACM, EuroSys'07, Mar. 2007 Lisboa, Portugal, pp. 17-30. |
Binder, “Portable and Accurate Sampling Profiling for Java”, Software—Practice and Experience, vol. 36, Issue 6, May 2006, pp. 615-650. |
Dunlavey, “Performance Tuning with Instruction-Level Cost Derived from Call-Stack Sampling”, ACM SIGPLAN Notices, vol. 42(8), Aug. 2007, pp. 4-8. |
Abdel-Shafi et al., “Efficient User-Level Thread Migration and Checkpointing on Windows NT Clusters,” Proceedings of the 3rd USENIX Windows NT Symposium, Jul. 1999, 11 pages. |
Alexander et al., “A Unifying Approach to Performance Analysis in the Java environment,” IBM Systems Journal, vol. 39, No. 1, Jan. 2000, pp. 118-134. |
Alkalaj et al., “Performance of Multi-Threaded Execution in a Shared-Memory Multiprocessor,” Proceedings in the 3rd IEEE Symposium on Parallel and Distributed Processing, Dec. 1991, pp. 330-333. |
Arpaci-Dusseau, “Implicit Coscheduling: Coordinated Scheduling with Implicit Information in Distributed Systems,” ACM Transactions on Computer Systems, vol. 19, No. 3, Aug. 2011, pp. 283-331. |
Asokan et al., “Providing Time- and Space- Efficient Procedure Calls for Asynchronous Software Thread Integration,” Proceedings of the 2004 International Conference on Compilers, Architecture, and Synthesis for Embedded Systems (CASES '04), Sep. 2004, pp. 167-178. |
Barcia et al., “Building SOA solutions with the Service Component Architecture—Part 1,” IBM WebSphere Developer Technical Journal, Oct. 26, 2005, 46 pages. |
Cao et al., “A Study of Java Virtual Machine Scalability Issues on SMP Systems,” Proceedings of the 2005 IEEE International Symposium on Workload Characterization, Oct. 2005, pp. 119-128. |
Cerami, “Web Services Essentials: Distributed Applications with XML-RPC, SOAP, UDDI & WSDL,” First Edition, Feb. 2002, 286 pages. |
Chen et al., “Resource Allocation in a Middleware for Streaming Data,” Middleware 2004 Companion, 2nd Workshop on Middleware for Grid Computing, Oct. 2004, pp. 5-10. |
Choi et al., “Deterministic Replay of Java Mulithreaded Applications,” ACM Sigmetrics Symposium on Parallel and Distributed Tools (SPDT), Aug. 1998, 12 pages. |
Foong et al., “Architectural Characterization of Processor Affinity in Network Processing,” Proceedings of the IEEE International Symposium on Performance Analysis of Systems and Software, Mar. 2005, 12 pages. |
Graham et al., “gprof: a Call Graph Execution Profiler,” Proceedings of the 1982 SIGPLAN Symposium on Compiler Construction, Jun. 1982, pp. 120-126. |
Harkema et al., “Performance Monitoring of JAVA Applications,” Proceedings of the 3rd International Workshop on Software and Performance, Jul. 2002, pp. 114-127. |
International Business Machines Corporation, “Pacing support for Time Based Context Sampling,” IP.com Prior Art Database Technical Disclosure No. IPCOM000178312D, Jan. 22, 2009, 2 pages. |
International Business Machines Corporation, “Process and Thread Sampling—Target Selection in Interrupt Mode,” IP.com Prior Art Database Technical Disclosure No. IPCOM000172856D, Jul. 16, 2008, 2 pages. |
International Search Report, dated Sep. 3, 2010, regarding Application No. PCT/EP2010/058486, 3 pages. |
International Search Report and Written Opinion, dated Aug. 2, 2011, regarding Application No. PCT/EP2011/057574, 9 pages. |
Korochkin et al., “Experimental Performance Analysis of the Ada95 and Java Parallel Program on SMP Systems,” ACM SIGAda Ada Letters, vol. 23, No. 1, Mar. 2003, pp. 53-56. |
Mansouri-Samani et al., “A Configurable Event Service for Distributed Systems,” Proceedings of the Third International Conference on Configurable Distributed Systems, May 1996, pp. 210-217. |
Meyer et al., “The Devolution of Functional Analysis,” ACM SIGMOD Record, vol. 13, No. 3, Apr. 1983, pp. 65-91. |
Milton, “Thread Migration in Distributed Memory Multicomputers,” The Australian National University, Joint Computer Science Technical Report Series, Feb. 1998, 14 pages. |
Mohanty et al., “A Hierarchical Approach for Energy Efficient Application Design Using Heterogeneous Embedded Systems,” Proceedings of the 2003 International Conference on Compilers, Architecture, and Synthesis for Embedded Systems, Oct. 2003, pp. 243-254. |
Purser et al., “A Study of Slipstream Processors,” Proceedings of the 33rd Annual ACM/IEEE International Symposium on Microarchitecture, Dec. 2000, pp. 269-280. |
Rinard et al., “Eliminating Synchronization Bottlenecks Using Adaptive Replication,” ACM Transactions on Programming Languages and Systems, vol. 25, No. 3, May 2003, pp. 316-359. |
Tam et al., “Thread Clustering: Sharing-Aware Scheduling on SMP-CMP-SMT Multiprocessors,” Proceedings of the 2nd ACM SIGOPS/EuroSys European Conference on Computer Systems, Mar. 2007, pp. 47-58. |
Tidwell et al., “Programming Web Services with SOAP,” First Edition, Dec. 2001, 225 pages. |
Tullsen et al., “Handling Long-Latency Loads in a Simultaneous Multithreading Processor,” Proceedings of the 34th International Symposium on Microarchitecture, Dec. 2001, pp. 318-327. |
von Behren et al., “Capriccio: Scalable Threads for Internet Services,” ACM SIGOPS Operating Systems Review, vol. 37, No. 5, Oct. 2003, 268-281. |
“Processing Events in a Sequence,” Websphere 6.0.2, copyright 2005, 2007, IBM Corporation, 9 pages. Accessed May 12, 2011, http://publib.boulder.ibm.com/infocenter/dmndhelp/v6rxrmx/topic/com.ibm.wbit.help.wirin . . . >. |
Whaley, “A Portable Sampling-Based Profiler for Java Virtual Machines,” Proceedings of the ACM 2000 Conference on Java Grande (JAVA '00), Jun. 2000, 10 pages. |
US 8,589,928, 11/2013, Kuiper et al. (withdrawn). |
Office Action, dated Dec. 19, 2011, regarding U.S. Appl. No. 12/173,053, 15 pages. |
Final Office Action, dated May 2, 2012, regarding U.S. Appl. No. 12/173,053, 16 pages. |
Notice of Allowance, dated Jun. 13, 2013, regarding U.S. Appl. No. 12/173,053, 13 pages. |
Office Action, dated Nov. 21, 2011, regarding U.S. Appl. No. 12/173,047, 15 pages. |
Final Office Action, dated Jun. 14, 2012, regarding U.S. Appl. No. 12/173,047, 19 pages. |
Office Action, dated Apr. 25, 2013, regarding U.S. Appl. No. 12/173,047, 37 pages. |
Notice of Allowance, dated Sep. 11, 2013, regarding U.S. Appl. No. 12/173,047, 22 pages. |
Notice of Allowance, dated Nov. 20, 2013, regarding U.S. Appl. No. 12/173,047, 21 pages. |
Office Action, dated Jan. 5, 2012, regarding U.S. Appl. No. 12/173,107, 21 pages. |
Notice of Allowance, dated Jun. 6, 2012, regarding U.S. Appl. No. 12/173,107, 10 pages. |
Office Action, dated Jun. 27, 2012, regarding U.S. Appl. No. 12/494,469, 21 pages. |
Final Office Action, dated Nov. 6, 2012, regarding U.S Appl. No. 12/494,469, 10 pages. |
Notice of Allowance, dated Jan. 17, 2013, regarding U.S. Appl. No. 12/494,469, 7 pages. |
Notice of Allowance, dated Nov. 12, 2013, regarding U.S. Appl. No. 12/494,469, 14 pages. |
Office Action, dated Sep. 19, 2012, regarding U.S. Appl. No. 12/786,381, 26 pages. |
Final Office Action, dated Apr. 3, 2013, regarding U.S. Appl. No. 12/786,381, 55 pages. |
Notice of Allowance, dated Aug. 28, 2013, regarding U.S. Appl. No. 12/786,381, 17 pages. |
Notice of Allowance, dated Dec. 19, 2013, regarding U.S. Appl. No. 12/786,381, 10 pages. |
Office Action, dated Apr. 14, 2014, regarding U.S. Appl. No. 12/786,381, 16 pages. |
Final Office Action, dated Sep. 5, 2014, regarding U.S. Appl. No. 12/786,381, 33 pages. |
Office Action, dated Feb. 6, 2015, regarding U.S. Appl. No. 12/786,381, 13 pages. |
Notice of Allowance, dated Jun. 11, 2015, regarding U.S. Appl. No. 12/786,381, 13 pages. |
Office Action, dated Mar. 2, 2012, regarding U.S. Appl. No. 12/813,706, 21 pages. |
Notice of Allowance, dated Aug. 20, 2012, regarding U.S. Appl. No. 12/813,706, 5 pages. |
Notice of Allowance, dated May 24, 2013, regarding U.S. Appl. No. 12/813,706, 14 pages. |
Notice of Allowance, dated Sep. 13, 2013, regarding U.S. Appl. No. 12/813,706, 17 pages. |
Notice of Allowance, dated Jan. 17, 2014, regarding U.S. Appl. No. 12/813,706, 13 pages. |
Notice of Allowance, dated Apr. 24, 2014, regarding U.S. Appl. No. 12/813,706, 10 pages. |
Office Action, dated Oct. 18, 2012, regarding U.S. Appl. No. 12/824,217, 23 pages. |
Final Office Action, dated Mar. 20, 2013, regarding U.S. Appl. No. 12/824,217, 35 pages. |
Notice of Allowance, dated Sep. 27, 2013, regarding U.S. Appl. No. 12/824,217, 22 pages. |
Office Action, dated Oct. 9, 2012, regarding U.S. Appl. No. 13/011,621, 25 pages. |
Final Office Action, dated Mar. 18, 2013, regarding U.S. Appl. No. 13/011,621, 30 pages. |
Notice of Allowance, dated Jul. 9, 2013, regarding U.S. Appl. No. 13/011,621, 11 pages. |
Number | Date | Country | |
---|---|---|---|
20100017447 A1 | Jan 2010 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 12173107 | Jul 2008 | US |
Child | 12235302 | US |