Virtual machine monitoring using shared memory

Information

  • Patent Grant
    7552153
  • Patent Number
    7,552,153
  • Date Filed
    Tuesday, December 28, 2004
  • Date Issued
    Tuesday, June 23, 2009
Abstract
A system and method to monitor a virtual machine (“VM”). The VM executes one or more applications. During execution of the one or more applications, local objects are created and stored within an internal heap maintained by the VM. Status data of the internal heap is published to monitoring memory external to the VM.
Description
TECHNICAL FIELD

This disclosure relates generally to virtual machines, and in particular but not exclusively, relates to monitoring Java virtual machines.


BACKGROUND INFORMATION

Enterprise software has transformed the way diverse enterprises, large and small alike, transact and manage day-to-day operations. Businesses use enterprise software (e.g., web-based application servers) to control production planning, purchasing and logistics, warehouse management and inventory management, production, vendor management, customer service, finance, personnel management, and other basic business activities. As the enterprise software industry continues to mature, the various application and hardware resources enlisted to facilitate this diverse set of tasks are being amalgamated into robust, highly integrated solutions (e.g., SAP NetWeaver, SAP xAPPs, mySAP Business Suite, etc.).


To integrate diverse hardware and software resources, developers of enterprise software have leveraged cross platform engines capable of minimizing or even severing platform dependencies from the enterprise solution. The Java 2 Platform, Enterprise Edition™ (“J2EE”) (e.g., J2EE Specification, Version 1.4) is a Java based solution supported by the Java Virtual Machine (“JVM”) engine. J2EE simplifies application development and decreases the need for programming and programmer training by creating standardized and reusable modular components. The popularity of Java based solutions is evident as the Information Technology (“IT”) world has gravitated to the Java language.


As enterprise software is woven into the fabric of modern business, failure of an enterprise solution may no longer be a mere nuisance, but has the potential to wreak catastrophic havoc on a business. As such, robust, reliable software is ever more critical. The enterprise software industry is marching toward the ultimate goal of self-healing software capable of sustainable, uninterrupted operation, without human intervention. In pursuit of this goal, IT technicians can benefit from convenient tools capable of monitoring the health of their enterprise software. With appropriate monitoring tools, IT technicians can take appropriate action in a timely manner to ensure a healthful state of their software or to spot delinquent applications and prevent repeat offenders. Currently, JVMs do not provide adequate tools to monitor their internal operation on a real-time basis.


SUMMARY OF INVENTION

A system and method to monitor a virtual machine (“VM”) is described. The VM executes one or more applications. During execution of the one or more applications, local objects are created and stored within an internal heap maintained by the VM. Status data of the internal heap is published to monitoring memory external to the VM. In one embodiment, the VM is a Java VM (“JVM”).


When memory of the internal heap becomes scarce, one or more of the local objects may be garbage collected. Garbage collection data may be copied into the monitoring memory as one type of the status data.


In one embodiment, multiple JVMs may each execute one or more Java applications. Shared objects created during execution of these Java applications may be stored into a shared heap that is maintained external to the multiple JVMs. Shared status data regarding the shared heap may also be copied into the monitoring memory.


In an embodiment with multiple JVMs, shared classes may be loaded during execution of the Java applications and stored within the shared heap. These shared classes may be used to instantiate the shared objects.


In one embodiment, the status data stored in the monitoring memory may be retrieved in response to receiving a status query, and the status data transmitted to a monitoring console to display the status data.


Embodiments of the invention may include all or some of the above described features. The above features can be implemented using a computer program, a method, a system or apparatus, or any combination of computer programs, methods, or systems. These and other details of one or more embodiments of the invention are set forth in the accompanying drawings and in the description below.





BRIEF DESCRIPTION OF THE DRAWINGS

Non-limiting and non-exhaustive embodiments of the invention are described with reference to the following figures, wherein like reference numerals refer to like parts throughout the various views unless otherwise specified.



FIG. 1 is a block diagram illustrating a software system for monitoring the health of Java worker nodes from a monitoring console, in accordance with an embodiment of the invention.



FIG. 2 is a block diagram illustrating a software environment of an application server instance implemented with shared monitoring memory for monitoring Java virtual machines, in accordance with an embodiment of the invention.



FIG. 3 is a flow chart illustrating a process for generating status data for one or more Java virtual machines and storing the status data within shared monitoring memory, in accordance with an embodiment of the invention.



FIG. 4 is a block diagram illustrating a monitoring console for displaying status data of Java virtual machines communicated from an application server instance, in accordance with an embodiment of the invention.



FIG. 5 is a flow chart illustrating a process for communicating status data stored within shared monitoring memory to a monitoring console, in accordance with an embodiment of the invention.



FIG. 6 is a block diagram illustrating a demonstrative enterprise environment for implementing embodiments of the invention.



FIG. 7 illustrates a demonstrative processing system for implementing embodiments of the invention.





DETAILED DESCRIPTION

Embodiments of a system and method for monitoring Java virtual machines (“JVMs”) using shared monitoring memory are described herein. In the following description, numerous specific details are set forth to provide a thorough understanding of the embodiments. One skilled in the relevant art will recognize, however, that the techniques described herein can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring certain aspects.


Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.



FIG. 1 is a block diagram illustrating a software system 100 for monitoring the health of Java worker nodes 105 from a monitoring console 110, in accordance with an embodiment of the invention. In the illustrated embodiment, software system 100 includes an application server (“AS”) instance 115 and monitoring console 110. AS instance 115 includes one or more Java worker nodes 105 each providing the runtime environment for a Java virtual machine (“JVM”) 120, which in turn interprets/executes Java applications 125. Although the embodiments illustrated herein are described in connection with the Java programming language and JVMs, it should be appreciated that these techniques may be extended to other types of virtual machines and interpreted languages by those of ordinary skill in the art having the benefit of the instant disclosure.


Collectively, Java applications 125 may provide the logic for implementing various sub-layers (e.g., business layer, integration layer, presentation layer, etc.) of AS instance 115. In one embodiment, AS instance 115 is a web application server, such as Web AS by SAP, .NET by Microsoft, or the like. It should be appreciated that various components of AS instance 115 have been excluded from FIG. 1 for the sake of clarity and so as not to obscure the invention.


In one embodiment, Java applications 125 include compiled bytecode to be verified and interpreted by JVMs 120. For example, Java applications 125 may be servlets providing server-side logic to generate graphical user interfaces (“GUIs”) on remote clients and may further include JavaServer Pages (“JSP”) extensions for providing dynamic content within the GUI. Java applications 125 may include business applications providing the business logic of an Enterprise JavaBean (“EJB”), applets providing client-side logic, and the like.
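
By way of illustration only, a servlet of the kind mentioned above can be as simple as the following sketch, which assumes the standard javax.servlet API and is not drawn from the disclosed embodiments:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Minimal server-side component of the kind deployed as a Java application
    // on a worker node: it receives an HTTP request and generates dynamic content.
    public class HelloServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            response.setContentType("text/html");
            response.getWriter().println("<html><body>Hello from the worker node</body></html>");
        }
    }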


During execution of Java applications 125, Java worker nodes 105 publish status data 130 to shared monitoring memory 135. Status data 130 includes operational health data of the internal workings of JVMs 120. This operational health data may include statistical data detailing heap utilization, garbage collecting activity, and the like. Once status data 130 is published to shared monitoring memory 135, monitoring console 110 can query shared monitoring memory 135 to display status data 130 for review by an Information Technology (“IT”) technician. Monitoring console 110 may be located locally on the same hardware machine executing AS instance 115, or advantageously, executed on a remote machine coupled to a network. Monitoring console 110 may further monitor an entire cluster of AS instances 115, all from a single remote machine.



FIG. 2 is a block diagram illustrating a software environment 200 of AS instance 115 implemented with shared monitoring memory 135 for monitoring the internal workings of JVMs 120, in accordance with an embodiment of the invention. The illustrated embodiment of software environment 200 includes native runtime processes 205, shared heap 210, shared monitoring memory 135, web service interface 215, and JVM control unit 220. Each native runtime process 205 provides the runtime environment for a single JVM 120. Together, a native runtime process 205 and a JVM 120 form a Java worker node 105. In a J2EE environment, JVM control unit 220 is often referred to as “JControl.”


The components of software environment 200 interact as follows. In one embodiment, web service interface 215 is loaded first. Web service interface 215 provides interface capabilities for the components of software environment 200 to communicate across an attached network. In one embodiment, web service interface 215 can be launched remotely from a command console. In a Java 2 Platform, Enterprise Edition (“J2EE”) environment, web service interface 215 is known as the WebService Based Start Service.


Once loaded and operating, web service interface 215 launches JVM control unit 220. In turn, JVM control unit 220 reserves and allocates memory to establish shared monitoring memory 135. Subsequently, JVM control unit 220 launches each native runtime process 205 to provide the runtime environments for JVMs 120. JVM control unit 220 is responsible for the life cycles of each native runtime process 205. JVM control unit 220 can launch a new native runtime process 205, terminate an existing native runtime process 205 at an end of its useful life cycle, or restart a hung, or otherwise problematic, native runtime process 205.
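
The launch-and-restart responsibility described above can be pictured with the following simplified Java sketch; it is only an analogy (JVM control unit 220 is itself native code), and the worker command line and one-second polling interval are assumptions made for the example:

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    // Simplified analogy of a VM control unit: it launches one OS process per
    // worker node and restarts any process that terminates unexpectedly.
    public class VmControlUnit {
        // Hypothetical launch command for a single worker node process.
        private static final List<String> WORKER_CMD =
                List.of("java", "-cp", "server.jar", "com.example.WorkerNodeMain");

        public static void main(String[] args) throws IOException, InterruptedException {
            int workerCount = 3;
            List<Process> workers = new ArrayList<>();
            for (int i = 0; i < workerCount; i++) {
                workers.add(new ProcessBuilder(WORKER_CMD).inheritIO().start());
            }
            // Supervise the worker processes: if one dies, restart it.
            while (true) {
                for (int i = 0; i < workers.size(); i++) {
                    if (!workers.get(i).isAlive()) {
                        System.out.println("Worker " + i + " terminated; restarting");
                        workers.set(i, new ProcessBuilder(WORKER_CMD).inheritIO().start());
                    }
                }
                Thread.sleep(1000); // poll the process states once per second
            }
        }
    }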


In one embodiment, web service interface 215, JVM control unit 220, and native runtime processes 205 are operating system (“OS”) runtime processes managed by the OS runtime environment. In one embodiment, web service interface 215, JVM control unit 220, and native runtime processes 205 are native machine code, such as compiled C++.


Upon commencement of a new native runtime process 205, the new native runtime process 205 will establish a new JVM 120 therein. During operation, each Java worker node 105 is assigned user requests by a dispatcher (not illustrated), services user sessions, and executes/interprets Java applications 125 on JVMs 120. Each JVM 120 establishes an internal heap 225 as a pre-reserved memory pool for future use by Java applications 125 as they are loaded. Internal heaps 225 are managed by each JVM 120 allocating and deallocating memory as is required by Java applications 125.
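
As a point of reference, a Java application can observe the state of its own internal heap through the standard java.lang.Runtime API; the following sketch shows such a snapshot and is an analogue of, not a substitute for, the heap accounting described herein:

    // Prints the current state of the JVM's internal heap using the standard
    // java.lang.Runtime API: maximum size, currently reserved size, and free space.
    public class HeapSnapshot {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long max = rt.maxMemory();      // upper bound the heap may grow to
            long total = rt.totalMemory();  // memory currently reserved for the heap
            long free = rt.freeMemory();    // unused portion of the reserved memory
            System.out.printf("heap: max=%d total=%d used=%d free=%d%n",
                    max, total, total - free, free);
        }
    }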


Java applications 125 include objects and classes. Objects and classes that are local or private only to a particular Java worker node 105 and not shared with other Java worker nodes 105 within AS instance 115 are called local classes 230 and local objects 235. When Java applications 125 are loaded and executed by JVM 120, local classes 230 and local objects 235 are stored within internal heaps 225 for use by Java applications 125 running on a single JVM 120. Classes include methods that perform tasks and return information when they complete those tasks. Objects are essentially reusable software components that model pieces of software programs in terms of properties and behaviors. Classes are used to instantiate an object with these properties and behaviors. In other words, local objects 235 inherit their properties and behaviors from the particular local class 230 used to instantiate (e.g., create) the particular local object 235.
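
The class/object relationship described above corresponds to ordinary Java instantiation, as in the following generic fragment (the class name and members are invented purely for illustration):

    // A local class: it defines the properties and behaviors (fields and methods)
    // that every object instantiated from it will carry.
    public class UserSession {
        private final String userId;   // property
        private int requestCount;      // property

        public UserSession(String userId) {
            this.userId = userId;
        }

        // Behavior: a method that performs a task and returns information.
        public int recordRequest() {
            return ++requestCount;
        }

        public static void main(String[] args) {
            // Instantiating local objects from the class; both objects live in the
            // internal heap of the JVM that runs this code.
            UserSession a = new UserSession("alice");
            UserSession b = new UserSession("bob");
            System.out.println(a.recordRequest() + " " + b.recordRequest());
        }
    }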


As Java applications 125 consume internal heaps 225 by filling them with local classes 230 and local objects 235, memory within internal heaps 225 available to accept new local classes or new local objects may become scarce. As such, each JVM 120 includes a garbage collector 240 to implement a disciplined procedure for returning consumed resources back to the particular internal heap 225. In one embodiment, garbage collector 240 is a thread automatically executed by JVM 120 to reclaim dynamically allocated memory without explicit instructions to do so by the programmer of Java applications 125. When there are no more references to a local object 235 within internal heap 225, the particular local object 235 is marked for garbage collection. The memory consumed by the marked local object 235 is then reclaimed when garbage collector 240 executes. Performing regular garbage collection when available memory within internal heap 225 becomes scarce helps avoid resource leaks.
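
The reclamation of an unreferenced local object can be observed with standard Java facilities; the sketch below uses a WeakReference together with System.gc(), which merely requests a collection, solely to illustrate the behavior described above:

    import java.lang.ref.WeakReference;

    // Demonstrates that an object with no remaining strong references becomes
    // eligible for garbage collection and may be reclaimed by the collector.
    public class GarbageCollectionDemo {
        public static void main(String[] args) throws InterruptedException {
            Object local = new Object();
            WeakReference<Object> probe = new WeakReference<>(local);

            local = null;       // drop the last strong reference; the object is now collectable
            System.gc();        // request (not force) a collection cycle
            Thread.sleep(100);  // give the collector a moment to run

            // If the collector ran, the weak reference has been cleared.
            System.out.println("object reclaimed: " + (probe.get() == null));
        }
    }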


However, when available memory within internal heap 225 becomes scarce, performance of the particular Java worker node 105 suffers due to the garbage collecting activities. In practice, if internal heap 225 exceeds 80% capacity, garbage collecting activities of garbage collector 240 may result in the productive computing output of the particular Java worker node 105 grinding to a near halt. Although garbage collector 240 is relatively good at deleting unreferenced local objects 235 to reclaim consumed memory, not all idle local objects 235 are reclaimed for a variety of reasons. As such, the older a particular Java worker node 105 happens to be, the more likely that Java worker node 105 is to suffer from chronic garbage collecting activity.


Not all objects utilized by Java applications 125 are local to the particular Java worker node 105. In one embodiment, shared heap 210 stores shared classes 250 and shared objects 255 utilized by Java applications 125 executing on multiple Java worker nodes 105. Shared heap 210 is a memory pool external to JVMs 120 and accessible by Java applications 125 executing on multiple JVMs 120. In this embodiment, the first Java application 125 to instantiate and use a shared object 255 places the new shared object 255 into shared heap 210. Subsequently, other Java worker nodes 105 and Java applications 125 can use the shared object 255 without expending time and computing resources to create the particular shared object 255. Sharing classes and objects within shared heap 210 not only saves the computing time that would otherwise be consumed to create the same object multiple times, but also conserves available memory within internal heaps 225, since a single instance of a shared object 255 can replace multiple instances of the very same local object within multiple internal heaps 225.
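
A shared heap of this kind is not part of a standard JVM; the following sketch therefore models the create-once/reuse-many pattern with an invented SharedHeap class backed by an in-process map, which stands in for the cross-process shared memory of the disclosed embodiments:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.function.Supplier;

    // Hypothetical model of a shared heap: the first caller to request a shared
    // object creates and publishes it; later callers reuse the same instance
    // instead of instantiating their own local copies.
    public class SharedHeap {
        private final ConcurrentMap<String, Object> objects = new ConcurrentHashMap<>();

        @SuppressWarnings("unchecked")
        public <T> T getOrCreate(String key, Supplier<T> factory) {
            // computeIfAbsent runs the factory at most once per key.
            return (T) objects.computeIfAbsent(key, k -> factory.get());
        }

        public static void main(String[] args) {
            SharedHeap shared = new SharedHeap();
            // "Worker node 1" creates and publishes the shared object.
            String first = shared.getOrCreate("config", () -> "expensive-to-build configuration");
            // "Worker node 2" reuses it without paying the creation cost again.
            String second = shared.getOrCreate("config", () -> "never built");
            System.out.println(first == second); // true: a single shared instance
        }
    }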


The processes explained below are described in terms of computer software and hardware. The techniques described may constitute machine-executable instructions embodied within a machine (e.g., computer) readable medium, that when executed by a machine will cause the machine to perform the operations described. Additionally, the processes may be embodied within hardware, such as an application specific integrated circuit (“ASIC”) or the like. The order in which some or all of the process blocks appear in each process should not be deemed limiting. Rather, one of ordinary skill in the art having the benefit of the present disclosure will understand that some of the process blocks may be executed in a variety of orders not illustrated.



FIG. 3 is a flow chart illustrating a process 300 for generating status data 130 for one or more JVMs 120 and storing status data 130 within shared monitoring memory 135, in accordance with an embodiment of the invention. In a process block 305, Java applications 125 cause local classes 230 and shared classes 250 to be loaded into internal heaps 225 and shared heap 210, respectively. In a process block 310, local objects 235 are instantiated with local classes 230 and stored into internal heaps 225, while shared objects 255 are instantiated with shared classes 250 and stored into shared heap 210.


During operation of Java applications 125, shared monitoring memory 135 is updated with status data 130. Status data 130 may include a variety of information to monitor the internal workings of each JVM 120 in real time (e.g., heap utilization statistics, garbage collecting statistics, etc.).


In a process block 315, a virtual machine (“VM”) monitor 260 updates shared monitoring memory 135 with heap utilization data. In the illustrated embodiment, VM monitor 260 is a sub-component of native runtime process 205. As such, in the illustrated embodiment, VM monitor 260 is external to JVM 120 and not executed/interpreted thereon. To gain access to the internal structures of JVM 120, a native application programming interface (“API”) 265 is provided by native runtime process 205. In one embodiment, native API 265 includes functions that can retrieve data from within JVM 120, such as from internal heap 225. VM monitor 260 calls the functions of native API 265, which return the requested data. In one embodiment, the functions may return utilization statistics of internal heap 225 (e.g., “hit rate” of various local objects, number of local objects, amount of internal heap consumed, available memory within internal heap, etc.). In response, VM monitor 260 copies/publishes the heap utilization data into shared monitoring memory 135. In one embodiment, VM monitor 260 may format/organize the heap utilization data prior to publishing it into shared monitoring memory 135. In one embodiment, VM monitor 260 updates shared monitoring memory 135 on a periodic basis (e.g., every 5 seconds or so).
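
The periodic publication step can be approximated with the standard java.lang.management API; the sketch below samples the heap every five seconds (matching the interval mentioned above) and hands the figures to a publish method that stands in for shared monitoring memory 135 and native API 265, neither of which is a standard Java interface:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Analogy of a VM monitor: it periodically samples heap utilization and
    // publishes the figures to an external store (here just standard output).
    public class VmMonitor {
        public static void main(String[] args) {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

            scheduler.scheduleAtFixedRate(() -> {
                MemoryUsage heap = memory.getHeapMemoryUsage();
                // In the described system these values would be copied into
                // shared monitoring memory; printing stands in for that step.
                publish(heap.getUsed(), heap.getCommitted(), heap.getMax());
            }, 0, 5, TimeUnit.SECONDS);
        }

        private static void publish(long used, long committed, long max) {
            System.out.printf("heap status: used=%d committed=%d max=%d%n", used, committed, max);
        }
    }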


As discussed above, during execution of Java applications 125, available memory within internal heap 225 may become scarce. If internal heap 225 approaches capacity (decision block 320), then process 300 continues to a process block 325. In process block 325, garbage collector 240 performs automatic garbage collection to delete unreferenced local objects 235 and reclaim the consumed memory within internal heap 225.


In a process block 330, shared monitoring memory 135 is updated with garbage collecting data in response to the garbage collection event in process block 325. In one embodiment, a callback function of native API 265 updates shared monitoring memory 135 with the garbage collection status data. Whenever a garbage collecting event occurs, JVM 120 invokes the callback function, which in turn makes a note of the garbage collecting event within shared monitoring memory 135. In yet another embodiment, the callback function transfers status data of the garbage collecting event to VM monitor 260, which in turn registers the garbage collecting event into shared monitoring memory 135. A history of garbage collecting events and related monitoring data may be saved concurrently within shared monitoring memory 135.
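
Garbage collecting activity can likewise be observed through the standard GarbageCollectorMXBean interface; the following sketch records each new collection by watching the cumulative counters and is offered as an analogue of the callback-driven registration described above, not as the disclosed native API:

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import java.util.HashMap;
    import java.util.Map;

    // Records garbage-collection events by watching the cumulative collection
    // counters exposed by each collector; every increase is logged as an event.
    public class GcEventRecorder {
        public static void main(String[] args) throws InterruptedException {
            Map<String, Long> lastCount = new HashMap<>();
            while (true) {
                for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                    long count = gc.getCollectionCount();
                    long previous = lastCount.getOrDefault(gc.getName(), 0L);
                    if (count > previous) {
                        // In the described system this entry would be written to
                        // shared monitoring memory as part of the GC history.
                        System.out.printf("%s: %d collections, %d ms total%n",
                                gc.getName(), count, gc.getCollectionTime());
                        lastCount.put(gc.getName(), count);
                    }
                }
                Thread.sleep(1000); // sample once per second
            }
        }
    }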


As discussed above, shared monitoring memory 135 is external to Java worker nodes 105. As such, shared monitoring memory 135 is insulated from Java worker nodes 105 in the event one or more of them crashes, hangs (e.g., enters an infinite loop), or otherwise fails. Accordingly, even if a JVM 120 crashes, the latest status data 130 just prior to the faulty JVM 120 going down is still available within shared monitoring memory 135. Vital information may be quickly obtained from shared monitoring memory 135 to determine the source of the error. In fact, this postmortem status data may already be displayed on monitoring console 110 for inspection by an IT technician without any additional effort to obtain it.



FIG. 4 is a block diagram illustrating monitoring console 110 for displaying status data 130 retrieved from JVMs 120, in accordance with an embodiment of the invention. As illustrated, web service interface 215 is communicatively coupled to retrieve status data 130 from shared monitoring memory 135 and transmits this data across a network 305 (or other communication medium) to monitoring console 110. It should be appreciated that monitoring console 110 may execute on the same physical hardware as AS instance 115, in which case status data 130 would not need to be transmitted across network 305.



FIG. 5 is a flow chart illustrating a process 500 for communicating status data 130 from shared monitoring memory 135 to monitoring console 110, in accordance with an embodiment of the invention. In a process block 505, monitoring console 110 transmits a status query to AS instance 115. The status query may be transmitted to AS instance 115 automatically on a periodic basis, in response to a specified event, or in response to a screen refresh request by an IT technician.


In the illustrated embodiment, monitoring console 110 can display both garbage collection statistics and heap utilization statistics. The garbage collecting activities of multiple JVMs 120 can be displayed at once. The garbage collecting monitoring data displayed may include a recent history for each JVM 120, while outputting long term records to log files. The garbage collecting monitoring data may include: amount of currently available memory within each internal heap 225, percentage of currently available memory, amount of consumed memory within internal heap 225, percentage of consumed memory within internal heap 225, absolute amount of available memory, start and stop time/date of each garbage collecting event, duration of each garbage collecting event, and the like.


The heap utilization monitoring data displayed may include: shared class utilization, shared object utilization, local class utilization, and local object utilization for each JVM 120. In one embodiment, the heap utilization data may simply be a snapshot of the most recent utilization statistics or include a recent history of the utilization statistics. The utilization data displayed may include hit rates, number of objects/classes currently residing in each heap, last time each object/class was utilized, and the like. The utilization statistics may also be output to a log file.


Furthermore, monitoring console 110 may display and monitor multiple AS instances 115 coupled to network 305. In this embodiment, monitoring console 110 may include multiple panels, tabs, or windows associated with each AS instance 115 and output long-term records to separate log files for each AS instance 115.


In a process block 510, web service interface 215 receives the status query and parses shared monitoring memory 135 to retrieve the requested status data. In response, the retrieved status data is transmitted to monitoring console 110 (process block 515) and displayed by monitoring console 110 to a screen for inspection by a user, such as an IT technician or the like (process block 520).
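
The query/response exchange of process blocks 505 through 515 can be sketched with the JDK's built-in com.sun.net.httpserver package; the readStatusData method below is a placeholder for parsing shared monitoring memory 135, and the port and JSON shape are assumptions of the example:

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    // Sketch of a web service interface answering status queries: each request
    // is answered with the latest status data of the monitored worker nodes.
    public class StatusQueryService {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/status", exchange -> {
                // Placeholder for parsing the shared monitoring memory.
                byte[] body = readStatusData().getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
            server.start();
        }

        private static String readStatusData() {
            return "{\"heapUsedPercent\": 42, \"gcEvents\": 7}";
        }
    }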


In one embodiment, web service interface 215 and monitoring console 110 may negotiate a reporting contract dictating that web service interface 215 is to periodically update monitoring console 110 with status data 130 without need of first transmitting the status query (process blocks 505 and 510). In this case, web service interface 215 pushes status data 130 to monitoring console 110, as opposed to monitoring console 110 pulling status data 130 from web service interface 215.



FIG. 6 is a block diagram illustrating a demonstrative enterprise environment 600 for implementing embodiments of the invention. The illustrated embodiment of enterprise environment 600 includes a cluster 605 coupled to service requests from client nodes 610. Cluster 605 may include one or more server nodes 615 each supporting one or more AS instances 115, a message server node 620 supporting a message server 622, a database node 625 supporting a database 627, and a web dispatcher 630.


Web dispatcher 630 implements a load-balancing mechanism distributing service requests from client nodes 610 among server nodes 615 within cluster 605. For example, web dispatcher 630 may implement a round-robin load-balancing mechanism or the like. Web dispatcher 630 may be one of server nodes 615 having the task of dispatching service requests among server nodes 615 of cluster 605 or a stand-alone hardware node. The service requests are processed by server nodes 615 and subsequently provided to database node 625. Database node 625 offers up the requested data to server nodes 615, which in turn process and format the results for display on client nodes 610.
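
A round-robin distribution of the kind mentioned above can be expressed in a few lines of Java; the server node names are invented for the example:

    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    // Minimal round-robin load balancer: successive requests are assigned to
    // successive server nodes, wrapping around at the end of the list.
    public class RoundRobinDispatcher {
        private final List<String> serverNodes;
        private final AtomicInteger next = new AtomicInteger();

        public RoundRobinDispatcher(List<String> serverNodes) {
            this.serverNodes = serverNodes;
        }

        public String dispatch() {
            int index = Math.floorMod(next.getAndIncrement(), serverNodes.size());
            return serverNodes.get(index);
        }

        public static void main(String[] args) {
            RoundRobinDispatcher dispatcher =
                    new RoundRobinDispatcher(List.of("node-1", "node-2", "node-3"));
            for (int i = 0; i < 6; i++) {
                System.out.println("request " + i + " -> " + dispatcher.dispatch());
            }
        }
    }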


Each AS instance 115 may further include its own dispatcher mechanism to distribute the requests assigned to it among its individual Java worker nodes 105. In one embodiment, Java worker nodes 105 are based on J2EE. AS instances 115 may further include other types of worker nodes, including those based on the Microsoft .NET standard, the Advanced Business Application Programming (“ABAP”) standard developed by SAP AG, and the like.


One of client nodes 610 may execute monitoring console 110 to provide remote monitoring of AS instances 115, and in particular, remote monitoring of each worker node (including Java worker nodes 105, .NET worker nodes, and ABAP worker nodes). If an IT technician notices that one of the worker nodes has a low heap utilization, overactive garbage collection activity, or the like, the IT technician can take appropriate action including resetting the problematic worker node. Alternatively, scripts may run in concert with monitoring console 110 on a client node 610 to automatically address a problematic worker node based on a predefined response policy.



FIG. 7 is a block diagram illustrating a demonstrative processing system 700 for executing any of AS instance 115, software environment 200, monitoring console 110, or implementing any of client nodes 610, server nodes 615, message server node 620, or database node 625. The illustrated embodiment of processing system 700 includes one or more processors (or central processing units) 705, system memory 710, nonvolatile (“NV”) memory 715, a data storage unit (“DSU”) 720, a communication link 725, and a chipset 730. The illustrated processing system 700 may represent any computing system including a desktop computer, a notebook computer, a workstation, a handheld computer, a server, a blade server, or the like.


The elements of processing system 700 are interconnected as follows. Processor(s) 705 is communicatively coupled to system memory 710, NV memory 715, DSU 720, and communication link 725, via chipset 730 to send and to receive instructions or data thereto/therefrom. In one embodiment, NV memory 715 is a flash memory device. In other embodiments, NV memory 715 includes any one of read only memory (“ROM”), programmable ROM, erasable programmable ROM, electrically erasable programmable ROM, or the like. In one embodiment, system memory 710 includes random access memory (“RAM”), such as dynamic RAM (“DRAM”), synchronous DRAM (“SDRAM”), double data rate SDRAM (“DDR SDRAM”), static RAM (“SRAM”), and the like. DSU 720 represents any storage device for software data, applications, and/or operating systems, but will most typically be a nonvolatile storage device. DSU 720 may optionally include one or more of an integrated drive electronics (“IDE”) hard disk, an enhanced IDE (“EIDE”) hard disk, a redundant array of independent disks (“RAID”), a small computer system interface (“SCSI”) hard disk, and the like. Although DSU 720 is illustrated as internal to processing system 700, DSU 720 may be externally coupled to processing system 700. Communication link 725 may couple processing system 700 to a network (e.g., network 305) such that processing system 700 may communicate over the network with one or more other computers. Communication link 725 may include a modem, an Ethernet card, a Gigabit Ethernet card, a Universal Serial Bus (“USB”) port, a wireless network interface card, a fiber optic interface, or the like.


It should be appreciated that various other elements of processing system 700 have been excluded from FIG. 7 and this discussion for the purposes of clarity. For example, processing system 700 may further include a graphics card, additional DSUs, other persistent data storage devices (e.g., tape drive), and the like. Chipset 730 may also include a system bus and various other data buses for interconnecting subcomponents, such as a memory controller hub and an input/output (“I/O”) controller hub, as well as, include data buses (e.g., peripheral component interconnect bus) for connecting peripheral devices to chipset 730. Correspondingly, processing system 700 may operate without one or more of the elements illustrated. For example, processing system 700 need not include DSU 720.


The above description of illustrated embodiments of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize.


These modifications can be made to the invention in light of the above detailed description. The terms used in the following claims should not be construed to limit the invention to the specific embodiments disclosed in the specification and the claims. Rather, the scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.

Claims
  • 1. A computer-implemented method, comprising: implementing at a server node in a network of nodes an application server instance having a virtual machine control unit; launching with the virtual machine control unit at least two worker nodes within the application server instance, each of the at least two worker nodes providing a respective run-time environment; in each of the respective run-time environments of the at least two worker nodes, executing a respective virtual machine (“VM”) and a respective virtual machine monitor for the respective VM, and running by the respective VM a respective application, including storing a local object within a respective heap internal to and maintained by the VM; one of the virtual machines (“VMs”) instantiating a data object in a shared heap of the application server instance external to the at least two worker nodes and accessible by the VMs of the at least two worker nodes; and publishing status data to a monitoring memory of the application server instance external to and shared by the at least two worker nodes and the shared heap, wherein the monitoring memory is accessible to a monitoring console configured to display the published status data, the published status data including data specifying a utilization of the shared heap, and for each of the plurality of worker nodes, data specifying a utilization of the internal heap of the respective VM, and data specifying a history of garbage collecting events of the internal heap of the respective VM.
  • 2. The method of claim 1, wherein the respective VMs of the at least two worker nodes comprise a Java VM (“JVM”) and wherein the application run by the JVM includes a Java application.
  • 3. The method of claim 1, wherein publishing the status data comprises: accessing an internal heap through a native application programming interface (“API”); retrieving the status data of the accessed internal heap by a VM monitor executed within the respective worker node of the accessed internal heap; and copying the status data retrieved by the VM monitor to the monitoring memory.
  • 4. The method of claim 3, wherein one of the VM monitors of the at least two worker nodes comprises native machine code.
  • 5. The method of claim 2, further comprising: garbage collecting at least one of the local objects maintained within the internal heap of the JVM when available memory of the internal heap is scarce; and registering the garbage collecting with the monitoring memory.
  • 6. The method of claim 1, wherein the shared status data includes at least one of shared object utilization data, shared class utilization data, and shared heap garbage collecting activity.
  • 7. The method of claim 1, further comprising: retrieving the status data from the monitoring memory; and transmitting the status data to the monitoring console to display the status data.
  • 8. The method of claim 1, wherein the monitoring memory is isolated from the respective VMs of the at least two worker nodes to provide access to the status data published to the monitoring memory in the event one of the VMs fails.
  • 9. A computer-readable storage medium having stored thereon instructions that, if executed by a machine, will cause the machine to perform operations comprising: implementing at a server node in a network of nodes an application server instance having a virtual machine control unit; launching with the virtual machine control unit at least two worker nodes within the application server instance, each of the at least two worker nodes providing a respective run-time environment; in each of the respective run-time environments of the at least two worker nodes, executing a respective java virtual machine (“JVM”) and a respective virtual machine monitor for the respective JVM, and running by the respective JVM a respective application, including storing local objects within a corresponding internal heap internal to and maintained by the corresponding JVM; one of the java virtual machines (“JVMs”) instantiating a data object in a shared heap of the application server instance external to the at least two worker nodes and accessible by the JVMs of the at least two worker nodes; and publishing status data to a monitoring memory of the application server instance external to and shared by the at least two worker nodes and the shared heap, wherein the monitoring memory is accessible to a monitoring console configured to display the published status data, the published status data including data specifying a utilization of the shared heap, and for each of the plurality of worker nodes, data specifying a utilization of the internal heap of the respective JVM, and data specifying a history of garbage collecting events of the internal heap of the respective JVM.
  • 10. The computer-readable storage medium of claim 9, wherein publishing the status data comprises: accessing an internal heap through a native application programming interface (“API”); retrieving status data of the accessed internal heap by a VM monitor executed within the respective worker node of the accessed internal heap; and copying the status data retrieved by the VM monitor to the shared monitoring memory.
  • 11. The computer-readable storage medium of claim 9, further providing instructions that, if executed by the machine, will cause the machine to perform further operations, comprising: garbage collecting at least one of the local objects maintained within the internal heap of one of the plurality of JVMs when available memory of the internal heap is scarce; and in response to the garbage collecting, publishing garbage collecting monitoring data to the shared monitoring memory.
  • 12. The computer-readable storage medium of claim 9, wherein the status data comprises at least one of shared object utilization data, shared class utilization data, and shared heap garbage collecting data.
  • 13. A system, comprising: a server node to execute an application server (“AS”) instance, the AS instance including logic executable by a processor of the server node to: launch with a virtual machine control unit at least two worker nodes within the AS instance, each of the at least two worker nodes providing a run-time environment; in each of the respective run-time environments of the at least two worker nodes, execute a respective java virtual machine (“JVM”) and a respective virtual machine monitor for the respective JVM, and run by the respective JVM a respective application, including storing local objects within a corresponding internal heap internal to and maintained by the corresponding JVM; instantiate with one of the java virtual machines (“JVMs”) a data object in a shared heap of the AS instance external to the at least two worker nodes and accessible by the JVMs of the at least two worker nodes; and publish status data to a monitoring memory of the application server instance external to and shared by the at least two worker nodes and the shared heap, wherein the monitoring memory is accessible to a monitoring console configured to display the published status data, the published status data including data specifying a utilization of the shared heap, and for each of the plurality of worker nodes, data specifying a utilization of the internal heap of the respective JVM, and data specifying a history of garbage collecting events of the internal heap of the respective JVM.
  • 14. The system of claim 13, wherein the logic is further to: garbage collect at least one of the local objects maintained within the internal heap of one of the plurality of JVMs when available memory of the internal heap is scarce; and wherein the publishing the status data to the shared monitoring memory is in response to the garbage collecting event.
  • 15. The system of claim 14, further comprising: a client node to execute the monitoring console, the client node communicatively coupled to the server node, the monitoring console including logic executable by a processor of the client node to: receive the status data from the server node; and display the status data to a screen of the client node.
  • 16. The system of claim 15, wherein the monitoring console further includes logic executable by the processor of the client node to monitor a cluster of AS instances.
Related Publications (1)
Number Date Country
20060143595 A1 Jun 2006 US