The present invention relates to improving performance of computer systems, and in particular, to dynamically monitoring and managing resource usages of processes in computer systems.
In a multi-node system, nodes may appear as a single system to application servers and user applications. Each node may handle its share of the workload during normal operation, when all the nodes in the multi-node system that are supposed to be up are in fact up. When one of the nodes fails (or is out of service for whatever reason), a particular node may be required to take over some, or all, of the failed node's share of the workload.
Unfortunately, the takeover (or failover) node may have used its capacity for its own share of the workload to such an extent that the node can hardly take over the failed node's share of the workload. For example, the takeover node may already use 60% of CPU time for processing its own share of the workload. Servicing the failed node's share of the workload may require more than 40% of additional CPU time. Thus, when the failed node's share of the workload overflows to the takeover node, the takeover node does not have sufficient CPU time to process both its own share and the failed node's share of the workload. This may cause the takeover node to fail.
This situation may be worsened, because the application servers and user applications that initiate the workload may not be aware of the fact that one or more nodes of the multi-node system are out of service. In fact, it may appear to the application servers and user applications that the multi-node system is handling an ever smaller number of transactions than before. The application servers and user applications may increase the number of requests sent to the multi-node system. As a result, more nodes in the multi-node system may fail.
As clearly shown, techniques are needed for dynamically monitoring and managing resource usages of processes in computer systems.
The approaches described in this section are approaches that could be pursued, but not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section.
The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:
Techniques for dynamically monitoring and managing resource usages of processes in a computer system are described. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present invention.
Overview
Techniques are provided for dynamically monitoring and managing resource usages of processes on a node of a multi-node system. In an embodiment, a resource control mechanism monitors resource usages on the node, using a variety of process information generated on the node. Based on a plurality of corresponding thresholds for the resource usages, the resource control mechanism determines whether one or more resource usages are high (for example, exceeding the corresponding thresholds for the one or more resource usages). If that is the case, the resource control mechanism implements a number of resource usage reduction policies to promptly reduce the resource usages that are high. These resource usage reduction policies may include, but are not limited to, rejecting or throttling requests for new database connections to be established on the node in the multi-node system, and prioritizing processes based on whether execution of a process will likely result in a reduction of resource usages on the node. Under these resource usage reduction policies, if a process likely generates new resource usage requirements, that process will be assigned a relatively low priority. Conversely, if a process likely releases resources, that process will be assigned a relatively high priority.
Other resource usage reduction policies such as batching up a plurality of messages in a single physical message may also be implemented when the node has high resource usages.
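For the purpose of illustration only, the following sketch shows one way such a monitoring cycle might be structured; the names used here (THRESHOLDS, sample_usages, REDUCTION_POLICIES) are hypothetical placeholders rather than parts of any particular implementation.

```python
# Minimal sketch, assuming hypothetical helpers, of the monitor -> threshold
# check -> reduction-policy cycle described above.

THRESHOLDS = {"cpu": 0.80, "memory": 0.85, "latches": 0.90}   # fractions of capacity

def sample_usages():
    """Placeholder: return current per-resource usage fractions on this node."""
    return {"cpu": 0.42, "memory": 0.55, "latches": 0.10}

REDUCTION_POLICIES = [
    "throttle_new_database_connections",
    "prioritize_resource_releasing_processes",
    "batch_outgoing_messages",
]

def control_cycle(mode):
    """One monitoring cycle: returns the (possibly changed) operational mode."""
    usages = sample_usages()
    high = [name for name, used in usages.items() if used > THRESHOLDS[name]]
    if high and mode == "normal":
        mode = "safe"
        for policy in REDUCTION_POLICIES:       # apply usage reduction policies
            print("applying policy:", policy)
    elif not high and mode == "safe":
        mode = "normal"                         # usages restored; resume normal mode
    return mode

if __name__ == "__main__":
    mode = "normal"
    for _ in range(3):                          # bounded loop so the sketch terminates
        mode = control_cycle(mode)
        print("operational mode:", mode)
```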
Example Database System
A database comprises database data and metadata that is stored on a persistent memory mechanism, such as a set of hard disks. Database data may be stored in one or more data containers represented on the persistent memory mechanism. Each container contains records. The data within each record is organized into one or more fields. In relational database management systems, the data containers are referred to as tables, the records are referred to as rows, and the fields are referred to as columns. In object-oriented databases, the data containers are referred to as object classes, the records are referred to as objects, and the fields are referred to as attributes. Other database architectures may use other terminology.
A database management system (“DBMS”) manages a database. A database management system may comprise one or more database servers. A multi-node system mentioned above may be used to implement the database management system. Each node in the multi-node system may host a database server. A server, such as a database server, is a combination of integrated software components and an allocation of computational resources, such as memory, a node, and processes on the node for executing the integrated software components on a processor, the combination of the software and computational resources being dedicated to performing a particular function on behalf of one or more clients.
User applications as database clients interact with a database server by submitting to the database server commands that cause the database server to perform operations on data stored in a database. A database command may be in the form of a database statement that conforms to a database language. One non-limiting database language supported by many database servers is SQL, including proprietary forms of SQL supported by database servers such as Oracle (e.g., Oracle Database 10g). SQL data definition language (“DDL”) instructions are issued to a database server to create or configure database objects, such as tables, views, or complex data types.
Example Multi-Node System
According to an embodiment of the present invention, the techniques may be performed by a multi-node system 102 as illustrated in
Each node 104 provides a plurality of resources to processes running on the node. As used herein, a resource may be a physical resource such as CPU time, main memory space, network I/O bandwidth, disk I/O usage, cache size, etc. A resource may also be a logical resource such as latches, semaphores, shared memory, or special data structures, etc.
For the purpose of illustration only, the node 104-1 comprises three resources (108-1 through 108-3). For example, the resource 108-1 may be CPU time, the resource 108-2 may be RAM space, and the resource 108-3 may be latches for shared data blocks of the database 106.
In some embodiments, node 104-1 is a database instance on which a number of database processes and non-database processes run. These processes may have different life spans and run for different time periods. Each of these processes may evolve in different stages that use different combinations of resources and different amounts of the resources. For example, a process that communicates messages between nodes may use CPU time and RAM space, but may not use latches for shared data blocks of the database 106, while another process that performs database checkpoint operations may use CPU time, RAM space, and, at some points of time, latches. In some embodiments, a resource control mechanism (e.g., 208 of
As used herein, the term “a process uses or incurs a resource” means that a certain amount of the resource is incurred (or used) by the process to the exclusion of other processes, regardless of whether the process is actively using any, or all, of that amount of the resource or not. The term “a process frees a resource” means that a certain amount of the resource previously incurred (or used) by the process has been made available on the node from a particular point of time (e.g., when the operating system or the database system carries out a free resource function call).
In some instances, a resource is automatically incurred by a process. For example, CPU time may be automatically incurred when a process is scheduled into an executing state on the node. An initial amount of memory space may also be automatically incurred by a process for storing program code and data when the process starts up on the node. Likewise, a resource may be automatically freed by a process, for example, when the process terminates on the node.
In contrast, a resource may also be incurred by a process if the process makes a request for the resource and if the request for the resource is granted by the resource control mechanism. For example, when a process needs additional heap memory in the middle of running, the process may use a memory allocation call such as “malloc( )” to make a request for a certain amount of additional memory. When the request is granted by the resource control mechanism, a certain additional amount of memory is incurred by the process from that point on until the process releases some, or all, of that amount of memory.
In some instances, a request for a resource need not be explicit. For example, when a process wishes to exclusively access a shared data block of the database 106 by making a call “retreiveDataBlockforReadWrite( )”, a request for a latch for exclusive write access to the shared data block may be implicitly made, even though the call only explicitly requests the shared data block. When the call returns successfully, the implicitly requested latch for exclusive write access is granted by the resource control mechanism.
In some embodiments, a certain amount of a resource, as required by the process during its lifecycle, may be incurred by a process at once. In some other embodiments, a certain amount of a resource may be gradually or incrementally incurred by a process. Similarly, in some embodiments, a certain amount of a resource may be freed by a process at once. In some other embodiments, a certain amount of a resource may be gradually or incrementally freed by a process. It should be noted that incurring a certain amount of a resource by a process may or may not be symmetric or correlated with freeing the same amount of the resource by the process.
Example Resource Control Mechanism
As used herein, the term “resource usage” refers to an aggregated number, an aggregated amount, an aggregated percentage, or otherwise an aggregated measure that indicates how much of a resource has been incurred by all processes running on the node 104-1. Upon determining a resource usage for a resource, the resource control mechanism may use other information at its disposal (for example, system configuration information) to further determine how much of the resource remains available. For example, a resource usage for CPU time at a particular time may be determined as 40%, which indicates that 40% of CPU time as provided by one or more processors on the node 104-1 has been incurred by the processes on the node 104-1 at the particular time. The resource control mechanism therefore determines that 60% of CPU time remains available to serve new requests for the resource.
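For the purpose of illustration only, the following sketch shows how an aggregated usage and the remaining availability might be derived; the per-process figures are made up for the example.

```python
# Sketch of deriving an aggregated resource usage and the remaining headroom.
# The per-process CPU fractions below are illustrative values only.
per_process_cpu = {"background_1": 0.05, "log_writer": 0.10, "session_17": 0.25}

aggregated_usage = sum(per_process_cpu.values())   # 0.40 -> "40% of CPU time incurred"
remaining = 1.0 - aggregated_usage                 # 0.60 -> 60% available for new requests

print(f"incurred: {aggregated_usage:.0%}, available: {remaining:.0%}")
```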
Example Normal Mode
The resource control mechanism 208 may operate in two different modes depending on current resource usages on the node 104. In the first operational mode (or simply normal mode), the resource control mechanism 208 monitors a group of resources 108 as shown in
A threshold for a resource 108 may be pre-configured and/or reconfigured, manually or programmatically. In some embodiments, other configuration data on the node 104-1 may be used to determine thresholds for various resources on the node 104-1. For example, if the node 104-1 is responsible for taking over the entire workload of another node 104 in the multi-node system 102, thresholds for resource usages may be set at various values around 40%, allowing some room for any unexpected usages on the node 104-1. Thus, when the other node fails, the node 104-1 is still able to take over all the work without itself going out of service. In alternative configurations, the node 104-1 may not be assigned any responsibility for taking over another failed node, or may be assigned only a portion of the work of another failed node. Thresholds for various resources may be set accordingly based on these and other factors.
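For the purpose of illustration only, the arithmetic behind such a threshold might look like the following; the 50% peer share and 10% safety margin are assumed values, not values prescribed by the techniques described here.

```python
# Illustrative threshold calculation for a node that must absorb a failed
# peer's entire workload: leave room for the peer's share plus a safety margin.
own_capacity  = 1.0     # total usable CPU time on this node
peer_share    = 0.50    # assumed worst-case share this node may take over
safety_margin = 0.10    # assumed headroom for unexpected usages

threshold = own_capacity - peer_share - safety_margin   # 0.40 -> a threshold around 40%
print(f"normal-region ceiling: {threshold:.0%}")
```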
In some embodiments, in the normal mode, the resource control mechanism 208 allows resources to be incurred so long as the resources are still in the normal regions. In some embodiments, a total usable amount of a resource is not fixed (unlike CPU time, for example, whose total usable amount is 100%). In these embodiments, the resource control mechanism 208 may increase or decrease the total usable amount depending on actual resource usage of the resource. For example, a buffer cache on a node 104 that caches previously retrieved data blocks may be increased or decreased, within certain limits, by the resource control mechanism 208 depending on actual resource usages of the buffer cache. In some embodiments, for a resource whose total usable amount the resource control mechanism 208 can increase and decrease, a determination that the resource usage of the resource is in a high-usage region occurs only after the resource control mechanism 208 has increased the total usable amount of the resource to its maximum.
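For the purpose of illustration only, the following sketch captures the idea that a growable resource is reported as high-usage only after its total usable amount has been raised to its maximum; the class name, sizes, and growth step are hypothetical.

```python
# Sketch of a resource (e.g., a buffer cache) whose total usable amount can be
# grown; usage is declared "high" only after growth has reached the maximum.
class ElasticCache:
    def __init__(self, size_mb=512, max_mb=2048, grow_step_mb=256):
        self.size_mb, self.max_mb, self.grow_step_mb = size_mb, max_mb, grow_step_mb
        self.used_mb = 0

    def is_high_usage(self, high_fraction=0.9):
        # Try to raise the total usable amount before reporting high usage.
        while self.used_mb > high_fraction * self.size_mb and self.size_mb < self.max_mb:
            self.size_mb = min(self.size_mb + self.grow_step_mb, self.max_mb)
        return self.used_mb > high_fraction * self.size_mb

cache = ElasticCache()
cache.used_mb = 500
print(cache.is_high_usage(), cache.size_mb)   # grows the cache instead of flagging high usage
```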
In some embodiments, node-wise resource usage information (shown as 202-1 of
As illustrated in
Example Safe Mode
When one or more of the resources that are monitored by the resource control mechanism 208 cross corresponding thresholds 206 from normal regions to high-usage regions, the resource control mechanism 208 may transition from the normal mode to a second operational mode (or simply safe mode) to distribute resources on the node 104-1 intelligently, to protect the node 104-1 from further deterioration in terms of resource usages, and to reduce high resource usages on the node 104-1 so that all resource usages on the node 104-1 return to normal regions. In the safe mode, the resource control mechanism 208 implements one or more resource usage reduction policies to help restore the node 104-1 into the normal mode (in which all the resource usages will be in normal regions). In addition, the resource control mechanism 208 continues to monitor resource usages of the resources to determine whether the usages have indeed been restored into the normal regions. If so, the resource control mechanism 208 resumes operating in the normal mode.
Denying Requests for New Database Connections
In some embodiments, in the database system implemented by the multi-node system 102, when a user application on an application server (which may be remotely located from the multi-node system 102) needs to perform one or more database operations, the user application first requests a connection with (or to be attached to) a session process on a node (e.g., 104-1) of the multi-node system. This session process may be one of many such processes in a session process pool. Once connected/attached to the session process (i.e., a new session is started), the user application may issue database commands (e.g., SQL statements) to the session process. The session process in turn secures necessary resources on the node 104-1 to carry out corresponding database operations as instructed by the database commands from the user application. In some embodiments, carrying out these database operations requires not only the direct resources necessary for the operations themselves, but may also give rise to secondary operations (e.g., logging) and hence incur additional resources.
In some embodiments, when the user application finishes and disconnects (or is detached; hence the existing session is ended) from the session process, any resources still held by the session process for serving the user application are freed. Thus, during a finite period between the attachment and the detachment of the user application, the session process incurs a number of resources. These resources are incurred if and when a session process is allowed to be connected with a user application to process the latter's database commands.
In some embodiments, in the safe mode, the resource control mechanism 208 is operable to deny (or cause to deny) requests for new database connections. Thus, resources that could be incurred by new user applications can be avoided. Instead, resources may be used for existing connections that have been previously allowed. As a result, session processes that serve the existing connections can complete their respective operations and free the incurred resources relatively promptly upon completion, thereby helping the node 104-1 return to the normal mode.
In some embodiments, in the safe mode, instead of denying all requests for new database connections as previously described, the resource control mechanism 208 is operable to allow (or cause to allow) only a small number of requests (say five per minute instead of a higher number per minute) for new database connections.
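For the purpose of illustration only, the following sketch shows one way such admission control might be expressed, covering both the deny-all policy and a small per-minute budget (five here); the ConnectionGate class and its parameters are hypothetical.

```python
# Sketch of connection admission control in safe mode: either deny all new
# database connections or admit only a small number per minute.
import time
from collections import deque

class ConnectionGate:
    def __init__(self, safe_mode_limit_per_minute=5, deny_all=False):
        self.limit = safe_mode_limit_per_minute
        self.deny_all = deny_all
        self.recent = deque()          # timestamps of admissions in the last minute

    def admit(self, in_safe_mode, now=None):
        now = now if now is not None else time.monotonic()
        if not in_safe_mode:
            return True                # normal mode: no gating in this sketch
        if self.deny_all:
            return False               # strict policy: reject every new connection
        while self.recent and now - self.recent[0] > 60.0:
            self.recent.popleft()      # drop admissions older than one minute
        if len(self.recent) < self.limit:
            self.recent.append(now)
            return True
        return False                   # over the per-minute budget: reject

gate = ConnectionGate()
print([gate.admit(in_safe_mode=True, now=t) for t in range(7)])  # first 5 admitted, rest rejected
```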
Prioritizing Processes
In some embodiments, in the safe mode, processes with higher priority levels may be allowed to continue their operations as usual. In some embodiments, in the safe mode, the resource control mechanism 208 is operable to prioritize requests for resources that may or may not be in high-usage regions. As used herein, the term “prioritize” means assigning values to a priority level attribute that is used by the node to determine whether, when, and what resources should be granted to a process. An example of a priority level attribute may be an operating system priority. Generally speaking, the higher a process's priority level, the more likely the process is to be granted access to resources. In particular, a process of a higher priority level may be allowed to proceed before a process of a lower priority level. A process that uses little or none of the resources whose usages are in high-usage regions may be allowed to proceed before other processes with the same priority level. A process that is holding a resource for which many other processes are waiting may be reassigned a high priority level so that the resource can be quickly released to avoid deadlock situations. Conversely, a process that is holding resources for which no other, or very few, processes are waiting may be downgraded to a low priority level, or may simply retain its relatively low priority level.
For example, requests for new database connections may be given a relatively low priority level so that processes associated with the requests are allowed at a relatively slow rate on the node 104-1, as compared with that in the normal mode.
On the other hand, a process that has secured some, or all, of the needed resources may be given a higher priority level by the resource control mechanism 208 so that the process may finish its operation and release the resources the process has incurred. This process may already have held latches or other resources that other processes are waiting on since before the operational mode transitioned from the normal mode to the safe mode. When the process that has secured a relatively large amount of resources is given a high priority level to finish its work in the safe mode, the likelihood of deadlocks on the resources may be significantly reduced or avoided altogether.
A process that serves a critical or important function on the node 104-1 may be given a high priority level and allowed to proceed before other processes. For example, a background process (e.g., a process that determines which process obtains what type of latches for which shared data block of the database 106) on which many foreground processes (e.g., a session process to which a user application sends database commands) depend may be given a high priority level so that the important background process is able to incur needed resources more readily than the foreground processes. Priority levels of these processes may be manually or programmatically provided on the node 104-1. Priority levels of these processes may also be determined based in part on runtime information.
In some embodiments, database-specific resource usage information 202-2 may identify which process currently holds a resource such as a latch and which other processes currently wait for the held resource. Based on this runtime information, the resource control mechanism 208 may prioritize the processes such that the process currently holding the resource is allowed to proceed with a higher priority level than those of the waiting processes.
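For the purpose of illustration only, the prioritization described above might be sketched as follows; the priority values, process names, and the heuristic for spotting connection-establishing processes are assumptions made for the example.

```python
# Sketch of safe-mode prioritization: boost a process that holds a resource
# (e.g., a latch) other processes are waiting on, and demote a process that
# would mainly incur new resources.
HIGH, NORMAL, LOW = 30, 20, 10

def assign_priority(proc, held_resources, waiters_by_resource):
    held = held_resources.get(proc, ())
    waiters = sum(len(waiters_by_resource.get(r, ())) for r in held)
    if waiters > 0:
        return HIGH        # help it finish and release the contended resource quickly
    if proc.startswith("new_conn"):
        return LOW         # likely to generate new resource requirements
    return NORMAL

held = {"sess_3": ["latch_17"], "new_conn_9": []}
waiting = {"latch_17": ["sess_5", "sess_8"]}
for p in ["sess_3", "new_conn_9", "sess_5"]:
    print(p, assign_priority(p, held, waiting))   # sess_3 -> HIGH, new_conn_9 -> LOW, sess_5 -> NORMAL
```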
Terminating Processes
In some embodiments, in the safe mode, the resource control mechanism 208 may determine that, out of all processes that are running on the node 104-1, some processes are non-critical. Examples of non-critical processes include, but are not limited to, garbage collection processes, informational event generation processes, etc. In some embodiments, these non-critical processes may be terminated in order to free up resources currently incurred by the processes.
In some situations, even a process that is not non-critical may nevertheless be terminated. For example, session processes that have started but are still in initial stages of waiting for or incurring resources may be terminated by the resource control mechanism 208 in order to free up resources currently incurred by the processes and to prevent further resources from being incurred. In some embodiments, termination of processes on the node 104-1 may cause errors to be returned to user applications. In some embodiments, the user application may be programmed to retry the same requests with the multi-node system 102. These retried requests may overflow to other nodes 104 in the multi-node system 102, instead of the node 104-1, which is presently operating in the safe mode. For example, software middleware (for example, clusterware) may be deployed in the multi-node system 102 to dispatch requests among the nodes 104 in the system 102. When received by the multi-node system 102, a retried request may be redirected by the clusterware to another node 104, other than node 104-1.
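For the purpose of illustration only, selecting termination candidates might look like the following; the process records, the "stage" field, and the set of non-critical kinds are illustrative assumptions.

```python
# Sketch of choosing termination candidates in safe mode: non-critical
# background work and session processes still in their initial stages.
NON_CRITICAL_KINDS = {"garbage_collection", "informational_events"}

def termination_candidates(processes):
    for proc in processes:
        if proc["kind"] in NON_CRITICAL_KINDS:
            yield proc["pid"]            # frees whatever the process currently holds
        elif proc["kind"] == "session" and proc["stage"] == "initial":
            yield proc["pid"]            # also prevents further resources from being incurred

procs = [
    {"pid": 101, "kind": "garbage_collection", "stage": "running"},
    {"pid": 102, "kind": "session", "stage": "initial"},
    {"pid": 103, "kind": "session", "stage": "executing"},
]
print(list(termination_candidates(procs)))   # -> [101, 102]
```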
Reducing Input/Output Operations
In some embodiments, in the safe mode, the resource control mechanism 208 may be operable to reduce, or cause to reduce, the number of physical messages that are sent between processes on the same node (i.e., 104-1) or different nodes 104. For example, instead of immediately sending a message in a function call issued by a process on the node 104-1, which would cause a separate I/O operation for each such message, the resource control mechanism may place the message in a message buffer. When the message buffer exceeds a certain size or (alternatively and/or optionally) when a certain time period has elapsed, the messages in the message buffer may be sent in a single physical message that involves only minimal I/O operations.
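For the purpose of illustration only, the batching behavior might be sketched as follows; the transport callable, the size limit, and the delay are stand-ins rather than an actual messaging API.

```python
# Sketch of batching many logical messages into one physical send, flushing
# when the buffer reaches a size limit or a time limit has elapsed.
import time

class MessageBatcher:
    def __init__(self, max_bytes=8192, max_delay_s=0.05, transport=None):
        self.max_bytes, self.max_delay_s = max_bytes, max_delay_s
        self.transport = transport or (lambda payload: print("sent", len(payload), "bytes"))
        self.buffer, self.first_enqueue = [], None

    def send(self, message: bytes):
        if not self.buffer:
            self.first_enqueue = time.monotonic()
        self.buffer.append(message)
        buffered = sum(len(m) for m in self.buffer)
        if buffered >= self.max_bytes or time.monotonic() - self.first_enqueue >= self.max_delay_s:
            self.flush()

    def flush(self):
        if self.buffer:
            self.transport(b"".join(self.buffer))   # one physical message, minimal I/O
            self.buffer, self.first_enqueue = [], None

batcher = MessageBatcher(max_bytes=64)
for _ in range(10):
    batcher.send(b"x" * 10)    # flushes once 64 bytes have been buffered
batcher.flush()                # push out any remainder
```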
In some embodiments, in the safe mode, the resource control mechanism 208 may be operable to reduce, or cause to reduce, the number of checkpoints. When a checkpoint is issued, dirty blocks in the buffer cache are written to datafiles (which may comprise a number of data blocks) of the database 106, and the latest commit data is also updated in the datafiles of the database 106. Since a checkpoint may cause a number of I/O operations and require large amounts of resources to process, reducing checkpoints in the safe mode alleviates the usages of the respective resources that are needed to process the checkpoints.
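For the purpose of illustration only, one simple way to reduce checkpoints is to widen the interval between them while in the safe mode; the interval values and multiplier below are assumed, not prescribed.

```python
# Sketch of reducing checkpoint frequency in the safe mode by widening the
# interval between checkpoints; fewer checkpoints mean fewer bursts of
# dirty-block writes and commit-data updates.
def checkpoint_interval_seconds(in_safe_mode, normal_interval=60, safe_mode_multiplier=5):
    return normal_interval * (safe_mode_multiplier if in_safe_mode else 1)

print(checkpoint_interval_seconds(False))   # 60  -> normal cadence
print(checkpoint_interval_seconds(True))    # 300 -> checkpoints issued far less often
```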
In the safe mode, the resource control mechanism 208 continues to monitor the group of resources 108 as shown in
Example Process
In block 320, the resource control mechanism 208 determines whether one or more resource usages (e.g., 204-1) in the plurality of resource usages (e.g., 204-1 through 204-3) are high (i.e., in high-usage regions). For example, initially, the resource control mechanism 208 may operate in a normal mode, as previously described, as all the monitored resource usages may be normal (i.e., in normal regions). Once any of the resource usages moves into a high-usage region, the resource control mechanism 208 may transition from the normal mode to a safe mode, as previously described. In the safe mode, the resource control mechanism 208 implements a plurality of resource usage reduction policies to help restore the node 104-1 into the normal mode. One resource usage reduction policy may be to reject requests for new database connections. In some embodiments, if a request for a new database connection were granted, the new database connection requested would be established between a user application that made the request and a session process in a session process pool on the node 104-1. In turn, various amounts of resources would be incurred by the user application and the session process to carry out further operations in connection with the user application. As described previously, various resource usage reduction policies may be implemented by the resource control mechanism 208 to speed up the transition from the safe mode to the normal mode on the node 104-1.
In block 330, in response to determining that one or more resource usages in the plurality of resource usages 204 are high, the resource control mechanism 208 transitions the operational mode from the normal mode to the safe mode, and implements one or more resource usage reduction policies for the purpose of restoring the node to the normal mode. In some embodiments, some resource usage reduction policies may be implemented by the resource control mechanism 208 first. If the node 104-1 continues to experience high resource usages, more resource usage reduction policies may be implemented by the resource control mechanism 208.
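For the purpose of illustration only, such incremental escalation might be sketched as follows; the policy names and their ordering are assumptions for the example.

```python
# Sketch of escalating reduction policies in stages: start with the least
# disruptive policy and enable more only while usages remain high.
ESCALATION_ORDER = [
    "throttle_new_connections",
    "batch_outgoing_messages",
    "reduce_checkpoints",
    "deprioritize_resource_consuming_processes",
    "terminate_non_critical_processes",
]

def active_policies(consecutive_high_samples):
    """Enable one more policy for each monitoring cycle that stays high."""
    count = min(consecutive_high_samples, len(ESCALATION_ORDER))
    return ESCALATION_ORDER[:count]

for cycles in (1, 3, 5):
    print(cycles, "high cycles ->", active_policies(cycles))
```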
In some embodiments, in the safe mode, the resource control mechanism 208 rejects at least one request for a new database connection. By rejecting such a request, the resource control mechanism 208 helps other existing database connections finish their work faster and hence release incurred resources faster than otherwise. In some embodiments, the rejected request may be re-routed by cluster-wide software (such as the above discussed clusterware) deployed in the multi-node system 102 or by the user application to a different node 104.
In some embodiments, the resource control mechanism 208 may continuously monitor and influence resource usages incurred by individual processes, a type of processes, a collection of processes, and/or a particular subsystem on the node 104-1.
Hardware Overview
Computer system 400 may be coupled via bus 402 to a display 412, such as a cathode ray tube (CRT), for displaying information to a computer user. An input device 414, including alphanumeric and other keys, is coupled to bus 402 for communicating information and command selections to processor 404. Another type of user input device is cursor control 416, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 404 and for controlling cursor movement on display 412. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.
The invention is related to the use of computer system 400 for implementing the techniques described herein. According to an embodiment of the invention, those techniques are performed by computer system 400 in response to processor 404 executing one or more sequences of one or more instructions contained in main memory 406. Such instructions may be read into main memory 406 from another computer-readable medium, such as storage device 410. Execution of the sequences of instructions contained in main memory 406 causes processor 404 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and software.
The term “computer-readable medium” as used herein refers to any medium that participates in providing instructions to processor 404 for execution. Such a medium may take many forms, including but not limited to, non-volatile media and volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 410. Volatile media includes dynamic memory, such as main memory 406.
Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punchcards, papertape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, or any other medium from which a computer can read.
Various forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to processor 404 for execution. For example, the instructions may initially be carried on a magnetic disk of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system 400 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 402. Bus 402 carries the data to main memory 406, from which processor 404 retrieves and executes the instructions. The instructions received by main memory 406 may optionally be stored on storage device 410 either before or after execution by processor 404.
Computer system 400 also includes a communication interface 418 coupled to bus 402. Communication interface 418 provides a two-way data communication coupling to a network link 420 that is connected to a local network 422. For example, communication interface 418 may be an integrated services digital network (ISDN) card or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 418 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface 418 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
Network link 420 typically provides data communication through one or more networks to other data devices. For example, network link 420 may provide a connection through local network 422 to a host computer 424 or to data equipment operated by an Internet Service Provider (ISP) 426. ISP 426 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 428. Local network 422 and Internet 428 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 420 and through communication interface 418, which carry the digital data to and from computer system 400, are exemplary forms of carrier waves transporting the information.
Computer system 400 can send messages and receive data, including program code, through the network(s), network link 420 and communication interface 418. In the Internet example, a server 430 might transmit a requested code for an application program through Internet 428, ISP 426, local network 422 and communication interface 418.
The received code may be executed by processor 404 as it is received, and/or stored in storage device 410, or other non-volatile storage for later execution. In this manner, computer system 400 may obtain application code in the form of a carrier wave.
In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. Thus, the sole and exclusive indicator of what is the invention, and is intended by the applicants to be the invention, is the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. Any definitions expressly set forth herein for terms contained in such claims shall govern the meaning of such terms as used in the claims. Hence, no limitation, element, property, feature, advantage or attribute that is not expressly recited in a claim should limit the scope of such claim in any way. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.