Automatic clustering for self-organizing grids

Information

  • Patent Grant
  • 11522952
  • Patent Number
    11,522,952
  • Date Filed
    Friday, June 26, 2020
  • Date Issued
    Tuesday, December 6, 2022
Abstract
A cluster of nodes, comprising: a plurality of nodes, each having a security policy and being associated with task processing resources; a registration agent configured to register a node and issue a node certificate to the respective node; a communication network configured to communicate certificates to authorize access to computing resources, in accordance with the respective security policy; and a processor configured to automatically dynamically partition the plurality of nodes into subnets, based on at least a distance function of at least one node characteristic, each subnet designating a communication node for communicating control information and task data with other communication nodes, and to communicate control information between each node within the subnet and the communication node of the other subnets.
Description
BACKGROUND OF THE INVENTION

This application expressly incorporates by reference in its entirety the Doctoral Dissertation of Weishuai Yang, entitled: “Scalable And Effective Clustering, Scheduling And Monitoring Of Self-Organizing Grids”, Graduate School of Binghamton University State University of New York, September, 2008.


Although a number of computational grids have begun to appear, truly large-scale "open" grids have not yet emerged or been successfully deployed. Current production grids comprise tens, rather than hundreds or thousands, of sites [1, 3]. The primary reason is that existing grids require resources to be organized in a structured and carefully managed way, one that requires significant administrative overhead to add and manage resources. This overhead is a significant barrier to participation, and results in grids comprising only large clusters and specialized resources; manually adding individual resources, especially if those resources are only intermittently available, becomes infeasible and not worth the effort required to do so.


An alternative model for constructing grids [4] lowers the barrier for resource and user participation by reducing various administrative requirements. In this Self-Organizing Grids (SOGs) model, resource owners would directly and dynamically add their resources to the grid. These resources may include conventional clusters that permanently participate in the grid, or that are donated by providers during off-peak hours. In addition, users may provide individual resources in much the same way that they add them to peer-to-peer networks and public resource computing projects such as SETI@home [2]. The grid would then consist of the currently participating resources. SOGs might contain different tiers of resources, ranging from always connected large clusters, to individual PCs in homes, down to small-scale sensors and embedded devices. Thus, SOGs represent the intersection of peer-to-peer computing, grid computing, and autonomic computing, and can potentially offer the desirable characteristics of each of these models.


Constructing grid services that can operate in, let alone take advantage of, such an environment requires overcoming a number of challenges and requires different algorithms and designs [4]. One of the primary challenges, namely how to automatically discover efficient clusters within SOGs to enable effective scheduling of applications to resources in the grid, has not been adequately addressed in the prior art.


A candidate collection of SOG nodes may not necessarily be a physical cluster of co-located machines under a single administrative domain connected by a high-speed network; but the nodes' proximity to one another—in terms of network connection performance characteristics—may allow them to serve as an ad hoc cluster to support some applications. A brute force approach to the problem of discovering ad hoc clusters would periodically test network performance characteristics between all pairs of resources in the grid. Clearly, this approach is not feasible for a large scale system; more scalable approaches are needed.


The need for clustering arises in P2P environments, where it has received significant research attention [8, 13, 5, 9]. In P2P environments, clusters are needed for scalability of document search and exchange. Clusters are created and maintained in a large and dynamic network, where neither the node characteristics nor the network topology and properties (such as bandwidth and delay of edges) are known a priori. To improve performance, cluster nodes must be close enough to one another, and must typically fulfill additional requirements such as load balancing, fault tolerance and semantic proximity. Some of these properties are also desirable for SOGs. However, the emphasis on proximity is much more important to SOGs, since the computational nature of grid applications may require close coupling. Further, to allow flexible application mapping, variable size clusters must be extractable; in contrast, the emphasis in P2P networks is usually on finding clusters of a single size.


Clustering in SOGs is more complicated than classical dominating set and p-center problems from graph theory, which are themselves known to be NP-complete. Simple strategies such as off-line decisions with global knowledge do not work because of the large scale and dynamic nature of the environment. Further, the importance of cluster performance (because of its intended use), along with the requirement to create variable size clusters, suggests the need for different solutions. An optimal solution that measures the quality of connections between all pairs of nodes, and that then attempts to extract the optimal partition of a given size, requires O(n²) overhead in the number of messages to measure the connections, followed by an NP-complete optimal clustering computation. Further, the dynamic nature of the problem in terms of the network graph and processor and network loads requires lighter weight heuristic solutions.


To support general large-scale parallel processing applications, SOGs must self-organize in a way that allows effective scheduling of offered load to available resources. When an application request is made for a set of nodes, SOGs should be able to dynamically extract a set of resources to match the request. Since these resources are often added separately and possibly by multiple providers, SOGs should be able to identify and track relationships between nodes. In addition, to support effective scheduling, the state of resources in the grid must be tracked at appropriate granularity and with appropriate frequency.


An important initial question is “What represents an effective cluster?” Clearly, the capabilities of the individual nodes are important. However, the influence of communication often has a defining effect on the performance of parallel applications in a computational cluster. Moreover, it is straightforward to filter node selection based on node capabilities, but it is much more challenging to do so based on communication performance, which is a function of two or more nodes.


Highways [8] presents a basic solution for creating clusters through a beacon-based distributed network coordinate system. Such an approach is frequently used as the basis for other P2P clustering systems. Beacons define a multidimensional space with the coordinates of each node being the minimum hop-count from each beacon (computed by a distance vector approach or a periodic beacon flood). Distances between nodes are measured as Cartesian distances between coordinates. Highways serves as the basis for several other clustering approaches. Its shortcomings include the fact that distance in the multi-dimensional space may not correspond to communication performance, that markers must be provided and maintained, and that the desired node clustering must be derived centrally.


Agrawal and Casanova [5] describe a pro-active algorithm for clustering in P2P networks. They use distance maps (a multi-dimensional coordinate space) to obtain the coordinates of each peer, and then use markers (not the same concept as in Highways) as cluster leaders by applying the K-means clustering algorithm. The algorithm chooses the first marker (leader) randomly, then repeatedly finds a host at distance at least D from all current markers and adds it to the marker set. Nodes nearest to the same marker are clustered together, and clusters are split if their diameter becomes too large. This strategy results in message flooding and its associated high overhead.


Zheng et al. [13] present T-closure and hierarchical node clustering algorithms. The T-closure algorithm is a controlled depth-first search for the shortest paths, based on link delay. Each node learns all shortest paths starting from itself, with distance not larger than T. The hierarchical clustering algorithm uses nomination to select a supernode within some specified distance. These two strategies require high overhead and do not support node departure.


Xu and Subhlok describe automatic clustering of grid nodes [9] by separating the clustering problem into two different cases. Their approach uses multi-dimensional virtual coordinates to cluster inter-domain nodes, and uses direct measurements to cluster intra-domain nodes. This strategy can classify existing nodes into clusters according to physical location, but cannot extract variable sized clusters according to user requirements.


SUMMARY AND OBJECTS OF THE INVENTION

In order to address the issue of co-scheduling of specific resources, the relationship (i.e., distances in terms of link delay) among different resources within a computational infrastructure network, or a set of computational or infrastructure resources, especially those that span multiple administrative domains, must be quantified. An automated system is therefore provided for assessing the quality of multiple heterogeneous resources available collectively to support a given job. This can be done on demand or proactively. The problem is complicated because the number of resource sets of interest is exponential, making brute-force approaches to extracting their state impractical. It is almost impossible to have all the information collected in advance. On the other hand, it is also impractical to search nodes purely on demand, especially from the scalability point of view.


A scalable solution for organizing a set of resources is provided that preferably adopts a link-delay sensitive overlay structure (MDTree) to organize nodes based on their proximity to one another, with only a small number of delay experiments. Both proactive information collection and on-demand confirmation are combined. This overlay provides a variable-size set of promising candidate nodes that can then be used as a cluster, or tested further to improve the selection. The system may be centrally controlled, subject to distributed or ad hoc control (e.g., a self-organizing system), or some hybrid approach with both dedicated or specialized control structures and control functions implemented proximate to the resources which seek to interoperate. The resources may be processing, memory, communications, or other elements capable of providing other functions necessary or desirable to complete a task.


To support effective scheduling, not only the quality but also the changing state of resources in the Grid system should be tracked at appropriate granularity and frequency. The difficulty comes from the nature of distributed computing: since every node may have only incomplete information about the system, even obtaining a global view of the system is not easy. Furthermore, a self-organizing grid should gracefully tolerate the dynamic addition or removal of a significant portion of its participating resources, even resources that are being used by active computations. Such tolerance imposes an additional burden on the state tracking system. One aspect of the technology provides that the topology is concurrently available to accomplish tasks which are partitioned to various nodes, and also subject to maintenance in the event that the underlying resources change in availability or use. Thus, while a particular subtask allocated to a particular resource need not be tolerant to a change in that particular resource, the distributed task being jointly performed by the set of nodes is generally tolerant of such changes.


A structure for efficient resource discovery and monitoring is provided. On the one hand, resource information storage is distributed on nodes and aggregated hierarchically; queries only go through pruned branches. The aggregated resource information is structured in a relational model on each node. On the other hand, the adoption of the relational model provides efficient support for range queries in addition to equality queries, and the hierarchical distributed architecture provides efficiency, scalability and fault tolerance.


Based on the MDTree overlay and resource aggregation, a Group-Scheduling framework for self organizing grid architecture is provided to allow scalable and effective scheduling of parallel jobs, each of which employs multiple resources, to available resources that can support each job most efficiently. In addition to tracking the capabilities of resources and their dynamic loads, the framework takes into account the link delay requirements of parallel jobs. Group-scheduling uses the aggregated resource information to find “virtual clusters”, a group of nearby suitable resources based on variable resource evaluation strategies.


Security is provided by treating user registration and node participation as separate processes. On the one hand, the participation of a new node does not mean that all the users on that node gain access to the self organizing grid. On the other hand, a single user should not have to be authenticated more than once for using resources in the system.


A distributed authorization architecture is preferably provided for the self organizing grid environment. Its characteristics include distributed attribute repository, role-based authorization, a reusable access token, and flexible access control with distributed decision making. A particular feature is its great emphasis on the autonomy of resource access control.


Automatic Grid structure discovery mechanisms are therefore provided to allow resources to self-organize efficiently into groups, with little or no administrative intervention. Without such mechanisms, on-demand discovery of mutually suitable resources is difficult. Thus, automatically discovering Grid structure and identifying virtual clusters (nodes that are close to one another and able to sustain communicating applications effectively) of varying sizes at a reasonable overhead is the first step to be achieved.


Schedulers are permitted to make effective placement decisions based on up-to-date information, and to better balance Grid load and satisfy application requests for resources, by providing lightweight adaptive monitoring of the dynamic state of a potentially massive number of resources across a Grid. Likewise, since the architecture supports tracking of a large number of resources, the system can effectively subdivide physical systems into a number of logical components.


Effective resource monitoring, resource quality evaluation, and dependent parallel task dispatching to suitable resources are achieved in a scalable fashion.


One aspect of the system provides a variable size automatic node clustering based on link delay.


Another aspect provides distributed hierarchical resource management with efficient updating and range query capability.


A further aspect provides efficient group-scheduling for parallel applications with inter-dependent tasks with communication costs taken into consideration.


A still further aspect provides distributed resource authorization for SOGs. We use the phrase self-organizing grid (SOG) to describe a set of networked computing nodes that may be part of some logical organization, and that have some ability to “self-organize”, perhaps only by virtue of the approaches described herein. The essential characteristics of the underlying environment are that the computing nodes can communicate with one another, and that they can be used to solve some computing problem. Therefore, the technology has broader application than to the “self-organizing grids” described in the art, and is not limited to application in or to such systems.


A new simulation framework is provided, capable of accurate modeling and efficient simulation for peer-to-peer and self-organizing grid environments.


To support general large-scale parallel processing applications, self organizing grids (SOGs) must self-organize in a way that allows effective scheduling of offered load to available resources. To achieve the best performance for the whole Grid system and also for each individual application and dispatched job, the resources need to be effectively allocated. Unlike classical parallel and distributed scheduling formulations, which most commonly consider the issue of scheduling one job to one resource, the resource allocation problem in a SOG context means allocating a set of resources to a job. When an application makes a request for a set of resources, SOGs should be able to efficiently identify relationships between available resources and select the most suitable resources. This essentially can be considered as allocating a set of Grid nodes to a job based on the criteria for resources. Various criteria for selecting nodes can be used, based, for example, on link delay among the nodes, CPU frequency, or memory space, depending on different needs.


Nevertheless, link delay is usually one of the most important criteria, since the Grid is designed for collaboration. For parallel jobs where different processes need to communicate, resource allocation is critically influenced by the tightness of coupling and communication characteristics among the allocated nodes. This is especially true in wide-area Grids where the delays between different nodes can vary widely. Since such jobs are of considerable interest to Grid systems, the scheduling framework must allow the extraction of resources that are mutually compatible. As a result, to be able to efficiently extract variable size mutually compatible virtual clusters, the system needs to monitor not only the individual resources, but also their relationship to each other.


Thus, to achieve the best performance, it is very important for an SOG to dynamically extract the underlying topology of the network in a scalable way, to enable the scheduler to extract variable size "virtual clusters" of nodes that are mutually close to one another.


In determining an optimal clustering of nodes both the capability and location (in a relevant space, according to an appropriate metric) of respective nodes may be important. For example, a distant or inaccessible node with substantial capabilities may be less useful than one or more proximate or low latency communications nodes. Indeed, the issue is a bit more complex when one considers a computing cluster as part of a larger grid or self-organizing network. Ideally, the nodes within a group interoperate efficiently. For example, where communications between nodes within a group are low latency and high bandwidth, the capabilities of each of the nodes in the group may be scaled to provide increased performance in parallel applications in a computational cluster. As the linkage between nodes becomes slower, less capable, or less reliable, the scalability typically diminishes.


To support effective scheduling, the relationship and state of nodes in the Grid system must be tracked at appropriate granularity and with appropriate frequency. Scheduling of parallel applications often takes into account both the underlying connectivity and the application topology. Even though in custom parallel machines, and perhaps small size networks, it is reasonable to assume that the infrastructure topology is static and transparent to the scheduler/application, this is clearly not the case in wide-area Grids, especially the ones with dynamic membership such as SOGs. Scheduling with the knowledge of application topology allows for more precise and effective scheduling, but places an extra burden on the programmer to expose this topology to the system.


One aspect of the automatic clustering challenge is to extract the structure of the SOG from a performance perspective. Difficulties are presented by two aspects: (1) the measurements needed to determine the all-pair network properties between nodes (O(n²) to measure all links); and (2) a graph clustering algorithm that extracts candidate virtual clusters of the desired size, which is NP-complete in terms of computational complexity.


A related issue arises when a distributed control system is employed, in which the overlay that exposes the structure is constructed and used by distributed algorithms to organize the nodes into leaders and peers according to some performance-related criteria without global knowledge.


Simple strategies for the establishment of grids, that might otherwise be applied, such as off-line decisions with global knowledge do not work because of the large scale and dynamic nature of the environment. Further, the importance of cluster performance (because of its intended use), along with the requirement to create variable size clusters, suggests the need for different solutions.


One embodiment of the present system provides a scalable solution to automatic clustering in SOGs. A flexible overlay structure, called a Minimum-delay Dynamic Tree (MDTree), is built and maintained to allow an initial sorting of the nodes based on a small number of delay experiments for every joining node. The MDTree organizes nodes as they join, keeping nearby nodes close together in the tree. As nodes join, a grouped set of nodes may exceed the group size threshold, and the group must be split. Obviously, effective partitioning when splits occur is critical to the performance of the approach, since the problem is NP-complete. A genetic algorithm may be used for bi-partitioning.


Peer resources (nodes) are arranged hierarchically in tiers, using a dynamic control system which permits changes in the architecture during task processing. That is, the network remains available to accept and process tasks as the network availability of nodes changes, with the logical relationships of available nodes also changing as necessary or desirable. The nodes include processors, and these processors may be used to complete portions of a task as well as to provide distributed control capability. In a symmetric case, each node has sufficient capability and stored program instructions to implement most or all portions of a control algorithm, and therefore the loss of any one node or communication pathway will not block the ability of the SOG to remain operational. In an asymmetric case, various nodes have specialization in function, though there remain sufficient nodes available with capability to maintain operation of the SOG distributed throughout the network of nodes to provide fault tolerance and self-organizing capabilities.


The hierarchical tree of subsets of nodes is maintained dynamically as nodes join and leave. To better balance the tree, a genetic algorithm may be used to partition groups of nodes under a common parent (i.e. neighborhoods of a super-node). This enables the tree to maintain relatively small groups of mutually close nodes.


Embodiments of the present invention provide systems and methods which, for example, focus on cluster selection in an SOG based on communication performance. Of course, other metrics may be employed analogously. In order to simplify the analysis, it is assumed that all SOG nodes are capable of participating in clusters, and for example have similar capabilities. It is understood that this presumption is not a limitation of the invention, and that the analysis may be extended to address inhomogeneity.


The automatic clustering challenge is to extract the structure of the SOG from a performance perspective; out of the unorganized or partially organized set of SOG resources, how can the structure that is available to conventional grids be dynamically and automatically discovered? A preferred solution according to the present invention to provide a scalable solution to automatic clustering in SOGs is to create a hierarchy within the system and to localize most of the interactions to a small number of nearby nodes. The base problem in constructing the overlay that exposes structure is how to use distributed algorithms to organize the nodes into leaders and peers according to some performance-related criteria, without global knowledge.


A flexible overlay structure, called a Minimum-delay Dynamic Tree (MDTree), is built and maintained to allow an initial sorting of the nodes based on a small number of delay experiments for every joining node. The MDTree organizes nodes as they join, keeping nearby nodes close together in the tree. As nodes join, a grouped set of nodes may exceed the allowed size threshold, and the group must be split. Effective partitioning when splits occur is critical to the performance of the approach; because the problem is NP-complete, a genetic algorithm is preferably used for bi-partitioning. The MDTree overlay structure is then used when users generate requests for clusters, to identify effective clusters of a given size efficiently. As a result, it becomes possible to find clusters of specified sizes with low average delay among the nodes.


Simulation of the performance of this approach shows favorable results. By using an MDTree, the message overhead for finding a cluster can be kept linear with respect to cluster size, and the average link delay within the formed cluster is close to optimal.


Traditional computational grids that comprise multiple physical clusters may still benefit from an embodiment of the present automatic clustering approach. In particular, when a large-scale application requires a set of machines that exceeds the size of the largest available cluster, the present approach will consider the delay between nodes at different sites, and can help identify a large multi-organizational collection of machines to support the application.


The nodes may be pre-clustered using an overlay organization called a Minimum-Delay Tree (MDTree). Since nearby nodes in this structure have small delay to each other, variable size on-demand clustering considers only a small subset of the nodes. Each level in an MDTree consists of a neighborhood in which each node is a representative of another neighborhood at a lower level, recursively down to the leaf nodes. Inter-node delays among nodes within the same neighborhood are relatively small.


An MDTree makes it easier to find a specified number of nodes with minimum average delay. By using a hierarchical tree overlay structure, the MDTree controls the complexity of node joins and cluster extraction to O(log_K N), where K is the size of the neighborhood on each layer in the tree and N is the number of nodes.
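

As an illustrative calculation only (the values are hypothetical and merely exemplify the stated complexity), with a neighborhood size of K=4 and N=16 nodes, as in the example of FIG. 1 described below, the tree has on the order of log_4 16=2 layers of neighborhoods, so a joining node probes roughly K·log_K N=4·2=8 candidate nodes rather than all 16.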


An MDTree employs a hierarchically layered tree structure. Nodes on the same branch of the tree are organized so that they are close to one another in terms of link delay. This structure helps to satisfy requests with clusters that have small internal average link delays.


An MDTree employs a hierarchically layered tree structure. Nodes on the same branch of the tree are organized such that they are close to one another in terms of link delay. More generally, the nodes are represented within a space, and the nodes are clustered based on a metric appropriate for that space which provides optimum performance. When the nodes are employed for parallel computation, the link delay between respective nodes provides a rational basis for co-clustering within a subset, since inter-node communication speed is an important determinant of performance, especially if the nodes have similar capability and are on a single type of network. This structure helps requests to be satisfied with clusters that have small internal average link delays.


On each level, a super-node within each subset keeps information about the number of nodes it is controlling and the number of nodes controlled by each of its peer-nodes. This information is very useful for forming clusters on demand. Clearly, super-nodes and regular peer-nodes have different levels of responsibility in MDTrees. A super-node is a leader on all layers from 1 to the second highest layer it joins. Each super-node must participate in query and information exchange in all the neighborhoods it joins, which can make it heavily burdened. However, if higher layer super-nodes did not also appear within lower layer neighborhoods, it would be inefficient to pass information down to neighborhoods at lower layers.


Overlay Pre-Clustering with Minimum-Delay Dynamic Trees


Clustering algorithms may be classified into two categories: pro-active and on-demand. Most existing algorithms are pro-active; that is, given a set of nodes that join dynamically, the goal is to organize them into clusters as they join. On-demand systems do not maintain clusters in advance but construct them from scratch when required. SOGs may be supported whose diverse applications may lead to users requesting clusters of various sizes. Therefore, either different size clusters must be built pro-actively (significantly increasing the overhead), or an on-demand approach must be employed. A purely pro-active system results in high overhead and inflexibility, whereas a purely on-demand system requires significant dynamic overhead that can introduce scheduling delay. A preferred embodiment of the present system and method pro-actively organizes the nodes into an overlay that makes on-demand construction of variable size clusters efficient.


The problem of finding an optimal variable size cluster is NP-complete [13]; O(n²) delay experiments (ping messages) are needed to collect the full graph edge information. Therefore, an objective is to find an approximation of the optimal solution. Thus, adaptive heuristic approaches that can provide efficient solutions with more acceptable overhead in terms of communication and runtime are preferred.


Banerjee et al. [6] provide for hierarchically arranging peers in tiers. According to one embodiment, the present system and method extends this technique for more effective operation with respect to computational clustering, and to enable dynamic cluster formation. The tree is maintained dynamically as nodes join and leave. To better balance the tree, a genetic algorithm may be employed to partition groups of nodes under a common parent (i.e. neighborhoods of a super-node). This enables the tree to maintain relatively small groups of mutually close nodes. A preferred approach is to pre-cluster the nodes using an overlay organization that is called a Minimum-Delay Tree (MDTree). Nearby nodes in the tree have small delay to each other; thus, on-demand variable size clustering considers only a small subset of the nodes. Each level in an MDTree consists of a neighborhood in which each node is a representative of another neighborhood at a lower level, recursively down to the leaf nodes. Inter-node delays among nodes within the same neighborhood are relatively small.


An MDTree makes it easier to find a specified number of nodes with minimum average delay. By using a hierarchical tree overlay structure, the MDTree controls the complexity of node joins and cluster extraction to O(log_K N), where K is the size of the neighborhood on each layer in the tree and N is the number of nodes.


MDTree Architecture


An MDTree employs a hierarchically layered tree structure. Nodes on the same branch of the tree are organized such that they are close to one another in terms of link delay. This structure helps requests to be satisfied with clusters that have small internal average link delays. The terminology used herein is described as follows:

    • MDTree: All the SOG nodes are organized in a structure that facilitates resource sharing, information exchange, and cluster formation. This structure is the MDTree.
    • Layer: All nodes at a distance of j edges from the root of the MDTree are said to be at layer L(H-j) of the tree, where H is the height of the tree. The total number of layers in an MDTree is approximately O(log_K N), where N is the total number of nodes in the tree, and K is the predefined neighborhood size, which is defined below.
    • Peer-node: Any participating node is a peer-node.
    • Super-node: A super-node is the leader of a neighborhood. "Super-node" and "peer-node" are relative concepts. A node can be a peer-node on one layer, and a super-node on another. The super-node of a lower layer neighborhood is also a participant in the neighborhood of the layer above. In other words, every node on layer Li is a super-node on layer Li-1. Conversely, a super-node that participates on layer Li+1 must be a super-node for exactly one neighborhood on each of layers L1 through Li. Super-nodes are key nodes in the structure; they control peer-nodes in their neighborhood, and they are the gateway to the outside of the neighborhood.
    • Neighborhood: A neighborhood consists of a supernode and all other controlled nodes on a specified layer. Numerous neighborhoods controlled by different super-nodes exist on a specified layer. Lower layers communicate through the respective supernode of a neighborhood in the layer above them. On each layer, nodes within the same neighborhood exchange information with each other, which helps in electing a new super-node when the current super-node goes missing. However, nodes on the same layer but under the control of different super-nodes, i.e., belonging to different neighborhoods, do not communicate directly, and do not know of one another's existence.
    • Community: A community consists of a super-node and the subtree comprising all the neighborhoods on lower layers controlled by that super-node.
    • Entry Point: A special super-node used to direct new joining nodes to the neighborhood on the highest layer. The entry point is the super-node on the highest layer, and the only participant in this layer.
    • K: Each neighborhood has a pre-set maximum number of nodes that it can contain; this maximum value, K, is currently a constant of the overlay. Once a neighborhood on layer Li grows to contain K nodes, the neighborhood eventually splits into two, and the newly generated super-node is promoted into layer Li+1. A split may happen immediately after a neighborhood grows to contain K nodes, or at a specified interval, depending on the implementation.
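

For illustration only, the following Python sketch shows one possible in-memory representation of the entities defined above; the class names, fields, and the value of K are hypothetical choices made for readability, not limitations of the disclosure.

    from dataclasses import dataclass, field
    from typing import Dict, List

    K = 4  # pre-set maximum neighborhood size before a split (a constant of the overlay)

    @dataclass
    class MDTreeNode:
        node_id: str
        # measured link delay (e.g., in ms) to peers in the same neighborhood
        delay_to: Dict[str, float] = field(default_factory=dict)
        # number of nodes in the community (subtree) controlled on each layer
        community_size: Dict[int, int] = field(default_factory=dict)

    @dataclass
    class Neighborhood:
        layer: int
        super_node: MDTreeNode
        peers: List[MDTreeNode] = field(default_factory=list)

        def size(self) -> int:
            return len(self.peers) + 1      # peers plus the super-node itself

        def needs_split(self) -> bool:
            return self.size() >= K         # split once the neighborhood reaches K nodes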



FIG. 1 depicts an example of an MDTree consisting of 16 nodes with K=4. A super-node keeps information about the number of nodes it is controlling on each level, as shown in the figure, and, within its neighborhood on each layer, the number of nodes controlled by each of its peer-nodes. This information is generally useful for cluster formation.


MDTree construction and maintenance consist of four components: (1) the Node Join Protocol governs how nodes join the tree; (2) Neighborhood Splitting splits a neighborhood into two neighborhoods, when its size exceeds K; (3) a Tree Adjustment process allows nodes to move to more appropriate layers if they get misplaced by the neighborhood splitting process (or otherwise, for example, as nodes leave); and (4) Tree Maintenance mechanisms maintain the tree as nodes leave, by promoting nodes if their super-node leader disappears.


Node Join Protocol


To join the MDTree structure, a new node first queries the Entry Point, which replies with a complete list of top layer nodes. The new node then pings each node in the returned list. As a result of the pings, it finds the closest node and sends a query to it. From this node, it gets a list of that node's neighborhood at the lower level. The process is repeated recursively until a layer L1 node is found; the joining node then attaches itself on layer L1 to the found node. When nodes join the system, they are always initially attached to layer L1. Once a neighborhood consists of K nodes, it must eventually be split. Higher layer nodes result from layer splitting.
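

The following Python sketch illustrates this join walk under stated assumptions: ping(a, b) returns a measured link delay, neighborhood_of(node, layer) returns the members of that node's neighborhood on the given layer, and attach() records the final placement; all three helpers are hypothetical names introduced for the example.

    def join(new_node, entry_point, top_layer, ping, neighborhood_of, attach):
        """Descend from the Entry Point, always following the closest node,
        until a layer-1 neighborhood is reached."""
        layer = top_layer
        candidates = neighborhood_of(entry_point, layer)
        while layer > 1:
            # ping every node in the current neighborhood and keep the closest one
            closest = min(candidates, key=lambda n: ping(new_node, n))
            layer -= 1
            candidates = neighborhood_of(closest, layer)   # its neighborhood one level down
        closest = min(candidates, key=lambda n: ping(new_node, n))
        attach(new_node, closest, layer=1)                 # new nodes always attach at layer 1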


Neighborhood Splitting


An MDTree's layer structure is dynamic, with layers and super-nodes potentially changing roles and positions when nodes join and leave. When one super-node's number of children reaches K, this neighborhood is split into two. The layer splitting algorithm has significant impact on the performance of the tree; a random split may cause ineffective partitioning, as relatively distant nodes get placed in the same layer. The effect is compounded as additional splits occur. Ideally, when a split occurs, the minimum delay criteria of the tree would be preserved. In other words, the average link delay for each new neighborhood should be minimized.


Because of previous information exchange within the neighborhood, the super-node has all the information about its peer-nodes, including their distances to each other; this information allows the super-node to effectively partition the neighborhood. Effective partitioning of the neighborhood is critical to the performance of the MDTree. However, optimal bi-partitioning is known to be NP-complete, and it is impractical to enumerate all the combinations and calculate average link delays for each of them when K is relatively large. For this reason, an optimized genetic partitioning algorithm is preferably employed to achieve effective partitioning. However, any heuristic that can efficiently and effectively partition the neighborhood may be used here.
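

As a sketch only (the operators, population size, and fitness function shown are illustrative assumptions rather than the prescribed algorithm), a simple genetic bi-partitioning of a neighborhood might minimize the sum of the average intra-partition link delays, using the pairwise delays already known to the super-node:

    import random

    def avg_intra_delay(part, delay):
        """Average link delay among nodes in one partition (0 if fewer than 2 nodes)."""
        pairs = [(a, b) for i, a in enumerate(part) for b in part[i + 1:]]
        return sum(delay[a][b] for a, b in pairs) / len(pairs) if pairs else 0.0

    def fitness(mask, nodes, delay):
        left = [n for n, bit in zip(nodes, mask) if bit]
        right = [n for n, bit in zip(nodes, mask) if not bit]
        if not left or not right:
            return float("inf")                    # reject degenerate splits
        return avg_intra_delay(left, delay) + avg_intra_delay(right, delay)

    def genetic_bipartition(nodes, delay, pop=20, generations=50, mutation=0.1):
        """Return (left, right) partitions of `nodes` with low average intra-partition delay;
        `delay` is a symmetric dict-of-dicts of measured link delays."""
        size = len(nodes)
        population = [[random.randint(0, 1) for _ in range(size)] for _ in range(pop)]
        for _ in range(generations):
            population.sort(key=lambda m: fitness(m, nodes, delay))
            survivors = population[: pop // 2]     # simple truncation selection
            children = []
            while len(survivors) + len(children) < pop:
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, size)    # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < mutation:     # flip one random bit
                    i = random.randrange(size)
                    child[i] ^= 1
                children.append(child)
            population = survivors + children
        best = min(population, key=lambda m: fitness(m, nodes, delay))
        left = [n for n, bit in zip(nodes, best) if bit]
        right = [n for n, bit in zip(nodes, best) if not bit]
        return left, right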


After the nodes are partitioned into two new, smaller neighborhoods, for simplicity and to avoid updating this node on all layers above, the existing super-node Na of the neighborhood at layer Li remains the super-node of the neighborhood to which it belongs after the split. Here Na continues to be a super-node because it may reside on a higher layer. After splitting, if Na is found not to be the most suitable supernode on that layer, it can be replaced with the best fit node.


In the newly generated neighborhood, the node having the minimum link delay to all other nodes, Nb, is appointed to be the super-node of that neighborhood on layer Li. (Alternatively, both super-nodes can be selected in the same way as for the newly generated neighborhood.) Now both Na and Nb participate in the same neighborhood on layer Li+1 under the same supernode Nc: Nb, the new super-node, becomes a sibling of Na and a new peer-node of Nc. Na informs all related nodes about the change of leadership. Upon receiving the split message, the new super-node Nb requests to attach to layer Li+1 and join Na's neighborhood. Nc, the super-node of Na, now becomes the common super-node of both Na and Nb. While the minimum link delay is a preferred metric, any other suitable metric may be used, and indeed a multiparameter cost function may be employed, without departing from the scope of the invention. Such metrics or parameters may include economic cost, speed (which may be distinct from delay), power consumption or availability, predicted stability, etc.


Such a split reduces the number of nodes on layer Li, but increases the number of nodes on layer Li+1. If the number of nodes in a neighborhood on layer Li+1 reaches K, that neighborhood splits in turn.


Tree Adjustment


In general, heuristic approaches do not necessarily consider the full solution space, and can therefore result in suboptimal configurations. For example, a node may unluckily get placed in the wrong branch of a tree due to an early split. Further, neighborhood splitting results in MDTree structure changes, and in nodes being promoted to higher layers. However, this may separate nearby nodes into different neighborhoods, and they may eventually migrate away from each other in the tree. Heuristics may allow nodes to recover from such unfortunate placement. For example, a node can, through its super-node at layer Li, discover the super-node's neighborhood on layer Li+1. The node can then ping all the nodes in that neighborhood at a fixed infrequent interval to check for a peer of lower link delay, and move itself into that neighborhood (and merge all of its community into the new community). Another possible solution, with a larger repositioning range, is to contact the entry node at a fixed interval to get a global reposition. However, too frequent repositioning may affect the stability of the MDTree.
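

A minimal sketch of this infrequent repositioning check follows; ping() and migrate() are assumed primitives, and the policy shown (move whenever a strictly closer super-node is found one layer up) is one illustrative choice.

    def maybe_reposition(node, current_super, upper_neighborhood, ping, migrate):
        """Ping the super-node's neighborhood one layer up; if some super-node
        there is closer than the current one, migrate this node (together with
        its community) under that closer super-node."""
        closest = min(upper_neighborhood, key=lambda s: ping(node, s))
        if closest is not current_super and ping(node, closest) < ping(node, current_super):
            migrate(node, new_parent=closest)   # merges the node's community into the new community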


Tree Maintenance


It is important to recover from node and super-node failure (or more commonly, departure from the SOG). In a SOG, most nodes may be well behaved and announce their intent to depart. This may allow soft reconfiguration of the tree, by removing the peer-node and electing an alternate super-node for the layers where it serves this duty. The tree provides an efficient structure for multicasting such messages. However, since failures and unannounced departures are possible, nodes in the same neighborhood exchange heartbeat messages. A node is considered absent if it fails to respond to some predefined number of consecutive heartbeat messages; this can trigger tree reconfiguration. Recovery from peer-node departures is handled differently from recovery from super-node departures, as described below.

    • Peer-node Departure: The departure of a peer-node P simply results in the super-node and the other peer-nodes in P's neighborhood removing P from their records. If the number of nodes in the neighborhood falls below a predefined threshold, the super-node of layer Li may try to demote itself to a peer-node on layer Li and merge its entire community into that of another super-node on layer Li. This approach helps keep the tree structure balanced.
    • Super-node Departure: Because all MDTree structure information is broadcast within the neighborhood, all peer-nodes have knowledge of the neighborhood. Thus, a new super-node can be elected directly from the neighborhood and promoted in place of the departed super-node.
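

The following Python sketch illustrates this recovery logic under stated assumptions; it reuses the hypothetical Neighborhood structure sketched earlier, and the heartbeat limit and the elect_super_node() policy are illustrative rather than mandated.

    HEARTBEAT_LIMIT = 3   # predefined number of consecutive missed heartbeats

    def handle_missed_heartbeats(neighborhood, node, missed, elect_super_node):
        """Drop a node that stopped answering heartbeats; elect a replacement if it led."""
        if missed < HEARTBEAT_LIMIT:
            return                                   # node is not yet considered absent
        if node is neighborhood.super_node:
            # Super-node departure: every peer already knows the neighborhood state
            # (it is broadcast within the neighborhood), so a new leader is elected
            # locally, e.g. the peer with the lowest total delay to the others.
            new_leader = elect_super_node(neighborhood.peers)
            neighborhood.peers.remove(new_leader)
            neighborhood.super_node = new_leader
        else:
            # Peer-node departure: the super-node and remaining peers simply
            # remove the departed node from their records.
            neighborhood.peers.remove(node)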


Cluster Formation


When a user on a node requests a cluster of size R, the node checks whether the number of nodes it controls is larger than the requested size multiplied by a predefined candidate scale factor S, where S>100%, so that the requester may select the R most suitable nodes from among a set of more than R nodes should it decide to do so. If the node cannot satisfy the request, the request is forwarded recursively to supernodes at higher and higher layers, without a DETERMINED flag, until it arrives at a super-node that controls a community containing more than R*S nodes. This super-node then decides which part of the community under its control should join the cluster, and forwards the request, with the DETERMINED flag set, to those nodes. A cluster request message with the DETERMINED flag requires the receiver and all the nodes controlled by the receiver to respond to the original requester without further judgment. After receiving enough responses from cluster candidates, the requester can then ping each responder and select the closest R nodes; alternatively, it can select a random subset of R nodes, or the first R responders. The structure of the MDTree ensures that the responding nodes are close to each other, and the second selection among the responses provides more flexibility.
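

A compact Python sketch of this request flow follows; community_size(), parent_of(), send_determined(), and ping() are hypothetical helpers, and the final selection shown (keeping the R responders closest to the requester) is the prototype strategy described below.

    S = 1.8   # candidate scale factor (S > 100%; the prototype default of 180%)

    def request_cluster(requester, R, community_size, parent_of, send_determined, ping):
        """Walk up the MDTree until a super-node controls at least R*S nodes,
        fan the DETERMINED request out to its community, then keep the R
        responders with the least link delay to the requester."""
        node = requester
        while community_size(node) < R * S:
            node = parent_of(node)                     # forward upward, without DETERMINED
            if node is None:
                raise RuntimeError("no community is large enough to satisfy the request")
        responders = send_determined(node, requester)  # all controlled nodes reply to the requester
        return sorted(responders, key=lambda n: ping(requester, n))[:R]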


The original requester knows the link delay between itself and the responders, but not the delay among the responders. This is a sacrifice of optimality for performance; a perfect selection would require a solution to the NP-complete clustering problem and O(n²) tests (here, however, n reflects the size of the cluster, not the much larger size of the SOG).


The MDTree structure thus preferably organizes nodes based on the link delay between node pairs. This structure makes automatic clustering distributed, scalable, efficient, and effective. The MDTree can also be applied as the foundation for group scheduling using criteria other than link delay. Traditional computational Grids that comprise multiple physical clusters may still benefit from an automatic clustering approach similar to that discussed. In particular, when a large-scale application requires a set of machines that exceeds the size of the largest available cluster, the present approach will consider the delay between nodes at different sites, and help identify a large multi-organizational collection of machines to support the application.


In a prototype implementation, a default value of S=180% is set, so requesters receive 1.8 times as many candidate nodes for their cluster as they request, and the requester picks the top R responders with the least link delay to itself, thus leading to a solution favoring minimum diameter. The original requester only knows the link delay between itself and the responders, but not the delay among the responders. This is another sacrifice of optimality for performance; a perfect selection would require a solution to the NP-complete clustering problem, and O(n²) tests (here, however, n reflects the size of the cluster, not the much larger size of the SOG). Of course, alternative heuristics may be employed for final selection of the cluster from among candidate nodes.


It may be assumed that the requester is interested in a nearby cluster, which reduces the application launch delay, and acts as a crude geographical load-balancing technique. However, alternative approaches for cluster formation can be directly supported on top of an MDTree which do not mandate this presumption. For example, the tree can track the load at a coarse granularity, and map the request to a lightly loaded portion of the SOG.


The underlying logic of the MDTree follows from the structure of computer networks: if node A is close to node B, and node C is also close to node B, then very likely node A will be close to node C.


Scheduling means allocating resources for jobs. It is a fundamental issue in achieving high performance in any Grid system. Effective scheduling policies can maximize system and individual application performance, as well as reduce running time of jobs. Scheduling is in essence a resource allocation problem, i.e. the problem of mapping jobs to the available physical resources according to some criteria.


While a single resource and a single job are matched in conventional bipartite scheduling systems, the group-scheduling strategy matches concurrent jobs, each consisting of multiple tasks and requiring multiple resources, to multiple available resources that can support them efficiently. The selected resources must be both individually efficient and load balanced (to reduce execution time), and mutually close (to reduce communication time).


Single match-making scheduling algorithms for distributed environments typically ignore the impact of communication. This approach greatly simplifies the scheduling problem because it simply tracks individual node characteristics, rather than tracking the mutual relationships among sets of nodes (of which there are exponentially many in the number of available nodes). For parallel applications where each node runs independently and communication costs do not play a role, such an approach is sufficient.


However, SOGs are intended to run computationally intensive parallel multi-task jobs. SOGs therefore target an environment where general parallel applications may be supported. So the communication cost among candidate groups of nodes being considered for supporting a task must be factored into the scheduling decisions. Thus the selected resources have to be mutually close.


Effective scheduling in large-scale computational Grids is challenging due to a need to address a variety of metrics and constraints (resource utilization, response time, global and local allocation policies) while dealing with multiple, potentially independent sources of jobs and a large number of storage, computing, and network resources.


Group-scheduling needs to take into consideration the interaction among tasks, and is even harder. The problem is how to dispatch (or schedule) jobs among processing elements to achieve performance goals, such as minimizing execution time, minimizing communication delays, and maximizing resource utilization. Here a job is a set of tasks that will be carried out on a set of resources, and a task is an atomic operation to be performed on a resource.


To achieve the goal of selecting best resources for parallel tasks, the main foci are:


1. How the resource information is managed.


2. How the scheduling requests are processed.


Several challenges are involved in resource information management. For one thing, effective use of SOGs requires up-to-date information about widely distributed available resources. This is a challenging problem even for general large-scale distributed systems, particularly taking into account the continuously changing state of the resources. Especially in an SOG environment, where nodes may join and leave frequently, discovery of dynamic resources must be scalable in the number of resources and users and hence, as much as possible, fully decentralized. Effective resource information collection, summarization, and update are important, but difficult.


One difficulty lies in maintaining low storage and update overhead for the whole system and for each individual node in a dynamic and distributed environment. Making scheduling decisions requires up-to-date resource status information. However, in a hierarchical model, complete information without aggregation results in high storage and update overhead. Too much aggregation results in inaccurate scheduling decisions; on the other hand, too little aggregation results in inefficiency and redundancy. These two aspects need to be balanced.


Another difficulty comes from summarizing resource information to provide accurate results with minimum queries. Resource information should be easy to update and query. It is clear that resource information needs to be aggregated using an effective summarization method. This summarization method should keep important information and filter out unimportant information with low computational and storage overhead.


Scheduling request processing also contains several challenges. One challenge is the difficulty of keeping request processing efficient and scalable in a dynamic environment. To be efficient, query messages cannot be passed through too many intermediate nodes. To be scalable, query messages cannot be flooded to a large range of nodes. Another challenge is the difficulty of filtering out the best resources when more resources satisfy the criteria than are required. Besides the criteria clearly stated in a request, implied criteria, such as link delay, also need to be considered to select the best resources from more candidates than required.


Both resource management and scheduling request processing can have centralized or distributed solutions. A centralized solution may store all the resource information in a database, or process all the scheduling requests at a central node. Centralized solutions do not scale, especially to the scales expected with the inclusion of desktop resources. In this case, all the resource updates or scheduling requests go to a few dedicated servers. The benefit of centralized solutions is that resource information maintenance is easy and queries are efficient, since all the information is in the same database. On the other hand, when the scale of the system exceeds the servers' capability, these centralized servers can become the bottleneck of the whole system. In addition, the single point of failure problem usually comes with centralized solutions. Thus, centralized solutions do not optimally satisfy the requirements of an SOG environment.


Distributed solutions include purely distributed solutions and hierarchical solutions. These models apply to both resource information management and scheduling requests processing. Distributed solutions usually bear higher maintenance costs overall, but this disadvantage is offset by sharing the costs among all the participating nodes. Purely distributed solutions evenly distribute resource information on all the nodes. Thus system overheads are shared by each node and the solutions are scalable. The nodes are connected together through mechanisms such as a distributed hash table.


In cases addressed by the present technology, the problem with purely distributed solutions is that it is almost impossible for a purely distributed system to directly support multiple-condition matching, or range queries, due to the properties of Distributed Hash Tables (DHTs).


Hierarchical solutions combine the advantages of both centralized solutions and purely distributed solutions, and thus are more flexible. Hierarchical solutions can be single-layer or multi-layer. Higher layer nodes store duplicated resource data of lower layer nodes, or a summary of that information. When it comes to request processing, higher layer nodes forward requests to their appropriate children until nodes at the lowest layer are reached.


Hierarchical solutions are therefore preferred. Since the MDTree itself is multi-layer hierarchical in terms of overlay topology, it is easier to implement multi-layer hierarchical resource management on top of it.


In a hierarchical model, for requests to be processed, higher layer nodes can either directly respond to the requester or forward requests to appropriate children. Higher layer nodes need to either know accurate information or know who has accurate information. In other words, higher layer nodes need to store either complete information on all the subordinate nodes, in order to make decisions, or a summary of that information, in order to forward requests. As to resource information management, the respective states of the different resources should be monitored in an efficient and scalable way. Factors considered include system overhead, information collection frequency, and information accuracy.


Based on the MDTree structure, forwarding scheduling requests down to leaf nodes to make the final decision is more scalable than making responses at upper nodes, and requires less resource information to be stored on the upper layers. It is clear that insufficient summarization, or storing complete information at upper layer nodes, leads to impaired scalability. On the other hand, too much abstraction means inaccuracy. To achieve the best performance, the update overhead and the ease and accuracy of queries need to be balanced. Status updates can be propagated in push mode, pull mode, or a more complicated adaptive mode. Such mechanisms could also be combined.


Obviously, storing only a summary of lower layer information reduces the load on upper layer nodes. Preferably, a summary method is employed that keeps most of the information, reduces the number of resource records, and still handles complex matching queries. The basic requirement is that, based on the resource information summary, upper layer nodes need to know which child controls resources satisfying the request criteria. Thus the summary needs to be a vector having at least as many dimensions as the query.


The relational data model has well known advantages with respect to data independence that simplify storage and retrieval. Resource data can be stored in a relational database or a similar data structure. By using the relational data model, flexible general queries (including exclusion) can be formed and answered. In the relational data model, records can be aggregated only if the values of the corresponding fields are equal. Some attributes have continuous numerical values; in that case, their value ranges need to be quantized into a predefined number of buckets. If a value falls into a bucket, it is summarized by incrementing the count of resources for the corresponding bucket. By aggregating resources this way, some precision is sacrificed to achieve a great reduction in the number of records.
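

For illustration only, the following Python sketch shows one way such bucket-based summarization and a conservative range-query check could look; the attribute names, bucket boundaries, and helper names are hypothetical assumptions rather than prescribed values.

    # Illustrative bucket boundaries per attribute (upper bounds of each bucket).
    BUCKETS = {
        "cpu_mhz":   [1000, 2000, 3000, 4000],
        "memory_mb": [512, 1024, 2048, 4096, 8192],
    }

    def bucket_index(value, boundaries):
        """Index of the first bucket whose upper bound is not exceeded by value."""
        for i, upper in enumerate(boundaries):
            if value <= upper:
                return i
        return len(boundaries)              # overflow bucket for larger values

    def summarize(resources):
        """resources: list of dicts, e.g. {"cpu_mhz": 2400, "memory_mb": 2048}.
        Returns {attribute: [count per bucket]} suitable for storage at an upper node."""
        summary = {a: [0] * (len(b) + 1) for a, b in BUCKETS.items()}
        for r in resources:
            for attr, boundaries in BUCKETS.items():
                summary[attr][bucket_index(r[attr], boundaries)] += 1
        return summary

    def may_satisfy(summary, attr, minimum, count):
        """Conservative range-query check: might this child control `count`
        resources with attr >= minimum? (Precision is lost at bucket granularity.)"""
        start = bucket_index(minimum, BUCKETS[attr])
        return sum(summary[attr][start:]) >= count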


For better performance, scheduling requests are normally forwarded to more candidates than the task number. Although these candidates may be already mutually close due to the initial sorting of MDTree, the strategy of further selecting best quality resources still plays an important role.


Effective scheduling depends on efficient resource management. The resource discovery methods in Grid systems fall into two categories: centralized solutions and distributed solutions. Distributed solutions can be further classified, in a way similar to peer-to-peer (P2P) search algorithms, as unstructured and structured, according to their membership and storage organizations. Additional differentiation in the resource management problem lies in whether the resource information is replicated or not, and how it is tracked by the schedulers (push, pull or hybrid).


The majority of the distributed solutions use a variation of either flooding or Distributed Hash Tables (DHT). In general, flooding based solutions incur high overhead, while DHT based solutions cannot readily support complex queries such as multi-attribute and range queries. Furthermore, it is difficult to target resources that are both near to the request initiator and near to each other.


A preferred process of matching appropriate resources in response to a job scheduling request is now described. When a node receives a job scheduling request, it first checks whether it directly or indirectly controls sufficient resources to satisfy the criteria. If the criteria cannot be satisfied, the request is forwarded up to the super node of the neighborhood. The super node, as a peer node at the upper layer, checks the resources it controls, and recursively forwards the request if necessary. Eventually, a super node that controls the desired number of resources is found (or, alternatively, the root is reached and the request fails). At that point, the super node that can satisfy the scheduling request criteria sets a MATCHED flag and forwards the scheduling request down to those children it identifies as holding relevant resources. The matched scheduling request messages are then passed down the pruned tree to the leaf nodes; branches that obviously do not match the criteria are skipped. Since some information is lost in the process of aggregation, the super node compares only the aggregated values. When leaf nodes with resources receive the matched scheduling request, they check the job attributes and criteria, and then finally decide whether or not to respond to the request initiator. Responses do not have to be routed in the opposite direction to the queries; they are sent directly to the job initiator.
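
A minimal sketch of this matching walk is shown below (Python); the class and field names (aggregate counts, parent/children links) are assumptions for illustration only and do not reflect the actual data structures:

    class Node:
        def __init__(self, name, local_resources=0, children=None, parent=None):
            self.name = name
            self.local_resources = local_resources   # resources held at a leaf
            self.children = children or []            # empty for leaf nodes
            self.parent = parent                      # super-node one layer up
            for c in self.children:
                c.parent = self

        def aggregate(self):
            """Aggregated resource count visible at this (super-)node."""
            if not self.children:
                return self.local_resources
            return sum(c.aggregate() for c in self.children)

        def handle_request(self, needed, initiator):
            # Phase 1: forward the request up until a super-node controls enough resources.
            if self.aggregate() < needed:
                if self.parent is None:
                    return []                         # root reached without a match: request fails
                return self.parent.handle_request(needed, initiator)
            # Phase 2: the request is MATCHED; push it down the pruned tree.
            return self._push_down(initiator)

        def _push_down(self, initiator):
            if not self.children:
                # A leaf checks its own resources and replies directly to the initiator.
                return [self.name] if self.local_resources > 0 else []
            responders = []
            for child in self.children:
                if child.aggregate() > 0:             # prune branches that obviously cannot match
                    responders += child._push_down(initiator)
            return responders

    # Example: a request for 4 resources made at leaf "b" climbs to the root
    # and is answered directly by leaves "a" and "c".
    leaves = [Node("a", 2), Node("b", 0), Node("c", 3)]
    root = Node("root", children=leaves)
    print(leaves[1].handle_request(4, initiator="b"))   # ['a', 'c']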


Authentication is the act of identifying an individual computer user, while authorization typically refers to the process of determining the eligibility of a properly authenticated identifier (e.g., a person) to access a resource. Authentication is the process in which a real-world entity is verified to be who (e.g., a person) or what (e.g., a compute node or remote instrument) its identifier (e.g., username, certificate subject, etc.) claims. Authorization mechanisms are devised to implement the policies that govern and manage the use of computing resources. Authentication is thus the basis of authorization.


For authentication, the Grid Security Infrastructure (GSI), which provides the security functionality of the Globus Toolkit, uses public key infrastructure (PKI) X.509 proxy certificates to provide credentials for users and to allow for delegation and single sign-on. In GSI, two-party mutual authentication involves straightforward applications of standard SSL authentication.


In an SOG environment, user registration and node participation are separate processes. Participation of a new node does not mean that all the users on that node gain access to the SOG. In other words, the number of registered users is not affected by nodes joining or leaving, and vice versa. It is possible for an SOG user not to have an account on any host; conversely, the owner of a resource may not be an SOG user. User registration relies on registration agents (RAs). These are individuals who are likely to know, firsthand or secondhand, the persons who are requesting certificates. The policies for establishing member identities should be published by each RA, and the procedures for verifying identities and certificate requests should be consistent among all the RAs and approved by the Certificate Authority (CA). A Grid CA is defined as a CA that is independent of any single organization and whose purpose is to sign certificates for individuals who may be allowed access to the Grid resources, hosts, or services running on a single host. Node joining and leaving, by contrast, are much more flexible. Nodes do not have to be registered in advance if applications are allowed to run on untrusted hosts; otherwise, a new node should at least present a certificate signed by a well-known CA.


When a new user joins, an SOG administrator may assign one or more roles to this user, sign this information, and save it in the distributed attributes repository. This information can be updated with the administrator's signature. Administrator is itself a role. When new roles are added, every node is notified to make sure it has a corresponding policy for the new role. When a new node joins the SOG system, the resource owner is required to specify an access policy for each existing role on the joining node. These local policies, as well as global policies, are stored directly on the node. It is the resource owners' responsibility to make sure that resources are not abused. To conform to the view of Service Oriented Architectures (SOA), it is presumed that resources are accessed through services.


The scalability of data location and query in distributed systems is of paramount concern. It should be possible to extend the system with new resources at a reasonable cost, and there should be no performance bottlenecks. P2P overlays are therefore adopted as the basis for the attributes repository. A Distributed Hash Table (DHT) based structured P2P architecture, such as Chord or CAN, can be used. Data lookup takes O(log N) hops, where N is the total number of nodes in the system. The user name is used as the key, and role information related to the user is saved in the repository. The repository is mainly used to store role information and global policies. Local policies can also be saved there if they are shared by several resources.
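
By way of illustration only, the following Python sketch shows the key-placement idea of such a repository: the user name is hashed onto a ring, and the role record is stored on the first node clockwise from that point. Real DHTs such as Chord or CAN additionally maintain routing tables that yield the O(log N) lookup; those are omitted here, and the node names are hypothetical.

    import hashlib
    from bisect import bisect_left

    class AttributeRepository:
        """Sketch of a DHT-style attributes repository keyed by user name."""
        def __init__(self, node_ids):
            self.ring = sorted((self._h(n), n) for n in node_ids)

        @staticmethod
        def _h(key):
            # Hash a string onto the identifier ring.
            return int(hashlib.sha1(key.encode()).hexdigest(), 16)

        def node_for(self, username):
            # The record lives on the first node clockwise from hash(username).
            keys = [h for h, _ in self.ring]
            i = bisect_left(keys, self._h(username)) % len(self.ring)
            return self.ring[i][1]

    repo = AttributeRepository(["node-A", "node-B", "node-C"])
    print(repo.node_for("alice"))   # node responsible for alice's role information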


A role-based policy describes a privilege that typically consists of a three-tuple (attribute, resource, action), where the attribute is itself a two-tuple (subject, attribute). This method is more flexible than the discretionary approach. It separates the assignment of privileges into the resource-specific definition of access rights (by a policy authority) and the resource-agnostic assignment of attributes to subjects (by an attribute authority), and thus allows these tasks to be distributed to separate authorities. Furthermore, the grouping of subjects into roles enables more scalable management than the direct assignment of rights to subjects. Hierarchical role schemes extend this concept further by allowing access-right inheritance from less privileged to more privileged roles. In autonomous authorization, the action of the three-tuple is flexibly defined by the resource owner.
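
The following Python sketch illustrates, under assumed attribute and resource names, how such an (attribute, resource, action) privilege table and a separate subject-to-attribute assignment might be combined to answer an authorization query:

    # Privilege table maintained by the policy authority (illustrative entries).
    POLICIES = {
        # (role attribute, resource, action) -> permit
        (("role", "analyst"), "compute-service", "submit-job"): True,
        (("role", "admin"),   "compute-service", "configure"):  True,
    }

    # Attributes assigned to subjects by the attribute authority.
    USER_ATTRIBUTES = {
        "alice": {("role", "analyst")},
    }

    def is_authorized(subject, resource, action):
        """Permit if any attribute bound to the subject maps to a permitting policy."""
        return any(POLICIES.get((attr, resource, action), False)
                   for attr in USER_ATTRIBUTES.get(subject, ()))

    print(is_authorized("alice", "compute-service", "submit-job"))   # True
    print(is_authorized("alice", "compute-service", "configure"))    # False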


With the merging of Grid technologies and Web Service-based technologies in OGSA, the eXtensible Access Control Markup Language (XACML) is a good choice for specifying access control policies and the associated request/response formats. It allows the use and definition of combining algorithms, which provide a composite decision over the policies governing the access requirements of a resource.


An access token is the evidence that the user proxy sends to the resource proxy to prove its eligibility for a service. The access token includes the proxy certificate and role information. In order to use grid resources, the user has to be authenticated first. After authentication, a short-lived proxy certificate is generated, which includes the user's identity information. Before requesting the service of a specified resource, the user proxy has to retrieve role information from the attributes repository. A suitable role is then selected from all the roles bound to this user. The proxy certificate and the role information together form an access token. The user proxy then presents the access token to the resource proxy, which uses it in making policy decisions. This last step may be repeated many times using the same access token, as long as the proxy certificate and role information have not expired.
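
A minimal sketch of this exchange is shown below (Python, with illustrative field names and lifetimes; a real token would be cryptographically signed rather than a plain dictionary):

    import time

    def make_access_token(proxy_certificate, role, lifetime_s=3600):
        """Bundle the short-lived proxy certificate with the selected role."""
        return {
            "certificate": proxy_certificate,
            "role": role,
            "expires_at": time.time() + lifetime_s,
        }

    def resource_proxy_decision(token, local_policy):
        """The resource proxy checks expiry, then applies its local policy to the role."""
        if time.time() >= token["expires_at"]:
            return False                      # token expired: re-authentication needed
        return token["role"] in local_policy.get("allowed_roles", set())

    token = make_access_token(proxy_certificate="<alice proxy cert>", role="analyst")
    policy = {"allowed_roles": {"analyst", "admin"}}
    print(resource_proxy_decision(token, policy))   # True; may be reused until expiry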


It is therefore an object to provide a method for clustering of nodes for a distributed task, comprising automatically partitioning a set of nodes into a branched hierarchy of subsets based at least on a relative proximity according to at least one node characteristic metric, each subset having a supernode selected based on an automatic ranking of nodes within the same subset, each node within the subset being adapted to communicate control information with the supernode, and the supernodes of respective subnets which are hierarchically linked being adapted to communicate control information with each other; and outputting a set of preferred nodes for allocation of portions of a distributed task, wherein the output set of preferred nodes is dependent on the hierarchy and the distributed task.


It is a further object to provide a cluster of nodes adapted to perform a distributed task, comprising: a branched hierarchy of nodes, partitioned into subsets of nodes based at least on a relative proximity according to at least one node characteristic metric, each subset having a supernode selected based on an automatic ranking of nodes within the same subset, each node within the subset being adapted to communicate control information with the supernode, and the supernodes of respective subnets which are hierarchically linked being adapted to communicate control information with each other; and at least one processor adapted to determine a set of preferred nodes for allocation of portions of a distributed task, wherein the set of preferred nodes is dependent on the hierarchy and the distributed task.


It is another object to provide a computer readable medium, storing instructions for controlling a programmable processor to output a set of preferred nodes for allocation of portions of a distributed task, wherein the output set of preferred nodes is dependent on a branched hierarchy of nodes and the distributed task, wherein the branched hierarchy of nodes is formed by automatically partitioning a set of nodes into a branched hierarchy of subsets based at least on a relative proximity according to at least one node characteristic metric, each subset having a supernode selected based on an automatic ranking of nodes within the same subset, each node within the subset being adapted to communicate control information with the supernode, and the supernodes of respective subnets which are hierarchically linked being adapted to communicate control information with each other.


The nodes may be partitioned into the branched hierarchy based on a link delay metric. For example, the at least one node characteristic metric comprises a pair-wise communication latency between respective nodes. The hierarchy may be established based at least in part on proactive communications. The automatic partitioning may be initiated prior to allocating portions of the task, and the hierarchy may be modified based on dynamically changing conditions by proactive communications. The proactive communications may comprise a transmitted heartbeat signal. Preferably, the heartbeat signal is provided as part of a communication between respective nodes provided for at least one other purpose. The automatic partitioning may occur dynamically while a distributed task is in progress. Likewise, supernode status may be selected dynamically. A genetic algorithm may be employed to control the proactive communications to estimate a network state representing the set of nodes, substantially without testing each potential communication link therein. A new node may be placed within the hierarchy or removed from the hierarchy while the distributed task is in progress, and the new node allocated a portion of the distributed task, or a portion of the distributed task formerly performed by the removed node undertaken by another node. A subset (neighborhood) of the hierarchy containing nodes performing a portion of the distributed task may be split into a plurality of subsets, each subset having a node selected to be a supernode, while the distributed task is in progress or otherwise. The preferred number of nodes within a subset (neighborhood) may be dependent on a threshold number, and as the actual number deviates, the hierarchy may be reconfigured accordingly. A node may be moved from one subset to another subset while the node is allocated a portion of the distributed task, wherein a respective supernode for the node is also changed. A node within a subset allocated a portion of the distributed task may be promoted to a supernode if a respective previous supernode is unavailable, wherein said promoting occurs automatically without communications with the previous supernode while the distributed task is in progress. The set of nodes may comprise at least a portion of a grid of computing resources. The grid of computing resources may, in turn, be wholly or partially self-organizing.
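
As one non-limiting illustration of a pair-wise latency distance function and a neighborhood-size threshold, the following Python sketch attaches a joining node to the closest neighborhood and reports when that neighborhood exceeds the maximum size K and should be split; the helper names, the delay-table representation, and the default threshold are assumptions for illustration:

    K_MAX = 4   # illustrative maximum neighborhood size (FIG. 1 uses K = 4)

    def ping_ms(a, b, delay_table):
        """Assumed measured pair-wise link delay, in ms, between nodes a and b."""
        return delay_table[frozenset((a, b))]

    def closest_neighborhood(new_node, neighborhoods, delay_table):
        """Select the neighborhood whose members are, on average, closest to
        the joining node under the link-delay distance function."""
        def avg_delay(members):
            return sum(ping_ms(new_node, m, delay_table) for m in members) / len(members)
        return min(neighborhoods, key=avg_delay)

    def place(new_node, neighborhoods, delay_table, k_max=K_MAX):
        """Place the node; return True if the receiving neighborhood now
        exceeds the threshold and should be split into two subsets."""
        target = closest_neighborhood(new_node, neighborhoods, delay_table)
        target.append(new_node)
        return len(target) > k_max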


The at least one processor may comprise a distributed control system. The at least one processor may comprise a plurality of processors which are part of respective nodes, wherein the allocation of portions of the distributed task to the at least a portion of the nodes is tolerant to a loss of at least one of said processors from the set of nodes. At least one node may have an associated processor which executes a genetic algorithm which controls proactive communications between nodes to estimate a network state representing the set of nodes, substantially without testing each potential communication link therein. The processor may be adapted to place a new node within the hierarchy while the distributed task is in progress, split a subset containing nodes performing a portion of the distributed task into a plurality of subsets, move a node from one subset to another subset while the node is allocated a portion of the distributed task, wherein a respective supernode for the node is changed, and/or promote a node within a subset allocated a portion of the distributed task to a supernode if a respective previous supernode is unavailable. The set of nodes may comprise at least a portion of a grid of computing resources, wherein the grid of computing resources is self-organizing based on logic executed by a respective processor associated with each node.


Further objects will be apparent from a review hereof.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 shows an MDTree of 16 nodes with maximum neighborhood size of K=4. Each super-node is shown in bold, and is labeled with the number of nodes it controls.



FIG. 2 shows the average Link Delay in a cluster.



FIG. 3 shows a Maximum Link Delay to the cluster requester.



FIG. 4 shows a Cluster Diameter.



FIG. 5 shows Cluster Requesting Overhead; messages include requests, responses, pings, and cluster confirmations.



FIG. 6 shows a comparison of Genetic Split and Random Split.





EXPERIMENTAL EVALUATION

Simulation experiments were conducted to evaluate the present approach using the GPS simulation framework [10, 11] and Transit-Stub networks generated from the GT-ITM topology generator [12]. The GPS was extended to model MDTrees, and to support the cluster formation algorithm discussed herein. The topology studied consists of 600 nodes (due to run-time and memory usage considerations). Link delay within a stub is 5 milliseconds (ms), between stubs and transits it is 10 ms, and between transits it is 30 ms. Cluster requests of sizes 8, 16, 32, 64, 128, and 256 nodes were evaluated. Pings are used to determine the link delay between node pairs. The value of K was set to 25 for the MDTree, and the candidate scale factor S to 180%. The following metrics were used to measure the quality of the cluster that an MDTree helps discover (a sketch of computing the first three metrics from a pairwise delay table follows the list):

    • Average link delay among nodes within the cluster: The average link delay is likely to be the most important criterion for the quality of the clustering, especially for fine-grained applications. Such applications require frequent communication among nodes within the cluster and their performance is bound by the latency of communication.
    • Maximum link delay to the cluster requester: This criterion is important for clusters in which the most frequent communication is between the cluster requester and the other nodes.
    • Cluster diameter: The largest link delay between any pair of nodes in the cluster.
    • Cluster Formation Overhead: The overlay performance is measured by the number of messages sent during the process of requesting a cluster. These messages include cluster request messages, cluster responses, pings, and cluster confirmations. Since the MDTree is constructed pro-actively, the cost is amortized over all the requests generated for clusters; it can be considered a fixed cost. A new node joining costs only approximately O(logK N) messages, which includes attach queries and pings, where, again, N is the number of nodes in the SOG, and K is the maximum number of nodes in any neighborhood.
    • Maintenance Overhead: The overlay performance is measured by the number of messages transmitted in the MDTree.
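
The sketch below (Python, with an illustrative delay-table representation keyed by unordered node pairs) computes the first three quality metrics from pairwise link delays:

    def cluster_metrics(members, requester, delay):
        """Average link delay, maximum delay to the requester, and cluster diameter."""
        d = lambda a, b: delay[frozenset((a, b))]
        pairs = [(a, b) for i, a in enumerate(members) for b in members[i + 1:]]
        avg_link_delay = sum(d(a, b) for a, b in pairs) / len(pairs)
        max_to_requester = max(d(requester, m) for m in members if m != requester)
        diameter = max(d(a, b) for a, b in pairs)
        return avg_link_delay, max_to_requester, diameter

    # Example using the simulation's delay classes (5/10/30 ms).
    delay = {frozenset(p): v for p, v in [(("a", "b"), 5), (("a", "c"), 10), (("b", "c"), 30)]}
    print(cluster_metrics(["a", "b", "c"], requester="a", delay=delay))   # (15.0, 10, 30)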



FIG. 2 shows the average link delay in the extracted cluster, compared to the optimal cluster for the topology (found through exhaustive search). The average delay, in general, is quite good compared to the optimal available. However, especially for small clusters (smaller than the layer size), the quality of the solution can be improved. This argues for supporting mechanisms that allow nodes to change their location in the tree if they are not placed well. That is, the system and method preferably support a determination of placement quality, and the communication protocol between nodes preferably supports both that determination and restructuring of the network in case of poor placement, even if this imposes some inefficiency on the operation of well-placed nodes. At 256 nodes, the large size of the cluster relative to the topology size may contribute to the two graphs converging.



FIGS. 3 and 4 show the maximum link delay to the cluster requester, and the cluster diameter, respectively. These figures show a result similar to that of FIG. 2. In general, the results show that the present approach performs well with respect to the optimal solution according to all three metrics. The complexity of the clustering stage (i.e., the messages that are exchanged after a cluster is requested, as opposed to the pro-active MDTree setup costs associated with join messages) depends on the options used in representing the clusters and the MDTree structure (e.g., the values of S and K described earlier).



FIG. 5 shows that the overhead for requesting a cluster appears to be linear in the size of the requested cluster. Building and maintaining the MDTree structure also requires overhead. A node joining costs approximately O(logK N) messages and pings. However, the main overhead comes from the periodic heartbeat messages, since each heartbeat is broadcast to every node in the neighborhood. This overhead can be reduced by piggybacking and merging update messages. Therefore, it is preferred that an independent heartbeat message only be sent if no other communication conveying similar or corresponding information is sent within a predetermined period. Of course, the heartbeat may also be adaptive, in which case the frequency of heartbeat messages depends on a predicted dynamic change of the network. If the network is generally stable, the heartbeat messages may be infrequent, while if instability is predicted, the heartbeats may be sufficiently frequent to optimize network availability. Instability may be predicted, for example, based on a past history of the communications network or SOG performance, or based on an explicit message.
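
The suppression and adaptation rules described above might be sketched as follows (Python; the period bounds, defaults, and method names are illustrative assumptions):

    import time

    class HeartbeatPolicy:
        """An explicit heartbeat is sent only if nothing conveying equivalent
        liveness information went out within the last period_s seconds; the
        period itself widens when the network is predicted to be stable."""
        def __init__(self, period_s=30.0):
            self.period_s = period_s
            self.last_contact = 0.0

        def note_outgoing_message(self, now=None):
            # Any normal protocol message to the neighborhood doubles as a heartbeat.
            self.last_contact = now if now is not None else time.time()

        def heartbeat_due(self, now=None):
            now = now if now is not None else time.time()
            return (now - self.last_contact) >= self.period_s

        def adapt(self, predicted_stable):
            # Stable network: widen the interval; predicted churn: tighten it.
            self.period_s = (min(self.period_s * 2, 300.0) if predicted_stable
                             else max(self.period_s / 2, 5.0))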


In some cases, the communication network may be shared with other tasks, in which case the overhead of the heartbeat messages may impact other systems, and an increase in heartbeat messages will not only reduce efficiency of the SOG, but also consume limited bandwidth and adversely impact other systems, which in turn may themselves respond by increasing overhead and network utilization. Therefore, in such a case, it may be desired to determine existence of such a condition, and back off from unnecessary network utilization. For example, a genetic algorithm or other testing protocol may be used to test the communication network, to determine its characteristics.


Clearly, super-nodes and regular peer-nodes have different levels of responsibility in MDTrees. A super-node is a leader on all layers from layer 1 to the second-highest layer it joins. Each super-node must participate in query and information exchange in all the neighborhoods it joins, which can make it heavily burdened. However, if higher-layer super-nodes did not appear within lower-layer neighborhoods, it would be inefficient to pass information down to the neighborhoods at lower layers.


Graph bi-partitioning is known to be NP-complete [7]. In an MDTree, genetic algorithms may be used for neighborhood splitting. A preferred algorithm generates approximately optimal partitioning results within hundreds or thousands of generations, which is a small number of computations compared to the NP-complete optimal solution (and these computations take place locally within a super-node, requiring no inter-node messages). Various other known heuristics may be used to bi-partition the nodes. Since the MDTree tries to sort close nodes into the same branch, a genetic algorithm is preferable to a random split algorithm, especially for transit-stub topologies.
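
As one non-limiting sketch, the following Python code evolves a balanced bi-partition of a neighborhood by minimizing the total intra-partition link delay. It uses mutation-only selection (crossover is omitted for brevity), and the population size and generation count are illustrative rather than the values actually used:

    import random

    def fitness(assignment, nodes, delay):
        """Total intra-partition link delay (lower is better), so that each
        resulting half contains nodes that are mutually close."""
        total = 0
        for i, a in enumerate(nodes):
            for b in nodes[i + 1:]:
                if assignment[a] == assignment[b]:
                    total += delay[frozenset((a, b))]
        return total

    def genetic_split(nodes, delay, population=40, generations=500, seed=0):
        """Evolve a balanced 0/1 assignment of nodes to two new neighborhoods."""
        rng = random.Random(seed)
        half = len(nodes) // 2

        def random_assignment():
            side_a = set(rng.sample(nodes, half))
            return {n: (0 if n in side_a else 1) for n in nodes}

        def mutate(assign):
            # Swap one node from each half to keep the split balanced.
            a = rng.choice([n for n in nodes if assign[n] == 0])
            b = rng.choice([n for n in nodes if assign[n] == 1])
            child = dict(assign)
            child[a], child[b] = 1, 0
            return child

        pop = [random_assignment() for _ in range(population)]
        for _ in range(generations):
            pop.sort(key=lambda a: fitness(a, nodes, delay))
            pop = pop[: population // 2]                     # keep the fitter half
            pop += [mutate(rng.choice(pop)) for _ in range(population - len(pop))]
        return min(pop, key=lambda a: fitness(a, nodes, delay))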



FIG. 6 shows that the genetic algorithm (or any other effective bi-partitioning heuristic) has a significant impact on the quality of the solution when compared to random partitioning for neighborhood splitting.


The present invention provides an efficient data structure and algorithm for implementing automatic node clustering for self-organizing grids, which will contain clusters of high performance “permanent” machines alongside individual intermittently available computing nodes. Users can ask for an “ad hoc” cluster of size N, and the preferred algorithm will return one whose latency characteristics (or other performance characteristic) come close to those of the optimal such cluster. Automatic clustering is an important service for SOGs, but is also of interest for more traditional grids, whose resource states and network characteristics are dynamic (limiting the effectiveness of static cluster information), and whose applications may require node sets that must span multiple organizations.


The MDTree structure organizes nodes based on the link delay between node pairs. The preferred approach is distributed, scalable, efficient, and effective. A genetic algorithm is used for neighborhood splitting to improve the efficiency and effectiveness of partitioning.


In addition, the system and method according to the present invention may provide tree optimization to revisit placement decisions. Likewise, the invention may determine the effect of node departure on clustering. Further, the invention may provide re-balancing to recover from incorrect placement decisions. As discussed above, the minimum link delay criterion is but one possible metric, and the method may employ multiple criteria to identify candidate cluster nodes, instead of just inter-node delay. For example, computing capabilities and current load, and the measured bandwidth (total and/or available) between nodes may be employed.


Tiered SOG resources may be implemented, ranging from conventional clusters that are stable and constantly available, to user desktops that may be donated when they are not in use.


This variation in the nature of these resources can be accounted for, both in the construction of the MDTrees (e.g., by associating super-nodes with stable nodes) and during the extraction of clusters (e.g., by taking advantage of known structure information like the presence of clusters, instead of trying to automatically derive all structure).


The present invention may also provide resource monitoring for co-scheduling in SOGs. Resource monitoring and co-scheduling have significant overlap with automatic clustering, and therefore a joint optimization may be employed. Effective SOG operation also requires service and application deployment, fault tolerance, and security.


REFERENCES

[1] Enabling Grids for E-sciencE (EGEE). http://public.eu-egee.org.


[2] SETI@home. http://setiathome.berkeley.edu.


[3] Teragrid. http://www.teragrid.org.


[4] N. Abu-Ghazaleh and M. J. Lewis. Short paper: Toward self-organizing grids. In Proceedings of the IEEE International Symposium on High Performance Distributed Computing (HPDC-15), pages 324-327, June 2006. Hot Topics Session.


[5] A. Agrawal and H. Casanova. Clustering hosts in p2p and global computing platforms. In The Workshop on Global and Peer-to-Peer Computing on Large Scale Distributed Systems, Tokyo, Japan, April 2003.


[6] S. Banerjee, C. Kommareddy, and B. Bhattacharjee. Scalable peer finding on the internet. In Global Telecommunications Conference, 2002. GLOBECOM '02, volume 3, pages 2205-2209, November 2002.


[7] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman & Co., New York, N.Y., USA, 1979.


[8] E. K. Lua, J. Crowcroft, and M. Pias. Highways: Proximity clustering for scalable peer-to-peer network. In 4th International Conference on Peer-to-Peer Computing (P2P 2004), Zurich, Switzerland, 2004. IEEE Computer Society.


[9] Q. Xu and J. Subhlok. Automatic clustering of grid nodes. In 6th IEEE/ACM International Workshop on Grid Computing, Seattle, Wash., November 2005.


[10] W. Yang. General p2p simulator. http://www.cs.binghamton.edu/~wyang/gps.


[11] W. Yang and N. Abu-Ghazaleh. GPS: A general peer-to-peer simulator and its use for modeling bittorrent. In Proceedings of the 13th Annual Meeting of the IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS '05), pages 425-432, Atlanta, Ga., September 2005.


[12] E. W. Zegura, K. L. Calvert, and S. Bhattacharjee. How to model an internetwork. In IEEE Infocom, volume 2, pages 594-602, San Francisco, Calif., March 1996. IEEE.


[13] W. Zheng, S. Zhang, Y. Ouyang, F. Makedon, and J. Ford. Node clustering based on link delay in p2p networks. In 2005 ACM Symposium on Applied Computing, 2005.

Claims
  • 1. A non-transitory computer-readable medium storing executable instructions that, in response to execution, cause a processor of a first node device within a first subnet to perform operations comprising: receiving, by the first node device, a node device certificate in response to a successful registration by a registration agent;using the node device certificate to retrieve role information;generating an access token from the node device certificate and retrieved role information;communicating, by the first node device, the access token to a second node device within a second subnet to authorize access to computing resources of the second node device in accordance with a security policy of the second node device provided that the access token has not expired, wherein the first subnet comprises a plurality of node devices based on a distance function of a node device characteristic, and wherein the second subnet comprises a plurality of node devices different from the node devices comprising the first subnet based on the distance function of the node device characteristic; andcommunicating, by the first node device, control information and task data to the second node device.
  • 2. The non-transitory computer-readable medium of claim 1, further comprising instructions that, in response to execution, cause the processor of the first node device to perform operations further comprising: designating a set of preferred node devices for allocation of portions of a task, wherein the second node device is included in the preferred node devices.
  • 3. The non-transitory computer-readable medium of claim 1, further comprising instructions that, in response to execution, cause the processor of the first node device to perform operations further comprising: designating a set of preferred node devices for allocating portions of a task, wherein the designated set is based on both the task and a partitioning algorithm based on the distance function of the node device characteristic.
  • 4. The non-transitory computer-readable medium of claim 3, wherein the node device characteristic includes a pairwise communication latency between respective node devices.
  • 5. The non-transitory computer-readable medium of claim 1, wherein the second node device controls each node device within the second subnet.
  • 6. The non-transitory computer-readable medium of claim 1, wherein the second node device communicates control information between each node device within the second subnet and the plurality of node devices of the plurality of subnets.
  • 7. The non-transitory computer-readable medium of claim 1, wherein the node device characteristic comprises a link delay metric.
  • 8. The non-transitory computer-readable medium of claim 7, wherein the first subnet and the second subnet are dynamically controlled based on current conditions that are determined at least in part by proactive communications that include a heartbeat message.
  • 9. The non-transitory computer-readable medium of claim 1, further comprising instructions that, in response to execution, cause the processor of the first node device to perform operations further comprising: partitioning the plurality of node devices in the first subnet into two new subnets in response to a failure of one or more of the plurality of node devices to respond to a predetermined number of consecutive heartbeat messages.
  • 10. A method for clustering node devices for accomplishing a task, comprising: receiving, by a first node device within a first subnet, a node device certificate in response to a successful registration by a registration agent;using the node device certificate to retrieve role information;generating an access token from the node device certificate and retrieved role information;communicating, by the first node device, the access token to a second node device within a second subnet to authorize access to computing resources of the second node device in accordance with a security policy of the second node device provided that the access token has not expired, wherein the first subnet comprises a plurality of node devices based on a distance function of a node device characteristic, and wherein the second subnet comprises a plurality of node devices different from the node devices comprising the first subnet based on the distance function of the node device characteristic; andcommunicating, by the first node device, control information and task data to the second node device of the second subnet; anddesignating a set of preferred node devices for allocating portions of a task, wherein the designated set is based on the task and a partitioning algorithm based on the distance function of the node device characteristic.
  • 11. The method of claim 10, wherein the second node device is included in the set of preferred node devices.
  • 12. The method of claim 10, wherein the node device characteristic includes a pairwise communication latency between respective node devices.
  • 13. The method of claim 10, wherein the second node device controls each node device within the second subnet.
  • 14. The method of claim 10, wherein the second node device communicates control information between each node device within the second subnet and the plurality of node devices of the plurality of subnets.
  • 15. The method of claim 10, wherein the node device characteristic comprises a link delay metric.
  • 16. The method of claim 10, wherein the first subnet and the second subnet are dynamically controlled based on current conditions that are determined at least in part by proactive communications that include a heartbeat message.
  • 17. The method of claim 10, wherein the heartbeat message includes merged update messages.
  • 18. The method of claim 10, further comprising: partitioning the plurality of node devices in the first subnet into two new subnets in response to a failure of one or more of the plurality of node devices to respond to a predetermined number of consecutive heartbeat messages.
  • 19. A system comprising: a memory; anda processor configured to: receive, by the first node device, a node device certificate in response to a successful registration by a registration agent;use the node device certificate to retrieve role information;generate an access token from the node device certificate and retrieved role information;communicate, by the first node device, the access token to a second node device within a second subnet to authorize access to computing resources of the second node device in accordance with a security policy of the second node device provided that the access token has not expired, wherein the first subnet comprises a plurality of node devices based on a distance function of a node device characteristic, and wherein the second subnet comprises a plurality of node devices different from the node devices comprising the first subnet based on the distance function of the node device characteristic; andcommunicate, by the first node device, control information and task data to the second node device.
  • 20. The system of claim 19, wherein the processor is further configured to: designate a set of preferred node devices for allocation of portions of a task, wherein the second node device is included in the preferred node devices.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. application Ser. No. 15/463,542, filed Mar. 20, 2017, which is a continuation of U.S. application Ser. No. 13/770,798, filed Feb. 19, 2013 now U.S. Pat. No. 9,602,573, which is a continuation of U.S. application Ser. No. 13/243,125, filed Sep. 23, 2011, now U.S. Pat. No. 8,380,846, which is a continuation of U.S. application Ser. No. 12/236,396, filed Sep. 23, 2008, now U.S. Pat. No. 8,041,773, which is a non-provisional of U.S. Provisional Patent Application Ser. No. 60/974,834, filed Sep. 24, 2007, the entirety of each of which are expressly incorporated herein by reference.

GOVERNMENT SPONSORSHIP

This invention was made with government support under Grant ACI-0133838, CNS-0454298 awarded by the National Science Foundation, and under contract FA8750-04-1-0054 awarded by the Air Force Research Laboratories. The Government has certain rights in the invention.

8606800 Lagad et al. Dec 2013 B2
8615602 Li et al. Dec 2013 B2
8626820 Levy Jan 2014 B1
8631130 Jackson Jan 2014 B2
8684802 Gross et al. Apr 2014 B1
8701121 Saffre Apr 2014 B2
8726278 Shawver et al. May 2014 B1
8737410 Davis May 2014 B2
8738860 Griffin et al. May 2014 B1
8745275 Ikeya et al. Jun 2014 B2
8745302 Davis et al. Jun 2014 B2
8782120 Jackson Jul 2014 B2
8782231 Jackson Jul 2014 B2
8782321 Harriman et al. Jul 2014 B2
8782654 Jackson Jul 2014 B2
8812400 Faraboschi et al. Aug 2014 B2
8824485 Biswas et al. Sep 2014 B2
8854831 Arnouse Oct 2014 B2
8863143 Jackson Oct 2014 B2
8903964 Breslin Dec 2014 B2
8930536 Jackson Jan 2015 B2
8954584 Subbarayan et al. Feb 2015 B1
9008079 Davis et al. Apr 2015 B2
9038078 Jackson May 2015 B2
9054990 Davis Jun 2015 B2
9060060 Lobig Jun 2015 B2
9069611 Jackson Jun 2015 B2
9069929 Borland Jun 2015 B2
9075655 Davis et al. Jul 2015 B2
9075657 Jackson Jul 2015 B2
9077654 Davis Jul 2015 B2
9092594 Borland Jul 2015 B2
9112813 Jackson Aug 2015 B2
9116755 Jackson Aug 2015 B2
9128767 Jackson Sep 2015 B2
9152455 Jackson Oct 2015 B2
9231886 Jackson Jan 2016 B2
9262225 Davis Feb 2016 B2
9268607 Jackson Feb 2016 B2
9288147 Kern Mar 2016 B2
9304896 Chandra et al. Apr 2016 B2
9311269 Davis Apr 2016 B2
9367802 Arndt et al. Jun 2016 B2
9405584 Davis Aug 2016 B2
9413687 Jackson Aug 2016 B2
9454403 Davis Sep 2016 B2
9465771 Davis et al. Oct 2016 B2
9479463 Davis Oct 2016 B2
9491064 Jackson Nov 2016 B2
9509552 Davis Nov 2016 B2
9575805 Jackson Feb 2017 B2
9585281 Schnell Feb 2017 B2
9602573 Abu-Ghazaleh Mar 2017 B1
9619296 Jackson Apr 2017 B2
9648102 Davis et al. May 2017 B1
9680770 Davis Jun 2017 B2
9749326 Davis Aug 2017 B2
9778959 Jackson Oct 2017 B2
9785479 Jackson Oct 2017 B2
9792249 Borland Oct 2017 B2
9825860 Hu Nov 2017 B2
9866477 Davis Jan 2018 B2
9876735 Davis Jan 2018 B2
9886322 Jackson Feb 2018 B2
9929976 Davis Mar 2018 B2
9959140 Jackson May 2018 B2
9959141 Jackson May 2018 B2
9961013 Jackson May 2018 B2
9965442 Borland May 2018 B2
9977763 Davis May 2018 B2
9979672 Jackson May 2018 B2
10021806 Schnell Jul 2018 B2
10050970 Davis Aug 2018 B2
10135731 Davis Nov 2018 B2
10140245 Davis et al. Nov 2018 B2
10277531 Jackson Apr 2019 B2
10311014 Dalton Jun 2019 B2
10333862 Jackson Jun 2019 B2
10379909 Jackson Aug 2019 B2
10445146 Jackson Oct 2019 B2
10445148 Jackson Oct 2019 B2
10585704 Jackson Mar 2020 B2
10608949 Jackson Mar 2020 B2
10733028 Jackson Aug 2020 B2
10735505 Abu-Ghazaleh Aug 2020 B2
10871999 Jackson Dec 2020 B2
10951487 Jackson Mar 2021 B2
10977090 Jackson Apr 2021 B2
11132277 Dalton Sep 2021 B2
11134022 Jackson Sep 2021 B2
11144355 Jackson Oct 2021 B2
11356385 Jackson Jun 2022 B2
20010015733 Sklar Aug 2001 A1
20010023431 Horiguchi Sep 2001 A1
20010034752 Kremien Oct 2001 A1
20010037311 McCoy et al. Nov 2001 A1
20010044759 Kutsumi Nov 2001 A1
20010046227 Matsuhira et al. Nov 2001 A1
20010051929 Suzuki Dec 2001 A1
20010052016 Skene et al. Dec 2001 A1
20010052108 Bowman-Amuah Dec 2001 A1
20020002578 Yamashita Jan 2002 A1
20020002636 Vange et al. Jan 2002 A1
20020004833 Tonouchi Jan 2002 A1
20020004912 Fung Jan 2002 A1
20020007389 Jones et al. Jan 2002 A1
20020010783 Primak et al. Jan 2002 A1
20020018481 Mor et al. Feb 2002 A1
20020031364 Suzuki et al. Mar 2002 A1
20020032716 Nagato Mar 2002 A1
20020035605 Kenton Mar 2002 A1
20020040391 Chaiken et al. Apr 2002 A1
20020049608 Hartsell et al. Apr 2002 A1
20020052909 Seeds May 2002 A1
20020052961 Yoshimine et al. May 2002 A1
20020059094 Hosea et al. May 2002 A1
20020059274 Hartsell et al. May 2002 A1
20020062377 Hillman et al. May 2002 A1
20020062451 Scheidt et al. May 2002 A1
20020065864 Hartsell et al. May 2002 A1
20020083299 Van Huben et al. Jun 2002 A1
20020083352 Fujimoto et al. Jun 2002 A1
20020087611 Tanaka et al. Jul 2002 A1
20020087699 Karagiannis et al. Jul 2002 A1
20020090075 Gabriel Jul 2002 A1
20020091786 Yamaguchi et al. Jul 2002 A1
20020093915 Larson Jul 2002 A1
20020097732 Worster et al. Jul 2002 A1
20020099842 Jennings et al. Jul 2002 A1
20020103886 Rawson, III Aug 2002 A1
20020107903 Richter et al. Aug 2002 A1
20020107962 Richter et al. Aug 2002 A1
20020116234 Nagasawa Aug 2002 A1
20020116721 Dobes et al. Aug 2002 A1
20020120741 Webb et al. Aug 2002 A1
20020124128 Qiu Sep 2002 A1
20020129160 Habetha Sep 2002 A1
20020133537 Lau et al. Sep 2002 A1
20020133821 Shteyn Sep 2002 A1
20020138459 Mandal Sep 2002 A1
20020138635 Redlich et al. Sep 2002 A1
20020143855 Traversat Oct 2002 A1
20020143944 Traversat et al. Oct 2002 A1
20020147663 Walker et al. Oct 2002 A1
20020147771 Traversat et al. Oct 2002 A1
20020147810 Traversat et al. Oct 2002 A1
20020151271 Tatsuji Oct 2002 A1
20020152299 Traversat et al. Oct 2002 A1
20020152305 Jackson et al. Oct 2002 A1
20020156699 Gray et al. Oct 2002 A1
20020156891 Ulrich et al. Oct 2002 A1
20020156893 Pouyoul et al. Oct 2002 A1
20020156904 Gullotta et al. Oct 2002 A1
20020156984 Padovano Oct 2002 A1
20020159452 Foster et al. Oct 2002 A1
20020161869 Griffin et al. Oct 2002 A1
20020161917 Shapiro et al. Oct 2002 A1
20020166117 Abrams et al. Nov 2002 A1
20020172205 Tagore-Brage et al. Nov 2002 A1
20020173984 Robertson et al. Nov 2002 A1
20020174165 Kawaguchi Nov 2002 A1
20020174227 Hartsell et al. Nov 2002 A1
20020184310 Traversat et al. Dec 2002 A1
20020184311 Traversat et al. Dec 2002 A1
20020184357 Traversat et al. Dec 2002 A1
20020184358 Traversat et al. Dec 2002 A1
20020186656 Vu Dec 2002 A1
20020188657 Traversat et al. Dec 2002 A1
20020194384 Habetha Dec 2002 A1
20020194412 Bottom Dec 2002 A1
20020196611 Ho et al. Dec 2002 A1
20020196734 Tanaka et al. Dec 2002 A1
20020198734 Greene et al. Dec 2002 A1
20020198923 Hayes Dec 2002 A1
20030004772 Dutta et al. Jan 2003 A1
20030005130 Cheng Jan 2003 A1
20030005162 Habetha Jan 2003 A1
20030007493 Oi et al. Jan 2003 A1
20030009506 Bril et al. Jan 2003 A1
20030014503 Legout et al. Jan 2003 A1
20030014524 Tormasov Jan 2003 A1
20030014539 Reznick Jan 2003 A1
20030018766 Duvvuru Jan 2003 A1
20030018803 El Batt et al. Jan 2003 A1
20030028585 Yeager et al. Feb 2003 A1
20030028642 Agarwal et al. Feb 2003 A1
20030028645 Romagnoli Feb 2003 A1
20030028656 Babka Feb 2003 A1
20030033547 Larson et al. Feb 2003 A1
20030036820 Yellepeddy et al. Feb 2003 A1
20030039246 Guo et al. Feb 2003 A1
20030041141 Abdelaziz et al. Feb 2003 A1
20030041266 Ke et al. Feb 2003 A1
20030041308 Ganesan et al. Feb 2003 A1
20030050924 Faybishenko et al. Mar 2003 A1
20030050959 Faybishenko et al. Mar 2003 A1
20030050989 Marinescu et al. Mar 2003 A1
20030051127 Miwa Mar 2003 A1
20030055894 Yeager et al. Mar 2003 A1
20030055898 Yeager et al. Mar 2003 A1
20030058277 Bowman-Amuah Mar 2003 A1
20030061260 Rajkumar Mar 2003 A1
20030061261 Greene Mar 2003 A1
20030061262 Hahn et al. Mar 2003 A1
20030065703 Aborn Apr 2003 A1
20030065784 Herrod Apr 2003 A1
20030069918 Lu et al. Apr 2003 A1
20030069949 Chan et al. Apr 2003 A1
20030072263 Peterson Apr 2003 A1
20030074090 Becka Apr 2003 A1
20030076832 Ni Apr 2003 A1
20030088457 Keil et al. May 2003 A1
20030093255 Freyensee et al. May 2003 A1
20030093624 Arimilli et al. May 2003 A1
20030097429 Wu et al. May 2003 A1
20030097439 Strayer et al. May 2003 A1
20030101084 Perez May 2003 A1
20030103413 Jacobi, Jr. et al. Jun 2003 A1
20030105655 Kimbrel et al. Jun 2003 A1
20030105721 Ginter et al. Jun 2003 A1
20030110262 Hasan et al. Jun 2003 A1
20030112792 Cranor et al. Jun 2003 A1
20030115562 Martin Jun 2003 A1
20030120472 Lind Jun 2003 A1
20030120701 Pulsipher et al. Jun 2003 A1
20030120704 Tran et al. Jun 2003 A1
20030120710 Pulsipher et al. Jun 2003 A1
20030120780 Zhu Jun 2003 A1
20030126013 Shand Jul 2003 A1
20030126200 Wolff Jul 2003 A1
20030126202 Watt Jul 2003 A1
20030126265 Aziz et al. Jul 2003 A1
20030126283 Prakash et al. Jul 2003 A1
20030131043 Berg et al. Jul 2003 A1
20030131209 Lee Jul 2003 A1
20030135509 Davis Jul 2003 A1
20030135615 Wyatt Jul 2003 A1
20030135621 Romagnoli Jul 2003 A1
20030140190 Mahony et al. Jul 2003 A1
20030144894 Robertson et al. Jul 2003 A1
20030149685 Trossman et al. Aug 2003 A1
20030154112 Neiman et al. Aug 2003 A1
20030158884 Alford Aug 2003 A1
20030158940 Leigh Aug 2003 A1
20030159083 Fukuhara et al. Aug 2003 A1
20030169269 Sasaki et al. Sep 2003 A1
20030172191 Williams Sep 2003 A1
20030177050 Crampton Sep 2003 A1
20030177121 Moona et al. Sep 2003 A1
20030177334 King et al. Sep 2003 A1
20030182421 Faybishenko et al. Sep 2003 A1
20030182425 Kurakake Sep 2003 A1
20030182429 Jagels Sep 2003 A1
20030185229 Shachar et al. Oct 2003 A1
20030187907 Ito Oct 2003 A1
20030188083 Kumar et al. Oct 2003 A1
20030191795 Bernardin et al. Oct 2003 A1
20030191857 Terrell et al. Oct 2003 A1
20030193402 Post et al. Oct 2003 A1
20030195931 Dauger Oct 2003 A1
20030200109 Honda et al. Oct 2003 A1
20030200258 Hayashi Oct 2003 A1
20030202520 Witkowski et al. Oct 2003 A1
20030202709 Simard et al. Oct 2003 A1
20030204773 Petersen et al. Oct 2003 A1
20030204786 Dinker Oct 2003 A1
20030210694 Jayaraman et al. Nov 2003 A1
20030212738 Wookey et al. Nov 2003 A1
20030212792 Raymond Nov 2003 A1
20030216951 Ginis et al. Nov 2003 A1
20030217129 Knittel et al. Nov 2003 A1
20030227934 White Dec 2003 A1
20030231624 Alappat et al. Dec 2003 A1
20030231647 Petrovykh Dec 2003 A1
20030233378 Butler et al. Dec 2003 A1
20030233446 Earl Dec 2003 A1
20030236745 Hartsell et al. Dec 2003 A1
20040003077 Bantz et al. Jan 2004 A1
20040003086 Parham et al. Jan 2004 A1
20040010544 Slater et al. Jan 2004 A1
20040010550 Gopinath Jan 2004 A1
20040010592 Carver et al. Jan 2004 A1
20040011761 Hensley Jan 2004 A1
20040013113 Singh et al. Jan 2004 A1
20040015579 Cooper et al. Jan 2004 A1
20040015973 Skovira Jan 2004 A1
20040017806 Yazdy et al. Jan 2004 A1
20040017808 Forbes et al. Jan 2004 A1
20040030741 Wolton et al. Feb 2004 A1
20040030743 Hugly et al. Feb 2004 A1
20040030794 Hugly et al. Feb 2004 A1
20040030938 Barr et al. Feb 2004 A1
20040034873 Zenoni Feb 2004 A1
20040039815 Evans et al. Feb 2004 A1
20040044718 Ferstl et al. Mar 2004 A1
20040044727 Abdelaziz et al. Mar 2004 A1
20040054630 Ginter et al. Mar 2004 A1
20040054777 Ackaouy et al. Mar 2004 A1
20040054780 Romero Mar 2004 A1
20040054807 Harvey et al. Mar 2004 A1
20040064511 Abdel-Aziz et al. Apr 2004 A1
20040064512 Arora et al. Apr 2004 A1
20040064568 Arora et al. Apr 2004 A1
20040064817 Shibayama et al. Apr 2004 A1
20040066782 Nassar Apr 2004 A1
20040068676 Larson et al. Apr 2004 A1
20040068730 Miller et al. Apr 2004 A1
20040071147 Roadknight et al. Apr 2004 A1
20040073650 Nakamura Apr 2004 A1
20040073854 Windl Apr 2004 A1
20040073908 Benejam et al. Apr 2004 A1
20040081148 Yamada Apr 2004 A1
20040083287 Gao et al. Apr 2004 A1
20040088347 Yeager et al. May 2004 A1
20040088348 Yeager et al. May 2004 A1
20040088369 Yeager et al. May 2004 A1
20040098391 Robertson et al. May 2004 A1
20040098447 Verbeke et al. May 2004 A1
20040103078 Smedberg et al. May 2004 A1
20040103305 Ginter et al. May 2004 A1
20040103339 Chalasani et al. May 2004 A1
20040103413 Mandava et al. May 2004 A1
20040107123 Haffner Jun 2004 A1
20040107273 Biran et al. Jun 2004 A1
20040107281 Bose et al. Jun 2004 A1
20040109428 Krishnamurthy Jun 2004 A1
20040111307 Demsky et al. Jun 2004 A1
20040111612 Choi et al. Jun 2004 A1
20040117610 Hensley Jun 2004 A1
20040117768 Chang et al. Jun 2004 A1
20040121777 Schwarz et al. Jun 2004 A1
20040122970 Kawaguchi et al. Jun 2004 A1
20040128495 Hensley Jul 2004 A1
20040128670 Robinson et al. Jul 2004 A1
20040133620 Habetha Jul 2004 A1
20040133640 Yeager et al. Jul 2004 A1
20040133665 Deboer et al. Jul 2004 A1
20040133703 Habetha Jul 2004 A1
20040135780 Nims Jul 2004 A1
20040139202 Talwar et al. Jul 2004 A1
20040139464 Ellis et al. Jul 2004 A1
20040141521 George Jul 2004 A1
20040143664 Usa et al. Jul 2004 A1
20040148326 Nadgir Jul 2004 A1
20040148390 Cleary et al. Jul 2004 A1
20040150664 Baudisch Aug 2004 A1
20040151181 Chu Aug 2004 A1
20040153563 Shay et al. Aug 2004 A1
20040158637 Lee Aug 2004 A1
20040162871 Pabla et al. Aug 2004 A1
20040165588 Pandya Aug 2004 A1
20040172464 Nag Sep 2004 A1
20040179528 Powers et al. Sep 2004 A1
20040181370 Froehlich et al. Sep 2004 A1
20040181476 Smith et al. Sep 2004 A1
20040189677 Amann et al. Sep 2004 A1
20040193674 Kurosawa et al. Sep 2004 A1
20040194098 Chung et al. Sep 2004 A1
20040196308 Blomquist Oct 2004 A1
20040199621 Lau Oct 2004 A1
20040199646 Susai et al. Oct 2004 A1
20040199918 Skovira Oct 2004 A1
20040203670 King et al. Oct 2004 A1
20040204978 Rayrole Oct 2004 A1
20040205101 Radhakrishnan Oct 2004 A1
20040210624 Andrzejak et al. Oct 2004 A1
20040210693 Zeitler et al. Oct 2004 A1
20040213395 Ishii et al. Oct 2004 A1
20040215780 Kawato Oct 2004 A1
20040215864 Arimilli et al. Oct 2004 A1
20040215991 McAfee et al. Oct 2004 A1
20040216121 Jones et al. Oct 2004 A1
20040218615 Griffin et al. Nov 2004 A1
20040221038 Clarke et al. Nov 2004 A1
20040236852 Birkestrand et al. Nov 2004 A1
20040243378 Schnatterly et al. Dec 2004 A1
20040243466 Trzybinski et al. Dec 2004 A1
20040244006 Kaufman et al. Dec 2004 A1
20040260701 Lehikoinen Dec 2004 A1
20040260746 Brown et al. Dec 2004 A1
20040267486 Percer et al. Dec 2004 A1
20040267897 Hill et al. Dec 2004 A1
20040267901 Gomez Dec 2004 A1
20040268035 Ueno Dec 2004 A1
20050010465 Drew et al. Jan 2005 A1
20050010608 Horikawa Jan 2005 A1
20050015378 Gammel et al. Jan 2005 A1
20050015621 Ashley et al. Jan 2005 A1
20050018604 Dropps et al. Jan 2005 A1
20050018606 Dropps et al. Jan 2005 A1
20050018663 Dropps et al. Jan 2005 A1
20050021291 Retlich Jan 2005 A1
20050021371 Basone et al. Jan 2005 A1
20050021606 Davies et al. Jan 2005 A1
20050021728 Sugimoto Jan 2005 A1
20050021759 Gupta et al. Jan 2005 A1
20050021862 Schroeder et al. Jan 2005 A1
20050022188 Tameshige et al. Jan 2005 A1
20050027863 Talwar et al. Feb 2005 A1
20050027864 Bozak et al. Feb 2005 A1
20050027865 Bozak et al. Feb 2005 A1
20050027870 Trebes et al. Feb 2005 A1
20050030954 Dropps et al. Feb 2005 A1
20050033742 Kamvar et al. Feb 2005 A1
20050033890 Lee Feb 2005 A1
20050034070 Meir et al. Feb 2005 A1
20050038808 Kutch Feb 2005 A1
20050038835 Chidambaran et al. Feb 2005 A1
20050044195 Westfall Feb 2005 A1
20050044205 Sankaranarayan et al. Feb 2005 A1
20050044226 McDermott et al. Feb 2005 A1
20050044228 Birkestrand et al. Feb 2005 A1
20050049884 Hunt et al. Mar 2005 A1
20050050057 Mital et al. Mar 2005 A1
20050050200 Mizoguchi Mar 2005 A1
20050050270 Horn et al. Mar 2005 A1
20050054354 Roman et al. Mar 2005 A1
20050055322 Masters et al. Mar 2005 A1
20050055694 Lee Mar 2005 A1
20050055697 Buco Mar 2005 A1
20050055698 Sasaki et al. Mar 2005 A1
20050060360 Doyle et al. Mar 2005 A1
20050060608 Marchand Mar 2005 A1
20050066302 Kanade Mar 2005 A1
20050066358 Anderson et al. Mar 2005 A1
20050071843 Guo et al. Mar 2005 A1
20050076145 Ben-Zvi et al. Apr 2005 A1
20050077921 Percer et al. Apr 2005 A1
20050080845 Gopinath Apr 2005 A1
20050080891 Cauthron Apr 2005 A1
20050080930 Joseph Apr 2005 A1
20050086300 Yeager et al. Apr 2005 A1
20050091505 Riley et al. Apr 2005 A1
20050097560 Rolia et al. May 2005 A1
20050102396 Hipp May 2005 A1
20050102400 Nakahara May 2005 A1
20050102683 Branson May 2005 A1
20050105538 Perera et al. May 2005 A1
20050108407 Johnson et al. May 2005 A1
20050108703 Hellier May 2005 A1
20050113203 Mueller et al. May 2005 A1
20050114478 Popescu May 2005 A1
20050114551 Basu et al. May 2005 A1
20050114862 Bisdikian et al. May 2005 A1
20050120160 Plouffe et al. Jun 2005 A1
20050125213 Chen et al. Jun 2005 A1
20050125537 Martins et al. Jun 2005 A1
20050125538 Tawil Jun 2005 A1
20050131898 Fatula, Jr. Jun 2005 A1
20050132378 Horvitz et al. Jun 2005 A1
20050132379 Sankaran et al. Jun 2005 A1
20050138618 Gebhart Jun 2005 A1
20050141424 Lim et al. Jun 2005 A1
20050144315 George et al. Jun 2005 A1
20050149940 Calinescu et al. Jul 2005 A1
20050154861 Arimilli et al. Jul 2005 A1
20050155033 Luoffo et al. Jul 2005 A1
20050156732 Matsumura Jul 2005 A1
20050160137 Ishikawa et al. Jul 2005 A1
20050163143 Kalantar et al. Jul 2005 A1
20050165925 Dan et al. Jul 2005 A1
20050169179 Antal Aug 2005 A1
20050172291 Das et al. Aug 2005 A1
20050177600 Eilam et al. Aug 2005 A1
20050187866 Lee Aug 2005 A1
20050188088 Fellenstein et al. Aug 2005 A1
20050188089 Lichtenstein et al. Aug 2005 A1
20050188091 Szabo et al. Aug 2005 A1
20050190236 Ishimoto Sep 2005 A1
20050192771 Fischer et al. Sep 2005 A1
20050193103 Drabik Sep 2005 A1
20050193231 Scheuren Sep 2005 A1
20050195075 McGraw Sep 2005 A1
20050197877 Kalinoski Sep 2005 A1
20050198200 Subramanian et al. Sep 2005 A1
20050202922 Thomas Sep 2005 A1
20050203761 Barr Sep 2005 A1
20050204040 Ferri et al. Sep 2005 A1
20050209892 Miller Sep 2005 A1
20050210470 Chung et al. Sep 2005 A1
20050213507 Banerjee et al. Sep 2005 A1
20050213560 Duvvury Sep 2005 A1
20050222885 Chen et al. Oct 2005 A1
20050228852 Santos et al. Oct 2005 A1
20050228856 Swildens Oct 2005 A1
20050228892 Riley et al. Oct 2005 A1
20050234846 Davidson et al. Oct 2005 A1
20050235137 Barr et al. Oct 2005 A1
20050235150 Kaler et al. Oct 2005 A1
20050240688 Moerman et al. Oct 2005 A1
20050243867 Petite Nov 2005 A1
20050246705 Etelson et al. Nov 2005 A1
20050249341 Mahone et al. Nov 2005 A1
20050256942 McCardle et al. Nov 2005 A1
20050256946 Childress et al. Nov 2005 A1
20050259397 Bash et al. Nov 2005 A1
20050259683 Bishop et al. Nov 2005 A1
20050262495 Fung et al. Nov 2005 A1
20050262508 Asano et al. Nov 2005 A1
20050267948 McKinley et al. Dec 2005 A1
20050268063 Diao et al. Dec 2005 A1
20050278392 Hansen et al. Dec 2005 A1
20050278760 Dewar et al. Dec 2005 A1
20050283534 Bigagli et al. Dec 2005 A1
20050283782 Lu et al. Dec 2005 A1
20050283822 Appleby et al. Dec 2005 A1
20050288961 Tabrizi Dec 2005 A1
20050289540 Nguyen et al. Dec 2005 A1
20060002311 Iwanaga et al. Jan 2006 A1
20060008256 Khedouri et al. Jan 2006 A1
20060010445 Petersen et al. Jan 2006 A1
20060013132 Garnett et al. Jan 2006 A1
20060013218 Shore et al. Jan 2006 A1
20060015555 Douglass et al. Jan 2006 A1
20060015637 Chung Jan 2006 A1
20060015773 Singh et al. Jan 2006 A1
20060023245 Sato et al. Feb 2006 A1
20060028991 Tan et al. Feb 2006 A1
20060029053 Roberts et al. Feb 2006 A1
20060031379 Kasriel et al. Feb 2006 A1
20060031547 Tsui et al. Feb 2006 A1
20060031813 Bishop et al. Feb 2006 A1
20060036743 Deng et al. Feb 2006 A1
20060037016 Saha et al. Feb 2006 A1
20060039246 King et al. Feb 2006 A1
20060041444 Flores et al. Feb 2006 A1
20060047920 Moore et al. Mar 2006 A1
20060048157 Dawson et al. Mar 2006 A1
20060053215 Sharma Mar 2006 A1
20060053216 Deokar et al. Mar 2006 A1
20060056291 Baker et al. Mar 2006 A1
20060059253 Goodman et al. Mar 2006 A1
20060063690 Billiauw et al. Mar 2006 A1
20060069671 Conley et al. Mar 2006 A1
20060069774 Chen et al. Mar 2006 A1
20060069926 Ginter et al. Mar 2006 A1
20060074925 Bixby Apr 2006 A1
20060074940 Craft et al. Apr 2006 A1
20060088015 Kakivaya et al. Apr 2006 A1
20060089894 Balk et al. Apr 2006 A1
20060090003 Kakivaya et al. Apr 2006 A1
20060090025 Tufford et al. Apr 2006 A1
20060090136 Miller et al. Apr 2006 A1
20060095917 Black-Ziegelbein et al. May 2006 A1
20060097863 Horowitz et al. May 2006 A1
20060112184 Kuo May 2006 A1
20060112308 Crawford May 2006 A1
20060117208 Davidson Jun 2006 A1
20060117317 Crawford et al. Jun 2006 A1
20060120411 Basu Jun 2006 A1
20060126619 Teisberg et al. Jun 2006 A1
20060126667 Smith et al. Jun 2006 A1
20060129667 Anderson Jun 2006 A1
20060129687 Goldszmidt et al. Jun 2006 A1
20060136235 Keohane et al. Jun 2006 A1
20060136570 Pandya Jun 2006 A1
20060136908 Gebhart et al. Jun 2006 A1
20060136928 Crawford et al. Jun 2006 A1
20060136929 Miller et al. Jun 2006 A1
20060140211 Huang et al. Jun 2006 A1
20060143350 Miloushev et al. Jun 2006 A1
20060149695 Bossman et al. Jul 2006 A1
20060153191 Rajsic et al. Jul 2006 A1
20060155740 Chen et al. Jul 2006 A1
20060155912 Singh et al. Jul 2006 A1
20060156273 Narayan et al. Jul 2006 A1
20060159088 Aghvami et al. Jul 2006 A1
20060161466 Trinon et al. Jul 2006 A1
20060161585 Clarke et al. Jul 2006 A1
20060165040 Rathod Jul 2006 A1
20060168107 Balan et al. Jul 2006 A1
20060168224 Midgley Jul 2006 A1
20060173730 Birkestrand Aug 2006 A1
20060174342 Zaheer et al. Aug 2006 A1
20060179241 Clark et al. Aug 2006 A1
20060189349 Montulli et al. Aug 2006 A1
20060190775 Aggarwal et al. Aug 2006 A1
20060190975 Gonzalez Aug 2006 A1
20060200773 Nocera et al. Sep 2006 A1
20060206621 Toebes Sep 2006 A1
20060208870 Dousson Sep 2006 A1
20060212332 Jackson Sep 2006 A1
20060212333 Jackson Sep 2006 A1
20060212334 Jackson Sep 2006 A1
20060212740 Jackson Sep 2006 A1
20060218301 O'Toole et al. Sep 2006 A1
20060224725 Bali et al. Oct 2006 A1
20060224740 Sievers-Tostes Oct 2006 A1
20060224741 Jackson Oct 2006 A1
20060227810 Childress et al. Oct 2006 A1
20060229920 Favorel et al. Oct 2006 A1
20060230140 Aoyama et al. Oct 2006 A1
20060230149 Jackson Oct 2006 A1
20060236368 Raja et al. Oct 2006 A1
20060236371 Fish Oct 2006 A1
20060248141 Mukherjee Nov 2006 A1
20060248197 Evans et al. Nov 2006 A1
20060248359 Fung Nov 2006 A1
20060250971 Gammenthaler et al. Nov 2006 A1
20060251419 Zadikian et al. Nov 2006 A1
20060253570 Biswas et al. Nov 2006 A1
20060259734 Sheu et al. Nov 2006 A1
20060265508 Angel et al. Nov 2006 A1
20060265609 Fung Nov 2006 A1
20060268742 Chu Nov 2006 A1
20060271552 McChesney et al. Nov 2006 A1
20060271928 Gao et al. Nov 2006 A1
20060277278 Hegde Dec 2006 A1
20060282505 Hasha et al. Dec 2006 A1
20060282547 Hasha et al. Dec 2006 A1
20060294238 Naik et al. Dec 2006 A1
20070003051 Kiss et al. Jan 2007 A1
20070006001 Isobe et al. Jan 2007 A1
20070011224 Mena et al. Jan 2007 A1
20070011302 Groner et al. Jan 2007 A1
20070022425 Jackson Jan 2007 A1
20070028244 Landis et al. Feb 2007 A1
20070033292 Sull et al. Feb 2007 A1
20070033533 Sull et al. Feb 2007 A1
20070041335 Znamova et al. Feb 2007 A1
20070043591 Meretei Feb 2007 A1
20070044010 Sull et al. Feb 2007 A1
20070047195 Merkin et al. Mar 2007 A1
20070050777 Hutchinson et al. Mar 2007 A1
20070061441 Landis et al. Mar 2007 A1
20070067366 Landis Mar 2007 A1
20070067435 Landis et al. Mar 2007 A1
20070076653 Park et al. Apr 2007 A1
20070081315 Mondor et al. Apr 2007 A1
20070083899 Compton et al. Apr 2007 A1
20070088822 Coile et al. Apr 2007 A1
20070094486 Moore et al. Apr 2007 A1
20070094665 Jackson Apr 2007 A1
20070094691 Gazdzinski Apr 2007 A1
20070109968 Hussain et al. May 2007 A1
20070118496 Bornhoevd May 2007 A1
20070124344 Rajakannimariyan et al. May 2007 A1
20070130397 Tsu Jun 2007 A1
20070143824 Shahbazi Jun 2007 A1
20070150426 Asher et al. Jun 2007 A1
20070150444 Chesnais et al. Jun 2007 A1
20070155406 Dowling et al. Jul 2007 A1
20070174390 Silvain et al. Jul 2007 A1
20070180310 Johnson et al. Aug 2007 A1
20070180380 Khavari et al. Aug 2007 A1
20070204036 Mohaban et al. Aug 2007 A1
20070209072 Chen Sep 2007 A1
20070220520 Tajima Sep 2007 A1
20070226313 Li et al. Sep 2007 A1
20070226795 Conti et al. Sep 2007 A1
20070233828 Gilbert et al. Oct 2007 A1
20070240162 Coleman et al. Oct 2007 A1
20070253017 Czyszczewski et al. Nov 2007 A1
20070260716 Gnanasambandam et al. Nov 2007 A1
20070264986 Warrillow et al. Nov 2007 A1
20070266136 Esfahany et al. Nov 2007 A1
20070271375 Hwang Nov 2007 A1
20070280230 Park Dec 2007 A1
20070286009 Norman Dec 2007 A1
20070288585 Sekiguchi et al. Dec 2007 A1
20070297350 Eilam et al. Dec 2007 A1
20070299946 El-Damhougy et al. Dec 2007 A1
20070299947 El-Damhougy et al. Dec 2007 A1
20070299950 Kulkarni et al. Dec 2007 A1
20080013453 Chiang et al. Jan 2008 A1
20080016198 Johnston-Watt et al. Jan 2008 A1
20080034082 McKinney Feb 2008 A1
20080040463 Brown et al. Feb 2008 A1
20080052437 Loffink et al. Feb 2008 A1
20080059782 Kruse et al. Mar 2008 A1
20080075089 Evans et al. Mar 2008 A1
20080082663 Mouli et al. Apr 2008 A1
20080089358 Basso et al. Apr 2008 A1
20080104231 Dey et al. May 2008 A1
20080104264 Duerk et al. May 2008 A1
20080126523 Tantrum May 2008 A1
20080140771 Vass et al. Jun 2008 A1
20080140930 Hotchkiss Jun 2008 A1
20080155070 El-Damhougy et al. Jun 2008 A1
20080155100 Ahmed et al. Jun 2008 A1
20080159745 Segal Jul 2008 A1
20080162691 Zhang et al. Jul 2008 A1
20080168451 Challenger et al. Jul 2008 A1
20080183865 Appleby et al. Jul 2008 A1
20080183882 Flynn et al. Jul 2008 A1
20080184248 Barua et al. Jul 2008 A1
20080186965 Zheng et al. Aug 2008 A1
20080199133 Takizawa et al. Aug 2008 A1
20080212273 Bechtolsheim Sep 2008 A1
20080212276 Bottom et al. Sep 2008 A1
20080215730 Sundaram et al. Sep 2008 A1
20080216082 Eilam et al. Sep 2008 A1
20080217021 Lembcke et al. Sep 2008 A1
20080222434 Shimizu et al. Sep 2008 A1
20080235443 Chow et al. Sep 2008 A1
20080235702 Eilam et al. Sep 2008 A1
20080239649 Bradicich Oct 2008 A1
20080243634 Dworkin et al. Oct 2008 A1
20080250181 Li et al. Oct 2008 A1
20080255953 Chang et al. Oct 2008 A1
20080259555 Bechtolsheim et al. Oct 2008 A1
20080259788 Wang et al. Oct 2008 A1
20080263131 Hinni et al. Oct 2008 A1
20080263558 Lin et al. Oct 2008 A1
20080266793 Lee Oct 2008 A1
20080270599 Tamir et al. Oct 2008 A1
20080270731 Bryant et al. Oct 2008 A1
20080279167 Cardei et al. Nov 2008 A1
20080288646 Hasha et al. Nov 2008 A1
20080288659 Hasha et al. Nov 2008 A1
20080288660 Balasubramanian et al. Nov 2008 A1
20080288664 Pettey et al. Nov 2008 A1
20080288683 Ramey Nov 2008 A1
20080288873 McCardle et al. Nov 2008 A1
20080289029 Kim et al. Nov 2008 A1
20080301226 Cleary et al. Dec 2008 A1
20080301379 Pong Dec 2008 A1
20080301794 Lee Dec 2008 A1
20080310848 Yasuda et al. Dec 2008 A1
20080313369 Verdoorn et al. Dec 2008 A1
20080313482 Karlapalem et al. Dec 2008 A1
20080320121 Altaf et al. Dec 2008 A1
20080320161 Maruccia et al. Dec 2008 A1
20090010153 Filsfils et al. Jan 2009 A1
20090021907 Mann et al. Jan 2009 A1
20090043809 Fakhouri et al. Feb 2009 A1
20090043888 Jackson Feb 2009 A1
20090044036 Merkin Feb 2009 A1
20090049443 Powers et al. Feb 2009 A1
20090055542 Zhoa et al. Feb 2009 A1
20090055691 Ouksel et al. Feb 2009 A1
20090063443 Arimilli et al. Mar 2009 A1
20090063690 Verthein et al. Mar 2009 A1
20090064287 Bagepalli et al. Mar 2009 A1
20090070771 Yuyitung et al. Mar 2009 A1
20090080428 Witkowski et al. Mar 2009 A1
20090083390 Abu-Ghazaleh et al. Mar 2009 A1
20090089410 Vicente et al. Apr 2009 A1
20090094380 Qiu et al. Apr 2009 A1
20090097200 Sharma et al. Apr 2009 A1
20090100133 Giulio et al. Apr 2009 A1
20090103501 Farrag et al. Apr 2009 A1
20090105059 Dorry et al. Apr 2009 A1
20090113056 Tameshige et al. Apr 2009 A1
20090113130 He et al. Apr 2009 A1
20090133129 Jeong et al. May 2009 A1
20090135751 Hodges et al. May 2009 A1
20090135835 Gallatin et al. May 2009 A1
20090138594 Fellenstein et al. May 2009 A1
20090158070 Gruendler Jun 2009 A1
20090172423 Song et al. Jul 2009 A1
20090178132 Hudis et al. Jul 2009 A1
20090182836 Aviles Jul 2009 A1
20090187425 Thompson et al. Jul 2009 A1
20090198958 Arimilli et al. Aug 2009 A1
20090204834 Hendin et al. Aug 2009 A1
20090204837 Raval et al. Aug 2009 A1
20090210356 Abrams et al. Aug 2009 A1
20090210495 Wolfson et al. Aug 2009 A1
20090216881 Lovy et al. Aug 2009 A1
20090216910 Duchesneau Aug 2009 A1
20090216920 Lauterbach et al. Aug 2009 A1
20090217329 Riedl et al. Aug 2009 A1
20090219827 Chen et al. Sep 2009 A1
20090222884 Shaji et al. Sep 2009 A1
20090225360 Shirai Sep 2009 A1
20090225751 Koenck et al. Sep 2009 A1
20090234917 Despotovic et al. Sep 2009 A1
20090234962 Strong et al. Sep 2009 A1
20090234974 Arndt et al. Sep 2009 A1
20090235104 Fung Sep 2009 A1
20090238349 Pezzutti Sep 2009 A1
20090240547 Fellenstein et al. Sep 2009 A1
20090248943 Jiang et al. Oct 2009 A1
20090251867 Sharma Oct 2009 A1
20090259606 Seah et al. Oct 2009 A1
20090259863 Williams et al. Oct 2009 A1
20090259864 Li et al. Oct 2009 A1
20090265045 Coxe, III Oct 2009 A1
20090271656 Yokota et al. Oct 2009 A1
20090276666 Haley et al. Nov 2009 A1
20090279518 Falk et al. Nov 2009 A1
20090282274 Langgood et al. Nov 2009 A1
20090282419 Mejdrich et al. Nov 2009 A1
20090285136 Sun et al. Nov 2009 A1
20090287835 Jacobson et al. Nov 2009 A1
20090292824 Marashi et al. Nov 2009 A1
20090300608 Ferris et al. Dec 2009 A1
20090313390 Ahuja et al. Dec 2009 A1
20090316687 Kruppa et al. Dec 2009 A1
20090319684 Kakivaya et al. Dec 2009 A1
20090327079 Parker et al. Dec 2009 A1
20090327489 Swildens et al. Dec 2009 A1
20100005331 Somasundaram et al. Jan 2010 A1
20100008038 Coglitore Jan 2010 A1
20100008365 Porat Jan 2010 A1
20100026408 Shau Feb 2010 A1
20100036945 Allibhoy et al. Feb 2010 A1
20100040053 Gottumukkula et al. Feb 2010 A1
20100049822 Davies et al. Feb 2010 A1
20100049931 Jacobson et al. Feb 2010 A1
20100051391 Jahkonen Mar 2010 A1
20100070675 Pong Mar 2010 A1
20100088205 Robertson Apr 2010 A1
20100091676 Moran et al. Apr 2010 A1
20100103837 Jungck et al. Apr 2010 A1
20100106987 Lambert et al. Apr 2010 A1
20100114531 Korn et al. May 2010 A1
20100118880 Kunz et al. May 2010 A1
20100121932 Joshi et al. May 2010 A1
20100121947 Pirzada et al. May 2010 A1
20100122251 Karc May 2010 A1
20100125742 Ohtani May 2010 A1
20100125915 Hall et al. May 2010 A1
20100131324 Ferris et al. May 2010 A1
20100131624 Ferris May 2010 A1
20100138481 Behrens Jun 2010 A1
20100153546 Clubb et al. Jun 2010 A1
20100158005 Mukhopadhyay et al. Jun 2010 A1
20100161909 Nation et al. Jun 2010 A1
20100165983 Aybay et al. Jul 2010 A1
20100169477 Stienhans et al. Jul 2010 A1
20100169479 Jeong et al. Jul 2010 A1
20100169888 Hare et al. Jul 2010 A1
20100174604 Mattingly et al. Jul 2010 A1
20100174813 Hildreth et al. Jul 2010 A1
20100198972 Umbehocker Aug 2010 A1
20100198985 Kanevsky Aug 2010 A1
20100217801 Leighton et al. Aug 2010 A1
20100218194 Dallman et al. Aug 2010 A1
20100220732 Hussain et al. Sep 2010 A1
20100223332 Maxemchuk et al. Sep 2010 A1
20100228848 Kis et al. Sep 2010 A1
20100235234 Shuster Sep 2010 A1
20100250914 Abdul et al. Sep 2010 A1
20100265650 Chen et al. Oct 2010 A1
20100281166 Buyya et al. Nov 2010 A1
20100281246 Bristow et al. Nov 2010 A1
20100299548 Chadirchi et al. Nov 2010 A1
20100302129 Kastrup et al. Dec 2010 A1
20100308897 Evoy et al. Dec 2010 A1
20100312910 Lin et al. Dec 2010 A1
20100312969 Yamazaki et al. Dec 2010 A1
20100318665 Demmer et al. Dec 2010 A1
20100318812 Auradkar et al. Dec 2010 A1
20100325371 Jagadish et al. Dec 2010 A1
20100332262 Horvitz et al. Dec 2010 A1
20110023104 Franklin Jan 2011 A1
20110026397 Saltsidis et al. Feb 2011 A1
20110029644 Gelvin et al. Feb 2011 A1
20110029652 Chhuor et al. Feb 2011 A1
20110035491 Gelvin et al. Feb 2011 A1
20110055627 Zawacki et al. Mar 2011 A1
20110058573 Balakavi et al. Mar 2011 A1
20110075369 Sun et al. Mar 2011 A1
20110082928 Hasha et al. Apr 2011 A1
20110090633 Rabinovitz Apr 2011 A1
20110103391 Davis May 2011 A1
20110113115 Chang et al. May 2011 A1
20110119344 Eustis May 2011 A1
20110123014 Smith May 2011 A1
20110138046 Bonnier et al. Jun 2011 A1
20110145393 Ben-Zvi et al. Jun 2011 A1
20110153953 Khemani et al. Jun 2011 A1
20110154318 Oshins et al. Jun 2011 A1
20110167110 Hoffberg et al. Jul 2011 A1
20110173295 Bakke et al. Jul 2011 A1
20110173612 El Zur et al. Jul 2011 A1
20110179134 Mayo et al. Jul 2011 A1
20110185370 Tamir et al. Jul 2011 A1
20110191514 Wu et al. Aug 2011 A1
20110191610 Agarwal et al. Aug 2011 A1
20110197012 Liao et al. Aug 2011 A1
20110210975 Wong et al. Sep 2011 A1
20110213869 Korsunsky et al. Sep 2011 A1
20110231510 Korsunsky et al. Sep 2011 A1
20110231564 Korsunsky et al. Sep 2011 A1
20110238841 Kakivaya et al. Sep 2011 A1
20110238855 Korsunsky et al. Sep 2011 A1
20110239014 Karnowski Sep 2011 A1
20110271159 Ahn et al. Nov 2011 A1
20110273840 Chen Nov 2011 A1
20110274108 Fan Nov 2011 A1
20110295991 Aida Dec 2011 A1
20110296141 Daffron Dec 2011 A1
20110307887 Huang et al. Dec 2011 A1
20110314465 Smith et al. Dec 2011 A1
20110320540 Oostlander et al. Dec 2011 A1
20110320690 Petersen et al. Dec 2011 A1
20120011500 Faraboschi et al. Jan 2012 A1
20120020207 Corti et al. Jan 2012 A1
20120036237 Hasha et al. Feb 2012 A1
20120050981 Xu et al. Mar 2012 A1
20120054469 Ikeya et al. Mar 2012 A1
20120054511 Brinks et al. Mar 2012 A1
20120072997 Carlson et al. Mar 2012 A1
20120081850 Regimbal et al. Apr 2012 A1
20120096211 Davis et al. Apr 2012 A1
20120099265 Reber Apr 2012 A1
20120110055 Van Biljon et al. May 2012 A1
20120110056 Van Biljon et al. May 2012 A1
20120110180 Van Biljon et al. May 2012 A1
20120110188 Van Biljon et al. May 2012 A1
20120110651 Van Biljon et al. May 2012 A1
20120117229 Van Biljon et al. May 2012 A1
20120131201 Matthews et al. May 2012 A1
20120137004 Smith May 2012 A1
20120151476 Vincent Jun 2012 A1
20120155168 Kim et al. Jun 2012 A1
20120159116 Lim et al. Jun 2012 A1
20120167083 Suit Jun 2012 A1
20120167084 Suit Jun 2012 A1
20120167094 Suit Jun 2012 A1
20120185334 Sarkar et al. Jul 2012 A1
20120191860 Traversat et al. Jul 2012 A1
20120198252 Kirschtein et al. Aug 2012 A1
20120207165 Davis Aug 2012 A1
20120218901 Jungck et al. Aug 2012 A1
20120226788 Jackson Sep 2012 A1
20120239479 Amaro et al. Sep 2012 A1
20120278378 Lehane et al. Nov 2012 A1
20120278430 Lehane et al. Nov 2012 A1
20120278464 Lehane et al. Nov 2012 A1
20120296974 Tabe et al. Nov 2012 A1
20120297042 Davis et al. Nov 2012 A1
20120324005 Nalawade Dec 2012 A1
20130010639 Armstrong et al. Jan 2013 A1
20130024645 Cheriton et al. Jan 2013 A1
20130031331 Cheriton et al. Jan 2013 A1
20130036236 Morales et al. Feb 2013 A1
20130058250 Casado et al. Mar 2013 A1
20130060839 Van Biljon et al. Mar 2013 A1
20130066940 Shao Mar 2013 A1
20130073602 Meadway et al. Mar 2013 A1
20130073724 Parashar Mar 2013 A1
20130094499 Davis et al. Apr 2013 A1
20130097351 Davis Apr 2013 A1
20130097448 Davis et al. Apr 2013 A1
20130107444 Schnell May 2013 A1
20130111107 Chang et al. May 2013 A1
20130124417 Spears et al. May 2013 A1
20130145375 Kang Jun 2013 A1
20130148667 Hama et al. Jun 2013 A1
20130163605 Chandra et al. Jun 2013 A1
20130247064 Jackson Sep 2013 A1
20130268653 Deng et al. Oct 2013 A1
20130275703 Schenfeld et al. Oct 2013 A1
20130286840 Fan Oct 2013 A1
20130290643 Lim Oct 2013 A1
20130290650 Chang et al. Oct 2013 A1
20130298134 Jackson Nov 2013 A1
20130305093 Jayachandran et al. Nov 2013 A1
20130318269 Dalal et al. Nov 2013 A1
20140052866 Jackson Feb 2014 A1
20140082614 Klein et al. Mar 2014 A1
20140104778 Schnell Apr 2014 A1
20140122833 Davis et al. May 2014 A1
20140135105 Quan et al. May 2014 A1
20140143773 Ciano et al. May 2014 A1
20140317292 Odom Oct 2014 A1
20140359044 Davis et al. Dec 2014 A1
20140359323 Fullerton et al. Dec 2014 A1
20140365596 Kanevsky Dec 2014 A1
20150012679 Davis et al. Jan 2015 A1
20150039840 Chandra et al. Feb 2015 A1
20150103826 Davis Apr 2015 A1
20150229586 Jackson Aug 2015 A1
20150293789 Jackson Oct 2015 A1
20150301880 Allu Oct 2015 A1
20150381521 Jackson Dec 2015 A1
20160161909 Wada Jun 2016 A1
20170115712 Davis Apr 2017 A1
20180018149 Cook Jan 2018 A1
20180054364 Jackson Feb 2018 A1
20190260689 Jackson Aug 2019 A1
20190286610 Dalton Sep 2019 A1
20200073722 Jackson Mar 2020 A1
20200159449 Davis et al. May 2020 A1
20200379819 Jackson Dec 2020 A1
Foreign Referenced Citations (52)
Number Date Country
2496783 Mar 2004 CA
60216001 Jul 2007 DE
112008001875 Aug 2013 DE
0268435 May 1988 EP
0605106 Jul 1994 EP
0859314 Aug 1998 EP
1331564 Jul 2003 EP
1365545 Nov 2003 EP
1492309 Dec 2004 EP
1865684 Dec 2007 EP
2391744 Feb 2004 GB
2392265 Feb 2004 GB
2002-207712 Jul 2002 JP
2005-165568 Jun 2005 JP
2005-223753 Aug 2005 JP
2005-536960 Dec 2005 JP
8-212084 Aug 2006 JP
2006-309439 Nov 2006 JP
20040107934 Dec 2004 KR
M377621 Apr 2010 TW
201017430 May 2010 TW
WO1998011702 Mar 1998 WO
WO1998058518 Dec 1998 WO
WO1999015999 Apr 1999 WO
WO1999057660 Nov 1999 WO
WO2000014938 Mar 2000 WO
WO2000025485 May 2000 WO
WO2000060825 Oct 2000 WO
WO2001009791 Feb 2001 WO
WO2001014987 Mar 2001 WO
WO2001015397 Mar 2001 WO
WO2001039470 May 2001 WO
WO2001044271 Jun 2001 WO
WO2003046751 Jun 2003 WO
WO2003060798 Sep 2003 WO
WO2004021109 Mar 2004 WO
WO2004021641 Mar 2004 WO
WO2004046919 Jun 2004 WO
WO2004070547 Aug 2004 WO
WO2004092884 Oct 2004 WO
WO2005013143 Feb 2005 WO
WO2005017763 Feb 2005 WO
WO2005017783 Feb 2005 WO
WO2005089245 Sep 2005 WO
WO2005091136 Sep 2005 WO
WO2006036277 Apr 2006 WO
WO2006107531 Oct 2006 WO
WO2006108187 Oct 2006 WO
WO2006112981 Oct 2006 WO
WO2008000193 Jan 2008 WO
WO2011044271 Apr 2011 WO
WO2012037494 Mar 2012 WO
Non-Patent Literature Citations (504)
Entry
US 7,774,482 B1, 08/2010, Szeto et al. (withdrawn)
Caesar et al., “Design and Implementation of a Routing Control Platform,” Usenix, NSDI '05 Paper, Technical Program, obtained from the Internet, on Apr. 13, 2021, at URL <https://www.usenix.org/legacy/event/nsdi05/tech/full_papers/caesar/caesar_html/>, 23 pages.
Bader et al.; “Applications”; The International Journal of High Performance Computing Applications, vol. 15, No. ; pp. 181-185; Summer 2001.
Coomer et al.; “Introduction to the Cluster Grid—Part 1”; Sun Microsystems White Paper; 19 pages; Aug. 2002.
Joseph et al.; “Evolution of grid computing architecture and grid adoption models”; IBM Systems Journal, vol. 43, No. 4; 22 pages; 2004.
Smith et al.; “Grid computing”; MIT Sloan Management Review, vol. 46, Iss. 1.; 5 pages; Fall 2004.
“Microsoft Computer Dictionary, 5th Ed.”; Microsoft Press; 3 pages; 2002.
“Random House Concise Dictionary of Science & Computers”; 3 pages; Helicon Publishing; 2004.
U.S. Appl. No. 11/279,007 filed Apr. 2006, Jackson.
U.S. Appl. No. 13/705,340 filed Apr. 2012, Davis et al.
U.S. Appl. No. 13/899,751 filed May 2013, Chandra.
U.S. Appl. No. 13/935,108 filed Jul. 2013, Davis.
U.S. Appl. No. 13/959,428 filed Aug. 2013, Chandra.
U.S. Appl. No. 60/662,240 filed Mar. 2005, Jackson.
U.S. Appl. No. 60/552,653 filed Apr. 2005, Jackson.
A Language Modeling Framework for Resource Selection and Results Merging, Si et al., CIKM 2002, Proceedings of the Eleventh International Conference on Information and Knowledge Management.
Alhusaini et al. “A framework for mapping with resource co-allocation in heterogeneous computing systems,” Proceedings 9th Heterogeneous Computing Workshop (HCW 2000) (Cat. No. PR00556), Cancun, Mexico, 2000, pp. 273-286. (Year: 2000).
Ali et al., “Task Execution Time Modeling for Heterogeneous Computing System”, IEEE, 2000, pp. 1-15.
Amiri et al., “Dynamic Function Placement for Data-Intensive Cluster Computing,” Jun. 2000.
Banicescu et al., “Competitive Resource management in Distributed Computing Environments with Hectiling”, 1999, High Performance Computing Symposium, p. 1-7 (Year: 1999).
Banicescu et al., “Efficient Resource Management for Scientific Applications in Distributed Computing Environment” 1998, Mississippi State Univ. Dept. of Comp. Science, p. 45-54. (Year: 1998).
Buyya et al., “An Evaluation of Economy-based Resource Trading and Scheduling on Computational Power Grids for Parameter Sweep Applications,” Active Middleware Services, 2000, 10 pages.
Chase et al., “Dynamic Virtual Clusters in a Grid Site Manager”, Proceedings of the 12th IEEE International Symposium on High Performance Distributed Computing (HPDC'03), 2003.
Chen et al., “A flexible service model for advance reservation”, Computer Networks, Elsevier science publishers, vol. 37, No. 3-4, pp. 251-262. Nov. 5, 2001.
Exhibit 1002, Declaration of Dr. Andrew Wolfe, Ph.D., document filed on behalf of Unified Patents, LLC, in Case No. IPR2022-00136, 110 pages, Declaration dated Nov. 29, 2021.
Exhibit 1008, Declaration of Kevin Jakel, document filed on behalf of Unified Patents, LLC, in Case No. IPR2022-00136, 7 pages, Declaration dated Nov. 4, 2021.
Foster et al., “A Distributed Resource Management Architecture that Supports Advance Reservations and Co-Allocation,” Seventh International Workshop on Quality of Service (IWQoS '99), 1999, pp. 27-36.
Furmento et al., “An Integrated Grid Environment for Component Applications”, Proceedings of the Second International Workshop on Grid Computing, 2001, pp. 26-37.
He XiaoShan; “QoS Guided Min-Min Heuristic for Grid Task Scheduling”; J. Comput. Sci. & Technol., vol. 18, No. 4, pp. 442-451; Jul. 2003.
Huy Tuong Le, “The Data-Aware Resource Broker”, Research Project Thesis, University of Adelaide, Nov. 2003, pp. 1-63.
IBM Tivoli, “IBM Directory Integrator and Tivoli Identity Manager Integration”, Apr. 2, 2003, pp. 1-13, online link “http://publib.boulder.ibm.com/tividd/td/ITIM/SC32-1683-00/en_US/HTML/idi_integration/index.html” (Year: 2003).
Intel, Architecture Guide: Intel® Active Management Technology, Intel.com, Oct. 10, 2008, pp. 1-23. (Year: 2008).
Kafil et al., “Optimal Task Assignment in Heterogeneous Computing Systems,” IEEE, 1997, pp. 135-146.
Kuan-Wei Cheng, Chao-Tung Yang, Chuan-Lin Lai and Shun-Chyi Change, “A parallel loop self-scheduling on grid computing environments,” 7th International Symposium on Parallel Architectures, Algorithms and Networks, 2004. Proceedings. 2004, pp. 409-414 (Year: 2004).
Luo Si et al., “A Language Modeling Framework for Resource Selection and Results Merging”, Conference on Information and Knowledge Management, ACM, 2002, pp. 391-397.
Maheswaran et al., “Dynamic Matching and Scheduling of a Class of Independent Tasks onto Heterogeneous Computing Systems,” IEEE, 2000, pp. 1-15.
Mateescu et al., “Quality of service on the grid via metascheduling with resource co-scheduling and co-reservation,” The International Journal of High Performance Computing Applications, 2003, 10 pages.
Notice of Allowance on U.S. Appl. No. 10/530,577, dated Oct. 15, 2015.
Notice of Allowance on U.S. Appl. No. 11/207,438 dated Jan. 3, 2012.
Notice of Allowance on U.S. Appl. No. 11/276,852 dated Nov. 26, 2014.
Notice of Allowance on U.S. Appl. No. 11/276,853, dated Apr. 5, 2016.
Notice of Allowance on U.S. Appl. No. 11/276,854, dated Mar. 6, 2014.
Notice of Allowance on U.S. Appl. No. 11/276,855, dated Sep. 13, 2013.
Notice of Allowance on U.S. Appl. No. 11/616,156, dated Mar. 25, 2014.
Notice of Allowance on U.S. Appl. No. 11/718,867 dated May 25, 2012.
Notice of Allowance on U.S. Appl. No. 12/573,967, dated Jul. 20, 2015.
Notice of Allowance on U.S. Appl. No. 13/234,054, dated Sep. 19, 2017.
Notice of Allowance on U.S. Appl. No. 13/284,855, dated Jul. 14, 2014.
Notice of Allowance on U.S. Appl. No. 13/453,086, dated Jul. 18, 2013.
Notice of Allowance on U.S. Appl. No. 13/475,713, dated Feb. 5, 2015.
Notice of Allowance on U.S. Appl. No. 13/475,722, dated Feb. 27, 2015.
Notice of Allowance on U.S. Appl. No. 13/527,498, dated Feb. 23, 2015.
Notice of Allowance on U.S. Appl. No. 13/527,505, dated Mar. 6, 2015.
Notice of Allowance on U.S. Appl. No. 13/621,987 dated Jun. 4, 2015.
Notice of Allowance on U.S. Appl. No. 13/624,725, dated Mar. 30, 2016.
Notice of Allowance on U.S. Appl. No. 13/624,731, dated Mar. 5, 2015.
Notice of Allowance on U.S. Appl. No. 13/662,759 dated May 10, 2016.
Notice of Allowance on U.S. Appl. No. 13/692,741 dated Dec. 4, 2015.
Notice of Allowance on U.S. Appl. No. 13/705,286 dated Feb. 24, 2016.
Notice of Allowance on U.S. Appl. No. 13/705,340, dated Dec. 3, 2014.
Notice of Allowance on U.S. Appl. No. 13/705,340, dated Mar. 16, 2015.
Notice of Allowance on U.S. Appl. No. 13/705,386, dated Jan. 24, 2014.
Notice of Allowance on U.S. Appl. No. 13/705,414, dated Nov. 4, 2013.
Notice of Allowance on U.S. Appl. No. 13/728,308 dated Oct. 7, 2015.
Notice of Allowance on U.S. Appl. No. 13/728,428 dated Jul. 18, 2016.
Notice of Allowance on U.S. Appl. No. 13/758,164, dated Apr. 15, 2015.
Notice of Allowance on U.S. Appl. No. 13/760,600 dated Feb. 26, 2018.
Notice of Allowance on U.S. Appl. No. 13/760,600 dated Jan. 9, 2018.
Notice of Allowance on U.S. Appl. No. 13/855,241, dated Oct. 27, 2020.
Notice of Allowance on U.S. Appl. No. 13/855,241, dated Sep. 14, 2020.
Notice of Allowance on U.S. Appl. No. 14/052,723 dated Feb. 8, 2017.
Notice of Allowance on U.S. Appl. No. 14/106,254 dated May 25, 2017.
Notice of Allowance on U.S. Appl. No. 14/106,697 dated Oct. 24, 2016.
Notice of Allowance on U.S. Appl. No. 14/137,921 dated Aug. 12, 2021 and Jul. 16, 2021.
Notice of Allowance on U.S. Appl. No. 14/137,940 dated Jan. 30, 2019.
Notice of Allowance on U.S. Appl. No. 14/154,912 dated Apr. 25, 2019.
Notice of Allowance on U.S. Appl. No. 14/154,912, dated Apr. 3, 2019.
Notice of Allowance on U.S. Appl. No. 14/154,912, dated Feb. 7, 2019.
Notice of Allowance on U.S. Appl. No. 14/331,718 dated Jun. 7, 2017.
Notice of Allowance on U.S. Appl. No. 14/331,772, dated Jan. 10, 2018.
Notice of Allowance on U.S. Appl. No. 14/334,178 dated Aug. 19, 2016.
Notice of Allowance on U.S. Appl. No. 14/334,178 dated Jun. 8, 2016.
Notice of Allowance on U.S. Appl. No. 14/334,931 dated May 20, 2016.
Notice of Allowance on U.S. Appl. No. 14/454,049, dated Jan. 20, 2015.
Notice of Allowance on U.S. Appl. No. 14/590,102, dated Jan. 22, 2018.
Notice of Allowance on U.S. Appl. No. 14/704,231, dated Sep. 2, 2015.
Notice of Allowance on U.S. Appl. No. 14/709,642 dated Mar. 19, 2019.
Notice of Allowance on U.S. Appl. No. 14/709,642, dated May 9, 2019.
Notice of Allowance on U.S. Appl. No. 14/725,543 dated Jul. 21, 2016.
Notice of Allowance on U.S. Appl. No. 14/753,948 dated Jun. 14, 2017.
Notice of Allowance on U.S. Appl. No. 14/791,873 dated Dec. 20, 2018.
Notice of Allowance on U.S. Appl. No. 14/809,723 dated Jan. 11, 2018.
Notice of Allowance on U.S. Appl. No. 14/827,927 dated Jan. 21, 2022 and Dec. 9, 2021.
Notice of Allowance on U.S. Appl. No. 14/833,673, dated Dec. 2, 2016.
Notice of Allowance on U.S. Appl. No. 14/842,916 dated Oct. 2, 2017.
Notice of Allowance on U.S. Appl. No. 14/872,645 dated Oct. 13, 2016.
Notice of Allowance on U.S. Appl. No. 14/987,059, dated Feb. 14, 2020.
Notice of Allowance on U.S. Appl. No. 14/987,059, dated Jul. 8, 2019.
Notice of Allowance on U.S. Appl. No. 14/987,059, dated Nov. 7, 2019.
Notice of Allowance on U.S. Appl. No. 15/042,489 dated Jul. 16, 2018.
Notice of Allowance on U.S. Appl. No. 15/049,542 dated Feb. 28, 2018.
Notice of Allowance on U.S. Appl. No. 15/049,542 dated Jan. 4, 2018.
Notice of Allowance on U.S. Appl. No. 15/078,115 dated Jan. 8, 2018.
Notice of Allowance on U.S. Appl. No. 15/254,111 dated Nov. 13, 2017.
Notice of Allowance on U.S. Appl. No. 15/254,111 dated Sep. 1, 2017.
Notice of Allowance on U.S. Appl. No. 15/270,418 dated Nov. 2, 2017.
Notice of Allowance on U.S. Appl. No. 15/345,017 dated Feb. 2, 2021.
Notice of Allowance on U.S. Appl. No. 15/357,332 dated Jul. 12, 2018.
Notice of Allowance on U.S. Appl. No. 15/360,668, dated May 5, 2017.
Notice of Allowance on U.S. Appl. No. 15/430,959 dated Mar. 15, 2018.
Notice of Allowance on U.S. Appl. No. 15/478,467 dated May 30, 2019.
Notice of Allowance on U.S. Appl. No. 15/672,418 dated Apr. 4, 2018.
Notice of Allowance on U.S. Appl. No. 15/717,392 dated Mar. 22, 2019.
Notice of Allowance on U.S. Appl. No. 15/726,509, dated Sep. 25, 2019.
Office Action issued on U.S. Appl. No. 11/276,855, dated Jul. 22, 2010.
Office Action on U.S. Appl. No. 10/530,577, dated May 29, 2015.
Office Action on U.S. Appl. No. 11/207,438 dated Aug. 31, 2010.
Office Action on U.S. Appl. No. 11/207,438 dated Mar. 15, 2010.
Office Action on U.S. Appl. No. 11/276,852, dated Feb. 10, 2009.
Office Action on U.S. Appl. No. 11/276,852, dated Jan. 16, 2014.
Office Action on U.S. Appl. No. 11/276,852, dated Jun. 26, 2012.
Office Action on U.S. Appl. No. 11/276,852, dated Mar. 17, 2011.
Office Action on U.S. Appl. No. 11/276,852, dated Mar. 4, 2010.
Office Action on U.S. Appl. No. 11/276,852, dated Mar. 5, 2013.
Office Action on U.S. Appl. No. 11/276,852, dated Oct. 4, 2010.
Office Action on U.S. Appl. No. 11/276,852, dated Oct. 5, 2011.
Office Action on U.S. Appl. No. 11/276,852, dated Oct. 16, 2009.
Office Action on U.S. Appl. No. 11/276,853, dated Apr. 4, 2014.
Office Action on U.S. Appl. No. 11/276,853, dated Aug. 7, 2009.
Office Action on U.S. Appl. No. 11/276,853, dated Dec. 28, 2009.
Office Action on U.S. Appl. No. 11/276,853, dated Dec. 8, 2008.
Office Action on U.S. Appl. No. 11/276,853, dated Jul. 12, 2010.
Office Action on U.S. Appl. No. 11/276,853, dated May 26, 2011.
Office Action on U.S. Appl. No. 11/276,853, dated Nov. 23, 2010.
Office Action on U.S. Appl. No. 11/276,853, dated Oct. 16, 2009.
Office Action on U.S. Appl. No. 11/276,854, dated Apr. 18, 2011.
Office Action on U.S. Appl. No. 11/276,854, dated Aug. 1, 2012.
Office Action on U.S. Appl. No. 11/276,854, dated Jun. 10, 2009.
Office Action on U.S. Appl. No. 11/276,854, dated Jun. 5, 2013.
Office Action on U.S. Appl. No. 11/276,854, dated Jun. 8, 2010.
Office Action on U.S. Appl. No. 11/276,854, dated Nov. 26, 2008.
Office Action on U.S. Appl. No. 11/276,854, dated Oct. 27, 2010.
Office Action on U.S. Appl. No. 11/276,855, dated Aug. 13, 2009.
Office Action on U.S. Appl. No. 11/276,855, dated Dec. 30, 2008.
Office Action on U.S. Appl. No. 11/276,855, dated Dec. 31, 2009.
Office Action on U.S. Appl. No. 11/276,855, dated Dec. 7, 2010.
Office Action on U.S. Appl. No. 11/276,855, dated Jan. 26, 2012.
Office Action on U.S. Appl. No. 11/276,855, dated Jul. 22, 2010.
Office Action on U.S. Appl. No. 11/276,855, dated Jun. 27, 2011.
Office Action on U.S. Appl. No. 11/616,156, dated Jan. 18, 2011.
Office Action on U.S. Appl. No. 11/616,156, dated Oct. 13, 2011.
Office Action on U.S. Appl. No. 11/616,156, dated Sep. 17, 2013.
Office Action on U.S. Appl. No. 11/718,867 dated Dec. 29, 2009.
Office Action on U.S. Appl. No. 11/718,867 dated Jan. 8, 2009.
Office Action on U.S. Appl. No. 11/718,867 dated Jul. 11, 2008.
Office Action on U.S. Appl. No. 11/718,867 dated Jun. 15, 2009.
Office Action on U.S. Appl. No. 12/573,967, dated Apr. 1, 2014.
Office Action on U.S. Appl. No. 12/573,967, dated Aug. 13, 2012.
Office Action on U.S. Appl. No. 12/573,967, dated Mar. 1, 2012.
Office Action on U.S. Appl. No. 12/573,967, dated Nov. 21, 2014.
Office Action on U.S. Appl. No. 12/573,967, dated Oct. 10, 2013.
Office Action on U.S. Appl. No. 12/794,996, dated Jun. 19, 2013.
Office Action on U.S. Appl. No. 12/794,996, dated Sep. 17, 2012.
Office Action on U.S. Appl. No. 12/889,721 dated Aug. 2, 2016.
Office Action on U.S. Appl. No. 12/889,721, dated Apr. 17, 2014.
Office Action on U.S. Appl. No. 12/889,721, dated Feb. 24, 2016.
Office Action on U.S. Appl. No. 12/889,721, dated Jul. 2, 2013.
Office Action on U.S. Appl. No. 12/889,721, dated May 22, 2015.
Office Action on U.S. Appl. No. 12/889,721, dated Oct. 11, 2012.
Office Action on U.S. Appl. No. 12/889,721, dated Sep. 29, 2014.
Office Action on U.S. Appl. No. 13/234,054 dated May 31, 2017.
Office Action on U.S. Appl. No. 13/234,054 dated Oct. 20, 2016.
Office Action on U.S. Appl. No. 13/234,054, dated Apr. 16, 2015.
Office Action on U.S. Appl. No. 13/234,054, dated Aug. 6, 2015.
Office Action on U.S. Appl. No. 13/234,054, dated Jan. 26, 2016.
Office Action on U.S. Appl. No. 13/234,054, dated Oct. 23, 2014.
Office Action on U.S. Appl. No. 13/284,855, dated Dec. 19, 2013.
Office Action on U.S. Appl. No. 13/453,086, dated Mar. 12, 2013.
Office Action on U.S. Appl. No. 13/475,713, dated Apr. 1, 2014.
Office Action on U.S. Appl. No. 13/475,713, dated Oct. 17, 2014.
Office Action on U.S. Appl. No. 13/475,722, dated Jan. 17, 2014.
Office Action on U.S. Appl. No. 13/475,722, dated Oct. 20, 2014.
Office Action on U.S. Appl. No. 13/527,498, dated May 8, 2014.
Office Action on U.S. Appl. No. 13/527,498, dated Nov. 17, 2014.
Office Action on U.S. Appl. No. 13/527,505, dated Dec. 5, 2014.
Office Action on U.S. Appl. No. 13/527,505, dated May 8, 2014.
Office Action on U.S. Appl. No. 13/621,987 dated Feb. 27, 2015.
Office Action on U.S. Appl. No. 13/621,987 dated Oct. 8, 2014.
Office Action on U.S. Appl. No. 13/624,725 dated Mar. 10, 2016.
Office Action on U.S. Appl. No. 13/624,725, dated Apr. 23, 2015.
Office Action on U.S. Appl. No. 13/624,725, dated Jan. 10, 2013.
Office Action on U.S. Appl. No. 13/624,725, dated Nov. 4, 2015.
Office Action on U.S. Appl. No. 13/624,725, dated Nov. 13, 2013.
Office action on U.S. Appl. No. 13/624,731 dated Jan. 29, 2013.
Office Action on U.S. Appl. No. 13/624,731, dated Jul. 25, 2014.
Office Action on U.S. Appl. No. 13/662,759, dated Feb. 22, 2016.
Office Action on U.S. Appl. No. 13/662,759, dated Nov. 6, 2014.
Office Action on U.S. Appl. No. 13/692,741, dated Jul. 1, 2015.
Office Action on U.S. Appl. No. 13/692,741, dated Mar. 11, 2015.
Office Action on U.S. Appl. No. 13/692,741, dated Sep. 4, 2014.
Office Action on U.S. Appl. No. 13/705,286, dated May 13, 2013.
Office Action on U.S. Appl. No. 13/705,340, dated Aug. 2, 2013.
Office Action on U.S. Appl. No. 13/705,340, dated Mar. 12, 2014.
Office Action on U.S. Appl. No. 13/705,340, dated Mar. 29, 2013.
Office Action on U.S. Appl. No. 13/705,386, dated May 13, 2013.
Office Action on U.S. Appl. No. 13/705,414, dated Apr. 9, 2013.
Office Action on U.S. Appl. No. 13/705,414, dated Aug. 9, 2013.
Office Action on U.S. Appl. No. 13/705,428, dated Jul. 10, 2013.
Office Action on U.S. Appl. No. 13/728,308, dated May 14, 2015.
Office Action on U.S. Appl. No. 13/728,428 dated May 6, 2016.
Office Action on U.S. Appl. No. 13/728,428, dated Jun. 12, 2015.
Office Action on U.S. Appl. No. 13/760,600 dated Aug. 30, 2016.
Office Action on U.S. Appl. No. 13/760,600 dated Jan. 23, 2017.
Office Action on U.S. Appl. No. 13/760,600 dated Jun. 15, 2017.
Office Action on U.S. Appl. No. 13/760,600 dated Mar. 15, 2016.
Office Action on U.S. Appl. No. 13/760,600 dated Oct. 19, 2015.
Office Action on U.S. Appl. No. 13/760,600, dated Apr. 10, 2015.
Office Action on U.S. Appl. No. 13/855,241, dated Jan. 13, 2016.
Office Action on U.S. Appl. No. 13/855,241, dated Jul. 6, 2015.
Office Action on U.S. Appl. No. 13/855,241, dated Jun. 27, 2019.
Office Action on U.S. Appl. No. 13/855,241, dated Mar. 30, 2020.
Office Action on U.S. Appl. No. 13/855,241, dated Sep. 15, 2016.
Office Action on U.S. Appl. No. 14/052,723, dated Dec. 3, 2015.
Office Action on U.S. Appl. No. 14/052,723, dated May 1, 2015.
Office Action on U.S. Appl. No. 14/106,254 dated Aug. 12, 2016.
Office Action on U.S. Appl. No. 14/106,254 dated Feb. 15, 2017.
Office Action on U.S. Appl. No. 14/106,254, dated May 2, 2016.
Office Action on U.S. Appl. No. 14/106,697 dated Feb. 2, 2016.
Office Action on U.S. Appl. No. 14/106,697, dated Aug. 17, 2015.
Office Action on U.S. Appl. No. 14/106,698, dated Aug. 19, 2015.
Office Action on U.S. Appl. No. 14/106,698, dated Feb. 12, 2015.
Office Action on U.S. Appl. No. 14/137,921 dated Feb. 4, 2021.
Office Action on U.S. Appl. No. 14/137,921 dated Jun. 25, 2020.
Office Action on U.S. Appl. No. 14/137,921 dated May 31, 2017.
Office Action on U.S. Appl. No. 14/137,921 dated May 6, 2016.
Office Action on U.S. Appl. No. 14/137,921 dated Oct. 6, 2016.
Office Action on U.S. Appl. No. 14/137,921 dated Oct. 8, 2015.
Office Action on U.S. Appl. No. 14/137,940 dated Aug. 10, 2018.
Office Action on U.S. Appl. No. 14/137,940 dated Jan. 25, 2018.
Office Action on U.S. Appl. No. 14/137,940 dated Jun. 3, 2016.
Office Action on U.S. Appl. No. 14/137,940 dated Jun. 9, 2017.
Office Action on U.S. Appl. No. 14/137,940 dated Nov. 3, 2016.
Office Action on U.S. Appl. No. 14/154,912, dated Dec. 7, 2017.
Office Action on U.S. Appl. No. 14/154,912, dated Jul. 20, 2017.
Office Action on U.S. Appl. No. 14/154,912, dated May 8, 2018.
Office Action on U.S. Appl. No. 14/154,912, dated Oct. 11, 2018.
Office Action on U.S. Appl. No. 14/331,718 dated Feb. 28, 2017.
Office Action on U.S. Appl. No. 14/331,772, dated Aug. 11, 2017.
Office Action on U.S. Appl. No. 14/334,178 dated Dec. 18, 2015.
Office Action on U.S. Appl. No. 14/334,931 dated Dec. 11, 2015.
Office Action on U.S. Appl. No. 14/334,931, dated Jan. 5, 2015.
Office Action on U.S. Appl. No. 14/334,931, dated Jul. 9, 2015.
Office Action on U.S. Appl. No. 14/590,102, dated Aug. 15, 2017.
Office Action on U.S. Appl. No. 14/691,120 dated Mar. 10, 2022.
Office Action on U.S. Appl. No. 14/691,120 dated Mar. 30, 2020.
Office Action on U.S. Appl. No. 14/691,120 dated Oct. 3, 2019.
Office Action on U.S. Appl. No. 14/691,120 dated Oct. 20, 2020.
Office Action on U.S. Appl. No. 14/691,120 dated Sep. 29, 2021.
Office Action on U.S. Appl. No. 14/691,120, dated Aug. 27, 2018.
Office Action on U.S. Appl. No. 14/691,120, dated Feb. 12, 2018.
Office Action on U.S. Appl. No. 14/691,120, dated Mar. 2, 2017.
Office Action on U.S. Appl. No. 14/691,120, dated Mar. 22, 2019.
Office Action on U.S. Appl. No. 14/691,120, dated Sep. 13, 2017.
Office Action on U.S. Appl. No. 14/709,642 dated Feb. 7, 2018.
Office Action on U.S. Appl. No. 14/709,642 dated Feb. 17, 2016.
Office Action on U.S. Appl. No. 14/709,642 dated Jul. 12, 2017.
Office Action on U.S. Appl. No. 14/709,642 dated Sep. 12, 2016.
Office Action on U.S. Appl. No. 14/725,543 dated Apr. 7, 2016.
Office Action on U.S. Appl. No. 14/751,529 dated Aug. 9, 2017.
Office Action on U.S. Appl. No. 14/751,529 dated Oct. 3, 2018.
Office Action on U.S. Appl. No. 14/751,529, dated Jun. 6, 2016.
Office Action on U.S. Appl. No. 14/751,529, dated Nov. 14, 2016.
Office Action on U.S. Appl. No. 14/753,948 dated Nov. 4, 2016.
Office Action on U.S. Appl. No. 14/791,873 dated May 14, 2018.
Office Action on U.S. Appl. No. 14/809,723 dated Aug. 25, 2017.
Office Action on U.S. Appl. No. 14/809,723 dated Dec. 30, 2016.
Office Action on U.S. Appl. No. 14/827,927 dated Jan. 19, 2021.
Office Action on U.S. Appl. No. 14/827,927 dated Jan. 31, 2020.
Office Action on U.S. Appl. No. 14/827,927 dated May 16, 2018.
Office Action on U.S. Appl. No. 14/827,927 dated May 16, 2019.
Office Action on U.S. Appl. No. 14/827,927 dated Sep. 9, 2019.
Office Action on U.S. Appl. No. 14/827,927, dated Aug. 28, 2018.
Office Action on U.S. Appl. No. 14/827,927, dated Jan. 31, 2019.
Office Action on U.S. Appl. No. 14/833,673, dated Feb. 11, 2016.
Office Action on U.S. Appl. No. 14/833,673, dated Jun. 10, 2016.
Office Action on U.S. Appl. No. 14/833,673, dated Sep. 24, 2015.
Office Action on U.S. Appl. No. 14/842,916 dated May 5, 2017.
Office Action on U.S. Appl. No. 14/872,645 dated Feb. 16, 2016.
Office Action on U.S. Appl. No. 14/872,645 dated Jun. 29, 2016.
Office Action on U.S. Appl. No. 14/987,059, dated Jan. 31, 2019.
Office Action on U.S. Appl. No. 14/987,059, dated May 11, 2018.
Office Action on U.S. Appl. No. 14/987,059, dated Oct. 11, 2018.
Office Action on U.S. Appl. No. 15/042,489 dated Jan. 9, 2018.
Office Action on U.S. Appl. No. 15/078,115 dated Sep. 5, 2017.
Office Action on U.S. Appl. No. 15/254,111 dated Jun. 20, 2017.
Office Action on U.S. Appl. No. 15/281,462 dated Apr. 6, 2018.
Office Action on U.S. Appl. No. 15/281,462 dated Dec. 15, 2017.
Office Action on U.S. Appl. No. 15/281,462 dated Feb. 10, 2017.
Office Action on U.S. Appl. No. 15/281,462 dated Jun. 13, 2017.
Office Action on U.S. Appl. No. 15/345,017 dated Aug. 24, 2020.
Office Action on U.S. Appl. No. 15/345,017 dated Aug. 9, 2019.
Office Action on U.S. Appl. No. 15/345,017 dated Jan. 31, 2019.
Office Action on U.S. Appl. No. 15/345,017 dated Jul. 11, 2018.
Office Action on U.S. Appl. No. 15/345,017 dated Mar. 20, 2020.
Office Action on U.S. Appl. No. 15/345,017 dated Nov. 29, 2019.
Office Action on U.S. Appl. No. 15/357,332 dated May 9, 2018.
Office Action on U.S. Appl. No. 15/357,332 dated Nov. 9, 2017.
Office Action on U.S. Appl. No. 15/478,467, dated Jan. 11, 2019.
Office Action on U.S. Appl. No. 15/478,467, dated Jul. 13, 2018.
Office Action on U.S. Appl. No. 15/717,392 dated Dec. 3, 2018.
Office Action on U.S. Appl. No. 15/717,392 dated Jul. 5, 2018.
Office Action on U.S. Appl. No. 15/726,509, dated Jun. 3, 2019.
Office Action on U.S. Appl. No. 13/624,731, dated Nov. 12, 2013.
Office Action on U.S. Appl. No. 15/270,418 dated Apr. 21, 2017.
PCT/US2005/008296—International Search Report dated Aug. 3, 2005 for PCT Application No. PCT/US2005/008296, 1 page.
PCT/US2005/008297—International Search Report for Application No. PCT/US2005/008297, dated Sep. 29, 2005.
PCT/US2005/040669—International Preliminary Examination Report for PCT/US2005/040669, dated Apr. 29, 2008.
PCT/US2005/040669—Written Opinion for PCT/US2005/040669, dated Sep. 13, 2006.
PCT/US2009/044200—International Preliminary Report on Patentability for PCT/US2009/044200, dated Nov. 17, 2010.
PCT/US2009/044200—International Search Report and Written Opinion on PCT/US2009/044200, dated Jul. 1, 2009.
PCT/US2010/053227—International Preliminary Report on Patentability for PCT/US2010/053227, dated May 10, 2012.
PCT/US2010/053227—International Search Report and Written Opinion for PCT/US2010/053227, dated Dec. 16, 2010.
PCT/US2011/051996—International Search Report and Written Opinion for PCT/US2011/051996, dated Jan. 19, 2012.
PCT/US2012/038986—International Preliminary Report on Patentability for PCT/US2012/038986, dated Nov. 26, 2013.
PCT/US2012/038986—International Search Report and Written Opinion on PCT/US2012/038986, dated Mar. 14, 2013.
PCT/US2012/038987—International Search Report and Written Opinion for PCT/US2012/038987, dated Aug. 16, 2012.
PCT/US2012/061747—International Preliminary Report on Patentability for PCT/US2012/061747, dated Apr. 29, 2014.
PCT/US2012/061747—International Search Report and Written Opinion for PCT/US2012/061747, dated Mar. 1, 2013.
PCT/US2012/062608—International Preliminary Report on Patentability issued on PCT/US2012/062608, dated May 6, 2014.
PCT/US2012/062608—International Search Report and Written Opinion for PCT/US2012/062608, dated Jan. 18, 2013.
Petition for Inter Partes Review of U.S. Pat. No. 8,271,980, Challenging Claims 1-5 and 14-15, document filed on behalf of Unified Patents, LLC, in Case No. IPR2022-00136, 92 pages, Petition document dated Nov. 29, 2021.
Roblitz et al., "Resource Reservations with Fuzzy Requests", Concurrency and Computation: Practice and Experience, 2005.
Snell et al., “The Performance Impact of Advance Reservation Meta-Scheduling”, Springer-Verlag, Berlin, 2000, pp. 137-153.
Stankovic et al., “The Case for Feedback Control Real-Time Scheduling” 1999, IEEE pp. 1-13.
Takahashi et al. “A Programming Interface for Network Resource Management,” 1999 IEEE, pp. 34-44.
Tanaka et al. “Resource Manager for Globus-Based Wide-Area Cluster Computing,” 1999 IEEE, 8 pages.
U.S. Appl. No. 60/552,653, filed Apr. 19, 2005.
U.S. Appl. No. 60/662,240, filed Mar. 16, 2005, Jackson.
Notice of Allowance on U.S. Appl. No. 17/089,207, dated Jul. 7, 2022.
Notice of Allowance on U.S. Appl. No. 17/700,847, dated Jul. 7, 2022.
Notice of Allowance on U.S. Appl. No. 17/722,037, dated Jul. 18, 2022.
Office Action on U.S. Appl. No. 13/728,362, dated Feb. 21, 2014.
Office Action on U.S. Appl. No. 16/537,256 dated Jul. 7, 2022.
Office Action on U.S. Appl. No. 17/711,214, dated Jul. 8, 2022.
Office Action on U.S. Appl. No. 17/711,242, dated Jul. 28, 2022.
Office Action on U.S. Appl. No. 17/835,159 dated Aug. 31, 2022.
Notice of Allowance on U.S. Appl. No. 14/827,927 dated Apr. 25, 2022.
Notice of Allowance on U.S. Appl. No. 16/913,745, dated Jun. 9, 2022.
Notice of Allowance on U.S. Appl. No. 17/700,767 dated Jun. 27, 2022.
Notice of Allowance on U.S. Appl. No. 17/700,808, dated May 26, 2022 and Jun. 6, 2022.
Notice of Allowance on U.S. Appl. No. 17/722,062 dated Jun. 15, 2022.
Office Action on U.S. Appl. No. 16/537,256 dated Dec. 23, 2021.
Office Action on U.S. Appl. No. 16/913,708 dated Jun. 7, 2022.
Office Action on U.S. Appl. No. 16/913,745 dated Jan. 13, 2022.
Office Action on U.S. Appl. No. 17/089,207 dated Jan. 28, 2022.
Office Action on U.S. Appl. No. 17/201,245 dated Mar. 18, 2022.
Office Action on U.S. Appl. No. 17/697,235 dated May 25, 2022.
Office Action on U.S. Appl. No. 17/697,368 dated Jun. 7, 2022.
Office Action on U.S. Appl. No. 17/697,403 dated Jun. 7, 2022.
Office Action on U.S. Appl. No. 17/722,076 dated Jun. 22, 2022.
Office Action on U.S. Appl. No. 17/722,037 dated Jun. 13, 2022.
Extended European Search Report for EP 10827330.1, dated Jun. 5, 2013.
Search Report on EP Application 10827330.1, dated Feb. 12, 2015.
Reexamination Report on Japanese Application 2012-536877, dated Jan. 22, 2015, including English Translation.
Office Action on Taiwan Application 101139729, dated May 25, 2015 (English translation not available).
Abdelwahed, Sherif et al., “A Control-Based Framework for Self-Managing Distributed Computing Systems”, WOSS'04 Oct. 31-Nov. 1, 2004 Newport Beach, CA, USA. Copyright 2004 ACM 1-58113-989-6/04/0010.
Abdelzaher, Tarek, et al., “Performance Guarantees for Web Server End-Systems: A Control-Theoretical Approach”, IEEE Transactions on Parallel Distributed Systems, vol. 13, No. 1, Jan. 2002.
Advanced Switching Technology Tech Brief, published 2005, 2 pages.
Amini, A. Shaikh, and H. Schulzrinne, "Effective Peering for Multi-provider Content Delivery Services", In Proceedings of the 23rd Annual IEEE Conference on Computer Communications (INFOCOM'04), pp. 850-861, 2004.
Amir and D. Shaw, "WALRUS—A Low Latency, High Throughput Web Service Using Internet-wide Replication", In Proceedings of the 19th International Conference on Distributed Computing Systems Workshop, 1998.
Appleby, K., et al., "Oceano-SLA Based Management of a Computing Utility", IBM T.J. Watson Research Center, P.O. Box 704, Yorktown Heights, New York 10598, USA. Proc. 7th IFIP/IEEE Int'l Symp. Integrated Network Management, IEEE Press 2001.
Aweya, James et al., "An adaptive load balancing scheme for web servers", International Journal of Network Management 2002; 12: 3-39 (DOI: 10.1002/nem.421), Copyright 2002 John Wiley & Sons, Ltd.
Azuma, T. Okamoto, G. Hasegawa, and M. Murata, “Design, Implementation and Evaluation of Resource Management System for Internet Servers”, IOS Press, Journal of High Speed Networks, vol. 14, Issue 4, pp. 301-316, Oct. 2005.
Baentsch, Michael et al., “World Wide Web Caching: The Application-Level View of the Internet”, Communications Magazine, IEEE, vol. 35, Issue 6, pp. 170-178, Jun. 1997.
Banga, Gaurav et al., "Resource Containers: A New Facility for Resource Management in Server Systems", Rice University, originally published in the Proceedings of the 3rd Symposium on Operating Systems Design and Implementation, New Orleans, Louisiana, Feb. 1999.
Belloum, A et al., “A Scalable Web Server Architecture”, World Wide Web Internet and Web Information Systems, 5, 5-23, 2002 Kluwer Academic Publishers. Manufactured in The Netherlands. 2000.
Benkner, Siegfried, et al., “VGE—A Service-Oriented Grid Environment for On-Demand Supercomputing”, Institute for Software Science, University of Vienna, Nordbergstrasse 15/C/3, A-1090 Vienna, Austria. Proceedings of the 5th IEEE/ACM International Workshop on Grid Computing. pp. 11-18. 2004.
Bent, Leeann et al., “Characterization of a Large Web Site Population with Implications for Content Delivery”, WWW2004, May 17-22, 2004, New York, New York, USA ACM 1-58113-844-X/04/0005, pp. 522-533.
Bian, Qiyong, et al., “Dynamic Flow Switching, A New Communication Service for ATM Networks”, 1997.
Bradford, S. Milliner, and M. Dumas, “Experience Using a Coordination-based Architecture for Adaptive Web Content Provision”, In COORDINATION, pp. 140-156. Springer, 2005.
Braumandl, R. et al., "ObjectGlobe: Ubiquitous query processing on the Internet", Universität Passau, Lehrstuhl für Informatik, 94030 Passau, Germany. Technische Universität München, Institut für Informatik, 81667 München, Germany. Edited by F. Casati, M.-C. Shan, D. Georgakopoulos. Published online Jun. 7, 2001, Springer-Verlag 2001.
Cardellini, Valeria et al., “Geographic Load Balancing for Scalable Distributed Web Systems”, Proceedings of the 8th International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems, pp. 20-27. 2000.
Cardellini, Valeria et al., “The State of the Art in Locally Distributed Web-Server Systems”, ACM Computing Surveys, vol. 34, No. 2, Jun. 2002, pp. 263-311.
Casalicchio, Emiliano, et al., "Static and Dynamic Scheduling Algorithms for Scalable Web Server Farm", University of Roma Tor Vergata, Roma, Italy, 00133, 2001. In Proceedings of the IEEE 9th Euromicro Workshop on Parallel and Distributed Processing, pp. 369-376, 2001.
Chandra, Abhishek et al., “Dynamic Resource Allocation for Shared Data Centers Using Online Measurements” Proceedings of the 11th international conference on Quality of service, Berkeley, CA, USA pp. 381-398. 2003.
Chandra, Abhishek et al., “Quantifying the Benefits of Resource Multiplexing in On-Demand Data Centers”, Department of Computer Science, University of Massachusetts Amherst, 2003.
Chapter 1 Overview of the Origin Family Architecture from Origin and Onyx2 Theory of Operations Manual, published 1997, 18 pages.
Chawla, Hamesh et al., “HydraNet: Network Support for Scaling of Large-Scale Services”,Proceedings of 7th International Conference on Computer Communications and Networks, 1998. Oct. 1998.
Chellappa, Ramnath et al., “Managing Computing Resources in Active Intranets”, International Journal of Network Management, 2002, 12:117-128 (DOI:10.1002/nem.427).
Chen and G. Agrawal, "Resource Allocation in a Middleware for Streaming Data", In Proceedings of the 2nd Workshop on Middleware for Grid Computing (MGC '04), pp. 5-10, Toronto, Canada, Oct. 2004.
Chen, et al., “Replicated Servers Allocation for Multiple Information Sources in a Distributed Environment”, Department of Computer Science, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, Sep. 2000.
Chen, Liang et al., “Resource Allocation in a Middleware for Streaming Data”, 2nd Workshop on Middleware for Grid Computing Toronto, Canada, pp. 5-10, Copyright 2004 ACM.
Chen, Thomas, “Increasing the Observability of Internet Behavior”, Communications of the ACM, vol. 44, No. 1, pp. 93-98, Jan. 2001.
Chen, Xiangping et al., “Performance Evaluation of Service Differentiating Internet Servers”, IEEE Transactions on Computers, vol. 51, No. 11, pp. 1368-1375, Nov. 2002.
Cisco MDS 9000 Family Multiprotocol Services Module, published 2006, 13 pages.
Clark, et al., “Providing Scalable Web Service Using Multicast Delivery”, College of Computing, Georgia Institute of Technology, Atlanta, GA 30332-0280, 1995.
Clarke and G. Coulson, “An Architecture for Dynamically Extensible Operating Systems”, In Proceedings of the 4th International Conference on Configurable Distributed Systems (ICCDS'98), Annapolis, MD, May 1998.
Colajanni, Michele et al., "Analysis of Task Assignment Policies in Scalable Distributed Web-server Systems", IEEE Transactions on Parallel and Distributed Systems, vol. 9, No. 6, Jun. 1998.
Colajanni, P. Yu, V. Cardellini, M. Papazoglou, M. Takizawa, B. Cramer and S. Chanson, "Dynamic Load Balancing in Geographically Distributed Heterogeneous Web Servers", In Proceedings of the 18th International Conference on Distributed Computing Systems, pp. 295-302, May 1998.
Comparing the I2C Bus to the SMBUS, Maxim Integrated, Dec. 1, 2000, p. 1.
Conti, Marco et al., “Quality of Service Issues in Internet Web Services”, IEEE Transactions on Computers, vol. 51, No. 6, pp. 593-594, Jun. 2002.
Conti, Marco, et al., “Client-side content delivery policies in replicated web services: parallel access versus single server approach”, Istituto di Informatica e Telematica (IIT), Italian National Research Council (CNR), Via G. Moruzzi, I. 56124 Pisa, Italy, Performance Evaluation 59 (2005) 137-157, Available online Sep. 11, 2004.
Das et al., “Unifying Packet and Circuit Switched Networks,” IEEE Globecom Workshops 2009, Nov. 30, 2009, pp. 1-6.
Deering, “IP Multicast Extensions for 4.3BSD Unix and related Systems,” Jun. 1999, 5 pages.
Devarakonda, V.K. Naik, N. Rajamanim, "Policy-based multi-datacenter resource management", In 6th IEEE International Workshop on Policies for Distributed Systems and Networks, pp. 247-250, Jun. 2005.
Dilley, John, et al., "Globally Distributed Content Delivery", IEEE Internet Computing, 1089-7801/02/$17.00 © 2002 IEEE, pp. 50-58, Sep.-Oct. 2002.
Doyle, J. Chase, O. Asad, W. Jin, and A. Vahdat, “Model-Based Resource Provisioning in a Web Service Utility”, In Proceedings of the Fourth USENIX Symposium on Internet Technologies and Systems (USITS), Mar. 2003.
Edited by William Gropp, Ewing Lusk and Thomas Sterling, “Beowulf Cluster Computing with Linux,” Massachusetts Institute of Technology, 2003.
Elghany et al., “High Throughput High Performance NoC Switch,” NORCHIP 2008, Nov. 2008, pp. 237-240.
Ercetin, Ozgur et al., “Market-Based Resource Allocation for Content Delivery in the Internet”, IEEE Transactions on Computers, vol. 52, No. 12, pp. 1573-1585, Dec. 2003.
Fan, Li, et al., “Summary Cache: A Scalable Wide-Area Web Cache Sharing Protocol”, IEEE/ACM Transactions on networking, vol. 8, No. 3, Jun. 2000.
Feldmann, Anja, et al., “Efficient Policies for Carrying Web Traffic Over Flow-Switched Networks”, IEEE/ACM Transactions on Networking, vol. 6, No. 6, Dec. 1998.
Feldmann, Anja, et al., “Reducing Overhead in Flow-Switched Networks: An Empirical Study of Web Traffic”, AT&T Labs-Research, Florham Park, NJ, 1998.
Fong, L.L. et al., "Dynamic Resource Management in an eUtility", IBM T. J. Watson Research Center, 0-7803-7382-0/02/$17.00 © 2002 IEEE.
Foster, Ian et al., "The Anatomy of the Grid: Enabling Scalable Virtual Organizations", To appear: Intl J. Supercomputer Applications, 2001.
Fox, Armando et al., “Cluster-Based Scalable Network Services”, University of California at Berkeley, SOSP-Oct. 16, 1997 Saint-Malo, France, ACM 1997.
fpga4fun.com,“What is JTAG?”, 2 pages, Jan. 31, 2010.
From AT to BTX: Motherboard Form Factor, Webopedia, Apr. 29, 2005, p. 1.
Furmento et al., "Building computational communities from federated resources." European Conference on Parallel Processing, Springer, Berlin, Heidelberg, pp. 855-863. (Year: 2001).
Garg, Rahul, et al., “A SLA Framework for QoS Provisioning and Dynamic Capacity Allocation”, 2002.
Gayek, P., et al., “A Web Content Serving Utility”, IBM Systems Journal, vol. 43, No. 1, pp. 43-63. 2004.
Genova, Zornitza et al., "Challenges in URL Switching for Implementing Globally Distributed Web Sites", Department of Computer Science and Engineering, University of South Florida, Tampa, Florida 33620. 0-7695-0771-9/00 $10.00 © IEEE. 2000.
Grajcar, Martin, "Genetic List Scheduling Algorithm for Scheduling and Allocation on a Loosely Coupled Heterogeneous Multiprocessor System", Proceedings of the 36th Annual ACM/IEEE Design Automation Conference, New Orleans, Louisiana, pp. 280-285. 1999.
Grecu et al., “A Scalable Communication-Centric SoC Interconnect Architecture” Proceedings 5th International Symposium on Quality Electronic Design, 2005, pp. 343, 348 (full article included).
IQSearchText-202206090108.txt, publication dated Apr. 6, 2005, 2 pages.
Grimm, Robert et al., “System Support for Pervasive Applications”, ACM Transactions on Computer Systems, vol. 22, No. 4, Nov. 2004, pp. 421-486.
Guo, L. Bhuyan, R. Kumar and S. Basu, "QoS Aware Job Scheduling in a Cluster-Based Web Server for Multimedia Applications", In Proceedings of the 19th IEEE International Parallel and Distributed Processing Symposium (IPDPS'05), Apr. 2005.
Gupta, A., Kleinberg, J., Kumar, A., Rastogi, R. & Yener, B., "Provisioning a virtual private network: a network design problem for multicommodity flow," Proceedings of the thirty-third annual ACM symposium on Theory of computing [online], Jul. 2001, pp. 389-398, abstract [retrieved on Jun. 14, 2007], Retrieved from the Internet: <URL:http://portal.acm.org/citation.cfm?id=380830&dl=ACM&coll=GUIDE>.
Haddad and E. Paquin, “MOSIX: A Cluster Load-Balancing Solution for Linux”, In Linux Journal, vol. 2001 Issue 85es, Article No. 6, May 2001.
Hadjiefthymiades, Stathes et al., “Using Proxy Cache Relocation to Accelerate Web Browsing in Wireless/Mobile Communications”, University of Athens, Dept. of Informatics and Telecommunications, Panepistimioupolis, llisia, Athens, 15784, Greece. WWW10, May 1-5, 2001, Hong Kong.
He XiaoShan, "QoS Guided Min-Min Heuristic for Grid Task Scheduling", J. Comput. Sci. & Technol., vol. 18, No. 4, pp. 442-451, Jul. 2003.
Hossain et al., "Extended Butterfly Fat Tree Interconnection (EFTI) Architecture for Network on CHIP," 2005 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing, Aug. 2005, pp. 613-616.
HP “OpenView OS Manager using Radia software”, 5982-7478EN, Rev 1, Nov. 2005; (HP_Nov_2005.pdf; pp. 1-4).
HP ProLiant SL6500 Scalable System, Family data sheet, HP Technical sheet, Sep. 2010 4 pages.
HP Virtual Connect Traffic Flow—Technology brief, Jan. 2012, 22 pages.
Hu, E.C. et al., "Adaptive Fast Path Architecture", Copyright 2001 by International Business Machines Corporation, pp. 191-206, IBM J. Res. & Dev., vol. 45, No. 2, Mar. 2001.
Huang, S. Sebastine and T. Abdelzaher, "An Architecture for Real-Time Active Content Distribution", In Proceedings of the 16th Euromicro Conference on Real-Time Systems (ECRTS 04), pp. 271-280, 2004.
IBM Tivoli Workload Scheduler Job Scheduling Console User's Guide, Feature Level 1.2 (Maintenance Release Oct. 2003), Oct. 2003, IBM Corporation, http://publib.boulder.ibm.com/tividd/td/TWS/SH19-4552-01/en_US/PDF/jsc_user.pdf.
J. Chase, D. Irwin, L. Grit, J. Moore and S. Sprenkle, "Dynamic Virtual Clusters in a Grid Site Manager", In Proceedings of the 12th IEEE International Symposium on High Performance Distributed Computing, pp. 90-100, 2003.
Jackson et al., "Grid Computing: Beyond Enablement", Cluster Resources, Inc., Jan. 21, 2005.
Liu, Simon: “Securing the Clouds: Methodologies and Practices.” Encyclopedia of Cloud Computing (2016): 220. (Year: 2016).
Jann, Joefon et al., "Web Applications and Dynamic Reconfiguration in UNIX Servers", IBM, Thomas J. Watson Research Center, Yorktown Heights, New York 10598, 0-7803-7756-7/03/$17.00. 2003 IEEE. pp. 186-194.
Jansen et al., "SATA-IO to Develop Specification for Mini Interface Connector", Press Release Sep. 21, 2009, Serial ATA, 3 pages.
Jarek Nabrzyski, Jennifer M. Schopf and Jan Weglarz, “Grid Resources Management, State of the Art and Future Trends,” Kluwer Academic Publishers, 2004.
Jiang, Xuxian et al., "SODA: a Service-On-Demand Architecture for Application Service Hosting Utility Platforms", Proceedings of the 12th IEEE International Symposium on High Performance Distributed Computing (HPDC'03), 1082-8907/03 $17.00 © 2003 IEEE.
Kant, Krishna et al., “Server Capacity Planning for Web Traffic Workload”, IEEE Transactions on Knowledge and Data Engineering, vol. 11, No. 5, Sep./Oct. 1999, pp. 731-474.
Kapitza, F. J. Hauck, and H. P. Reiser, “Decentralized, Adaptive Services: The AspectIX Approach for a Flexible and Secure Grid Environment”, In Proceedings of the Grid Services Engineering and Management Conferences (GSEM, Erfurt, Germany, Nov. 2004), pp. 107-118, LNCS 3270, Springer, 2004.
Kavas et al., “Comparing Windows NT, Linux, and QNX as the Basis for Cluster Systems”, Concurrency and Computation Practice & Experience Wiley UK, vol. 13, No. 15, pp. 1303-1332, Dec. 25, 2001.
Koulopoulos, D. et al., “PLEIADES: An Internet-based parallel/distributed system”, Software-Practice and Experience 2002; 32:1035-1049 (DOI: 10.1002/spe.468).
Kuz, Ihor et al., "A Distributed-Object Infrastructure for Corporate Websites", Delft University of Technology and Vrije Universiteit, Delft, The Netherlands, 0-7695-0819-7/00 $10.00 © 2000 IEEE.
Lars C. Wolf et al., "Concepts for Resource Reservation in Advance", Multimedia Tools and Applications [Online] 1997, pp. 255-278, XP009102070, The Netherlands. Retrieved from the Internet: URL: http://www.springerlink.com/content/h25481221mu22451/fulltext.pdf [retrieved on Jun. 23, 2008].
Leinberger, W. et al., “Gang Scheduling for Distributed Memory Systems”, University of Minnesota-Computer Science and Engineering-Technical Report, Feb. 16, 2000, vol. TR 00-014.
Liao, Raymond, et al., “Dynamic Core Provisioning for Quantitative Differentiated Services”, IEEE/ACM Transactions on Networking, vol. 12, No. 3, pp. 429-442, Jun. 2004.
Liu et al. “Design and Evaluation of a Resouce Selection Framework for Grid Applicaitons” High Performance Distributed Computing. 2002. HPDC-11 2002. Proceeding S. 11.sup.th IEEE International Symposium on Jul. 23-26, 2002, Piscataway, NJ, USA IEEE, Jul. 23, 2002, pp. 63-72, XP010601162 ISBN: 978-0-7695-1686-8.
Lowell, David et al., “Devirtualizable Virtual Machines Enabling General, SingleNode, Online Maintenance”, ASPLQS'04, Oct. 9-13, 2004, Boston, Massachusetts, USA, pp. 211-223, Copyright 2004 ACM.
Lu, Chenyang et al., “A Feedback Control Approach for Guaranteeing Relative Delays in Web Servers”, Department of Computer Science, University of Virginia, Charlottesville, VA 22903, 0-7695-1134-1/01 $10.00.2001 IEEE.
Mahon, Rob et al., “Cooperative Design in Grid Services”, The 8th International Conference on Computer Supported Cooperative Work in Design Proceedings. pp. 406-412. IEEE 2003.
McCann, Julie, et al., “Patia: Adaptive Distributed Webserver (A Position Paper)”, Department of Computing, Imperial College London, SW1 2BZ, UK. 2003.
Montez, Carlos et al., “Implementing Quality of Service in Web Servers”, LCMI-Depto de Automacao e Sistemas-Univ. Fed. de Santa Catarina, Caixa Postal 476-88040-900-Florianopolis-SC-Brasil, 1060-9857/02 $17.00. 2002 IEEE.
Naik, S. Sivasubramanian and S. Krishnan, "Adaptive Resource Sharing in a Web Services Environment", In Proceedings of the 5th ACM/IFIP/USENIX International Conference on Middleware (Middleware '04), pp. 311-330, Springer-Verlag New York, Inc., New York, NY, USA, 2004.
Nakrani and C. Tovey, “On Honey Bees and Dynamic Server Allocation in Internet Hosting Centers”, Adaptive Behavior, vol. 12, No. 3-4, pp. 223-240, Dec. 2004.
Nawathe et al., “Implementation of an 8-Core, 64-Thread, Power Efficient SPARC Server on a Chip”, IEEE Journal of Solid-State Circuits, vol. 43, No. 1, Jan. 2008, pp. 6-20.
Pacifici, Giovanni et al., "Performance Management for Cluster Based Web Services", IBM T.J. Watson Research Center, May 13, 2003.
Pande et al., “Design of a Switch for Network on Chip Applications,” May 25-28, 2003 Proceedings of the 2003 International Symposium on Circuits and Systems, vol. 5, pp. V217-V220.
Ranjan, J. Rolia, H. Fu, and E. Knightly, “QoS-driven Server Migration for Internet Data Centers”, In Proceedings of the Tenth International Workshop on Quality of Service (IWQoS 2002), May 2002.
Rashid, Mohammad, et al., “An Analytical Approach to Providing Controllable Differentiated Quality of Service in Web Servers”, IEEE Transactions on Parallel and Distributed Systems, vol. 16, No. 11, pp. 1022-1033, Nov. 2005.
Raunak, Mohammad et al., “Implications of Proxy Caching for Provisioning Networks and Servers”, IEEE Journal on Selected Areas in Communications, vol. 20, No. 7, pp. 1276-1289, Sep. 2002.
Reed, Daniel et al., “The Next Frontier: Interactive and Closed Loop Performance Steering”, Department of Computer Science, University of Illinois, Urbana, Illinois 61801, International Conference on Parallel Processing Workshop, 1996.
Reumann, John et al., “Virtual Services: A New Abstraction for Server Consolidation”, Proceedings of 2000 USENIX Annual Technical Conference, San Diego, California, Jun. 18-23, 2000.
Rolia, S. Singhal, and R. Friedrich, “Adaptive Internet data centers”, In Proceedings of the International Conference on Advances in Infrastructure for Electronic Business, Science, and Education on the Internet (SSGRR '00), Jul. 2000.
Rolia, X. Zhu, and M. Arlitt, “Resource Access Management for a Utility Hosting Enterprise Applications”, In Proceedings of the 8th IFIP/IEEE International Symposium on Integrated Network Management (IM), pp. 549-562, Colorado Springs, Colorado, USA, Mar. 2003.
Roy, Alain, “Advance Reservation API”, University of Wisconsin-Madison, GFD-E.5, Scheduling Working Group, May 23, 2002.
Ryu, Kyung Dong et al., “Resource Policing to Support Fine-Grain Cycle Stealing in Networks of Workstations”, IEEE Transactions on Parallel and Distributed Systems, vol. 15, No. 10, pp. 878-892, Oct. 2004.
Sacks, Lionel et al., “Active Robust Resource Management in Cluster Computing Using Policies”, Journal of Network and Systems Management, vol. 11, No. 3, pp. 329-350, Sep. 2003.
Shaikh, Anees et al., “Implementation of a Service Platform for Online Games”, Network Software and Services, IBM T.J. Watson Research Center, Hawthorne, NY 10532, SIGCOMM'04 Workshops, Aug. 30 & Sep. 3, 2004, Portland, Oregon, USA. Copyright 2004 ACM.
Shen, H. Tang, T. Yang, and L. Chu, "Integrated Resource Management for Cluster-based Internet Services", In Proceedings of the 5th Symposium on Operating Systems Design and Implementation (OSDI '02), pp. 225-238, Dec. 2002.
Shen, L. Chu, and T. Yang, “Supporting Cluster-based Network Services on Functionally Symmetric Software Architecture”, In Proceedings of the ACM/IEEE SC2004 Conference, Nov. 2004.
Si et al., "Language Modeling Framework for Resource Selection and Results Merging", CIKM 2002, Proceedings of the eleventh international conference on Information and Knowledge Management.
Sit, Yiu-Fai et al., “Cyclone: A High-Performance Cluster-Based Web Server with Socket Cloning”, Department of Computer Science and Information Systems, The University of Hong Kong, Cluster Computing vol. 7, issue 1, pp. 21-37, Jul. 2004, Kluwer Academic Publishers.
Sit, Yiu-Fai et al., “Socket Cloning for Cluster-BasedWeb Servers”, Department of Computer Science and Information Systems, The University of Hong Kong, Proceedings of the IEEE International Conference on Cluster Computing, IEEE 2002.
Snell, Quinn et al., “An Enterprise-Based Grid Resource Management System”, Brigham Young University, Provo, Utah 84602, Proceedings of the 11th IEEE International Symposium on High Performance Distributed Computing, 2002.
Soldatos, John, et al., "On the Building Blocks of Quality of Service in Heterogeneous IP Networks", IEEE Communications Surveys, The Electronic Magazine of Original Peer-Reviewed Survey Articles, vol. 7, No. 1, First Quarter 2005.
Stone et al., UNIX Fault Management: A Guide for System Administration, Dec. 1, 1999, ISBN 0-13-026525-X, http://www.informit.com/content/images/013026525X/samplechapter/013026525-.pdf.
Supercluster Research and Development Group, “Maui Administrator's Guide”, Internet citation, 2002.
Tang, Wenting et al., “Load Distribution via Static Scheduling and Client Redirection for Replicated Web Servers”, Department of Computer Science and Engineering, 3115 Engineering Building, Michigan State University, East Lansing, MI 48824-1226, Proceedings of the 2000 International Workshop on Parallel Processing, pp. 127-133, IEEE 2000.
Taylor, M. Surridge, and D. Marvin, “Grid Resources for Industrial Applications”, In Proceedings of the IEEE International Conference on Web Services (ICWS 04), pp. 402-409, San Diego, California, Jul. 2004.
Urgaonkar, Bhuvan, et al., “Share: Managing CPU and Network Bandwidth in Shared Clusters”, IEEE Transactions on Parallel and Distributed Systems, vol. 15, No. 1, pp. 2-17, Jan. 2004.
Venaas, “IPv4 Multicast Address Space Registry,” 2013, http://www.iana.org/assignments/multicast-addresses/multicast-addresses.x-html.
Vidyarthi, A. K. Tripathi, B. K. Sarker, A. Dhawan, and L. T. Yang, "Cluster-Based Multiple Task Allocation in Distributed Computing System", In Proceedings of the 18th International Parallel and Distributed Processing Symposium (IPDPS'04), p. 239, Santa Fe, New Mexico, Apr. 2004.
Villela, P. Pradhan, and D. Rubenstein, "Provisioning Servers in the Application Tier for E-commerce Systems", In Proceedings of the 12th IEEE International Workshop on Quality of Service (IWQoS '04), pp. 57-66, Jun. 2004.
Wang, Z., et al., “Resource Allocation for Elastic Traffic: Architecture and Mechanisms”, Bell Laboratories, Lucent Technologies, Network Operations and Management Symposium, 2000. 2000 IEEE/IFIP, pp. 157-170. Apr. 2000.
Wesley et al., "Task Allocation and Precedence Relations for Distributed Real-Time Systems", IEEE Transactions on Computers, vol. C-36, No. 6, pp. 667-679, Jun. 1987.
Wolf et al. “Concepts for Resource Reservation in Advance” Multimedia Tools and Applications, 1997.
Workshop on Performance and Architecture of Web Servers (PAWS-2000) Jun. 17-18, 2000, Santa Clara, CA (Held in conjunction with SIGMETRICS-2000).
Xu, Jun, et al., “Sustaining Availability of Web Services under Distributed Denial of Service Attacks”, IEEE Transactions on Computers, vol. 52, No. 2, pp. 195-208, Feb. 2003.
Xu, Zhiwei et al., “Cluster and Grid Superservers: The Dawning Experiences in China”, Institute of Computing Technology, Chinese Academy of Sciences, P.O. Box 2704, Beijing 100080, China. Proceedings of the 2001 IEEE International Conference on Cluster Computing. IEEE 2002.
Yang, Chu-Sing, et al., "Building an Adaptable, Fault Tolerant, and Highly Manageable Web Server on Clusters of Non-dedicated Workstations", Department of Computer Science and Engineering, National Sun Yat-Sen University, Kaohsiung, Taiwan, R.O.C., 2000.
Zeng, Daniel et al., "Efficient Web Content Delivery Using Proxy Caching Techniques", IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 34, No. 3, pp. 270-280, Aug. 2004.
Zhang, Qian et al., “Resource Allocation for Multimedia Streaming Over the Internet”, IEEE Transactions on Multimedia, vol. 3, No. 3, pp. 339-355, Sep. 2001.
Office Action on U.S. Appl. No. 14/691,120, dated Sep. 8, 2022.
Office Action on U.S. Appl. No. 17/088,954, dated Sep. 13, 2022.
Notice of Allowance on U.S. Appl. No. 17/201,245 dated Sep. 14, 2022.
Office Action on U.S. Appl. No. 17/697,235 dated Sep. 20, 2022.
Notice of Allowance on U.S. Appl. No. 17/700,808, dated Sep. 14, 2022.
Related Publications (1)
Number Date Country
20200382585 A1 Dec 2020 US
Provisional Applications (1)
Number Date Country
60974834 Sep 2007 US
Continuations (4)
Number Date Country
Parent 15463542 Mar 2017 US
Child 16913745 US
Parent 13770798 Feb 2013 US
Child 15463542 US
Parent 13243125 Sep 2011 US
Child 13770798 US
Parent 12236396 Sep 2008 US
Child 13243125 US