On-demand compute environment

Information

  • Patent Grant
  • Patent Number
    12,120,040
  • Date Filed
    Friday, April 15, 2022
  • Date Issued
    Tuesday, October 15, 2024
Abstract
An on-demand compute environment comprises a plurality of nodes within an on-demand compute environment available for provisioning and a slave management module operating on a dedicated node within the on-demand compute environment, wherein upon instructions from a master management module at a local compute environment, the slave management module modifies at least one node of the plurality of nodes.
Description
RELATED APPLICATION

The present application is related to U.S. application Ser. No. 11/276,852, which is incorporated herein by reference.


COPYRIGHT NOTICE

A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the United States Patent & Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.


BACKGROUND OF THE INVENTION
1. Field of Invention

The present invention relates to a resource management system and more specifically to a system and method of providing access to on-demand compute resources.


2. Introduction

Managers of clusters desire maximum return on investment, often meaning high system utilization and the ability to deliver various qualities of service to various users and groups. A cluster is typically defined as a parallel computer that is constructed of commodity components and runs commodity software as its system software. A cluster contains nodes, each containing one or more processors and memory that is shared by all of the processors in the respective node, as well as additional peripheral devices such as storage disks, connected by a network that allows data to move between nodes. A cluster is one example of a compute environment. Other examples include a grid, which is loosely defined as a group of clusters, and a computer farm, which is another organization of computers for processing.


Often a set of resources organized in a cluster or a grid may have jobs submitted to it that require more capability than the set of resources has available. In this regard, there is a need in the art to be able to easily, efficiently and on demand utilize new resources or different resources to handle a job. The concept of “on-demand” compute resources has been developing in the high performance computing community recently. An on-demand computing environment enables companies to procure compute power for average demand and then contract remote processing power to help in peak loads or to offload all their compute needs to a remote facility. Several reference books having background material related to on-demand computing or utility computing include Mike Ault and Madhu Tumma, Oracle 10g Grid & Real Application Clusters, Rampant TechPress, 2004, and Guy Bunker and Darren Thomson, Delivering Utility Computing: Business-driven IT Optimization, John Wiley & Sons Ltd., 2006.


In Bunker and Thomson, section 3.3 on page 32 is entitled “Connectivity: The Great Enabler,” wherein they discuss how the interconnecting of computers will dramatically increase their usefulness. This disclosure addresses that issue. There exists in the art a need for improved solutions to enable communication and connectivity with an on-demand high performance computing center.


SUMMARY OF THE INVENTION

Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The features and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth herein.


Various embodiments of the invention include, but are not limited to, methods, systems, computing devices, clusters, grids and computer-readable media that perform the processes and steps described herein.


An on-demand compute environment comprises a plurality of nodes within an on-demand compute environment available for provisioning and a slave management module operating on a dedicated node within the on-demand compute environment, wherein upon instructions from a master management module at a local compute environment, the slave management module modifies at least one node of the plurality of nodes. Methods and computer readable media are also disclosed for managing an on-demand compute environment.


A benefit of the approaches disclosed herein is a reduction in unnecessary costs of building infrastructure to accommodate peak demand. Thus, customers pay for extra processing power only during those times when they need it.





BRIEF DESCRIPTION OF THE DRAWINGS

In order to describe the manner in which the above-recited and other advantages and features of the invention can be obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended documents and drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings.



FIG. 1 illustrates the basic arrangement of the present disclosure;



FIG. 2 illustrates basic hardware components;



FIG. 3 illustrates a method aspect of the disclosure;



FIG. 4 illustrates a method aspect of the disclosure;



FIG. 5 illustrates another method aspect of the disclosure;



FIG. 6 illustrates another method aspect of the disclosure;



FIG. 7 illustrates the context of the invention by showing a prior art organization of clusters and a grid;



FIG. 8 illustrates a prior art arrangement of clusters within a company or organization;



FIG. 9 illustrates an embodiment of the present invention; and



FIG. 10 illustrates a method embodiment of the invention.





DETAILED DESCRIPTION OF THE INVENTION

Various embodiments are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the invention.


This disclosure relates to the access and management of on-demand or utility computing resources at a hosting center. FIG. 1 illustrates the basic arrangement and interaction between a local compute environment 104 and an on-demand hosting center 102. The local compute environment may comprise a cluster, a grid, or any other variation on these types of multiple node and commonly managed environments. The on-demand hosting center or on-demand computing environment 102 comprises a plurality of nodes that are available for provisioning and preferably has a dedicated node containing a hosting master 128 which may comprise a slave management module 106 and/or at least one other module such as the identity manager 112 and node provisioner 118.


Products such as Moab provide an essential service for optimization of a local compute environment. Such a product provides an analysis of how and when local resources, such as software and hardware devices, are being used, for the purposes of charge-back, planning, auditing, troubleshooting and reporting internally or externally. Such optimization enables the local environment to be tuned to get the most out of the resources in the local compute environment. However, there are times when more resources are needed.


Typically a hosting center 102 will have the following attributes. It allows an organization to provide resources or services to customers where the resources or services are custom-tailored to the needs of the customer. Supporting true utility computing usually requires creating a hosting center 102 with one or more capabilities as follows: secure remote access, guaranteed resource availability at a fixed time or series of times, integrated auditing/accounting/billing services, tiered service level (QoS/SLA) based resource access, dynamic compute node provisioning, full environment management over compute, network, storage, and application/service based resources, intelligent workload optimization, high availability, failure recovery, and automated re-allocation.


A management module 108 such as, by way of example, Moab™ (which may also refer to any Moab product such as the Moab Workload Manager®, Moab Grid Monitor®, etc. from Cluster Resources, Inc.) enables utility computing by allowing compute resources to be reserved, allocated, and dynamically provisioned to meet the needs of internal or external workload. Thus, at peak workload times, the local compute environment does not need to be built out with peak usage in mind. As periodic peak resources are required, triggers can cause overflow to the on-demand environment and thus save money for the customer. The module 108 is able to respond to either manual or automatically generated requests and can guarantee resource availability subject to existing service level agreement (SLA) or quality of service (QoS) based arrangements. As an example, FIG. 1 shows a user submitting a job or a query 110 to the cluster or local environment 104. The local environment will typically be a cluster or a grid with local workload. Jobs may be submitted which have explicit resource requirements. The local environment 104 will have various attributes such as operating systems, architecture, network types, applications, software, bandwidth capabilities, etc., which are expected by the job implicitly. In other words, jobs will typically expect that the local environment will have certain attributes that will enable it to consume resources in an expected way.


Other software is shown by way of example, including a distributed resource manager such as Torque 128 and various nodes 130, 132 and 134. The management modules (both master and/or slave) may interact and operate with any resource manager, such as Torque, LSF, SGE, PBS and LoadLeveler, and are agnostic in this regard. Those of skill in the art will recognize these different distributed resource manager software packages.


A hosting master or hosting management module 106 may also be an instance of a Moab software product with hosting center capabilities to enable an organization to dynamically control network, compute, application, and storage resources and to dynamically provision operating systems, security, credentials, and other aspects of a complete end-to-end compute environment. Module 106 is responsible for knowing all the policies, guarantees and promises and also for managing the provisioning of resources within the utility computing space 102. In one sense, module 106 may be referred to as the “master” module in that it couples to and needs to know all of the information associated with both the utility environment and the local environment. However, in another sense it may be referred to as the slave module or provisioning broker wherein it takes instructions from the customer management module 108 for provisioning resources and builds whatever environment is requested in the on-demand center 102. A slave module would have none of its own local policies but rather follows all requests from another management module. For example, when module 106 is the slave module, then a master module 108 would submit automated or manual (via an administrator) requests that the slave module 106 simply follows to manage the build out of the requested environment. Thus, for both IT and end users, a single easily usable interface can increase efficiency and reduce costs, including management costs, and improve investments in the local customer environment. The interface to the local environment, which also has the access to the on-demand environment, may be a web interface or access portal as well. Only restrictions of feasibility may exist. The customer module 108 would have rights and ownership of all resources. The allocated resources would not be shared but would be dedicated to the requester. As the slave module 106 follows all directions from the master module 108, any policy restrictions will preferably occur on the master module 108 in the local environment.


The modules also provide data management services that simplify adding resources from across a local environment. For example, if the local environment comprises a wide area network, the management module 108 provides a security model that ensures, when the environment dictates, that administrators can rely on the system even when entrusted resources at a certain level have been added to the local environment or the on-demand environment. In addition, the management modules comply with n-tier web services based architectures, and therefore scalability and reporting are inherent parts of the system. A system operating according to the principles set forth herein also has the ability to track, record and archive information about jobs or other processes that have been run on the system.


A hosting center 102 provides scheduled dedicated resources to customers for various purposes and typically has a number of key attributes: secure remote access, guaranteed resource availability at a fixed time or series of times, tightly integrated auditing/accounting services, varying quality of service levels providing privileged access to a set of users, and node image management allowing the hosting center to restore an exact customer-specific image before enabling access. Resources available to a module 106, which may also be referred to as a provider resource broker, will have both rigid (architecture, RAM, local disk space, etc.) and flexible (OS, queues, installed applications, etc.) attributes. The provider or on-demand resource broker 106 can typically provision (dynamically modify) flexible attributes but not rigid attributes. The provider broker 106 may possess multiple resources of different types, each with its own rigid attributes (i.e., single processor and dual processor nodes, Intel nodes, AMD nodes, nodes with 512 MB RAM, nodes with 1 GB RAM, etc.).


This combination of attributes presents unique constraints on a management system. We describe herein how the management modules 108 and 106 are able to effectively manage, modify and provision resources in this environment and provide a full array of services on top of these resources.


Utility-based computing technology allows a hosting center 102 to quickly harness existing compute resources, dynamically co-allocate the resources, and automatically provision them into a seamless virtual cluster. The management modules' advanced reservation and policy management tools provide support for the establishment of extensive service level agreements, automated billing, and instant chart and report creation.


Also shown in FIG. 1 are several other components such as an identity manager 112 and a node provisioner 118 as part of the hosting center 102. The hosting master 128 may include an identity manager interface 112 that may coordinate global and local information regarding users, groups, accounts, and classes associated with compute resources. The identity manager interface 112 may also allow the management module 106 to automatically and dynamically create and modify user accounts and credential attributes according to current workload needs. The hosting master 128 allows sites extensive flexibility when it comes to defining credential access, attributes, and relationships. In most cases, use of the USERCFG, GROUPCFG, ACCOUNTCFG, CLASSCFG, and QOSCFG parameters is adequate to specify the needed configuration. However, in certain cases, such as the following, this approach may not be ideal or even adequate: environments with very large user sets; environments with very dynamic credential configurations in terms of fairshare targets, priorities, service access constraints, and credential relationships; grid environments with external credential mapping information services; and enterprise environments with fairness policies based on multi-cluster usage.


The modules address these and similar issues through the use of the identity manager 112. The identity manager 112 allows the module to exchange information with an external identity management service. As with the module's resource manager interfaces, this service can be a full commercial package designed for this purpose, or something far simpler by which the module obtains the needed information from a web service, text file, or database.
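
As an illustration only (the disclosure does not prescribe an interface, and the following names and file format are hypothetical), a minimal Python sketch of the simpler end of this spectrum, in which the module obtains credential attributes from a plain text file:

```python
# Hypothetical identity-manager exchange: pull credential attributes
# (e.g., fairshare targets, priorities) from an external source -- here
# a text file with one "name key=value key=value ..." entry per line.
def load_credentials(path):
    creds = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments
            name, *attrs = line.split()
            creds[name] = dict(a.split("=", 1) for a in attrs)
    return creds

# Example file contents:
#   alice FSTARGET=10.0 PRIORITY=100
#   bob   FSTARGET=5.0  PRIORITY=50
```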


Next, attention is turned to the node provisioner 118. As an example of its operation, the node provisioner 118 can enable the allocation of resources in the hosting center 102 for workload from a local compute environment 104. The customer management module 108 will communicate with the hosting management module 106 to begin the provisioning process. In one aspect, the provisioning module 118 may generate another instance of necessary management software 120 and 122 which will be created in the hosting center environment, as well as compute nodes 124 and 126 to be consumed by a submitted job. The new management module 120 is created on the fly, may be associated with a specific request and will preferably be operative on a dedicated node. If the new management module 120 is associated with a specific request or job, then as the job consumes the resources associated with the provisioned compute nodes 124, 126 and the job becomes complete, the system would remove the management module 120 since it was only created for the specific request. The new management module 120 may connect to other modules such as module 108. The module 120 does not necessarily have to be created in advance but may be generated on the fly as necessary to assist in communication and provisioning and use of the resources in the utility environment 102. For example, the module 106 may go ahead and allocate nodes within the utility computing environment 102 and connect these nodes directly to module 108, but in that case some batch ability may be lost as a tradeoff. The hosting master 128, having the management module 106, identity manager 112 and node provisioner 118, preferably is co-located with the utility computing environment but may be distributed. The management module on the local environment 108 may then communicate directly with the created management module 120 in the hosting center to manage the transfer of workload and consumption of on-demand center resources.



FIG. 6 provides an illustration of a method aspect of utilizing the new management module. As shown, this method comprises receiving an instruction at a slave management module associated with an on-demand computing environment from a master management module associated with a local computing environment (602) and based on the instruction, creating a new management module on a node in the on-demand computing environment and provisioning at least one compute node in the on-demand computing environment, wherein the new management module manages the at least one compute node and communicates with the master management module (604).
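
A minimal Python sketch of this flow (an assumed structure for illustration, not the source code referenced elsewhere in this disclosure):

```python
# Sketch of the FIG. 6 method: the slave module receives an instruction
# from the master (602), dedicates a node to a new management module and
# provisions compute nodes for it to manage (604).
class SlaveModule:
    def __init__(self, free_nodes):
        self.free_nodes = list(free_nodes)

    def handle_instruction(self, instruction):
        mgr_node = self.free_nodes.pop(0)            # dedicated node (602)
        compute = [self.free_nodes.pop(0)
                   for _ in range(instruction["node_count"])]
        return {"host": mgr_node,                    # new module (604)
                "manages": compute,
                "reports_to": instruction["master"]}

slave = SlaveModule(["n1", "n2", "n3", "n4"])
print(slave.handle_instruction({"node_count": 2, "master": "module-108"}))
```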


There are two supported primary usage models, a manual and an automatic model. In manual mode, utilizing the hosted resources can be as easy as going to a web site, specifying what is needed, selecting one of the available options, and logging in when the virtual cluster is activated. In automatic mode, it is even simpler. To utilize hosted resources, the user simply submits jobs to the local cluster. When the local cluster can no longer provide an adequate level of service, it automatically contacts the utility hosting center, allocates additional nodes, and runs the jobs. The end user is never aware that the hosting center even exists. He merely notices that the cluster is now bigger and that his jobs are being run more quickly.


When a request for additional resources is made from the local environment, either automatically or manually, a client module or client resource broker (which may be, for example, an instance of a management module 108 or 120) will contact the provider resource broker 106 to request resources. It will send information regarding rigid attributes of needed resources as well as quantity of resources needed, request duration, and request timeframe (i.e., start time, feasible times of day, etc.). It will also send flexible attributes which must be provisioned on the nodes 124, 126. Both flexible and rigid resource attributes can come from explicit workload-specified requirements or from implicit requirements associated with the local or default compute resources. The provider resource broker 106 must indicate if it is possible to locate requested resources within the specified timeframe for sufficient duration and of the sufficient quantity. This task includes matching rigid resource attributes and identifying one or more provisioning steps required to put in place all flexible attributes.
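
The request described above can be pictured as a simple record; the field names in the following Python sketch are illustrative rather than taken from the disclosure:

```python
from dataclasses import dataclass, field

# Illustrative shape of a request from the client broker to the provider
# resource broker 106: rigid attributes must match existing hardware,
# while flexible attributes can be provisioned onto the nodes.
@dataclass
class ResourceRequest:
    quantity: int              # how many nodes
    duration_hours: float      # how long they are needed
    earliest_start: str        # requested timeframe
    rigid: dict = field(default_factory=dict)
    flexible: dict = field(default_factory=dict)

req = ResourceRequest(quantity=20, duration_hours=36,
                      earliest_start="09:00",
                      rigid={"arch": "x86_64", "ram_mb": 512},
                      flexible={"os": "linux"})
print(req)
```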


When provider resources are identified and selected, the client resource broker 108 or 120 is responsible for seamlessly integrating these resources in with other local resources. This includes reporting resource quantity, state, configuration and load. This further includes automatically enabling a trusted connection to the allocated resources which can perform last mile customization, data staging, and job staging. Commands are provided to create this connection to the provider resource broker 106, query available resources, allocate new resources, expand existing allocations, reduce existing allocations, and release all allocated resources.


In most cases, the end goal of a hosting center 102 is to make available to a customer a complete, secure, packaged environment which allows the customer to accomplish one or more specific tasks. This packaged environment may be called a virtual cluster and may consist of the compute, network, data, software, and other resources required by the customer. For successful operation, these resources must be brought together and provisioned or configured so as to provide a seamless environment which allows the customers to quickly and easily accomplish their desired tasks.


Another aspect of the invention is the cluster interface. The desired operational model for many environments is providing the customer with a fully automated self-service web interface. Once a customer has registered with the host company, access to a hosting center portal is enabled. Through this interface, customers describe their workload requirements, time constraints, and other key pieces of information. The interface communicates with the back-end services to determine when, where, and how the needed virtual cluster can be created and reports back a number of options to the user. The user selects the desired option and can monitor the status of that virtual cluster via web and email updates. When the virtual cluster is ready, web and email notification is provided including access information. The customer logs in and begins working.


The hosting center 102 will have related policies and service level agreements. Enabling access in a first-come, first-served model provides real benefits, but in many cases customers require reliable resource access with guaranteed responsiveness. These requirements may be any performance, resource or time based rule, such as in the following examples: I need my virtual cluster within 24 hours of asking; I want a virtual cluster available from 2 to 4 PM every Monday, Wednesday, and Friday; I want to always have a virtual cluster available and automatically grow/shrink it based on current load; etc.


Quality of service or service level agreement policies allow customers to convert the virtual cluster resources to a strategic part of their business operations, greatly increasing the value of these resources. Behind the scenes, a hosting center 102 consists of resource managers, reservations, triggers, and policies. Once configured, administration of such a system involves addressing reported resource failures (i.e., disk failures, network outages, etc.) and monitoring delivered performance to determine if customer satisfaction requires tuning policies or adding resources.


The modules associated with the local environment 104 and the hosting center environment 102 may be referred to as a master module 108 and a slave module 106. This terminology relates to the functionality wherein the hosting center 102 receives requests for workload and provisioning of resources from the module 108 and essentially follows those requests. In this regard, the module 108 may be referred to as a client resource broker 108 which will contact a provider resource broker 106 (such as an On-Demand version of Moab).


The management module 108 may also be, by way of example, a Moab Workload Manager® operating in a master mode. The management module 108 communicates with the compute environment to identify resources, reserve resources for consumption by jobs, provision resources and in general manage the utilization of all compute resources within a compute environment. As can be appreciated by one of skill in the art, these modules may be programmed in any programming language, such as C or C++; the choice of language is immaterial to the invention.


In a typical operation, a user or a group submits a job to a local compute environment 104 via an interface to the management module 108. An example of a job is a submission of a computer program that will perform a weather analysis for a television station that requires the consumption of a large amount of compute resources. The module 108 and/or an optional scheduler 128 such as TORQUE, as those of skill in the art understand, manages the reservation of resources and the consumption of resources within the environment 104 in an efficient manner that complies with policies and restrictions. The use of a resource manager like TORQUE 128 is optional and not specifically required as part of the disclosure.


A user or a group of users will typically enter into a service level agreement (SLA) which will define the policies and guarantees for resources on the local environment 104. For example, the SLA may provide that the user is guaranteed 10 processors and 50 GB of hard drive space within 5 hours of a submission of a job request. Associated with any user may be many parameters related to permissions, guarantees, priority level, time frames, expansion factors, and so forth. The expansion factor is a measure of how long a job takes to run on a local environment while sharing the environment with other jobs versus how long it would take if the cluster were dedicated to the job only. It therefore relates to the impact of other jobs on the performance of the particular job. Once a job is submitted, it will sit in a job queue waiting to be inserted into the cluster 104 to consume those resources. The management software will continuously analyze the environment 104 and make reservations of resources to seek to optimize the consumption of resources within the environment 104. The optimization process must take into account all the SLAs of users, other policies of the environment 104 and other factors.
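
One conventional formulation of the expansion factor (the disclosure describes the concept but does not fix a formula, so the following is an assumption) is the ratio of actual turnaround time to dedicated run time:

```python
def expansion_factor(queue_time, run_time):
    """Turnaround time (wait + run) divided by dedicated run time.

    A value of 1.0 means the job ran as if the cluster were dedicated
    to it; larger values quantify the impact of competing jobs.
    """
    return (queue_time + run_time) / run_time

print(expansion_factor(queue_time=3.0, run_time=1.0))  # 4.0
```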


As introduced above, this disclosure provides improvements in the connectivity between a local environment 104 and an on-demand center 102. The challenges that exist in accomplishing such a connection include managing all of the capabilities of the various environments, their various policies, current workload, workload queued up in the job queues and so forth.


As a general statement, disclosed herein is a method and system for customizing an on-demand compute environment based on both implicit and explicit job or request requirements. For example, explicit requirements may be requirements specified with a job, such as a specific number of nodes or processors and a specific amount of memory. Many other attributes or requirements may be explicitly set forth with a job submission, such as requirements set forth in an SLA for that user. Implicit requirements may relate to attributes of the compute environment that the job is expecting because of where it is submitted. For example, the local compute environment 104 may have particular attributes, such as, for example, a certain bandwidth for transmission, memory, software licenses, processors and processor speeds, hard drive memory space, and so forth. Any parameter that may be an attribute of the local environment in which the job is submitted may relate to an implicit requirement. As a local environment 104 communicates with an on-demand environment 102 for the transfer of workload, the implicit and explicit requirements are seamlessly imported into the on-demand environment 102 such that the user's job can efficiently consume resources in the on-demand environment 102 because of the customization of that environment for the job. This seamless communication occurs between a master module 108 and a slave module 106 in the respective environments. As shown in FIG. 1, a new management module 120 may also be created for a specific process or job and also communicate with a master module 108 to manage the provisioning, consumption and clean up of compute nodes 124, 126 in the on-demand environment 102.


Part of the seamless communication process includes the analysis and provisioning of resources taking into account the need to identify resources such as hard drive space and bandwidth capabilities to actually perform the transfer of the workload. For example, suppose it is determined that a job in the queue has an SLA that guarantees resources within 5 hours of the request, and based on the analysis by the management module of the local environment the resources cannot be available for 8 hours, and such a scenario is a triggering event. Then the automatic and seamless connectivity with the on-demand center 102 will include an analysis of how long it will take to provision an environment in the on-demand center that matches or is appropriate for the job to run. That process, of provisioning the environment in the on-demand center 102 and transferring workload from the local environment 104 to the on-demand center 102, may take, for example, 1 hour. In that case, the on-demand center will begin the provisioning process one hour before the 5 hour required time such that the provisioning of the environment and transfer of data can occur to meet the SLA for that user. This provisioning process may involve reserving resources within the on-demand center 102 from the master module 108 as will be discussed more below.
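
The timing logic of this example reduces to simple arithmetic, sketched below in Python with the numbers from the scenario above:

```python
# SLA guarantees resources within 5 hours; provisioning the on-demand
# environment and transferring workload takes about 1 hour, so the
# transfer must begin no later than hour 4.
sla_deadline_hours = 5.0
provisioning_overhead_hours = 1.0

latest_start = sla_deadline_hours - provisioning_overhead_hours
print(f"begin on-demand provisioning by hour {latest_start}")  # 4.0
```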



FIG. 3 illustrates an embodiment in this regard, wherein a method comprises detecting an event in a local compute environment (302). The event may be a resource need event such as a current resource need or a predicted resource need. Based on the detected event, a module automatically establishes communication with an on-demand compute environment (304). This may also involve dynamically negotiating and establishing a grid/peer relationship based on the resource need event. A module provisions resources within the on-demand compute environment (306) and workload is transferred from the local environment transparently to the on-demand compute environment (308). Preferably, local information is imported to the on-demand environment and on-demand information is communicated to the local compute environment, although only local environment information may need to be transmitted to the on-demand environment. Typically, at least local environment information is communicated, and job information may also be communicated to the on-demand environment. Examples of local environment information may be at least one of class information, configuration policy information and other information. Information from the on-demand center may relate to at least one of resources, availability of resources, time frames associated with resources and any other kind of data that informs the local environment of the opportunity and availability of the on-demand resources. The communication and management of the data between the master module or client module in the local environment and the slave module is preferably transparent and unknown to the user who submitted the workload to the local environment. However, one aspect may provide for notice to the user that on-demand resources are being tapped and of the progress and availability of those resources.
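
A compressed Python sketch of the FIG. 3 sequence, using toy stand-ins for the two environments (the threshold value and function names are assumptions for illustration):

```python
# Sketch of the FIG. 3 method with a toy resource-need event.
def detect_event(queued_jobs, busy_fraction, threshold=0.95):  # step 302
    return busy_fraction >= threshold and queued_jobs > 0

def handle_overflow(queued_jobs, busy_fraction):
    if not detect_event(queued_jobs, busy_fraction):
        return "handled locally"
    # step 304: establish communication with the on-demand center
    # step 306: provision matching resources there
    # step 308: transfer the overflow workload transparently
    return f"transferred {queued_jobs} queued job(s) to on-demand center"

print(handle_overflow(queued_jobs=3, busy_fraction=0.97))
```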


Example triggering events may be related to at least one of a resource threshold, a service threshold, workload and a policy threshold, or other factors. Furthermore, the event may be based on one of all workload associated with the local compute environment, a subset of workload associated with the compute environment or any other subset of a given parameter, or the event may be external to the compute environment, such as a natural disaster, a power outage or a predicted event.


The disclosure below provides for various aspects of this connectivity process between a local environment 104 and an on-demand center 102. The CD submitted with the priority Provisional Patent Application includes source code that carries out this functionality. The various aspects will include an automatic triggering approach to transfer workload from the local environment 104 to the on-demand center 102, a manual “one-click” method of integrating the on-demand compute environment 102 with the local environment 104 and a concept related to reserving resources in the on-demand compute environment 102 from the local compute environment 104.


The first aspect relates to enabling the automatic detection of a triggering event such as passing a resource threshold or service threshold within the compute environment 104. This process may be dynamic and involve identifying resources in a hosting center, allocating resources and releasing them after consumption. These processes may be automated based on a number of factors, such as: workload and credential performance thresholds; a job's current time waiting in the queue for execution (queue time) (i.e., allocate if a job has waited more than 20 minutes to receive resources); a job's current expansion factor, which relates to a comparison of the effect that other jobs consuming local resources have on the particular job versus a value if the job were the only job consuming resources in the local environment; a job's current execution load (i.e., allocate if the load on the job's allocated resources exceeds 0.9); quantity of backlog workload (i.e., allocate if more than 50,000 proc-hours of workload exist); a job's average response time in handling transactions (i.e., allocate if the job reports it is taking more than 0.5 seconds to process a transaction); the number of failures the workload has experienced (i.e., allocate if a job cannot start after 10 attempts); overall system utilization (i.e., allocate if more than 80% of the machine is utilized); and so forth. This is an example list, and those of skill in the art will recognize other factors that may be identified as triggering events.
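
To make the listed factors concrete, the following Python sketch evaluates such triggers using the example threshold values given above (the structure is hypothetical; the actual logic resides in the source code on the CD and is not reproduced here):

```python
# Illustrative trigger evaluation using the example thresholds above.
def should_allocate_on_demand(job, system):
    return any([
        job["queue_minutes"] > 20,            # waited too long in queue
        job["load"] > 0.9,                    # execution load too high
        job["avg_response_s"] > 0.5,          # slow transaction handling
        job["failed_starts"] >= 10,           # repeated start failures
        system["backlog_proc_hours"] > 50_000,
        system["utilization"] > 0.80,
    ])

job = {"queue_minutes": 25, "load": 0.4,
       "avg_response_s": 0.1, "failed_starts": 0}
system = {"backlog_proc_hours": 12_000, "utilization": 0.62}
print(should_allocate_on_demand(job, system))  # True: queue time exceeded
```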


Other triggering events or thresholds may comprise a predicted workload performance threshold. This would relate to the same listing of events above but be applied in the context of predictions made by a management module or customer resource broker.


Another listing of example events that may trigger communication with the hosting center include, but are not limited to events such as resource failures including compute nodes, network, storage, license (i.e., including expired licenses); service failures including DNS, information services, web services, database services, security services; external event detected (i.e., power outage or national emergency reported) and so forth. These triggering events or thresholds may be applied to allocate initial resources, expand allocated resources, reduce allocated resources and release all allocated resources. Thus, while the primary discussion herein relates to an initial allocation of resources, these triggering events may cause any number of resource-related actions. Events and thresholds may also be associated with any subset of jobs or nodes (i.e., allocate only if threshold backlog is exceeded on high priority jobs only or jobs from a certain user or project or allocate resources only if certain service nodes fail or certain licenses become unavailable.)


For example, if a threshold of 95% processor consumption is met because 951 processors out of the 1000 processors in the environment are being utilized, then the system (which may or may not include the management module 108) automatically establishes a connection with the on-demand environment 102. Other types of thresholds may also trigger the automatic connection, such as a received service level threshold, a predicted service level threshold, a policy-based threshold, or a threshold or event associated with environment changes such as a resource failure (compute node, network, storage device, or service failures).


As an example of a service level threshold, an SLA may specify a certain service level requirement for a customer, such as resources available within 5 hours. If an actual threshold is not met, i.e., a job has now waited for 5 hours without being able to consume resources, or where a threshold is predicted not to be met, these can be triggering events for communication with the on-demand center. The module 108 then communicates with the slave manager 106 to provision or customize the on-demand resources 102. The two environments exchange the information necessary to create reservations of resources, provision, handle licensing, and so forth, to enable the automatic transfer of jobs or other workload from the local environment 104 to the on-demand environment 102. For a particular task or job, all or part of the workload may be transferred to the on-demand center. Nothing about a user job 110 submitted to a management module 108 changes. The on-demand environment 102 then instantly begins running the job without any change in the job and perhaps even without any knowledge on the part of the submitter.


There are several aspects of the disclosure that are shown in the source code on the CD. One is the ability to exchange information. For example, for the automatic transfer of workload to the on-demand center, the system will import remote classes, configuration policy information and other information from the local scheduler 108 to the slave scheduler 106 for use by the on-demand environment 102. Information regarding the on-demand compute environment, resources, policies and so forth is also communicated from the slave module 106 to the local module 108.


The triggering event for the automatic establishment of communication with the on-demand center and a transfer of workload to the on-demand center may be a threshold that has been passed or an event that occurred. Threshold values may comprise an achieved service level, a predicted service level and so forth. For example, a job sitting in a queue for a certain amount of time may trigger a process to contact the on-demand center and transfer that job to the on-demand center to run. If a queue has a certain number of jobs that have not been submitted to the compute environment for processing, if a job has an expansion factor that has a certain value, or if a job has failed to start on a local cluster one or more times for whatever reason, then these types of events may trigger communication with the on-demand center. These are examples of threshold values that, when passed, will trigger communication with the on-demand environment.


Example events that also may trigger the communication with the on-demand environment include, but are not limited to, events such as the failure of nodes within the environment, storage failure, service failure, license expiration, management software failure, resource manager failure, etc. In other words, any event that may be related to any resource or the management of any resource in the compute environment may be a qualifying event that may trigger workload transfer to an on-demand center. In the license expiration context, if the license in a local environment for a certain software package is going to expire such that a job cannot properly consume resources and utilize the software package, the master module 108 can communicate with the slave module 106 to determine if the on-demand center has the requisite license for that software. If so, then the provisioning of the resources in the on-demand center can be negotiated and the workload transferred wherein it can consume resources under an appropriate legal and licensed framework.


The basis for the threshold or the event that triggers the communication, provisioning and transfer of workload to the on-demand center may be all jobs/workload associated with the local compute environment or a subset of jobs/workload associated with the local compute environment. In other words, the analysis of when an event and/or threshold should trigger the transfer of workload may be based on a subset of jobs. For example, the analysis may be based on all jobs submitted from a particular person or group, or may be based on a certain type of job, such as the subset of jobs that will require more than 5 hours of processing time to run. Any parameter may be defined for the subset of jobs used as the basis of the triggering event.


The interaction and communication between the local compute environment and the on-demand compute environment enables an improved process for dynamically growing and shrinking provisioned resource space based on load. This load balancing between the on-demand center and the local environment may be based on thresholds, events, all workload associated with the local environment or a subset of the local environment workload.


Another aspect of the disclosure is the ability to automate data management between two sites. This involves the handling of data management between the on-demand environment 102 and the local environment 104 in a manner that is transparent to the user. Typically, environmental information will always be communicated between the local environment 104 and the on-demand environment 102. In some cases, job information may not need to be communicated because a job may be gathering its own information, say from the Internet, or for other reasons. Therefore, in preparing to provision resources in the on-demand environment, all information or a subset of information is communicated to enable the process. Yet another aspect of the invention relates to a simple and easy mechanism to enable on-demand center integration. This aspect of the invention involves the ability of the user or an administrator to command, in a single action like the click of a button or other one-click action, the integration of on-demand center information and capability into the local resource manager 108.


This feature is illustrated in FIG. 4. A module, preferably associated with the local compute environment, receives a request from an administrator to integrate an on-demand compute environment into the local compute environment (402). The creation of a reservation or of a provisioning of resources in the on-demand environment may be from a request from an administrator or a local or remote automated broker. In this regard, the various modules will automatically integrate local compute environment information with on-demand compute environment information to make available resources from the on-demand compute environment to requesters of resources in the local compute environment (404). Integration of the on-demand compute environment may provide for integrating: resource configuration, state information, resource utilization reporting, job submission information, job management information, resource management, policy controls including priority, resource ownership, queue configuration, job accounting and tracking, and resource accounting and tracking. Thus, the detailed analysis and tracking of jobs and resources may be communicated back from the on-demand center to the local compute environment interface. Furthermore, this integration process may also include a step of automatically creating at least one of a data migration interface and a job migration interface.
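
A toy Python sketch of the integration step (404), in which on-demand resource information is merged into the local view so that local requesters see a single pool (the data layout is assumed for illustration):

```python
# Step 404, sketched: after integration, the local view of resources
# includes the on-demand center's nodes, tagged by origin.
def integrate(local, remote):
    merged = dict(local)
    for name, info in remote.items():
        merged[name] = {**info, "origin": "on-demand"}
    return merged

local_resources = {"n1": {"procs": 2, "origin": "local"},
                   "n2": {"procs": 4, "origin": "local"}}
on_demand_resources = {"od1": {"procs": 8}, "od2": {"procs": 8}}
print(integrate(local_resources, on_demand_resources))
```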


Another aspect provides for a method of integrating an on-demand compute environment into a local compute environment. The method comprises receiving a request, from an administrator or via an automated command triggered by an event or administrator action, to integrate an on-demand compute environment into a local compute environment. In response to the request, local workload information and/or resource configuration information is routed to an on-demand center, and an environment is created and customized in the on-demand center that is compatible with workload requirements submitted to the local compute environment. Billing and costing are also automatically integrated and handled.


The exchange and integration of all the necessary information and resource knowledge may be performed in a single action or click to broaden the set of resources that may be available to users who initially have access only to the local compute environment 104. The system may receive the request to integrate an on-demand compute environment into a local compute environment in other manners as well, such as any type of multi-modal request, voice request, graffiti on a touch-sensitive screen request, motion detection, and so forth. Thus the one-click action may be a single tap on a touch sensitive display or a single voice command such as “integrate” or another command or multi-modal input that is simple and singular in nature. In response to the request, the system automatically integrates the local compute environment information with the on-demand compute environment information to make resources from the on-demand compute environment available to requestors of resources in the local compute environment.


The one-click approach relates to the automated approach except that a human is in the middle of the process. For example, if a threshold or a triggering event is passed, an email or a notice may be sent to an administrator with options to allocate resources from the on-demand center. The administrator may be presented with one or more options related to different types of allocations that are available in the on-demand center, and via one click or one action the administrator may select the appropriate action. For example, three options may include 500 processors in 1 hour; 700 processors in 2 hours; and 1000 processors in 10 hours. The options may be intelligent in that they may take into account the particular triggering event, costs of utilizing the on-demand environment, SLAs, policies, and any other parameters to present options that comply with policies and available resources. The administrator may be given a recommended selection based on SLAs, cost, or any other parameters discussed herein but may then choose the particular allocation package for the on-demand center. The administrator also may have an option, without an alert, to view possible allocation packages in the on-demand center if the administrator knows of an upcoming event that is not capable of being detected by the modules, such as a meeting with a group wherein they decide to submit a large job the next day which will clearly require on-demand resources. The one-click approach encapsulates the command line instruction to proceed with the allocation of on-demand resources.


One aspect of the integration of an on-demand environment 102 and a local compute environment 104 is that the overall data appears locally. In other words, the local scheduler 108 will have access to the resources and knowledge of the on-demand environment 102, but those resources, with the appropriate adherence to local policy requirements, are handled locally and appear locally to users and administrators of the local environment 104.


Another aspect of the invention that is enabled with the attached source code is the ability to specify configuration information and feed it down the line. For example, the interaction between the compute environments supports static reservations. A static reservation is a reservation that a user or an administrator cannot change, remove or destroy. It is a reservation that is associated with the resource manager 108 itself. A static reservation blocks out time frames when resources are not available for other uses. For example, if provisioning resources for a job takes an hour before a compute environment can have workload run on (or consume) those resources, then the module 108 may make a static reservation of resources for the provisioning process. The module 108 will locally create a static reservation for the provisioning component of running the job. The module 108 will report on these constraints associated with the created static reservation within the on-demand compute environment.


Then, the module 108 will communicate with the slave module 106 if on-demand resources are needed to run a job. The module 108 communicates with the slave module 106, identifies what resources are needed (20 processors and 512 MB of memory, for example) and inquires when those resources can be available. Assume that module 106 responds that the processors and memory will be available in one hour and that the module 108 can have those resources for 36 hours. Once all the appropriate information has been communicated between the modules 106 and 108, module 108 creates a static reservation to block the first part of the resources, which requires the one hour of provisioning. The module 108 may also block out the resources with a static reservation from hour 36 to infinity until the resources go away. Therefore, from zero to one hour is blocked out by a static reservation, and from the end of the 36 hours to infinity is blocked out. In this way, the scheduler 108 can optimize the on-demand resources and ensure that they are available for local workloads. The communication between the modules 106 and 108 is preferably performed via tunneling.
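
The reservation windows in this example can be laid out as simple intervals; a Python sketch using the numbers above:

```python
# Static-reservation arithmetic from the example: resources become
# available after one hour of provisioning and are granted through
# hour 36; outside that window they are blocked by static reservations.
provision_end_h = 1
grant_end_h = 36

blocked_front = (0, provision_end_h)          # provisioning: 0 -> 1 h
usable = (provision_end_h, grant_end_h)       # local workload: 1 -> 36 h
blocked_tail = (grant_end_h, float("inf"))    # hour 36 -> infinity

print(blocked_front, usable, blocked_tail)
```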


Another aspect relates to receiving requests or information associated with resources in an on-demand center. An example will illustrate. Assume that a company has a reservation of resources within an on-demand center but then finds out that its budget is cut for the year. There is a mechanism for an administrator to enter information, such as a request for a cancellation of a reservation, so that the company does not have to pay for the consumption of those resources. Any type of modification of the on-demand resources may be contemplated here. This process involves translating a current or future state of the environment into a requirement for the modification of usable resources. Another example is where a group determines that it will run a large job over the weekend that will knowingly need more than the local environment can provide. An administrator can submit to the local resource broker 108 information associated with a parameter, such as a request for resources, and the local broker 108 will communicate with the hosting center 106 so that the necessary resources can be reserved in the on-demand center even before the job is submitted to the local environment.


The modification of resources within the on-demand center may be an increase, decrease, or cancellation of resources or reservations for resources. The parameters may be a direct request for resources or a modification of resources, or may be a change in an SLA which then may trigger other modifications. For example, if an SLA prevented a user from obtaining more than 500 nodes in an on-demand center and a current reservation has maximized this request, a change in the SLA that extended this parameter may automatically cause the module 106 to increase the reservation of nodes according to the modified SLA. Changing policies in this manner may or may not affect the resources in the on-demand center.



FIG. 5 illustrates a method embodiment related to modifying resources in the on-demand compute environment. The method comprises receiving information at a local resource broker that is associated with resources within an on-demand compute environment (502). Based on the information, the method comprises communicating instructions from the local resource broker to the on-demand compute environment (504) and modifying resources associated with the on-demand compute environment based on the instructions (506). As mentioned above, examples of the type of information that may be received include information associated with a request for a new reservation, a cancellation of an existing reservation, or a modification of a reservation such as expanding or contracting the reserved resources in the on-demand compute environment. Other examples include a revised policy or revision to an SLA that alters (increases or perhaps decreases) allowed resources that may be reserved in the on-demand center. The master module 108 will then provide instructions to the slave module 106 to create or modify reservations in the on-demand computing environment or to make some other alteration to the resources as instructed.
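
A compact Python sketch of the FIG. 5 flow against a toy reservation store (names and layout are assumptions):

```python
# Sketch of FIG. 5: information arrives at the local broker (502),
# instructions go to the on-demand center (504), and the reservation
# is modified accordingly (506).
reservations = {"res-42": {"nodes": 500}}

def modify_reservation(res_id, action, nodes=0):
    if action == "cancel":                    # e.g., the budget was cut
        reservations.pop(res_id, None)
    elif action == "expand":
        reservations[res_id]["nodes"] += nodes
    elif action == "contract":
        reservations[res_id]["nodes"] -= nodes
    return reservations

print(modify_reservation("res-42", "expand", nodes=100))  # nodes: 600
```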


Receiving resource requirement information may be based on user specification, or on current or predicted workload. The specification of resources may be fully explicit, or may be partially or fully implicit based on workload or based on a virtual private cluster (VPC) package concept, where the VPC package can include aspects of an allocated or provisioned support environment and adjustments to resource request timeframes including pre-allocation, allocation duration, and post-allocation timeframe adjustments. The application incorporated above provides information associated with the VPC that may be utilized in many respects in this invention. The reserved resources may be associated with provisioning or customizing the delivered compute environment. A reservation may involve the co-allocation of resources including any combination of compute, network, storage, license, or service resources (i.e., parallel database services, security services, provisioning services) as part of a reservation across multiple different resource types. Also, the co-allocation of resources over disjoint timeframes to improve availability and utilization of resources may be part of a reservation or a modification of resources. Resources may also be reserved with automated failure handling and resource recovery.


Another feature associated with reservations of resources within the on-demand environment is the use of provisioning padding. This is an alternate approach to the static reservation discussed above. For example, if a reservation of resources would require 2 hours of processing time for 5 nodes, then that reservation may be created in the on-demand center as directed by the client resource broker 108. As part of that same reservation or as part of a separate process, the reservation may be modified or adjusted to increase its duration to accommodate provisioning overhead and clean-up processes. Therefore, there may need to be ½ hour of time in advance of the beginning of the two hour block wherein data transmission, operating system set up, or any other provisioning step needs to occur. Similarly, at the end of the two hours, there may need to be 15 minutes to clean up the nodes and transmit processed data to storage or back to the local compute environment. Thus, an adjustment of the reservation may occur to account for this provisioning in the on-demand environment. This may or may not occur automatically; for example, the user may request resources for 2 hours and the system may automatically analyze the job submitted or utilize other information to automatically adjust the reservation for the provisioning needs. The administrator may also understand the provisioning needs and specifically request a reservation with provisioning pads on one or both ends of the reservation.
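
The padding in this example is again simple arithmetic; a Python sketch with the numbers from the paragraph above:

```python
# Provisioning padding from the example: a 2-hour reservation on 5 nodes
# grows by a half-hour setup pad in front and a 15-minute cleanup pad.
requested_hours = 2.0
setup_pad_hours = 0.5      # data staging, OS setup before the job
cleanup_pad_hours = 0.25   # node cleanup, staging results back

total_hours = setup_pad_hours + requested_hours + cleanup_pad_hours
print(f"reserve each of the 5 nodes for {total_hours} hours")  # 2.75
```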


A job may also be broken into component parts and only one aspect of the job transferred to an on-demand center for processing. In that case, the modules will work together to enable co-allocation of resources across local resources and on-demand resources. For example, memory and processors may be allocated in the local environment while disk space is allocated in the on-demand center. In this regard, the local management module could request the particular resources needed for the co-allocation from the on-demand center and when the job is submitted for processing that portion of the job would consume on-demand center resources while the remaining portion of the job consumes local resources. This also may be a manual or automated process to handle the co-allocation of resources.
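
A minimal sketch of such a split, assuming hypothetical types, in which processors and memory stay local while the disk requirement is pushed to the on-demand center when local disk is insufficient:

/* Illustrative co-allocation split between the local environment and
 * the on-demand center; names and the split rule are assumptions. */
typedef struct { int cpus; long mem_mb; long disk_gb; } resource_set;

typedef struct {
    resource_set local;      /* satisfied by the local environment */
    resource_set on_demand;  /* satisfied by the on-demand center */
} split_allocation;

static split_allocation split_job(resource_set need, long local_disk_free_gb)
{
    split_allocation s = { need, { 0, 0, 0 } };
    if (need.disk_gb > local_disk_free_gb) {
        /* keep compute local, push the disk requirement remotely */
        s.local.disk_gb = 0;
        s.on_demand.disk_gb = need.disk_gb;
    }
    return s;
}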


Another aspect relates to interaction between the master management module 108 and the slave management module 106. Assume a scenario where the local compute environment requests immediate resources from the on-demand center. Via the communication between the local and the on-demand environments, the on-demand environment notifies the local environment that resources are not available for eight hours but provides information about the resources that will be available in eight hours. At the local environment, the management module 108 may instruct the on-demand management module 106 to establish a reservation for those resources as soon as possible (in eight hours), including, perhaps, provisioning padding for overhead. Thus, although the local environment requested immediate resources from the on-demand center, the best that could be done in this case is a reservation of resources in eight hours given the provisioning needs and other workload and jobs running on the on-demand center. Jobs running or in the queue at the local environment will then have an opportunity to tap into the reservation; given a variety of parameters, a particular job, say job number 12, may have priority or first choice of those reserved resources.
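
The deferred-reservation exchange described above might be sketched as follows: the local master asks for resources now, the on-demand center answers with its earliest start, and the master books that slot with a provisioning pad in front. All names and the eight-hour figure are illustrative stand-ins.

#include <stdio.h>
#include <time.h>

/* Stand-in for the on-demand center's answer; here it reports that
 * matching resources free up eight hours from the requested time. */
static time_t earliest_start(time_t requested, int nodes)
{
    (void)nodes;
    return requested + 8 * 3600;
}

int main(void)
{
    time_t now    = time(NULL);
    time_t start  = earliest_start(now, 16);
    time_t padded = start - 30 * 60;  /* provisioning pad before start */

    printf("granted start in %ld h; pad begins %ld min early\n",
           (long)((start - now) / 3600), (long)((start - padded) / 60));
    return 0;
}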


With reference to FIG. 2, an exemplary system for implementing the invention includes a general purpose computing device 200, including a processing unit (CPU) 220, a system memory 230, and a system bus 210 that couples various system components including the system memory 230 to the processing unit 220. The system bus 210 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. The system may also include other memory such as read only memory (ROM) 240 and random access memory (RAM) 250. A basic input/output system (BIOS), containing the basic routines that help to transfer information between elements within the computing device 200, such as during start-up, is typically stored in ROM 240. The computing device 200 further includes storage means such as a hard disk drive 260, a magnetic disk drive, an optical disk drive, tape drive or the like. The storage device 260 is connected to the system bus 210 by a drive interface. The drives and the associated computer-readable media provide nonvolatile storage of computer-readable instructions, data structures, program modules and other data for the computing device 200. In this regard, the various functions associated with the invention that are primarily set forth as the method embodiment of the invention may be practiced by using any programming language and programming modules to perform the associated operations within the system or the compute environment. Here the compute environment may be a cluster, grid, or any other type of coordinated commodity resources, and may also refer to two separate compute environments that are coordinating workload, workflow and so forth, such as a local compute environment and an on-demand compute environment. Any such programming module will preferably be associated with a resource management or workload manager or other compute environment management software such as Moab, but may also be separately programmed. The basic components are known to those of skill in the art and appropriate variations are contemplated depending on the type of device, such as whether the device is a small, handheld computing device, a desktop computer, or a computer server.


As mentioned above, the present application is related to U.S. patent application Ser. No. 11/276,852, which was incorporated herein by reference. The following paragraphs, modified for formatting, are from that application.


The present invention provides a system, method and computer-readable media for generating virtual private clusters out of a group of compute resources. Typically, the group of compute resources involves a group of independently administered clusters. The method provides for aggregating the group of compute resources, partitioning the aggregated group of compute resources and presenting to each user in an organization a partition representing the organization's virtual private cluster. The users transparently view their cluster and have control over its operation. The partitions may be static or dynamic.


The present invention relates to clusters and more specifically to a system and method of creating a virtual private cluster.


The present invention applies to computer clusters and computer grids. A computer cluster may be defined as a parallel computer that is constructed of commodity components and runs commodity software. FIG. 7 illustrates in a general way an example relationship between clusters and grids. A cluster 710 is made up of a plurality of nodes 708A, 708B, 708C, each containing computer processors, memory that is shared by the processors in the node, and other peripheral devices such as storage discs connected by a network. A resource manager 706A for the cluster 710 manages jobs submitted by users to be processed by the cluster. Other resource managers 706B, 706C are also illustrated that may manage other clusters (not shown). An example job would be a compute-intensive weather forecast analysis that needs a cluster of computers scheduled to process it in time for the evening news report.


A cluster scheduler 704A may receive job submissions and identify, using information from the resource managers 706A, 706B, 706C, which cluster has available resources. The job would then be submitted to that resource manager for processing. Other cluster schedulers 704B and 704C are shown by way of illustration. A grid scheduler 702 may also receive job submissions and identify, based on information from a plurality of cluster schedulers 704A, 704B, 704C, which clusters may have available resources, and then submit the job accordingly.
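
The selection step might be sketched as below, with illustrative types; a real scheduler would weigh policies, priorities and reservations rather than a simple free-node count.

/* Pick the first cluster whose reported availability can hold the
 * job; cluster_info and the selection rule are assumptions. */
typedef struct { const char *name; int free_nodes; } cluster_info;

static const char *pick_cluster(const cluster_info *c, int n, int needed)
{
    for (int i = 0; i < n; i++)
        if (c[i].free_nodes >= needed)
            return c[i].name;  /* submit to this resource manager */
    return 0;                  /* no fit now; queue until resources free */
}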


Several books provide background information on how to organize and create a cluster or a grid and related technologies. See, e.g., Grid Resource Management: State of the Art and Future Trends, Jarek Nabrzyski, Jennifer M. Schopf, and Jan Weglarz, Kluwer Academic Publishers, 2004; and Beowulf Cluster Computing with Linux, edited by William Gropp, Ewing Lusk, and Thomas Sterling, Massachusetts Institute of Technology, 2003.



FIG. 8 illustrates a known arrangement 800 comprising a group of computer clusters 814, 816, 818, each containing a computer cluster 802, 804, 806 made up of a number of computer nodes, each node having memory, disks and swap local to the computer itself. In addition, there may exist a number of services that are part of each cluster. Block 818 comprises two components, a cluster 802 and a storage manager 812 providing network storage services such as LAN-type services. Block 818 illustrates that the network storage services 812 and the cluster 802 are organized into a single and independently administered cluster. As an example, a marketing department in a large company may have an information technology ("IT") staff that administers this cluster for that department.


Storage manager 812 may also communicate with nodes or objects 804 in other clusters such as those shown in FIG. 7. Block 816 shows a computer cluster 804 and a network manager 810 that communicates with cluster 804 and may impact other clusters, shown in this case as cluster 802 and cluster 806.


Block 814 illustrates a computer cluster 806 and a software license manager 808. The license manager 808 is responsible for providing software licenses to various user applications, and it ensures that an entity stays within the bounds of its negotiated licenses with a software vendor. The license manager 808 may also communicate with other clusters 804 as shown.


Assuming that computer clusters 814, 816 and 818 are all part of a single company's computer resources, that company would probably have a number of IT teams managing each cluster 816, 814, 818. Typically, there is little or no crossover between the clusters in terms of management and administration from one cluster to another, other than through the example storage manager 812, network manager 810 or license manager 808.


There are also many additional services that are local and internal to each cluster. The following are examples of local services that would be found within each cluster 814, 816, 818: cluster scheduling, message passing, network file system auto-mounting, network information services and password services, shown as feature 820 in block 814. These illustrate local services that are unique and locally managed. All of these have to be independently managed within each cluster by the respective IT staff.


Assuming that a company owns and administers each cluster 818, 816 and 814, there are reasons for aggregating and partitioning the compute resources. Each organization in the company desires complete ownership and administration over its compute resources. Take the example of a large auto manufacturing company. Various organizations within the company include sales, engineering, marketing and research and development. The sales organization does market research, looking at sales and historical information, analyzing related data and determining how to target the next sales campaign. Design graphics and rendering of advertising may require computer processing power. The engineering department performs aerodynamics and materials science studies and analysis. Each organization within the company has its own set of goals and computer resource requirements to make certain it can generate its deliverables to its customers.


While this model provides each organization control over its resources, there are downsides to this arrangement. A large cost is the requirement for independent IT teams administering each cluster. There is also no opportunity for load balancing: if the sales organization has extra resources not being used, there is no way to connect these clusters to enable access by the engineering teams.


Another cause of reduced efficiency with individual clusters as shown in FIG. 7 is over- or under-constraining. Users who submit jobs to the cluster for processing desire a certain level of response time according to their desired parameters and permissions. In order to ensure the response time, cluster managers typically must significantly over-specify the cluster resources to get the results they want or control over the cycle distribution. When a job is over-specified and then submitted to the cluster, often the job simply does not utilize all the specified resources. This process can leave a percentage of the resources unused.


What is needed in the art is a means of maintaining cluster partitions but also sharing resources where needed to improve the efficiency of a cluster or a group of clusters.


Those who manage clusters or submit jobs to clusters want to be able to control the cluster's resources in an efficient manner. There was previously no mechanism to soft-partition a cluster or a group of clusters to provide managers with the control they want without imposing significant additional overhead. Most users do not care how their cluster is set up as long as the resources are available to process submitted jobs and they have the desired level of control.


The present invention addresses the deficiencies in the prior art by providing a system and method of establishing a virtual private cluster out of a group of compute resources. In one aspect of the invention, the group of compute resources may be viewed as a group of clusters. In order to address the deficiencies in the prior art, the present invention introduces steps to create and utilize a virtual private cluster. The method comprises aggregating compute resources across the group of compute resources. This step may comprise two levels: a first level of aggregating multiple resources of the same type and a second level of aggregating resources of distinct types. Aggregating multiple resources of the same type typically means pulling together compute hosts that are possibly connected across multiple networks (or clusters) and aggregating them as though they were one giant cluster. The second level, involving resources of distinct types, aggregates compute resources together with network resources, application or license management resources and storage management resources.
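
A minimal sketch of the two aggregation levels, assuming illustrative types: level one pools compute hosts from several source clusters into one logical pool; level two attaches resources of distinct types to that pool.

#include <stddef.h>

typedef struct { const char *hostname; int cpus; } host;

typedef struct {
    host   hosts[1024];   /* level 1: same-type aggregation of hosts */
    size_t nhosts;
    int    network_links; /* level 2: distinct-type resources */
    int    licenses;
    long   storage_gb;
} aggregate_cluster;

/* Level 1: pull hosts from one source cluster into the aggregate as
 * though they were part of one giant cluster. */
static void aggregate_hosts(aggregate_cluster *agg,
                            const host *src, size_t n)
{
    for (size_t i = 0; i < n && agg->nhosts < 1024; i++)
        agg->hosts[agg->nhosts++] = src[i];
}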


The method next comprises establishing partitions of the group of compute resources to fairly distribute available compute resources amongst a plurality of organizations, and presenting to users within each organization only the partitioned resources accessible by that organization, wherein the resources presented to each organization constitute the organization's virtual private cluster. In this manner, aggregating, partitioning and presenting to a user only their soft-partitioned resources enables a more efficient use of the combined group of clusters and is also transparent to the user while providing the desired level of control over the virtual private cluster to the user.


Various embodiments of the invention include systems, methods and computer-readable media storing instructions for controlling a computing device to perform the steps of generating a virtual private cluster.


Applicants note that the capability for performing the steps set forth herein is contained within the source code filed with the CD in the parent provisional application.



FIG. 9 illustrates in more detail the example arrangement of three clusters 818, 816 and 814. In this figure, block 818 includes a group of compute nodes 912 and other compute resources 908 organized as a cluster 802. Block 816 includes compute nodes 904 and resources 910 organized as cluster 804. Block 814 includes compute nodes 906 and resources 912 in cluster 806.


One embodiment of the invention is a method of creating a virtual private cluster. The basic method steps are set forth in FIG. 10 and these will be discussed with further reference to FIG. 9. The method comprises first aggregating compute resources 1002. This step may comprise two levels: a first level of aggregating multiple resources of the same type and a second level of aggregating resources of distinct types. Aggregating multiple resources of the same type typically means pulling together compute hosts that are possibly connected across multiple networks (or clusters) and aggregating them as though they were one giant cluster. FIG. 9 illustrates this step by aggregating some compute nodes from cluster 802 and some compute nodes from cluster 804. The aggregation is shown as feature 920. The second level of aggregating involves resources of distinct types. For example, this second level may involve aggregating compute resources together with network resources, application or license management resources and storage management resources. This aggregation of a plurality of types of compute resources is illustrated as feature 922. Other distinct compute resources may also be aggregated in addition to those illustrated.


The method next comprises establishing partitions of the group of compute resources to fairly distribute available compute resources amongst a plurality of organizations 1004 and presenting only the partitioned resources accessible by each organization to users within that organization 1006, wherein the resources presented to each organization constitute the virtual private cluster. FIG. 9 shows that the sales organization "S" is partitioned with particular nodes and compute resources and the engineering organization "E" is assigned various nodes and compute resources. These span blocks 818 and 816 and span different clusters. In this manner, aggregating, partitioning and presenting to a user only their soft-partitioned resources enables a more efficient use of the combined group of compute resources or clusters and is also transparent to the user while providing the desired level of control over the virtual private cluster to the user.


There are several aspects to aggregation. FIG. 9 illustrates an aggregation of a portion of the compute resources within blocks 814, 816 and 818. Another approach to aggregation involves aggregating all of the compute resources in the clusters 814, 816 and 818. In this case feature 920 would cover all of the compute resources and feature 922 would envelop all the compute resources including the storage manager 812, the network manager 810 and the license manager 808. The preferred approach would depend on the requirements for the resulting virtual private clusters.


Basically, any other type of resource could be controlled under any type of service middleware in a cluster space. The aggregation process generates a giant virtual cluster spanning all resources of all types. The giant virtual cluster is partitioned into a plurality of smaller sub-clusters. One aspect of the partitioning process involves partitioning based on organizational needs. These needs can be dynamic in that they can change over time and can change in terms of space and resources. They can also change according to environmental factors such as current load, quality of service, guarantees and a number of other factors. For example, a dynamic policy may be rigidly time-based, varying the same way each week, such as applying on Monday and Wednesday only. The policies can also be dynamic based on a load or backlog. There are many different ways in which policies can be established for creating partitions for virtual private clusters.
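
A sketch of such a dynamic partitioning policy, combining the rigidly time-based case (extra capacity on Monday and Wednesday) with a load/backlog trigger; the thresholds and names are assumptions, and a POSIX localtime_r is assumed for the calendar check.

#include <time.h>

typedef struct { int base_nodes; int burst_nodes; } partition_policy;

static int partition_size(const partition_policy *p,
                          int backlog_jobs, time_t now)
{
    struct tm tm_buf;
    localtime_r(&now, &tm_buf);

    /* Rigidly time-based: extra nodes on Monday (1) and Wednesday (3). */
    int calendar_extra =
        (tm_buf.tm_wday == 1 || tm_buf.tm_wday == 3) ? p->burst_nodes : 0;

    /* Load-based: grow further when the backlog crosses a threshold. */
    int load_extra = (backlog_jobs > 100) ? p->burst_nodes : 0;

    return p->base_nodes + calendar_extra + load_extra;
}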


An important aspect of presenting the partition to each organization relates to organizing the partition so that users within each individual organization cannot tell that there is any other consumer, any other load or any other resources outside of their own virtual partition. In other words, they only see inside their partition. In this regard, users only see their own jobs, their own historical information, their own resources, their own credentials, users, groups, classes, etc. This approach gives users a feeling of complete control, as if they were in their own virtual environment, and the policies that affect the site and the changes to that partition over time do not impact the users in their decisions. With this model, companies can have a single IT team manage a single compute resource for all parties, and all that would be needed on a per-organization basis is a single account manager or champion manager who would make certain that what was needed by each organization within the company was guaranteed within the scope of the virtual cluster partitioning policies.
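
A minimal sketch of the "only see inside your partition" rule: every query result is filtered so a user sees only items tagged with their own organization. The record layout is an illustrative assumption.

#include <string.h>
#include <stddef.h>

typedef struct { const char *org; int job_id; } job;

/* Copy into 'out' only the jobs belonging to the requesting org; the
 * same filter would apply to credentials, statistics and resources. */
static size_t visible_jobs(const job *all, size_t n,
                           const char *org, job *out)
{
    size_t m = 0;
    for (size_t i = 0; i < n; i++)
        if (strcmp(all[i].org, org) == 0)
            out[m++] = all[i];
    return m;
}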


The process of establishing partitions may further comprise establishing partitions of resources, workloads, policies/services and statistics. These are some of the main factors used in determining the structure of the various partitions for each of the virtual private clusters that are created out of the large aggregated cluster or grid. Other factors are also contemplated as the basis for partitioning decisions, such as partitioning based at least in part on accessibility credentials. Inside each partition exists a particular quality of service, and groups of services are established within each virtual private cluster. Services such as the ability to pre-empt jobs, restart jobs and so forth may be established with each partition.


A graphical user interface for generating virtual private clusters is also provided. The virtual private cluster would be generated by an IT manager or other user with a computer interface. The user would ensure that the policies for the various organizations in the company were configured such that guarantees were made and that the needs of each individual organization were satisfied. That interface would be part of the cluster manager software, whose graphical interface exposes the policies used to manage the virtual partitioning.


There is no specific hardware layout necessary to accomplish virtual private clusters. Any desired model will work. For example, if one wanted these compute clusters to actually be distributed geographically, the invention would operate in the same manner across the distributed network. There may be some losses introduced, and there may be difficulties associated with the management of the clusters for a single IT team. However, the concepts are the same. Because of these downsides, it is preferable to aggregate the hardware at a single location and have it virtually partitioned so that the partitions look independently available to the scattered end users. The present invention works according to either model, but the recommended model is to be geographically aggregated to take advantage of the benefits of scale.


The preferable programming language for the present invention is C, but there is no requirement for any specific language. The cluster manager that performs the operations of aggregation, partitioning and presenting would run on a server and would communicate with client modules on the various nodes within each cluster. The cluster manager would actually run on a single server, or on an additional fallback server if one is enabled. It talks to various services that aggregate information from the clusters and make it available over the network, so it does not necessarily need its own client on each node; it uses the clusters' peer services instead, and whether those peer services are aggregated or distributed does not matter, since the cluster manager pulls the information in over the network.


The interfaces allow the cluster manager to communicate natively with the various nodes in the clusters using the appropriate protocols. For example, the cluster manager uses SQL if it is communicating directly with databases. The cluster manager can communicate with any of the proprietary resource manager interfaces, including LoadLeveler, PBS, TORQUE, LSF, SGE and others. In addition, it can also speak the basic flat-text and XML-based resource management specifications of the Department of Energy SSS project, and it can communicate with Ganglia natively. Essentially every major protocol available in resource management is already spoken, and the cluster manager is able to pull information from those nodes or services to perform the steps of the present invention. Those of skill in the art will understand these various protocols and interfaces; therefore, no further details are provided herein.
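
One plausible shape for such native interfaces is a per-protocol function table the manager iterates over; the table layout below is an assumption, and the named protocols (PBS/TORQUE, LSF, SGE, SSS, Ganglia, SQL) each have real native APIs that are not reproduced here.

/* Each resource-manager protocol is wrapped behind one function
 * table so the cluster manager can pull node state uniformly. */
typedef struct {
    const char *name;                   /* e.g. "torque", "ganglia" */
    int  (*connect)(const char *endpoint);
    int  (*poll_node_state)(void);      /* pull this protocol's node data */
    void (*disconnect)(void);
} rm_interface;

/* Iterate the registered interfaces and aggregate whatever each
 * protocol can report; returns the number of node records seen. */
static int poll_all(rm_interface *ifs, int n)
{
    int nodes_seen = 0;
    for (int i = 0; i < n; i++) {
        if (ifs[i].connect("default-endpoint") == 0) {
            nodes_seen += ifs[i].poll_node_state();
            ifs[i].disconnect();
        }
    }
    return nodes_seen;
}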


An important aspect of dynamic partitioning is that the partitioned virtual private cluster reflected by the system to each individual user is not a static partition. The partition boundaries will be based upon resource lines, but they can change over time according to a fixed calendar schedule or according to load-based needs. For example, if a particular organization needs additional resources, the system can vary the partition boundaries by dynamically modifying them according to the load. This modification is within the constraints of various policies. In addition, an administrator can step in and directly adjust either the calendar or the partition boundaries manually. Other factors can be incorporated into the policy to make decisions on when, where and how these partition boundaries are adjusted.
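
Whether the adjustment is automatic or manual, the text requires it to stay within policy constraints; a minimal sketch of that clamp, with illustrative limit names:

/* Move a partition boundary toward the load-driven target, clamped
 * to the floor and ceiling the governing policy allows. */
typedef struct { int min_nodes; int max_nodes; } boundary_limits;

static int adjust_boundary(int wanted_by_load, const boundary_limits *lim)
{
    int next = wanted_by_load;                          /* target size */
    if (next < lim->min_nodes) next = lim->min_nodes;   /* policy floor */
    if (next > lim->max_nodes) next = lim->max_nodes;   /* policy ceiling */
    return next;
}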


When it comes to reflecting the cluster to the end user, the cluster manager partitions not only according to a block of resources but also according to workload. All current and historic workload is analyzed, and its use is reported on a per-cluster basis. Thus, marketing or sales would only see jobs submitted by their own department and would only have historical information on those jobs. Each department would only be able to get start time estimates for jobs within their environment, for resources within their environment. In addition, this virtual partitioning also constrains the resources and credentials that are displayed: if there are a number of users or groups, or a number of qualities of service, set up and configured to enable these users to have special services, only the services or credentials defined within their partition are reflected and shown to them, and so only those can they draw from, configure, check statistics on and so forth.


The definition of a cluster varies within the industry, but commonly it is used to denote a collection of compute resources under a single administrative domain. In most cases they are also within a single user space and single data space, although that is not always the case. As used herein, the term cluster is broadly defined as anything that has a single administrative domain, a single group of policies and a single group of prioritizations. With the present invention, the creation of a virtual private cluster enables one to set up any number of virtual private clusters within a larger single aggregate cluster, where each of them has its own set of distinct prioritizations, policies, rules, etc. That matches the definition most sites would use for a grid, so any place having multiple administrative domains may be characterized in this way.


What one achieves is a grid in a box using the principles of the present invention, in that every group is able to set up its environment the way it wants, run independently and share workload across clusters inside this space. It differs from a standard definition of a grid, which typically involves pulling together geographically distributed resources under no centralized control. This model differs in that there is a centralized place of control, but that centralized control is transparent to all the users, and the account managers within the system only see their own private grid. They are not aware of the fact that the resources available within their grid are actually being modified and adjusted to meet a larger set of policy needs.


One of the unique aspects of this invention is the way in which it aggregates. The cluster manager has the ability to aggregate resources using multiple interfaces, so it is actually able to talk to multiple distinct services. One of the key issues it must handle in aggregating these resources is speaking to multiple APIs (application programming interfaces), or to various interfaces of any type. The cluster manager has to be able to speak all those interfaces, retrieve data related to each of those interfaces and correlate the data. Another distinct issue is correlating conflicts in the data and filling in holes of missing data. In addition to aggregating the data from those multiple sources, correlating the data and determining a resulting state, the present invention also uses the same interfaces to distribute its functionality across multiple services, allowing a site or an administrator to assign various services various pieces of control. The cluster manager may assign an allocation manager responsibility for reconfiguring a node while it talks to a queue manager for launching jobs in parallel across the system. Therefore, the ability to distribute the required services needed to manage such a cluster amongst multiple services is valuable.
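
A minimal sketch of the correlation step: two sources report on the same node, the fresher report wins on conflicts, and either source may fill holes the other leaves. The field choices and sentinel values are assumptions.

/* One node-state report from one source (resource manager, monitor). */
typedef struct {
    long when;       /* report timestamp; 0 means "no data" */
    int  cpus_busy;  /* -1 means this source did not report it */
    int  net_load;   /* -1 means this source did not report it */
} node_report;

static node_report correlate(node_report a, node_report b)
{
    /* Prefer the fresher report, then backfill its missing fields. */
    node_report newer = (a.when >= b.when) ? a : b;
    node_report older = (a.when >= b.when) ? b : a;
    if (newer.cpus_busy < 0) newer.cpus_busy = older.cpus_busy;
    if (newer.net_load  < 0) newer.net_load  = older.net_load;
    return newer;
}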


In experiments, the inventor set up a Portable Batch System (PBS), a standard resource manager which pulls in information about the state of the nodes and allows one to submit jobs, query the jobs, launch the jobs and manage the jobs. A shortcoming of that approach is that it does not provide very accurate or very complete resource information. In the experiment, the particular lab setup was used to introduce the Ganglia service (a node monitor which allows an IT manager to see much more information about each node). A multiple resource manager configuration was set up to pull in essentially everything PBS knows about the jobs and about the compute nodes, and on top of that the inventor overlaid the information available from Ganglia, giving a more complete view including network load information, network traffic, I/O traffic, swap activity and the like. This information, which is not available through a standard resource manager, is important for making good scheduling decisions. In addition to that, the system enables one to connect the cluster manager to Red Carpet or some other provisioning system. Those of skill in the art will understand the operation of the Red Carpet software in this space. This allows one to analyze workload that is coming in through PBS and view all the load metrics that are coming in from Ganglia. If it is determined that the load is such that the cluster is not properly configured to optimally meet customer needs, the IT manager or the system automatically can communicate with Red Carpet to change the configuration of a given node so that it has the operating system or the applications needed by the jobs coming in through PBS. Then, as the node(s) reboots, the information that is available from PBS is no longer valid because the node is offline and the PBS services are dead, but the cluster manager does not care because it has alternate sources of information about state. The cluster manager can use that information and continue to proceed with the understanding that the node is in fact being re-provisioned and rebuilt. Everything works properly and stays on track, and the cluster manager can schedule workload onto this newly installed node as soon as it becomes available.
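
The multi-source state decision in that experiment might be sketched as follows: when PBS reports a node down but the provisioning system reports it mid-rebuild, the manager treats the node as returning rather than failed. The state encoding is an assumption.

typedef enum { SRC_UP, SRC_DOWN, SRC_UNKNOWN } src_state;

typedef enum { NODE_UP, NODE_FAILED, NODE_PROVISIONING } node_state;

/* Resolve a node's effective state from PBS, Ganglia and the
 * provisioning system; a rebuilding node stays schedulable for
 * workload that starts after it returns. */
static node_state resolve(src_state pbs, src_state ganglia,
                          int provisioner_rebuilding)
{
    if (pbs == SRC_DOWN && provisioner_rebuilding)
        return NODE_PROVISIONING;
    if (pbs == SRC_UP || ganglia == SRC_UP)
        return NODE_UP;
    return NODE_FAILED;
}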


There are several benefits to virtual clustering. One benefit is the aggregation, which results in reduced cost in hardware and staffing and fewer points of failure. Another benefit lies in the ability to specify the true partitioning boundaries along the lines of what users really care about, without the over-specification required by other systems, which once again fragments the resources. With dynamic partitioning, one is able to load-balance across the clusters while still providing a view as if they were independent and distinct to end users.


While load balancing is commonly used, the present invention is distinct in that it provides load balancing with absolute guarantees (provided the resources do not fail). It guarantees resource availability to various organizations, allowing them to have high levels of confidence that they can meet their deadlines and their objectives.


Although the exemplary environment described herein employs the hard disk, it should be appreciated by those skilled in the art that other types of computer readable media which can store data that is accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, memory cartridges, random access memories (RAMs), read-only memory (ROM), and the like, may also be used in the exemplary operating environment. The system above provides an example server or computing device that may be utilized and networked with a cluster, clusters or a grid to manage the resources according to the principles set forth herein. It is also recognized that other hardware configurations may be developed in the future upon which the method may be operable.


Embodiments within the scope of the present invention may also include computer-readable media for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code means in the form of computer-executable instructions or data structures. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or combination thereof) to a computer, the computer properly views the connection as a computer-readable medium. Thus, any such connection is properly termed a computer-readable medium. Combinations of the above should also be included within the scope of the computer-readable media.


Computer-executable instructions include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Computer-executable instructions also include program modules that are executed by computers in stand-alone or network environments. Generally, program modules include routines, programs, objects, components, and data structures, etc. that perform particular tasks or implement particular abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps.


Those of skill in the art will appreciate that other embodiments of the invention may be practiced in network computing environments with many types of computer system configurations, including personal computers, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. Embodiments may also be practiced in distributed computing environments where tasks are performed by local and remote processing devices that are linked (either by hardwired links, wireless links, or by a combination thereof) through a communications network. As can also be appreciated, the compute environment itself, being managed according to the principles of the invention, may be an embodiment of the invention. Thus, separate embodiments may include an on-demand compute environment, a local compute environment, both of these environments together as a more general compute environment, and so forth. In a distributed computing environment, program modules may be located in both local and remote memory storage devices. Accordingly, the scope of the claims should be governed by the claims and their equivalents below rather than by any particular example in the specification.

Claims
  • 1. A method of operating a compute environment comprising a plurality of resources so as to provide a plurality of logically independent clusters to respective ones of a plurality of users or organizations, the method comprising: identifying a plurality of resource requirements associated with respective ones of the plurality of users or organizations; causing partitioning of at least part of the plurality of resources based at least on the identified plurality of resource requirements, the partitioning enabling allocation of respective portions of the at least part of the plurality of resources to only the respective plurality of users or organizations for use thereby as a logically independent cluster such that each of the respective plurality of users or organizations have dedicated access to the respective portions of the at least part of the plurality of resources partitioned therefor; and presenting each of the plurality of users or organizations with respective access to only one or more aspects of the logically independent cluster allocated thereto, the presenting comprising (i) presenting only one or more first services which the respective user or organization utilizes or has utilized for processing of workload, and (ii) selectively excluding presentation of one or more second services which do not relate to the logically independent cluster of the respective user or organization.
  • 2. The method of claim 1, wherein: the compute environment comprises one or more commonly managed clusters, each of the one or more clusters comprising a plurality of compute nodes; and each of the plurality of users or organizations are separately or independently managed.
  • 3. The method of claim 2, further comprising: receiving data indicative of one or more changes to at least one of the identified plurality of resource requirements; and based at least on the received data, dynamically varying the allocation of at least one of the respective portions to accommodate the one or more changes.
  • 4. The method of claim 3, wherein the one or more changes comprise changes to one or more quality of service or service level requirements.
  • 5. The method of claim 1, wherein: the identifying the plurality of resource requirements associated with each of the respective ones of the plurality of users or organizations comprises identifying at least one quality of service (QoS) or service level requirement associated with at least one of the plurality of users or organizations; and the causing partitioning of at least part of the plurality of resources based at least on the identified plurality of resource requirements comprises causing partitioning so as to ensure the at least one QoS or service level requirement is at least met.
  • 6. The method of claim 1, wherein the presenting each of the plurality of users or organizations with respective access to only one or more aspects of the logically independent cluster allocated thereto, comprises presentation of only resources which the respective user or organization may utilize for processing of workload.
  • 7. The method of claim 1, wherein the presenting each of the plurality of users or organizations with respective access to only one or more aspects of the logically independent cluster allocated thereto, comprises presentation of only jobs or workload submitted by the respective user or organization.
  • 8. The method of claim 1, wherein the presenting each of the plurality of users or organizations with respective access to only one or more aspects of the logically independent cluster allocated thereto, comprises presentation of only one or more users or user credentials for the respective user or organization.
  • 9. The method of claim 1, wherein at least one of (i) the identifying a plurality of resource requirements associated with each of the respective ones of the plurality of users or organizations, or (ii) causing partitioning of at least part of the plurality of resources based at least on the identified plurality of resource requirements, comprises implementing at least one respective partitioning policy for each of the respective ones of the plurality of users or organizations.
  • 10. The method of claim 1, further comprising: monitoring one or more conditions; and based at least on the monitoring, dynamically varying the allocation of at least one of the respective portions.
  • 11. The method of claim 10, wherein the dynamically varying the allocation of at least one of the respective portions comprises dynamically varying in accordance with one or more virtual cluster-specific or user-specific policies.
  • 12. The method of claim 10, wherein the dynamically varying the allocation of at least one of the respective portions comprises dynamically varying in accordance with one or more policies applicable to an entirety of the compute environment.
  • 13. The method of claim 10, wherein the one or more conditions comprise a then-current load on at least one of (i) the compute environment, or (ii) the one or more logically independent clusters.
  • 14. The method of claim 10, wherein the one or more conditions comprises a temporal condition or state.
  • 15. The method of claim 14, wherein the temporal condition or state comprises one of (i) a guaranteed or required response time for processing of workload, or (ii) a particular calendared event.
  • 16. A method of providing virtual compute clusters to respective ones of a plurality of users using one or more commonly managed compute environments, the method comprising: identifying a plurality of resource requirements associated with respective ones of the plurality of users; and causing partitioning of resources of the one or more commonly managed compute environments based at least on the identified plurality of resource requirements thereby generating respective partitions, the partitioning enabling allocation of respective portions of the resources to the respective plurality of users for concurrent use thereby as respective virtual compute clusters, wherein each of the respective plurality of users have exclusive access to the respective portions of the resources partitioned therefor; and wherein the partitioning is performed so as to (i) at least provide at least some of the plurality of users with guaranteed availability of resources allocated to their respective virtual compute cluster for at least a period of time, and (ii) allow modification of the resources in support of one or more policies affecting both a) the at least some of the plurality of users and b) other users not part of the plurality of users, the modification being transparent to the at least some of the plurality of users.
  • 17. The method of claim 16, wherein the sharing of at least some of the resources by two or more of the plurality of users comprises sharing which is dynamically modified as a function of at least one of a) time, or b) load on at least one of the virtual compute clusters.
  • 18. The method of claim 16, further comprising providing load-balancing between at least two of the virtual compute clusters, the providing load-balancing between at least two of the virtual compute clusters comprises providing the load-balancing between at least two of the at least some of the plurality of users having guaranteed resource availability.
  • 19. The method of claim 16, wherein the guaranteed resource availability is sufficient to at least meet the plurality of resource requirements associated with the at least some of the plurality of users.
  • 20. The method of claim 16, wherein: the one or more commonly managed compute environments comprise two or more commonly managed compute environments;the partitioning is further performed so as to at least reduce over-specification of resources of the two or more commonly managed compute environments for workload, relative to a non-virtual clustered environment; andthe at least reduction of the over-specification of resources increases an efficiency of resource utilization within the two or more commonly managed compute environments.
  • 21. A method of operating a compute environment comprising a plurality of resources so as to provide a plurality of logically independent clusters to respective ones of a plurality of users or organizations, the method comprising: identifying a plurality of resource requirements associated with respective ones of the plurality of users or organizations; causing partitioning of at least part of the plurality of resources based at least on the identified plurality of resource requirements, the partitioning enabling allocation of respective portions of the at least part of the plurality of resources to the respective plurality of users or organizations for use thereby as a logically independent cluster; and presenting each of the plurality of users or organizations with respective access to only one or more aspects of the logically independent cluster allocated thereto, wherein the access comprises at least exclusive control of particular resources associated with the logically independent cluster, the one or more aspects of the logically independent cluster relating to one or more particular services associated with a particular department of an organization or company.
  • 22. The method of claim 21, further comprising varying the partition boundaries.
  • 23. A commonly managed compute environment configured for provision of a plurality of virtual private clusters, comprising: a plurality of resources, at least some of the resources which may be shared by two or more users of the commonly managed compute environment; and at least one computerized management process in data communication with the plurality of resources and comprising computerized logic configured to, when executed: access data identifying two or more requirements for resources associated with respective ones of the two or more users; aggregate portions of the plurality of resources associated with respective ones of services via utilization of respective ones of a plurality of APIs (application programming interfaces) associated with the respective ones of services, the aggregation comprising (i) retrieval, via at least data communication with each of the plurality of APIs, of data related to each of the plurality of APIs and (ii) correlation of at least portions of the data related to each of the plurality of APIs with other portions of the data related to each of the plurality of APIs; cause partitioning of the aggregated portions of the plurality of resources based at least on the identified two or more requirements for resources, the partitioning comprising allocation of two or more respective portions of the plurality of resources to the respective two or more users for concurrent and exclusive use thereby as respective virtual compute clusters; and thereafter, based at least on second data received by the computerized management process, the second data regarding one or more monitored parameters, cause dynamic modification of the partitioning to generate two or more new respective portions of the plurality of resources for allocation to the respective two or more users, at least one of the two or more new respective portions comprising at least some different ones of the plurality of resources.
  • 24. The commonly managed compute environment of claim 23, wherein the computerized logic is further configured to, when executed: receive policy data relating to respective one or more resource usage policies associated with respective ones of the two or more users; and utilize at least a portion of the received policy data in performance of at least one of (i) the partitioning of the plurality of resources, or (ii) the dynamic modification of the partitioning.
  • 25. The commonly managed compute environment of claim 23, wherein: at least one of (i) the partitioning of the plurality of resources, or (ii) the dynamic modification of the partitioning, is configured to obey or meet the at least one QoS or service level requirement.
  • 26. The commonly managed compute environment of claim 23, wherein the one or more monitored parameters comprise at least one of: (i) then-current load on at least a portion of the commonly managed compute environment, or (ii) a predicted future load on at least a portion of the commonly managed compute environment.
  • 27. The commonly managed compute environment of claim 23, wherein the one or more monitored parameters comprises a backlog of workload associated with one or more users of the commonly managed compute environment.
  • 28. The commonly managed compute environment of claim 27, wherein the allocation of two or more respective portions of the plurality of resources to the respective two or more users for concurrent use thereby as respective virtual compute clusters comprises allocation only if a threshold for the backlog of the workload is exceeded for workloads with higher priority than other workloads.
  • 29. The commonly managed compute environment of claim 23, wherein: the plurality of resources comprises a plurality of compute nodes or hosts; and the allocation of two or more respective portions of the plurality of resources to the respective two or more users for concurrent use thereby as respective virtual compute clusters comprises aggregation of at least two of the compute nodes or hosts that are in data communication with one another over an existing network connection into at least one of the virtual compute clusters.
  • 30. The commonly managed compute environment of claim 29, wherein: the plurality of resources further comprises at least one of: a) one or more network resources, or b) one or more data storage management resources; and the allocation of two or more respective portions of the plurality of resources to the respective two or more users for concurrent use thereby as respective virtual compute clusters comprises aggregation of the at least one of a) or b) together with the at least one of the virtual compute clusters.
  • 31. The commonly managed compute environment of claim 23, wherein: the one or more monitored parameters comprises a failure of a node of one of the respective virtual compute clusters; and the allocation of two or more respective portions of the plurality of resources to the respective two or more users for concurrent use thereby as respective virtual compute clusters comprises allocation based at least on the failure.
  • 32. The commonly managed compute environment of claim 23, wherein: the one or more monitored parameters comprises a license becoming unavailable; and the allocation of two or more respective portions of the plurality of resources to the respective two or more users for concurrent use thereby as respective virtual compute clusters comprises allocation based at least on the license becoming unavailable.
PRIORITY CLAIM

The present application is a continuation of U.S. patent application Ser. No. 14/827,927, filed Aug. 17, 2015, which is a continuation of U.S. patent application Ser. No. 13/758,164, filed Feb. 4, 2013, (now U.S. Pat. No. 9,112,813 issued Aug. 18, 2015), which is a continuation of U.S. patent application Ser. No. 12/752,622, filed Apr. 1, 2010, now U.S. Pat. No. 8,370,495, issued Feb. 5, 2013, which is a continuation of U.S. patent application Ser. No. 11/276,856, filed Mar. 16, 2006, now U.S. Pat. No. 7,698,430, issued Apr. 13, 2010, which claims priority to U.S. Provisional Application No. 60/662,240 filed Mar. 16, 2005, the contents of which are incorporated herein by reference.

6772211 Lu et al. Aug 2004 B2
6775701 Pan et al. Aug 2004 B1
6779016 Aziz et al. Aug 2004 B1
6781990 Puri et al. Aug 2004 B1
6782408 Chandra Aug 2004 B1
6785724 Drainville et al. Aug 2004 B1
6785794 Chase et al. Aug 2004 B2
6813676 Henry et al. Nov 2004 B1
6816750 Klaas Nov 2004 B1
6816903 Rakoshitz et al. Nov 2004 B1
6816905 Sheets et al. Nov 2004 B1
6823377 Wu et al. Nov 2004 B1
6826607 Gelvin et al. Nov 2004 B1
6829206 Watanabe Dec 2004 B1
6829762 Arimilli et al. Dec 2004 B2
6832251 Gelvin et al. Dec 2004 B1
6836806 Raciborski et al. Dec 2004 B1
6842430 Melnik Jan 2005 B1
6850966 Matsuura et al. Feb 2005 B2
6857020 Chaar et al. Feb 2005 B1
6857026 Cain Feb 2005 B1
6857938 Smith et al. Feb 2005 B1
6859831 Gelvin et al. Feb 2005 B1
6859927 Moody et al. Feb 2005 B2
6862451 Alard Mar 2005 B1
6862606 Major et al. Mar 2005 B1
6868097 Soda et al. Mar 2005 B1
6874031 Corbeil Mar 2005 B2
6882718 Smith Apr 2005 B1
6894792 Abe May 2005 B1
6904460 Raciborski et al. Jun 2005 B1
6912533 Hornick Jun 2005 B1
6922664 Fernandez et al. Jul 2005 B1
6925431 Papaefstathiou Aug 2005 B1
6928471 Pabari et al. Aug 2005 B2
6931640 Asano et al. Aug 2005 B2
6934702 Faybishenko et al. Aug 2005 B2
6938256 Deng et al. Aug 2005 B2
6947982 McGann et al. Sep 2005 B1
6948171 Dan et al. Sep 2005 B2
6950821 Faybishenko et al. Sep 2005 B2
6950833 Costello et al. Sep 2005 B2
6952828 Greene Oct 2005 B2
6954784 Aiken et al. Oct 2005 B2
6963917 Callis et al. Nov 2005 B1
6963926 Robinson Nov 2005 B1
6963948 Gulick Nov 2005 B1
6965930 Arrowood et al. Nov 2005 B1
6966033 Gasser et al. Nov 2005 B1
6968323 Bansal et al. Nov 2005 B1
6971098 Khare et al. Nov 2005 B2
6975609 Khaleghi et al. Dec 2005 B1
6977939 Joy et al. Dec 2005 B2
6978310 Rodriguez et al. Dec 2005 B1
6978447 Okmianski Dec 2005 B1
6985461 Singh Jan 2006 B2
6985937 Keshav et al. Jan 2006 B1
6988170 Barroso et al. Jan 2006 B2
6990063 Lenoski et al. Jan 2006 B1
6990616 Botton-Dascal Jan 2006 B1
6990677 Pietraszak et al. Jan 2006 B1
6996821 Butterworth Feb 2006 B1
6996822 Willen Feb 2006 B1
7003414 Wichelman et al. Feb 2006 B1
7006881 Hoffberg et al. Feb 2006 B1
7013303 Faybishenko et al. Mar 2006 B2
7013322 Lahr Mar 2006 B2
7017186 Day Mar 2006 B2
7020695 Kundu et al. Mar 2006 B1
7020701 Gelvin et al. Mar 2006 B1
7020719 Grove et al. Mar 2006 B1
7032119 Fung Apr 2006 B2
7034686 Matsumura Apr 2006 B2
7035230 Shaffer et al. Apr 2006 B1
7035240 Balakrishnan et al. Apr 2006 B1
7035854 Hsiao et al. Apr 2006 B2
7035911 Lowery et al. Apr 2006 B2
7043605 Suzuki May 2006 B2
7058070 Tran et al. Jun 2006 B2
7058716 Sundaresan et al. Jun 2006 B1
7058949 Willen Jun 2006 B1
7058951 Bril et al. Jun 2006 B2
7065579 Traversat et al. Jun 2006 B2
7065764 Prael et al. Jun 2006 B1
7072807 Brown et al. Jul 2006 B2
7076717 Grossman, IV et al. Jul 2006 B2
7080078 Slaughter et al. Jul 2006 B1
7080283 Songer et al. Jul 2006 B1
7080285 Kosugi Jul 2006 B2
7080378 Noland et al. Jul 2006 B1
7082606 Wood et al. Jul 2006 B2
7085825 Pishevar et al. Aug 2006 B1
7085837 Kimbrel et al. Aug 2006 B2
7085893 Krissell et al. Aug 2006 B2
7089294 Baskey et al. Aug 2006 B1
7093256 Bloks Aug 2006 B2
7095738 Desanti Aug 2006 B1
7099933 Wallace et al. Aug 2006 B1
7100192 Igawa et al. Aug 2006 B1
7102996 Amdahl et al. Sep 2006 B1
7103625 Hipp et al. Sep 2006 B1
7103664 Novaes et al. Sep 2006 B1
7107578 Alpern Sep 2006 B1
7107589 Tal Sep 2006 B1
7117208 Tamayo et al. Oct 2006 B2
7117273 O'Toole et al. Oct 2006 B1
7119591 Lin Oct 2006 B1
7124289 Suorsa Oct 2006 B1
7124410 Berg et al. Oct 2006 B2
7126913 Patel et al. Oct 2006 B1
7127613 Pabla et al. Oct 2006 B2
7127633 Olson et al. Oct 2006 B1
7136927 Traversat et al. Nov 2006 B2
7140020 McCarthy et al. Nov 2006 B2
7143088 Green et al. Nov 2006 B2
7143153 Black et al. Nov 2006 B1
7143168 DiBiasio et al. Nov 2006 B1
7145995 Oltmanns et al. Dec 2006 B2
7146233 Aziz et al. Dec 2006 B2
7146353 Garg et al. Dec 2006 B2
7146416 Yoo et al. Dec 2006 B1
7150044 Hoefelmeyer et al. Dec 2006 B2
7154621 Rodriguez et al. Dec 2006 B2
7155478 Ims et al. Dec 2006 B2
7155502 Galloway et al. Dec 2006 B1
7165107 Pouyoul et al. Jan 2007 B2
7165120 Giles et al. Jan 2007 B1
7167920 Traversat et al. Jan 2007 B2
7168049 Day Jan 2007 B2
7170315 Bakker et al. Jan 2007 B2
7171415 Kan et al. Jan 2007 B2
7171476 Maeda et al. Jan 2007 B2
7171491 O'Toole et al. Jan 2007 B1
7171593 Whittaker Jan 2007 B1
7177823 Lam et al. Feb 2007 B2
7180866 Chartre et al. Feb 2007 B1
7185046 Ferstl et al. Feb 2007 B2
7185073 Gai et al. Feb 2007 B1
7185077 O'Toole et al. Feb 2007 B1
7188145 Lowery et al. Mar 2007 B2
7188174 Rolia et al. Mar 2007 B2
7191244 Jennings et al. Mar 2007 B2
7197071 Weigand Mar 2007 B1
7197549 Salama et al. Mar 2007 B1
7197559 Goldstein et al. Mar 2007 B2
7197561 Lovy et al. Mar 2007 B1
7197565 Abdelaziz et al. Mar 2007 B2
7200716 Aiello Apr 2007 B1
7203063 Bash et al. Apr 2007 B2
7203746 Harrop Apr 2007 B1
7203753 Yeager et al. Apr 2007 B2
7206819 Schmidt Apr 2007 B2
7206841 Traversat et al. Apr 2007 B2
7206934 Pabla et al. Apr 2007 B2
7213047 Yeager et al. May 2007 B2
7213050 Shaffer et al. May 2007 B1
7213062 Raciborski et al. May 2007 B1
7213065 Watt May 2007 B2
7216173 Clayton et al. May 2007 B2
7222187 Yeager et al. May 2007 B2
7222343 Heyrman et al. May 2007 B2
7225249 Barry et al. May 2007 B1
7225442 Dutta et al. May 2007 B2
7228348 Farley Jun 2007 B1
7228350 Hong et al. Jun 2007 B2
7231445 Aweya et al. Jun 2007 B1
7233569 Swallow Jun 2007 B1
7233669 Candelore Jun 2007 B2
7236915 Algieri et al. Jun 2007 B2
7237243 Sutton et al. Jun 2007 B2
7242501 Ishimoto Jul 2007 B2
7243351 Kundu Jul 2007 B2
7249179 Romero et al. Jul 2007 B1
7251222 Chen et al. Jul 2007 B2
7251688 Leighton et al. Jul 2007 B2
7254608 Yeager et al. Aug 2007 B2
7257655 Burney et al. Aug 2007 B1
7260846 Day Aug 2007 B2
7263288 Islam Aug 2007 B1
7263560 Abdelaziz et al. Aug 2007 B2
7263596 Wideman Aug 2007 B1
7274705 Chang et al. Sep 2007 B2
7275018 Abu-El-Zeet et al. Sep 2007 B2
7275102 Yeager et al. Sep 2007 B2
7275249 Miller et al. Sep 2007 B1
7278008 Case et al. Oct 2007 B1
7278142 Bandhole et al. Oct 2007 B2
7278582 Siegel et al. Oct 2007 B1
7281045 Aggarwal et al. Oct 2007 B2
7283838 Lu Oct 2007 B2
7284109 Paxie et al. Oct 2007 B1
7289619 Vivadelli et al. Oct 2007 B2
7289985 Zeng et al. Oct 2007 B2
7293092 Sukegawa Nov 2007 B2
7296268 Darling et al. Nov 2007 B2
7299294 Bruck et al. Nov 2007 B1
7305464 Phillipi et al. Dec 2007 B2
7308496 Yeager et al. Dec 2007 B2
7308687 Trossman et al. Dec 2007 B2
7310319 Awsienko et al. Dec 2007 B2
7313793 Traut et al. Dec 2007 B2
7315887 Liang et al. Jan 2008 B1
7320025 Steinberg et al. Jan 2008 B1
7324555 Chen et al. Jan 2008 B1
7325050 O'Connor et al. Jan 2008 B2
7328243 Yeager et al. Feb 2008 B2
7328264 Babka Feb 2008 B2
7328406 Kalinoski et al. Feb 2008 B2
7334108 Case et al. Feb 2008 B1
7334230 Chung et al. Feb 2008 B2
7337333 O'Conner et al. Feb 2008 B2
7337446 Sankaranarayan et al. Feb 2008 B2
7340500 Traversat et al. Mar 2008 B2
7340578 Khanzode Mar 2008 B1
7340777 Szor Mar 2008 B1
7343467 Brown et al. Mar 2008 B2
7349348 Johnson et al. Mar 2008 B1
7350186 Coleman et al. Mar 2008 B2
7353276 Bain et al. Apr 2008 B2
7353362 Georgiou et al. Apr 2008 B2
7353495 Somogyi Apr 2008 B2
7356655 Allen et al. Apr 2008 B2
7356770 Jackson Apr 2008 B1
7363346 Groner et al. Apr 2008 B2
7366101 Varier et al. Apr 2008 B1
7366719 Shaw Apr 2008 B2
7370092 Aderton et al. May 2008 B2
7373391 Iinuma May 2008 B2
7373524 Motsinger et al. May 2008 B2
7376693 Neiman et al. May 2008 B2
7380039 Miloushev et al. May 2008 B2
7382154 Ramos et al. Jun 2008 B2
7383433 Yeager et al. Jun 2008 B2
7386586 Headley et al. Jun 2008 B1
7386611 Dias et al. Jun 2008 B2
7386850 Mullen Jun 2008 B2
7386888 Liang et al. Jun 2008 B2
7389310 Bhagwan et al. Jun 2008 B1
7392325 Grove et al. Jun 2008 B2
7392360 Aharoni Jun 2008 B1
7395536 Verbeke et al. Jul 2008 B2
7395537 Brown Jul 2008 B1
7398216 Barnett et al. Jul 2008 B2
7398471 Rambacher Jul 2008 B1
7398525 Leymann Jul 2008 B2
7401114 Block et al. Jul 2008 B1
7401152 Traversat et al. Jul 2008 B2
7401153 Traversat et al. Jul 2008 B2
7401355 Supnik et al. Jul 2008 B2
7403994 Vogl et al. Jul 2008 B1
7409433 Lowery et al. Aug 2008 B2
7412492 Waldspurger Aug 2008 B1
7412703 Cleary et al. Aug 2008 B2
7415709 Hipp et al. Aug 2008 B2
7418518 Grove et al. Aug 2008 B2
7418534 Hayter et al. Aug 2008 B2
7421402 Chang et al. Sep 2008 B2
7421500 Talwar et al. Sep 2008 B2
7423971 Mohaban et al. Sep 2008 B1
7426489 Van Soestbergen et al. Sep 2008 B2
7426546 Breiter et al. Sep 2008 B2
7428540 Coates et al. Sep 2008 B1
7433304 Galloway et al. Oct 2008 B1
7437460 Chidambaran et al. Oct 2008 B2
7437540 Paolucci et al. Oct 2008 B2
7437730 Goyal Oct 2008 B2
7441261 Slater et al. Oct 2008 B2
7447147 Nguyen et al. Nov 2008 B2
7447197 Terrell et al. Nov 2008 B2
7451197 Davis Nov 2008 B2
7451199 Kandefer et al. Nov 2008 B2
7451201 Alex et al. Nov 2008 B2
7454467 Girouard et al. Nov 2008 B2
7461134 Ambrose Dec 2008 B2
7463587 Rajsic et al. Dec 2008 B2
7464159 Di Luoffo et al. Dec 2008 B2
7464160 Iszlai et al. Dec 2008 B2
7466712 Makishima et al. Dec 2008 B2
7466810 Quon et al. Dec 2008 B1
7467225 Anerousis et al. Dec 2008 B2
7467306 Cartes et al. Dec 2008 B2
7467358 Kang et al. Dec 2008 B2
7475419 Basu et al. Jan 2009 B1
7483945 Blumofe Jan 2009 B2
7484008 Gelvin et al. Jan 2009 B1
7484225 Hugly et al. Jan 2009 B2
7487254 Walsh et al. Feb 2009 B2
7487509 Hugly et al. Feb 2009 B2
7492720 Pruthi et al. Feb 2009 B2
7496494 Altman Feb 2009 B2
7502747 Pardo et al. Mar 2009 B1
7502884 Shah et al. Mar 2009 B1
7503045 Aziz et al. Mar 2009 B1
7505463 Schuba Mar 2009 B2
7512649 Faybishenko et al. Mar 2009 B2
7512894 Hintermeister Mar 2009 B1
7516208 Kerrison Apr 2009 B1
7516221 Souder et al. Apr 2009 B2
7516455 Matheson et al. Apr 2009 B2
7519677 Lowery et al. Apr 2009 B2
7519843 Buterbaugh et al. Apr 2009 B1
7526479 Zenz Apr 2009 B2
7529835 Agronow et al. May 2009 B1
7533141 Nadgir et al. May 2009 B2
7533161 Hugly et al. May 2009 B2
7533172 Traversat et al. May 2009 B2
7533385 Barnes May 2009 B1
7536541 Isaacson May 2009 B2
7543052 Klein Jun 2009 B1
7546553 Bozak et al. Jun 2009 B2
7551614 Teisberg et al. Jun 2009 B2
7554930 Gaddis et al. Jun 2009 B2
7555666 Brundridge et al. Jun 2009 B2
7562143 Fellenstein et al. Jul 2009 B2
7568199 Bozak et al. Jul 2009 B2
7570943 Sorvari et al. Aug 2009 B2
7571438 Jones et al. Aug 2009 B2
7574523 Traversat et al. Aug 2009 B2
7577722 Khandekar et al. Aug 2009 B1
7577834 Traversat et al. Aug 2009 B1
7577959 Nguyen et al. Aug 2009 B2
7580382 Amis et al. Aug 2009 B1
7580919 Hannel Aug 2009 B1
7583607 Steele et al. Sep 2009 B2
7583661 Chaudhuri Sep 2009 B2
7584239 Yan Sep 2009 B1
7584274 Bond et al. Sep 2009 B2
7586841 Vasseur Sep 2009 B2
7590746 Slater et al. Sep 2009 B2
7590747 Coates et al. Sep 2009 B2
7594011 Chandra Sep 2009 B2
7594015 Bozak et al. Sep 2009 B2
7596144 Pong Sep 2009 B2
7596784 Abrams et al. Sep 2009 B2
7599360 Edsall et al. Oct 2009 B2
7606225 Xie et al. Oct 2009 B2
7606245 Ma et al. Oct 2009 B2
7610266 Cascaval Oct 2009 B2
7610289 Muret et al. Oct 2009 B2
7613796 Harvey et al. Nov 2009 B2
7616646 Ma et al. Nov 2009 B1
7620057 Aloni et al. Nov 2009 B1
7620635 Hornick Nov 2009 B2
7620706 Jackson Nov 2009 B2
7624118 Schipunov et al. Nov 2009 B2
7624194 Kakivaya et al. Nov 2009 B2
7627691 Buchsbaum et al. Dec 2009 B1
7631066 Schatz et al. Dec 2009 B1
7631307 Wang et al. Dec 2009 B2
7640353 Shen et al. Dec 2009 B2
7640547 Neiman et al. Dec 2009 B2
7644215 Wallace et al. Jan 2010 B2
7657535 Moyaux et al. Feb 2010 B2
7657597 Arora et al. Feb 2010 B2
7657626 Zwicky Feb 2010 B1
7657677 Huang et al. Feb 2010 B2
7657756 Hall Feb 2010 B2
7657779 Kaminsky Feb 2010 B2
7660887 Reedy et al. Feb 2010 B2
7660922 Harriman Feb 2010 B2
7664110 Lovett et al. Feb 2010 B1
7665090 Tormasov et al. Feb 2010 B1
7668809 Kelly et al. Feb 2010 B1
7673164 Agarwal Mar 2010 B1
7680933 Fatula, Jr. Mar 2010 B2
7685281 Saraiya et al. Mar 2010 B1
7685599 Kanai et al. Mar 2010 B2
7685602 Tran et al. Mar 2010 B1
7689661 Lowery et al. Mar 2010 B2
7693976 Perry et al. Apr 2010 B2
7693993 Sheets et al. Apr 2010 B2
7694076 Lowery et al. Apr 2010 B2
7694305 Karlsson et al. Apr 2010 B2
7698386 Amidon et al. Apr 2010 B2
7698398 Lai Apr 2010 B1
7698430 Jackson Apr 2010 B2
7701948 Rabie et al. Apr 2010 B2
7702779 Gupta et al. Apr 2010 B1
7707088 Schmelzer Apr 2010 B2
7707185 Czezatke Apr 2010 B1
7710936 Morales Barroso May 2010 B2
7711652 Schmelzer May 2010 B2
7716193 Krishnamoorthy May 2010 B2
7716334 Rao et al. May 2010 B2
7719834 Miyamoto et al. May 2010 B2
7721125 Fung May 2010 B2
7725583 Jackson May 2010 B2
7730220 Hasha et al. Jun 2010 B2
7730262 Lowery et al. Jun 2010 B2
7730488 Iizuka et al. Jun 2010 B2
7739308 Baffier et al. Jun 2010 B2
7739541 Rao et al. Jun 2010 B1
7742425 El-Damhougy Jun 2010 B2
7742476 Branda et al. Jun 2010 B2
7743147 Suorsa et al. Jun 2010 B2
7747451 Keohane et al. Jun 2010 B2
RE41440 Briscoe et al. Jul 2010 E
7751433 Dollo et al. Jul 2010 B2
7752258 Lewin et al. Jul 2010 B2
7752624 Crawford, Jr. et al. Jul 2010 B2
7756658 Kulkarni et al. Jul 2010 B2
7757033 Mehrotra Jul 2010 B1
7757236 Singh Jul 2010 B1
7760720 Pullela et al. Jul 2010 B2
7761557 Fellenstein et al. Jul 2010 B2
7761687 Blumrich et al. Jul 2010 B2
7765288 Bainbridge et al. Jul 2010 B2
7765299 Romero Jul 2010 B2
7769620 Fernandez et al. Aug 2010 B1
7769803 Birdwell et al. Aug 2010 B2
7770120 Baudisch Aug 2010 B2
7774331 Barth et al. Aug 2010 B2
7774495 Pabla et al. Aug 2010 B2
7778234 Cooke et al. Aug 2010 B2
7782813 Wheeler et al. Aug 2010 B2
7783777 Pabla et al. Aug 2010 B1
7783786 Lauterbach Aug 2010 B1
7783910 Felter et al. Aug 2010 B2
7788403 Darugar et al. Aug 2010 B2
7788477 Huang et al. Aug 2010 B1
7791894 Bechtolsheim Sep 2010 B2
7792113 Foschiano et al. Sep 2010 B1
7793288 Sameske Sep 2010 B2
7796399 Clayton et al. Sep 2010 B2
7796619 Feldmann et al. Sep 2010 B1
7797367 Gelvin et al. Sep 2010 B1
7797393 Qiu et al. Sep 2010 B2
7801132 Ofek et al. Sep 2010 B2
7802017 Uemura et al. Sep 2010 B2
7805448 Andrzejak et al. Sep 2010 B2
7805575 Agarwal et al. Sep 2010 B1
7810090 Gebhart Oct 2010 B2
7813822 Hoffberg Oct 2010 B1
7827361 Karlsson et al. Nov 2010 B1
7830820 Duke et al. Nov 2010 B2
7831839 Hatakeyama Nov 2010 B2
7840353 Ouksel et al. Nov 2010 B2
7840703 Arimilli et al. Nov 2010 B2
7840810 Eastham Nov 2010 B2
7844687 Gelvin et al. Nov 2010 B1
7844787 Ranganathan et al. Nov 2010 B2
7848262 El-Damhougy Dec 2010 B2
7849139 Wolfson et al. Dec 2010 B2
7849140 Abdel-Aziz et al. Dec 2010 B2
7853880 Porter Dec 2010 B2
7860999 Subramanian et al. Dec 2010 B1
7865614 Lu et al. Jan 2011 B2
7886023 Johnson Feb 2011 B1
7889675 Mack-Crane et al. Feb 2011 B2
7890571 Kriegsman Feb 2011 B1
7890701 Lowery et al. Feb 2011 B2
7891004 Gelvin et al. Feb 2011 B1
RE42262 Stephens, Jr. Mar 2011 E
7899047 Cabrera et al. Mar 2011 B2
7899864 Margulis Mar 2011 B2
7900206 Joshi et al. Mar 2011 B1
7904569 Gelvin et al. Mar 2011 B1
7921169 Jacobs Apr 2011 B2
7925795 Tamir et al. Apr 2011 B2
7930397 Midgley Apr 2011 B2
7934005 Fascenda Apr 2011 B2
7958262 Hasha et al. Jun 2011 B2
7970830 Staggs Jun 2011 B2
7970929 Mahalingaiah Jun 2011 B1
7971204 Jackson Jun 2011 B2
7975032 Lowery et al. Jul 2011 B2
7975035 Popescu et al. Jul 2011 B2
7975110 Spaur et al. Jul 2011 B1
7984137 O'Toole, Jr. et al. Jul 2011 B2
7984183 Andersen et al. Jul 2011 B2
7991817 Dehon et al. Aug 2011 B2
7991922 Hayter et al. Aug 2011 B2
7992151 Warrier et al. Aug 2011 B2
7992983 Nanjo Aug 2011 B2
7995501 Jetcheva et al. Aug 2011 B2
7996458 Nielsen Aug 2011 B2
7996510 Vicente Aug 2011 B2
8000288 Wheeler et al. Aug 2011 B2
8014408 Habetha et al. Sep 2011 B2
8018860 Cook Sep 2011 B1
8019832 De Sousa et al. Sep 2011 B2
8032634 Eppstein Oct 2011 B1
8037202 Yeager et al. Oct 2011 B2
8037475 Jackson Oct 2011 B1
8041773 Abu-Ghazaleh et al. Oct 2011 B2
8055788 Chan et al. Nov 2011 B1
8060552 Hinni et al. Nov 2011 B2
8060619 Saulpaugh Nov 2011 B1
8060760 Shetty et al. Nov 2011 B2
8060775 Sharma et al. Nov 2011 B1
8073978 Sengupta et al. Dec 2011 B2
8078708 Wang et al. Dec 2011 B1
8079118 Gelvin et al. Dec 2011 B2
8082400 Chang et al. Dec 2011 B1
8090880 Hasha et al. Jan 2012 B2
8095600 Hasha et al. Jan 2012 B2
8095601 Hasha et al. Jan 2012 B2
8103543 Zwicky Jan 2012 B1
8108455 Yeager et al. Jan 2012 B2
8108508 Goh et al. Jan 2012 B1
8108512 Howard et al. Jan 2012 B2
8108930 Hoefelmeyer et al. Jan 2012 B2
8122269 Houlihan et al. Feb 2012 B2
8132034 Lambert et al. Mar 2012 B2
8135812 Lowery et al. Mar 2012 B2
8140658 Gelvin et al. Mar 2012 B1
8151103 Jackson Apr 2012 B2
8155113 Agarwal Apr 2012 B1
8156362 Branover et al. Apr 2012 B2
8160077 Traversat et al. Apr 2012 B2
8161391 McClelland et al. Apr 2012 B2
8165120 Maruccia et al. Apr 2012 B2
8166063 Andersen et al. Apr 2012 B2
8166204 Basu et al. Apr 2012 B2
8170040 Konda May 2012 B2
8171136 Petite May 2012 B2
8176189 Traversat et al. May 2012 B2
8176490 Jackson May 2012 B1
8180996 Fullerton et al. May 2012 B2
8185776 Gentes et al. May 2012 B1
8189612 Lemaire et al. May 2012 B2
8194659 Ban Jun 2012 B2
8196133 Kakumani et al. Jun 2012 B2
8199636 Rouyer et al. Jun 2012 B1
8204992 Arora et al. Jun 2012 B2
8205044 Lowery et al. Jun 2012 B2
8205103 Kazama et al. Jun 2012 B2
8205210 Cleary et al. Jun 2012 B2
8244671 Chen et al. Aug 2012 B2
8260893 Bandhole et al. Sep 2012 B1
8261349 Peng Sep 2012 B2
8266321 Johnston-Watt et al. Sep 2012 B2
8271628 Lowery et al. Sep 2012 B2
8271980 Jackson Sep 2012 B2
8275881 Fellenstein et al. Sep 2012 B2
8302100 Deng et al. Oct 2012 B2
8321048 Coss et al. Nov 2012 B1
8346591 Fellenstein et al. Jan 2013 B2
8346908 Vanyukhin et al. Jan 2013 B1
8359397 Traversat et al. Jan 2013 B2
8370898 Jackson Feb 2013 B1
8379425 Fukuoka et al. Feb 2013 B2
8380846 Abu-Ghazaleh et al. Feb 2013 B1
8386622 Jacobson Feb 2013 B2
8392515 Kakivaya et al. Mar 2013 B2
8396757 Fellenstein et al. Mar 2013 B2
8397092 Karnowski Mar 2013 B2
8402540 Kapoor et al. Mar 2013 B2
8407428 Cheriton et al. Mar 2013 B2
8413155 Jackson Apr 2013 B2
8417715 Bruckhaus et al. Apr 2013 B1
8417813 Kakivaya et al. Apr 2013 B2
8429396 Trivedi Apr 2013 B1
8458333 Stoica et al. Jun 2013 B1
8463867 Robertson et al. Jun 2013 B2
8464250 Ansel Jun 2013 B1
8484382 Das et al. Jul 2013 B2
8495201 Klincewicz Jul 2013 B2
8504663 Lowery et al. Aug 2013 B2
8504791 Cheriton et al. Aug 2013 B2
8516470 van Rietschote Aug 2013 B1
8544017 Prael et al. Sep 2013 B1
8554920 Chen et al. Oct 2013 B2
8560639 Murphy et al. Oct 2013 B2
8572326 Lowery et al. Oct 2013 B2
RE44610 Krakirian et al. Nov 2013 E
8578130 DeSota et al. Nov 2013 B2
8584129 Czajkowski Nov 2013 B1
8589517 Hoefelmeyer et al. Nov 2013 B2
8599863 Davis Dec 2013 B2
8601595 Gelvin et al. Dec 2013 B2
8606800 Lagad et al. Dec 2013 B2
8615602 Li et al. Dec 2013 B2
8626820 Levy Jan 2014 B1
8631130 Jackson Jan 2014 B2
8684802 Gross et al. Apr 2014 B1
8701121 Saffre Apr 2014 B2
8726278 Shawver et al. May 2014 B1
8737410 Davis May 2014 B2
8738860 Griffin et al. May 2014 B1
8745275 Ikeya et al. Jun 2014 B2
8745302 Davis et al. Jun 2014 B2
8782120 Jackson Jul 2014 B2
8782231 Jackson Jul 2014 B2
8782321 Harriman et al. Jul 2014 B2
8782654 Jackson Jul 2014 B2
8812400 Faraboschi et al. Aug 2014 B2
8824485 Biswas et al. Sep 2014 B2
8826270 Lewis Sep 2014 B1
8854831 Arnouse Oct 2014 B2
8863143 Jackson Oct 2014 B2
8903964 Breslin Dec 2014 B2
8924560 Pang Dec 2014 B2
8930536 Jackson Jan 2015 B2
8954584 Subbarayan et al. Feb 2015 B1
9008079 Davis et al. Apr 2015 B2
9038078 Jackson May 2015 B2
9054990 Davis Jun 2015 B2
9060060 Lobig Jun 2015 B2
9069611 Jackson Jun 2015 B2
9069929 Borland Jun 2015 B2
9075655 Davis et al. Jul 2015 B2
9075657 Jackson Jul 2015 B2
9077654 Davis Jul 2015 B2
9092594 Borland Jul 2015 B2
9112813 Jackson Aug 2015 B2
9116755 Jackson Aug 2015 B2
9128767 Jackson Sep 2015 B2
9152455 Jackson Oct 2015 B2
9176785 Jackson Nov 2015 B2
9231886 Jackson Jan 2016 B2
9258276 Dalal et al. Feb 2016 B2
9262225 Davis Feb 2016 B2
9268607 Jackson Feb 2016 B2
9288147 Kern Mar 2016 B2
9304896 Chandra et al. Apr 2016 B2
9311269 Davis Apr 2016 B2
9367802 Arndt et al. Jun 2016 B2
9405584 Davis Aug 2016 B2
9413687 Jackson Aug 2016 B2
9438515 McCormick Sep 2016 B2
9450875 Tong Sep 2016 B1
9454403 Davis Sep 2016 B2
9465771 Davis et al. Oct 2016 B2
9479463 Davis Oct 2016 B2
9491064 Jackson Nov 2016 B2
9509552 Davis Nov 2016 B2
9575805 Jackson Feb 2017 B2
9585281 Schnell Feb 2017 B2
9602573 Abu-Ghazaleh et al. Mar 2017 B1
9619296 Jackson Apr 2017 B2
9648102 Davis et al. May 2017 B1
9680770 Davis Jun 2017 B2
9749326 Davis Aug 2017 B2
9778959 Jackson Oct 2017 B2
9785479 Jackson Oct 2017 B2
9792249 Borland Oct 2017 B2
9825860 Hu Nov 2017 B2
9866477 Davis Jan 2018 B2
9876735 Davis Jan 2018 B2
9886322 Jackson Feb 2018 B2
9929976 Davis Mar 2018 B2
9959140 Jackson May 2018 B2
9959141 Jackson May 2018 B2
9961013 Jackson May 2018 B2
9965442 Borland May 2018 B2
9977763 Davis May 2018 B2
9979672 Jackson May 2018 B2
10021806 Schnell Jul 2018 B2
10050970 Davis Aug 2018 B2
10135731 Davis Nov 2018 B2
10140245 Davis et al. Nov 2018 B2
10212092 Dalal et al. Feb 2019 B2
10277531 Jackson Apr 2019 B2
10311014 Dalton Jun 2019 B2
10333862 Jackson Jun 2019 B2
10379909 Jackson Aug 2019 B2
10445146 Jackson Oct 2019 B2
10445148 Jackson Oct 2019 B2
10585704 Jackson Mar 2020 B2
10608949 Jackson Mar 2020 B2
10733028 Jackson Aug 2020 B2
10735505 Abu-Ghazaleh et al. Aug 2020 B2
10871999 Jackson Dec 2020 B2
10951487 Jackson Mar 2021 B2
10977090 Jackson Apr 2021 B2
11132277 Dalton Sep 2021 B2
11134022 Jackson Sep 2021 B2
11144355 Jackson Oct 2021 B2
11356385 Jackson Jun 2022 B2
11467883 Jackson Oct 2022 B2
11494235 Jackson Nov 2022 B2
11496415 Jackson Nov 2022 B2
11522811 Jackson Dec 2022 B2
11522952 Abu-Ghazaleh Dec 2022 B2
11526304 Davis et al. Dec 2022 B2
11533274 Jackson Dec 2022 B2
11537434 Jackson Dec 2022 B2
11537435 Jackson Dec 2022 B2
11630704 Jackson Apr 2023 B2
11650857 Jackson May 2023 B2
11652706 Jackson May 2023 B2
11656907 Jackson May 2023 B2
11658916 Jackson May 2023 B2
11709709 Jackson Jul 2023 B2
11720290 Davis Aug 2023 B2
11762694 Jackson Sep 2023 B2
11765101 Jackson Sep 2023 B2
11831564 Jackson Nov 2023 B2
11861404 Jackson Jan 2024 B2
11886915 Jackson Jan 2024 B2
11960937 Jackson Apr 2024 B2
12008405 Jackson Jun 2024 B2
12009996 Jackson Jun 2024 B2
20010010605 Aoki Aug 2001 A1
20010015733 Sklar Aug 2001 A1
20010023431 Horiguchi Sep 2001 A1
20010032109 Gonyea Oct 2001 A1
20010034752 Kremien Oct 2001 A1
20010037311 McCoy et al. Nov 2001 A1
20010044667 Nakano Nov 2001 A1
20010044759 Kutsumi Nov 2001 A1
20010046227 Matsuhira et al. Nov 2001 A1
20010051929 Suzuki Dec 2001 A1
20010052016 Skene et al. Dec 2001 A1
20010052108 Bowman-Amuah Dec 2001 A1
20020002578 Yamashita Jan 2002 A1
20020002636 Vange et al. Jan 2002 A1
20020004833 Tonouchi Jan 2002 A1
20020004912 Fung Jan 2002 A1
20020007389 Jones et al. Jan 2002 A1
20020010783 Primak et al. Jan 2002 A1
20020016809 Foulger Feb 2002 A1
20020018481 Mor et al. Feb 2002 A1
20020031364 Suzuki et al. Mar 2002 A1
20020032716 Nagato Mar 2002 A1
20020035605 Kenton Mar 2002 A1
20020040391 Chaiken et al. Apr 2002 A1
20020049608 Hartsell et al. Apr 2002 A1
20020052909 Seeds May 2002 A1
20020052961 Yoshimine et al. May 2002 A1
20020053006 Kawamoto May 2002 A1
20020059094 Hosea et al. May 2002 A1
20020059274 Hartsell et al. May 2002 A1
20020062377 Hillman et al. May 2002 A1
20020062451 Scheidt et al. May 2002 A1
20020062465 Goto May 2002 A1
20020065864 Hartsell et al. May 2002 A1
20020083299 Van Huben et al. Jun 2002 A1
20020083352 Fujimoto et al. Jun 2002 A1
20020087611 Tanaka et al. Jul 2002 A1
20020087699 Karagiannis et al. Jul 2002 A1
20020090075 Gabriel Jul 2002 A1
20020091786 Yamaguchi et al. Jul 2002 A1
20020093915 Larson Jul 2002 A1
20020097732 Worster et al. Jul 2002 A1
20020099842 Jennings et al. Jul 2002 A1
20020103681 Tolis Aug 2002 A1
20020103886 Rawson, III Aug 2002 A1
20020107903 Richter et al. Aug 2002 A1
20020107962 Richter et al. Aug 2002 A1
20020116234 Nagasawa Aug 2002 A1
20020116721 Dobes et al. Aug 2002 A1
20020120741 Webb et al. Aug 2002 A1
20020124128 Qiu Sep 2002 A1
20020129160 Habetha Sep 2002 A1
20020129274 Baskey et al. Sep 2002 A1
20020133537 Lau et al. Sep 2002 A1
20020133821 Shteyn Sep 2002 A1
20020137565 Blanco Sep 2002 A1
20020138459 Mandal Sep 2002 A1
20020138635 Redlich et al. Sep 2002 A1
20020138679 Koning Sep 2002 A1
20020143855 Traversat Oct 2002 A1
20020143944 Traversat et al. Oct 2002 A1
20020147663 Walker et al. Oct 2002 A1
20020147771 Traversat et al. Oct 2002 A1
20020147810 Traversat et al. Oct 2002 A1
20020151271 Tatsuji Oct 2002 A1
20020152299 Traversat et al. Oct 2002 A1
20020152305 Jackson et al. Oct 2002 A1
20020156675 Pedone Oct 2002 A1
20020156699 Gray et al. Oct 2002 A1
20020156891 Ulrich et al. Oct 2002 A1
20020156893 Pouyoul et al. Oct 2002 A1
20020156904 Gullotta et al. Oct 2002 A1
20020156984 Padovano Oct 2002 A1
20020159452 Foster et al. Oct 2002 A1
20020161869 Griffin et al. Oct 2002 A1
20020161917 Shapiro et al. Oct 2002 A1
20020166110 Powell Nov 2002 A1
20020166117 Abrams et al. Nov 2002 A1
20020172205 Tagore-Brage et al. Nov 2002 A1
20020173984 Robertson et al. Nov 2002 A1
20020174165 Kawaguchi Nov 2002 A1
20020174227 Hartsell et al. Nov 2002 A1
20020184129 Arena Dec 2002 A1
20020184310 Traversat et al. Dec 2002 A1
20020184311 Traversat et al. Dec 2002 A1
20020184357 Traversat et al. Dec 2002 A1
20020184358 Traversat et al. Dec 2002 A1
20020186656 Vu Dec 2002 A1
20020188657 Traversat et al. Dec 2002 A1
20020194242 Chandrasekaran Dec 2002 A1
20020194384 Habetha Dec 2002 A1
20020194412 Bottom Dec 2002 A1
20020196611 Ho et al. Dec 2002 A1
20020196734 Tanaka et al. Dec 2002 A1
20020198734 Greene et al. Dec 2002 A1
20020198923 Hayes Dec 2002 A1
20030004772 Dutta et al. Jan 2003 A1
20030005130 Cheng Jan 2003 A1
20030005162 Habetha Jan 2003 A1
20030007493 Oi et al. Jan 2003 A1
20030009506 Bril et al. Jan 2003 A1
20030014503 Legout et al. Jan 2003 A1
20030014524 Tormasov Jan 2003 A1
20030014539 Reznick Jan 2003 A1
20030014613 Soni Jan 2003 A1
20030018573 Comas Jan 2003 A1
20030018766 Duvvuru Jan 2003 A1
20030018803 El Batt et al. Jan 2003 A1
20030028443 Ellis Feb 2003 A1
20030028585 Yeager et al. Feb 2003 A1
20030028642 Agarwal Feb 2003 A1
20030028645 Romagnoli Feb 2003 A1
20030028656 Babka Feb 2003 A1
20030033547 Larson et al. Feb 2003 A1
20030036820 Yellepeddy et al. Feb 2003 A1
20030039213 Holtzman Feb 2003 A1
20030039246 Guo et al. Feb 2003 A1
20030041141 Abdelaziz et al. Feb 2003 A1
20030041238 French Feb 2003 A1
20030041266 Ke et al. Feb 2003 A1
20030041308 Ganesan et al. Feb 2003 A1
20030046330 Hayes Mar 2003 A1
20030050924 Faybishenko et al. Mar 2003 A1
20030050959 Faybishenko et al. Mar 2003 A1
20030050989 Marinescu et al. Mar 2003 A1
20030051127 Miwa Mar 2003 A1
20030055894 Yeager et al. Mar 2003 A1
20030055898 Yeager et al. Mar 2003 A1
20030058277 Bowman-Amuah Mar 2003 A1
20030061260 Rajkumar Mar 2003 A1
20030061261 Greene Mar 2003 A1
20030061262 Hahn Mar 2003 A1
20030065703 Aborn Apr 2003 A1
20030065784 Herrod Apr 2003 A1
20030069828 Blazey Apr 2003 A1
20030069918 Lu et al. Apr 2003 A1
20030069949 Chan et al. Apr 2003 A1
20030072263 Peterson Apr 2003 A1
20030074090 Becka Apr 2003 A1
20030076832 Ni Apr 2003 A1
20030081938 Nishimura May 2003 A1
20030084435 Messer May 2003 A1
20030088457 Keil et al. May 2003 A1
20030093255 Freyensee et al. May 2003 A1
20030093624 Arimilli et al. May 2003 A1
20030093647 Mogi May 2003 A1
20030097284 Shinozaki May 2003 A1
20030097429 Wu et al. May 2003 A1
20030097439 Strayer et al. May 2003 A1
20030101084 Perez May 2003 A1
20030103413 Jacobi et al. Jun 2003 A1
20030105655 Kimbrel et al. Jun 2003 A1
20030105721 Ginter et al. Jun 2003 A1
20030110262 Hasan et al. Jun 2003 A1
20030112792 Cranor et al. Jun 2003 A1
20030115562 Martin Jun 2003 A1
20030120472 Lind Jun 2003 A1
20030120701 Pulsipher et al. Jun 2003 A1
20030120704 Tran et al. Jun 2003 A1
20030120710 Pulsipher et al. Jun 2003 A1
20030120780 Zhu Jun 2003 A1
20030126013 Shand Jul 2003 A1
20030126200 Wolff Jul 2003 A1
20030126202 Watt Jul 2003 A1
20030126265 Aziz et al. Jul 2003 A1
20030126283 Prakash et al. Jul 2003 A1
20030131043 Berg et al. Jul 2003 A1
20030131209 Lee Jul 2003 A1
20030135509 Davis Jul 2003 A1
20030135615 Wyatt Jul 2003 A1
20030135621 Romagnoli Jul 2003 A1
20030140190 Mahony et al. Jul 2003 A1
20030144894 Robertson et al. Jul 2003 A1
20030149685 Trossman et al. Aug 2003 A1
20030154112 Neiman et al. Aug 2003 A1
20030158884 Alford Aug 2003 A1
20030158940 Leigh Aug 2003 A1
20030159083 Fukuhara et al. Aug 2003 A1
20030169269 Sasaki et al. Sep 2003 A1
20030172191 Williams Sep 2003 A1
20030177050 Crampton Sep 2003 A1
20030177121 Moona et al. Sep 2003 A1
20030177239 Shinohara Sep 2003 A1
20030177334 King et al. Sep 2003 A1
20030182421 Faybishenko et al. Sep 2003 A1
20030182425 Kurakake Sep 2003 A1
20030182429 Jagels Sep 2003 A1
20030182496 Yoo Sep 2003 A1
20030185229 Shachar et al. Oct 2003 A1
20030187907 Ito Oct 2003 A1
20030188083 Kumar et al. Oct 2003 A1
20030191795 Bernardin et al. Oct 2003 A1
20030191857 Terrell et al. Oct 2003 A1
20030193402 Post et al. Oct 2003 A1
20030195931 Dauger Oct 2003 A1
20030200109 Honda et al. Oct 2003 A1
20030200258 Hayashi Oct 2003 A1
20030202520 Witkowski et al. Oct 2003 A1
20030202709 Simard et al. Oct 2003 A1
20030204709 Rich Oct 2003 A1
20030204773 Petersen et al. Oct 2003 A1
20030204786 Dinker Oct 2003 A1
20030210694 Jayaraman et al. Nov 2003 A1
20030212738 Wookey et al. Nov 2003 A1
20030212792 Raymond Nov 2003 A1
20030216927 Sridhar Nov 2003 A1
20030216951 Ginis et al. Nov 2003 A1
20030217129 Knittel et al. Nov 2003 A1
20030218627 Gusler Nov 2003 A1
20030227934 White Dec 2003 A1
20030231624 Alappat et al. Dec 2003 A1
20030231647 Petrovykh Dec 2003 A1
20030233378 Butler et al. Dec 2003 A1
20030233446 Earl Dec 2003 A1
20030236745 Hartsell et al. Dec 2003 A1
20030236854 Rom Dec 2003 A1
20030236880 Srivastava Dec 2003 A1
20040003077 Bantz et al. Jan 2004 A1
20040003086 Parham et al. Jan 2004 A1
20040009751 Michaelis Jan 2004 A1
20040010544 Slater et al. Jan 2004 A1
20040010550 Gopinath Jan 2004 A1
20040010592 Carver et al. Jan 2004 A1
20040011761 Hensley Jan 2004 A1
20040013113 Singh et al. Jan 2004 A1
20040015579 Cooper et al. Jan 2004 A1
20040015973 Skovira Jan 2004 A1
20040017806 Yazdy et al. Jan 2004 A1
20040017808 Forbes et al. Jan 2004 A1
20040021678 Ullah Feb 2004 A1
20040024853 Cates Feb 2004 A1
20040030741 Wolton et al. Feb 2004 A1
20040030743 Hugly et al. Feb 2004 A1
20040030794 Hugly et al. Feb 2004 A1
20040030938 Barr et al. Feb 2004 A1
20040034873 Zenoni Feb 2004 A1
20040039815 Evans et al. Feb 2004 A1
20040042487 Ossman Mar 2004 A1
20040043755 Shimooka Mar 2004 A1
20040044718 Ferstl et al. Mar 2004 A1
20040044727 Abdelaziz et al. Mar 2004 A1
20040054630 Ginter et al. Mar 2004 A1
20040054777 Ackaouy et al. Mar 2004 A1
20040054780 Romero Mar 2004 A1
20040054807 Harvey et al. Mar 2004 A1
20040054999 Willen Mar 2004 A1
20040064511 Abdel-Aziz et al. Apr 2004 A1
20040064512 Arora et al. Apr 2004 A1
20040064568 Arora et al. Apr 2004 A1
20040064817 Shibayama et al. Apr 2004 A1
20040066782 Nassar Apr 2004 A1
20040068411 Scanlan Apr 2004 A1
20040068676 Larson et al. Apr 2004 A1
20040068730 Miller et al. Apr 2004 A1
20040071147 Roadknight et al. Apr 2004 A1
20040073650 Nakamura Apr 2004 A1
20040073854 Windl Apr 2004 A1
20040073908 Benejam et al. Apr 2004 A1
20040081148 Yamada Apr 2004 A1
20040083287 Gao et al. Apr 2004 A1
20040088347 Yeager et al. May 2004 A1
20040088348 Yeager et al. May 2004 A1
20040088369 Yeager et al. May 2004 A1
20040098391 Robertson et al. May 2004 A1
20040098424 Seidenberg May 2004 A1
20040098447 Verbeke et al. May 2004 A1
20040103078 Smedberg et al. May 2004 A1
20040103305 Ginter et al. May 2004 A1
20040103339 Chalasani et al. May 2004 A1
20040103413 Mandava et al. May 2004 A1
20040107123 Haffner Jun 2004 A1
20040107273 Biran et al. Jun 2004 A1
20040107281 Bose et al. Jun 2004 A1
20040109428 Krishnamurthy Jun 2004 A1
20040111307 Demsky et al. Jun 2004 A1
20040111612 Choi et al. Jun 2004 A1
20040117610 Hensley Jun 2004 A1
20040117768 Chang et al. Jun 2004 A1
20040121777 Schwarz et al. Jun 2004 A1
20040122970 Kawaguchi et al. Jun 2004 A1
20040128495 Hensley Jul 2004 A1
20040128670 Robinson et al. Jul 2004 A1
20040133620 Habetha Jul 2004 A1
20040133640 Yeager et al. Jul 2004 A1
20040133665 Deboer et al. Jul 2004 A1
20040133703 Habetha Jul 2004 A1
20040135780 Nims Jul 2004 A1
20040139202 Talwar et al. Jul 2004 A1
20040139464 Ellis et al. Jul 2004 A1
20040141521 George Jul 2004 A1
20040143664 Usa et al. Jul 2004 A1
20040148326 Nadgir Jul 2004 A1
20040148390 Cleary et al. Jul 2004 A1
20040150664 Baudisch Aug 2004 A1
20040151181 Chu Aug 2004 A1
20040153563 Shay et al. Aug 2004 A1
20040158637 Lee Aug 2004 A1
20040162871 Pabla et al. Aug 2004 A1
20040165588 Pandya Aug 2004 A1
20040172464 Nag Sep 2004 A1
20040179528 Powers et al. Sep 2004 A1
20040181370 Froehlich et al. Sep 2004 A1
20040181476 Smith et al. Sep 2004 A1
20040189677 Amann et al. Sep 2004 A1
20040193674 Kurosawa et al. Sep 2004 A1
20040194061 Fujino Sep 2004 A1
20040194098 Chung et al. Sep 2004 A1
20040196308 Blomquist Oct 2004 A1
20040199566 Carlson Oct 2004 A1
20040199621 Lau Oct 2004 A1
20040199646 Susai et al. Oct 2004 A1
20040199918 Skovira Oct 2004 A1
20040203670 King et al. Oct 2004 A1
20040204978 Rayrole Oct 2004 A1
20040205101 Radhakrishnan Oct 2004 A1
20040205206 Naik et al. Oct 2004 A1
20040210624 Andrzejak et al. Oct 2004 A1
20040210632 Carlson Oct 2004 A1
20040210663 Phillips Oct 2004 A1
20040210693 Zeitler et al. Oct 2004 A1
20040213395 Ishii et al. Oct 2004 A1
20040215780 Kawato Oct 2004 A1
20040215858 Armstrong Oct 2004 A1
20040215864 Arimilli et al. Oct 2004 A1
20040215991 McAfee et al. Oct 2004 A1
20040216121 Jones et al. Oct 2004 A1
20040218615 Griffin et al. Nov 2004 A1
20040221038 Clarke et al. Nov 2004 A1
20040236852 Birkestrand et al. Nov 2004 A1
20040243378 Schnatterly et al. Dec 2004 A1
20040243466 Trzybinski et al. Dec 2004 A1
20040244006 Kaufman et al. Dec 2004 A1
20040246900 Zhang et al. Dec 2004 A1
20040248576 Ghiglino Dec 2004 A1
20040260701 Lehikoinen Dec 2004 A1
20040260746 Brown et al. Dec 2004 A1
20040267486 Percer et al. Dec 2004 A1
20040267897 Hill Dec 2004 A1
20040267901 Gomez Dec 2004 A1
20040268035 Ueno Dec 2004 A1
20040268315 Gouriou Dec 2004 A1
20050005200 Matena Jan 2005 A1
20050010465 Drew et al. Jan 2005 A1
20050010608 Horikawa Jan 2005 A1
20050015378 Gammel et al. Jan 2005 A1
20050015621 Ashley et al. Jan 2005 A1
20050018604 Dropps et al. Jan 2005 A1
20050018606 Dropps et al. Jan 2005 A1
20050018663 Dropps et al. Jan 2005 A1
20050021291 Retlich Jan 2005 A1
20050021371 Basone et al. Jan 2005 A1
20050021606 Davies et al. Jan 2005 A1
20050021728 Sugimoto Jan 2005 A1
20050021759 Gupta et al. Jan 2005 A1
20050021862 Schroeder et al. Jan 2005 A1
20050022188 Tameshige et al. Jan 2005 A1
20050027863 Talwar et al. Feb 2005 A1
20050027864 Bozak et al. Feb 2005 A1
20050027865 Bozak et al. Feb 2005 A1
20050027870 Trebes et al. Feb 2005 A1
20050030954 Dropps et al. Feb 2005 A1
20050033742 Kamvar et al. Feb 2005 A1
20050033890 Lee Feb 2005 A1
20050034070 Meir et al. Feb 2005 A1
20050038808 Kutch Feb 2005 A1
20050038835 Chidambaran et al. Feb 2005 A1
20050039171 Avakian Feb 2005 A1
20050044167 Kobayashi Feb 2005 A1
20050044195 Westfall Feb 2005 A1
20050044205 Sankaranarayan et al. Feb 2005 A1
20050044226 McDermott et al. Feb 2005 A1
20050044228 Birkestrand et al. Feb 2005 A1
20050049884 Hunt et al. Mar 2005 A1
20050050057 Mital et al. Mar 2005 A1
20050050200 Mizoguchi Mar 2005 A1
20050050270 Horn et al. Mar 2005 A1
20050054354 Roman et al. Mar 2005 A1
20050055322 Masters et al. Mar 2005 A1
20050055442 Reeves Mar 2005 A1
20050055694 Lee Mar 2005 A1
20050055697 Buco Mar 2005 A1
20050055698 Sasaki et al. Mar 2005 A1
20050060360 Doyle et al. Mar 2005 A1
20050060608 Marchand Mar 2005 A1
20050065826 Baker et al. Mar 2005 A1
20050066302 Kanade Mar 2005 A1
20050066358 Anderson et al. Mar 2005 A1
20050068922 Jalali Mar 2005 A1
20050071843 Guo et al. Mar 2005 A1
20050076145 Ben-Zvi et al. Apr 2005 A1
20050077921 Percer et al. Apr 2005 A1
20050080845 Gopinath Apr 2005 A1
20050080891 Cauthron Apr 2005 A1
20050080930 Joseph Apr 2005 A1
20050081210 Day Apr 2005 A1
20050086300 Yeager et al. Apr 2005 A1
20050086356 Shah Apr 2005 A1
20050091505 Riley et al. Apr 2005 A1
20050097560 Rolia et al. May 2005 A1
20050102396 Hipp May 2005 A1
20050102400 Nakahara May 2005 A1
20050102683 Branson May 2005 A1
20050105538 Perera et al. May 2005 A1
20050108407 Johnson et al. May 2005 A1
20050108703 Hellier May 2005 A1
20050113203 Mueller et al. May 2005 A1
20050114460 Chen May 2005 A1
20050114478 Popescu et al. May 2005 A1
20050114551 Basu et al. May 2005 A1
20050114862 Bisdikian et al. May 2005 A1
20050120160 Plouffe et al. Jun 2005 A1
20050125213 Chen et al. Jun 2005 A1
20050125537 Martins et al. Jun 2005 A1
20050125538 Tawil Jun 2005 A1
20050131898 Fatula, Jr. Jun 2005 A1
20050132378 Horvitz et al. Jun 2005 A1
20050132379 Sankaran et al. Jun 2005 A1
20050138618 Gebhart Jun 2005 A1
20050141424 Lim et al. Jun 2005 A1
20050144315 George et al. Jun 2005 A1
20050144619 Newman Jun 2005 A1
20050149940 Calinescu Jul 2005 A1
20050154861 Arimilli et al. Jul 2005 A1
20050155033 Di Luoffo et al. Jul 2005 A1
20050156732 Matsumura Jul 2005 A1
20050160137 Ishikawa et al. Jul 2005 A1
20050160413 Broussard Jul 2005 A1
20050160424 Broussard Jul 2005 A1
20050163143 Kalantar et al. Jul 2005 A1
20050165925 Dan et al. Jul 2005 A1
20050169179 Antal Aug 2005 A1
20050172291 Das et al. Aug 2005 A1
20050177600 Eilam et al. Aug 2005 A1
20050187866 Lee Aug 2005 A1
20050188088 Fellenstein et al. Aug 2005 A1
20050188089 Lichtenstein et al. Aug 2005 A1
20050188091 Szabo et al. Aug 2005 A1
20050190236 Ishimoto Sep 2005 A1
20050192771 Fischer et al. Sep 2005 A1
20050193103 Drabik Sep 2005 A1
20050193225 Macbeth Sep 2005 A1
20050193231 Scheuren Sep 2005 A1
20050195075 McGraw Sep 2005 A1
20050197877 Kalinoski Sep 2005 A1
20050198200 Subramanian et al. Sep 2005 A1
20050198516 Marr Sep 2005 A1
20050202922 Thomas Sep 2005 A1
20050203761 Barr Sep 2005 A1
20050204040 Ferri et al. Sep 2005 A1
20050206917 Ferlitsch Sep 2005 A1
20050209892 Miller Sep 2005 A1
20050210470 Chung et al. Sep 2005 A1
20050213507 Banerjee et al. Sep 2005 A1
20050213560 Duvvury Sep 2005 A1
20050222885 Chen et al. Oct 2005 A1
20050228852 Santos et al. Oct 2005 A1
20050228856 Swildens Oct 2005 A1
20050228892 Riley et al. Oct 2005 A1
20050234846 Davidson et al. Oct 2005 A1
20050235137 Barr et al. Oct 2005 A1
20050235150 Kaler et al. Oct 2005 A1
20050240688 Moerman et al. Oct 2005 A1
20050243867 Petite Nov 2005 A1
20050246705 Etelson et al. Nov 2005 A1
20050249341 Mahone et al. Nov 2005 A1
20050256942 McCardle et al. Nov 2005 A1
20050256946 Childress et al. Nov 2005 A1
20050259397 Bash et al. Nov 2005 A1
20050259683 Bishop Nov 2005 A1
20050262495 Fung et al. Nov 2005 A1
20050262508 Asano et al. Nov 2005 A1
20050267948 Mckinley et al. Dec 2005 A1
20050268063 Diao et al. Dec 2005 A1
20050278392 Hansen et al. Dec 2005 A1
20050278760 Dewar et al. Dec 2005 A1
20050283481 Rosenbach Dec 2005 A1
20050283534 Bigagli et al. Dec 2005 A1
20050283782 Lu Dec 2005 A1
20050283822 Appleby et al. Dec 2005 A1
20050288961 Tabrizi Dec 2005 A1
20050289540 Nguyen et al. Dec 2005 A1
20060002311 Iwanaga et al. Jan 2006 A1
20060008256 Khedouri et al. Jan 2006 A1
20060010445 Petersen et al. Jan 2006 A1
20060013132 Garnett et al. Jan 2006 A1
20060013218 Shore et al. Jan 2006 A1
20060015555 Douglass et al. Jan 2006 A1
20060015637 Chung Jan 2006 A1
20060015651 Freimuth Jan 2006 A1
20060015773 Singh et al. Jan 2006 A1
20060023245 Sato et al. Feb 2006 A1
20060028991 Tan et al. Feb 2006 A1
20060029053 Roberts et al. Feb 2006 A1
20060031379 Kasriel et al. Feb 2006 A1
20060031547 Tsui et al. Feb 2006 A1
20060031813 Bishop et al. Feb 2006 A1
20060036743 Deng et al. Feb 2006 A1
20060037016 Saha et al. Feb 2006 A1
20060039246 King et al. Feb 2006 A1
20060041444 Flores et al. Feb 2006 A1
20060047920 Moore et al. Mar 2006 A1
20060048157 Dawson et al. Mar 2006 A1
20060053215 Sharma Mar 2006 A1
20060053216 Deokar et al. Mar 2006 A1
20060056291 Baker et al. Mar 2006 A1
20060056373 Legg Mar 2006 A1
20060059253 Goodman et al. Mar 2006 A1
20060063690 Billiauw et al. Mar 2006 A1
20060069261 Bonneau Mar 2006 A1
20060069621 Chang Mar 2006 A1
20060069671 Conley et al. Mar 2006 A1
20060069774 Chen et al. Mar 2006 A1
20060069926 Ginter et al. Mar 2006 A1
20060074925 Bixby Apr 2006 A1
20060074940 Craft et al. Apr 2006 A1
20060088015 Kakivaya et al. Apr 2006 A1
20060089894 Balk et al. Apr 2006 A1
20060090003 Kakivaya et al. Apr 2006 A1
20060090025 Tufford et al. Apr 2006 A1
20060090136 Miller et al. Apr 2006 A1
20060092942 Newson May 2006 A1
20060095917 Black-Ziegelbein et al. May 2006 A1
20060097863 Horowitz et al. May 2006 A1
20060112184 Kuo May 2006 A1
20060112308 Crawford May 2006 A1
20060117064 Wilson May 2006 A1
20060117208 Davidson Jun 2006 A1
20060117317 Crawford et al. Jun 2006 A1
20060120322 Lindskog Jun 2006 A1
20060120411 Basu Jun 2006 A1
20060126619 Teisberg et al. Jun 2006 A1
20060126667 Smith et al. Jun 2006 A1
20060129667 Anderson Jun 2006 A1
20060129687 Goldszmidt et al. Jun 2006 A1
20060136235 Keohane et al. Jun 2006 A1
20060136570 Pandya Jun 2006 A1
20060136908 Gebhart et al. Jun 2006 A1
20060136928 Crawford et al. Jun 2006 A1
20060136929 Miller et al. Jun 2006 A1
20060140211 Huang et al. Jun 2006 A1
20060143350 Miloushev et al. Jun 2006 A1
20060149695 Bossman et al. Jul 2006 A1
20060153191 Rajsic et al. Jul 2006 A1
20060155740 Chen et al. Jul 2006 A1
20060155912 Singh et al. Jul 2006 A1
20060156273 Narayan et al. Jul 2006 A1
20060159088 Aghvami et al. Jul 2006 A1
20060161466 Trinon et al. Jul 2006 A1
20060161585 Clarke et al. Jul 2006 A1
20060165040 Rathod Jul 2006 A1
20060165074 Modi Jul 2006 A1
20060168107 Balan et al. Jul 2006 A1
20060168224 Midgley Jul 2006 A1
20060173730 Birkestrand Aug 2006 A1
20060174342 Zaheer et al. Aug 2006 A1
20060179241 Clark et al. Aug 2006 A1
20060182119 Li Aug 2006 A1
20060184939 Sahoo Aug 2006 A1
20060189349 Montulli et al. Aug 2006 A1
20060190775 Aggarwal et al. Aug 2006 A1
20060190975 Gonzalez Aug 2006 A1
20060200773 Nocera et al. Sep 2006 A1
20060206621 Toebes Sep 2006 A1
20060208870 Dousson Sep 2006 A1
20060212332 Jackson Sep 2006 A1
20060212333 Jackson Sep 2006 A1
20060212334 Jackson Sep 2006 A1
20060212740 Jackson Sep 2006 A1
20060218301 O'Toole et al. Sep 2006 A1
20060224725 Bali et al. Oct 2006 A1
20060224740 Sievers-Tostes Oct 2006 A1
20060224741 Jackson Oct 2006 A1
20060227810 Childress et al. Oct 2006 A1
20060229920 Favorel et al. Oct 2006 A1
20060230140 Aoyama et al. Oct 2006 A1
20060230149 Jackson Oct 2006 A1
20060236368 Raja et al. Oct 2006 A1
20060236371 Fish Oct 2006 A1
20060248141 Mukherjee Nov 2006 A1
20060248197 Evans et al. Nov 2006 A1
20060248359 Fung Nov 2006 A1
20060250971 Gammenthaler et al. Nov 2006 A1
20060251419 Zadikian et al. Nov 2006 A1
20060253570 Biswas et al. Nov 2006 A1
20060259734 Sheu et al. Nov 2006 A1
20060265508 Angel et al. Nov 2006 A1
20060265609 Fung Nov 2006 A1
20060268742 Chu Nov 2006 A1
20060271552 McChesney et al. Nov 2006 A1
20060271928 Gao et al. Nov 2006 A1
20060277278 Hegde Dec 2006 A1
20060282505 Hasha et al. Dec 2006 A1
20060282547 Hasha et al. Dec 2006 A1
20060294219 Ogawa Dec 2006 A1
20060294238 Naik et al. Dec 2006 A1
20070003051 Kiss et al. Jan 2007 A1
20070006001 Isobe et al. Jan 2007 A1
20070011224 Mena et al. Jan 2007 A1
20070011302 Groner et al. Jan 2007 A1
20070022425 Jackson Jan 2007 A1
20070028244 Landis et al. Feb 2007 A1
20070033292 Sull et al. Feb 2007 A1
20070033533 Sull et al. Feb 2007 A1
20070041335 Znamova et al. Feb 2007 A1
20070043591 Meretei Feb 2007 A1
20070044010 Sull et al. Feb 2007 A1
20070047195 Merkin et al. Mar 2007 A1
20070050777 Hutchinson et al. Mar 2007 A1
20070061441 Landis Mar 2007 A1
20070067366 Landis Mar 2007 A1
20070067435 Landis et al. Mar 2007 A1
20070067766 Tal Mar 2007 A1
20070076653 Park et al. Apr 2007 A1
20070081315 Mondor et al. Apr 2007 A1
20070083899 Compton et al. Apr 2007 A1
20070088822 Coile et al. Apr 2007 A1
20070094002 Berstis Apr 2007 A1
20070094486 Moore et al. Apr 2007 A1
20070094665 Jackson Apr 2007 A1
20070094691 Gazdzinski Apr 2007 A1
20070109968 Hussain et al. May 2007 A1
20070118496 Bornhoevd May 2007 A1
20070124344 Rajakannimariyan et al. May 2007 A1
20070130397 Tsu Jun 2007 A1
20070143824 Shahbazi Jun 2007 A1
20070150426 Asher et al. Jun 2007 A1
20070150444 Chesnais et al. Jun 2007 A1
20070155406 Dowling et al. Jul 2007 A1
20070174390 Silvain et al. Jul 2007 A1
20070180310 Johnson et al. Aug 2007 A1
20070180380 Khavari et al. Aug 2007 A1
20070204036 Mohaban et al. Aug 2007 A1
20070209072 Chen Sep 2007 A1
20070220520 Tajima Sep 2007 A1
20070226313 Li et al. Sep 2007 A1
20070226795 Conti et al. Sep 2007 A1
20070233828 Gilbert et al. Oct 2007 A1
20070237115 Bae Oct 2007 A1
20070240162 Coleman et al. Oct 2007 A1
20070253017 Czyszczewski et al. Nov 2007 A1
20070260716 Gnanasambandam et al. Nov 2007 A1
20070264986 Warrillow et al. Nov 2007 A1
20070266136 Esfahany et al. Nov 2007 A1
20070268909 Chen Nov 2007 A1
20070271375 Hwang Nov 2007 A1
20070280230 Park Dec 2007 A1
20070286009 Norman Dec 2007 A1
20070288585 Sekiguchi et al. Dec 2007 A1
20070297350 Eilam et al. Dec 2007 A1
20070299946 El-Damhougy et al. Dec 2007 A1
20070299947 El-Damhougy et al. Dec 2007 A1
20070299950 Kulkarni et al. Dec 2007 A1
20080013453 Chiang et al. Jan 2008 A1
20080016198 Johnston-Watt et al. Jan 2008 A1
20080034082 McKinney Feb 2008 A1
20080040463 Brown et al. Feb 2008 A1
20080052437 Loffink et al. Feb 2008 A1
20080059782 Kruse et al. Mar 2008 A1
20080065835 Iacobovici Mar 2008 A1
20080075089 Evans et al. Mar 2008 A1
20080082663 Mouli et al. Apr 2008 A1
20080089358 Basso et al. Apr 2008 A1
20080104231 Dey et al. May 2008 A1
20080104264 Duerk et al. May 2008 A1
20080126523 Tantrum May 2008 A1
20080140771 Vass et al. Jun 2008 A1
20080140930 Hotchkiss Jun 2008 A1
20080155070 El-Damhougy et al. Jun 2008 A1
20080155100 Ahmed et al. Jun 2008 A1
20080159745 Segal Jul 2008 A1
20080162691 Zhang et al. Jul 2008 A1
20080168451 Challenger et al. Jul 2008 A1
20080183865 Appleby et al. Jul 2008 A1
20080183882 Flynn et al. Jul 2008 A1
20080184248 Barua et al. Jul 2008 A1
20080186965 Zheng et al. Aug 2008 A1
20080196043 Feinleib Aug 2008 A1
20080199133 Takizawa et al. Aug 2008 A1
20080212273 Bechtolsheim Sep 2008 A1
20080212276 Bottom et al. Sep 2008 A1
20080215730 Sundaram et al. Sep 2008 A1
20080216082 Eilam et al. Sep 2008 A1
20080217021 Lembcke et al. Sep 2008 A1
20080222434 Shimizu et al. Sep 2008 A1
20080232378 Moorthy Sep 2008 A1
20080235443 Chow et al. Sep 2008 A1
20080235702 Eilam et al. Sep 2008 A1
20080239649 Bradicich Oct 2008 A1
20080243634 Dworkin et al. Oct 2008 A1
20080250181 Li et al. Oct 2008 A1
20080255953 Chang et al. Oct 2008 A1
20080259555 Bechtolsheim et al. Oct 2008 A1
20080259788 Wang et al. Oct 2008 A1
20080263131 Hinni et al. Oct 2008 A1
20080263558 Lin et al. Oct 2008 A1
20080266793 Lee Oct 2008 A1
20080270599 Tamir et al. Oct 2008 A1
20080270731 Bryant et al. Oct 2008 A1
20080279167 Cardei et al. Nov 2008 A1
20080288646 Hasha et al. Nov 2008 A1
20080288659 Hasha et al. Nov 2008 A1
20080288660 Balasubramanian et al. Nov 2008 A1
20080288664 Pettey et al. Nov 2008 A1
20080288683 Ramey Nov 2008 A1
20080288873 McCardle et al. Nov 2008 A1
20080289029 Kim et al. Nov 2008 A1
20080301226 Cleary et al. Dec 2008 A1
20080301379 Pong Dec 2008 A1
20080301794 Lee Dec 2008 A1
20080304481 Gurney Dec 2008 A1
20080310848 Yasuda et al. Dec 2008 A1
20080313293 Jacobs Dec 2008 A1
20080313369 Verdoorn et al. Dec 2008 A1
20080313482 Karlapalem et al. Dec 2008 A1
20080320121 Altaf et al. Dec 2008 A1
20080320161 Maruccia et al. Dec 2008 A1
20080320482 Dawson Dec 2008 A1
20090010153 Filsfils et al. Jan 2009 A1
20090021907 Mann et al. Jan 2009 A1
20090043809 Fakhouri et al. Feb 2009 A1
20090043888 Jackson Feb 2009 A1
20090044036 Merkin Feb 2009 A1
20090049443 Powers et al. Feb 2009 A1
20090055542 Zhao et al. Feb 2009 A1
20090055691 Ouksel et al. Feb 2009 A1
20090063443 Arimilli et al. Mar 2009 A1
20090063690 Verthein et al. Mar 2009 A1
20090064287 Bagepalli et al. Mar 2009 A1
20090070771 Yuyitung et al. Mar 2009 A1
20090080428 Witkowski et al. Mar 2009 A1
20090083390 Abu-Ghazaleh et al. Mar 2009 A1
20090089410 Vicente et al. Apr 2009 A1
20090094380 Qiu et al. Apr 2009 A1
20090097200 Sharma et al. Apr 2009 A1
20090100133 Giulio et al. Apr 2009 A1
20090103501 Farrag et al. Apr 2009 A1
20090105059 Dorry et al. Apr 2009 A1
20090113056 Tameshige et al. Apr 2009 A1
20090113130 He et al. Apr 2009 A1
20090133129 Jeong et al. May 2009 A1
20090135751 Hodges et al. May 2009 A1
20090135835 Gallatin et al. May 2009 A1
20090138594 Fellenstein et al. May 2009 A1
20090150566 Malkhi Jun 2009 A1
20090158070 Gruendler Jun 2009 A1
20090172423 Song et al. Jul 2009 A1
20090178132 Hudis et al. Jul 2009 A1
20090182836 Aviles Jul 2009 A1
20090187425 Thompson et al. Jul 2009 A1
20090198958 Arimilli et al. Aug 2009 A1
20090204834 Hendin et al. Aug 2009 A1
20090204837 Raval et al. Aug 2009 A1
20090210356 Abrams et al. Aug 2009 A1
20090210495 Wolfson et al. Aug 2009 A1
20090216881 Lovy et al. Aug 2009 A1
20090216910 Duchesneau Aug 2009 A1
20090216920 Lauterbach et al. Aug 2009 A1
20090217329 Riedl et al. Aug 2009 A1
20090219827 Chen et al. Sep 2009 A1
20090222884 Shaji et al. Sep 2009 A1
20090225360 Shirai Sep 2009 A1
20090225751 Koenck et al. Sep 2009 A1
20090234917 Despotovic et al. Sep 2009 A1
20090234962 Strong et al. Sep 2009 A1
20090234974 Arndt et al. Sep 2009 A1
20090235104 Fung Sep 2009 A1
20090238349 Pezzutti Sep 2009 A1
20090240547 Fellenstein et al. Sep 2009 A1
20090248943 Jiang et al. Oct 2009 A1
20090251867 Sharma Oct 2009 A1
20090257440 Yan Oct 2009 A1
20090259606 Seah et al. Oct 2009 A1
20090259863 Williams et al. Oct 2009 A1
20090259864 Li et al. Oct 2009 A1
20090265045 Coxe, III Oct 2009 A1
20090271656 Yokota et al. Oct 2009 A1
20090276666 Haley et al. Nov 2009 A1
20090279518 Falk et al. Nov 2009 A1
20090282274 Langgood et al. Nov 2009 A1
20090282419 Mejdrich et al. Nov 2009 A1
20090285136 Sun et al. Nov 2009 A1
20090287835 Jacobson et al. Nov 2009 A1
20090292824 Marashi et al. Nov 2009 A1
20090300608 Ferris et al. Dec 2009 A1
20090313390 Ahuja et al. Dec 2009 A1
20090316687 Kruppa et al. Dec 2009 A1
20090319684 Kakivaya et al. Dec 2009 A1
20090323691 Johnson Dec 2009 A1
20090327079 Parker et al. Dec 2009 A1
20090327489 Swildens et al. Dec 2009 A1
20100005331 Somasundaram et al. Jan 2010 A1
20100008038 Coglitore Jan 2010 A1
20100008365 Porat Jan 2010 A1
20100026408 Shau Feb 2010 A1
20100036945 Allibhoy et al. Feb 2010 A1
20100040053 Gottumukkula et al. Feb 2010 A1
20100049822 Davies et al. Feb 2010 A1
20100049931 Jacobson et al. Feb 2010 A1
20100051391 Jahkonen Mar 2010 A1
20100070675 Pong Mar 2010 A1
20100082788 Mundy Apr 2010 A1
20100088205 Robertson Apr 2010 A1
20100088490 Chakradhar Apr 2010 A1
20100091676 Moran et al. Apr 2010 A1
20100103837 Jungck et al. Apr 2010 A1
20100106987 Lambert et al. Apr 2010 A1
20100114531 Korn et al. May 2010 A1
20100118880 Kunz et al. May 2010 A1
20100121932 Joshi et al. May 2010 A1
20100121947 Pirzada et al. May 2010 A1
20100122251 Karc May 2010 A1
20100125742 Ohtani May 2010 A1
20100125915 Hall et al. May 2010 A1
20100131324 Ferris et al. May 2010 A1
20100131624 Ferris May 2010 A1
20100138481 Behrens Jun 2010 A1
20100153546 Clubb et al. Jun 2010 A1
20100158005 Mukhopadhyay et al. Jun 2010 A1
20100161909 Nation et al. Jun 2010 A1
20100165983 Aybay et al. Jul 2010 A1
20100169477 Stienhans et al. Jul 2010 A1
20100169479 Jeong et al. Jul 2010 A1
20100169888 Hare et al. Jul 2010 A1
20100174604 Mattingly et al. Jul 2010 A1
20100174813 Hildreth et al. Jul 2010 A1
20100198972 Umbehocker Aug 2010 A1
20100198985 Kanevsky Aug 2010 A1
20100217801 Leighton et al. Aug 2010 A1
20100218194 Dallman et al. Aug 2010 A1
20100220732 Hussain et al. Sep 2010 A1
20100223332 Maxemchuk et al. Sep 2010 A1
20100228848 Kis et al. Sep 2010 A1
20100235234 Shuster Sep 2010 A1
20100250914 Abdul et al. Sep 2010 A1
20100262650 Chauhan Oct 2010 A1
20100265650 Chen et al. Oct 2010 A1
20100281166 Buyya et al. Nov 2010 A1
20100281246 Bristow et al. Nov 2010 A1
20100299548 Chadirchi et al. Nov 2010 A1
20100302129 Kastrup et al. Dec 2010 A1
20100308897 Evoy et al. Dec 2010 A1
20100312910 Lin et al. Dec 2010 A1
20100312969 Yamazaki et al. Dec 2010 A1
20100318665 Demmer et al. Dec 2010 A1
20100318812 Auradkar et al. Dec 2010 A1
20100325371 Jagadish et al. Dec 2010 A1
20100332262 Horvitz et al. Dec 2010 A1
20100333116 Prahlad Dec 2010 A1
20110023104 Franklin Jan 2011 A1
20110026397 Saltsidis et al. Feb 2011 A1
20110029644 Gelvin et al. Feb 2011 A1
20110029652 Chhuor et al. Feb 2011 A1
20110035491 Gelvin et al. Feb 2011 A1
20110055627 Zawacki et al. Mar 2011 A1
20110058573 Balakavi et al. Mar 2011 A1
20110075369 Sun et al. Mar 2011 A1
20110082928 Hasha et al. Apr 2011 A1
20110090633 Rabinovitz Apr 2011 A1
20110103391 Davis May 2011 A1
20110113083 Shahar May 2011 A1
20110113115 Chang et al. May 2011 A1
20110119344 Eustis May 2011 A1
20110123014 Smith May 2011 A1
20110138046 Bonnier et al. Jun 2011 A1
20110145393 Ben-Zvi et al. Jun 2011 A1
20110153953 Khemani et al. Jun 2011 A1
20110154318 Oshins et al. Jun 2011 A1
20110154371 Beale Jun 2011 A1
20110167110 Hoffberg et al. Jul 2011 A1
20110173295 Bakke et al. Jul 2011 A1
20110173612 El Zur et al. Jul 2011 A1
20110179134 Mayo et al. Jul 2011 A1
20110185370 Tamir et al. Jul 2011 A1
20110188378 Collins Aug 2011 A1
20110191514 Wu et al. Aug 2011 A1
20110191610 Agarwal et al. Aug 2011 A1
20110197012 Liao et al. Aug 2011 A1
20110210975 Wong et al. Sep 2011 A1
20110213869 Korsunsky et al. Sep 2011 A1
20110231510 Korsunsky et al. Sep 2011 A1
20110231564 Korsunsky et al. Sep 2011 A1
20110238841 Kakivaya et al. Sep 2011 A1
20110238855 Korsunsky et al. Sep 2011 A1
20110239014 Karnowski Sep 2011 A1
20110271159 Ahn et al. Nov 2011 A1
20110273840 Chen Nov 2011 A1
20110274108 Fan Nov 2011 A1
20110295991 Aida Dec 2011 A1
20110296141 Daffron Dec 2011 A1
20110307887 Huang et al. Dec 2011 A1
20110314465 Smith et al. Dec 2011 A1
20110320540 Oostlander et al. Dec 2011 A1
20110320690 Petersen et al. Dec 2011 A1
20120011500 Faraboschi et al. Jan 2012 A1
20120020207 Corti et al. Jan 2012 A1
20120036237 Hasha et al. Feb 2012 A1
20120042196 Aron Feb 2012 A1
20120050981 Xu et al. Mar 2012 A1
20120054469 Ikeya et al. Mar 2012 A1
20120054511 Brinks et al. Mar 2012 A1
20120072997 Carlson et al. Mar 2012 A1
20120081850 Regimbal et al. Apr 2012 A1
20120096211 Davis et al. Apr 2012 A1
20120099265 Reber Apr 2012 A1
20120102457 Tal Apr 2012 A1
20120110055 Van Biljon et al. May 2012 A1
20120110180 Van Biljon et al. May 2012 A1
20120110188 Van Biljon et al. May 2012 A1
20120110651 Van Biljon et al. May 2012 A1
20120117229 Van Biljon et al. May 2012 A1
20120131201 Matthews et al. May 2012 A1
20120137004 Smith May 2012 A1
20120151476 Vincent Jun 2012 A1
20120155168 Kim et al. Jun 2012 A1
20120158925 Shen Jun 2012 A1
20120159116 Lim et al. Jun 2012 A1
20120167083 Suit Jun 2012 A1
20120167084 Suit Jun 2012 A1
20120167094 Suit Jun 2012 A1
20120185334 Sarkar et al. Jul 2012 A1
20120191860 Traversat et al. Jul 2012 A1
20120198075 Crowe Aug 2012 A1
20120198252 Kirschtein et al. Aug 2012 A1
20120207165 Davis Aug 2012 A1
20120209989 Stewart Aug 2012 A1
20120218901 Jungck et al. Aug 2012 A1
20120222033 Byrum Aug 2012 A1
20120226788 Jackson Sep 2012 A1
20120239479 Amaro et al. Sep 2012 A1
20120278378 Lehane et al. Nov 2012 A1
20120278430 Lehane et al. Nov 2012 A1
20120278464 Lehane et al. Nov 2012 A1
20120296974 Tabe et al. Nov 2012 A1
20120297042 Davis et al. Nov 2012 A1
20120324005 Nalawade Dec 2012 A1
20130010639 Armstrong et al. Jan 2013 A1
20130024645 Cheriton et al. Jan 2013 A1
20130031331 Cheriton et al. Jan 2013 A1
20130036236 Morales et al. Feb 2013 A1
20130058250 Casado et al. Mar 2013 A1
20130060839 Van Biljon et al. Mar 2013 A1
20130066940 Shao Mar 2013 A1
20130073602 Meadway et al. Mar 2013 A1
20130073724 Parashar et al. Mar 2013 A1
20130086298 Alanis Apr 2013 A1
20130094499 Davis et al. Apr 2013 A1
20130097351 Davis Apr 2013 A1
20130097448 Davis et al. Apr 2013 A1
20130107444 Schnell May 2013 A1
20130111107 Chang et al. May 2013 A1
20130124417 Spears et al. May 2013 A1
20130145375 Kang Jun 2013 A1
20130148667 Hama et al. Jun 2013 A1
20130163605 Chandra et al. Jun 2013 A1
20130191612 Li Jul 2013 A1
20130247064 Jackson Sep 2013 A1
20130268653 Deng et al. Oct 2013 A1
20130275703 Schenfeld et al. Oct 2013 A1
20130286840 Fan Oct 2013 A1
20130290643 Lim Oct 2013 A1
20130290650 Chang et al. Oct 2013 A1
20130298134 Jackson Nov 2013 A1
20130305093 Jayachandran et al. Nov 2013 A1
20130312006 Hardman Nov 2013 A1
20130318255 Karino Nov 2013 A1
20130318269 Dalal et al. Nov 2013 A1
20140052866 Jackson Feb 2014 A1
20140082614 Klein et al. Mar 2014 A1
20140104778 Schnell Apr 2014 A1
20140122833 Davis et al. May 2014 A1
20140135105 Quan et al. May 2014 A1
20140143773 Ciano et al. May 2014 A1
20140143781 Yao May 2014 A1
20140189039 Dalton Jul 2014 A1
20140201761 Dalal et al. Jul 2014 A1
20140317292 Odom Oct 2014 A1
20140348182 Chandra Nov 2014 A1
20140359044 Davis et al. Dec 2014 A1
20140359323 Fullerton et al. Dec 2014 A1
20140365596 Kanevsky Dec 2014 A1
20140379836 Zilberboim Dec 2014 A1
20150012679 Davis et al. Jan 2015 A1
20150039840 Chandra et al. Feb 2015 A1
20150103826 Davis Apr 2015 A1
20150229586 Jackson Aug 2015 A1
20150236972 Jackson Aug 2015 A1
20150263913 De Temmerman Sep 2015 A1
20150293789 Jackson Oct 2015 A1
20150301880 Allu Oct 2015 A1
20150381521 Jackson Dec 2015 A1
20160154539 Buddhiraja Jun 2016 A1
20160161909 Wada Jun 2016 A1
20160306586 Dornemann Oct 2016 A1
20160378570 Ljubuncic Dec 2016 A1
20170111274 Bays Apr 2017 A1
20170115712 Davis Apr 2017 A1
20170127577 Rodriguez et al. May 2017 A1
20180018149 Cook Jan 2018 A1
20180054364 Jackson Feb 2018 A1
20190260689 Jackson Aug 2019 A1
20190286610 Dalton Sep 2019 A1
20200073722 Jackson Mar 2020 A1
20200159449 Davis et al. May 2020 A1
20200379819 Jackson Dec 2020 A1
20200382585 Abu-Ghazaleh et al. Dec 2020 A1
20210117130 Davis Apr 2021 A1
20210141671 Jackson May 2021 A1
20210250249 Jackson Aug 2021 A1
20210306284 Jackson Sep 2021 A1
20210311804 Jackson Oct 2021 A1
20220121545 Dalton Apr 2022 A1
20220206859 Jackson Jun 2022 A1
20220206861 Jackson Jun 2022 A1
20220214920 Jackson Jul 2022 A1
20220214921 Jackson Jul 2022 A1
20220214922 Jackson Jul 2022 A1
20220222119 Jackson Jul 2022 A1
20220222120 Jackson Jul 2022 A1
20220239606 Jackson Jul 2022 A1
20220239607 Jackson Jul 2022 A1
20220247694 Jackson Aug 2022 A1
20220300334 Jackson Sep 2022 A1
20220317692 Guim Bernat Oct 2022 A1
Foreign Referenced Citations (52)
Number Date Country
2496783 Mar 2004 CA
60216001 Jul 2007 DE
112008001875 Aug 2013 DE
0268435 May 1988 EP
0605106 Jul 1994 EP
0859314 Aug 1998 EP
1331564 Jul 2003 EP
1365545 Nov 2003 EP
1492309 Dec 2004 EP
1865684 Dec 2007 EP
2391744 Feb 2004 GB
2392265 Feb 2004 GB
8-212084 Aug 1996 JP
2002-207712 Jul 2002 JP
2005-165568 Jun 2005 JP
2005-223753 Aug 2005 JP
2005-536960 Dec 2005 JP
2006-309439 Nov 2006 JP
20040107934 Dec 2004 KR
M377621 Apr 2010 TW
201017430 May 2010 TW
WO1998011702 Mar 1998 WO
WO1998058518 Dec 1998 WO
WO1999015999 Apr 1999 WO
WO1999057660 Nov 1999 WO
WO2000014938 Mar 2000 WO
WO2000025485 May 2000 WO
WO2000060825 Oct 2000 WO
WO2001009791 Feb 2001 WO
WO2001014987 Mar 2001 WO
WO2001015397 Mar 2001 WO
WO2001039470 May 2001 WO
WO2001044271 Jun 2001 WO
WO2003046751 Jun 2003 WO
WO2003060798 Sep 2003 WO
WO2004021109 Mar 2004 WO
WO2004021641 Mar 2004 WO
WO2004046919 Jun 2004 WO
WO2004070547 Aug 2004 WO
WO2004092884 Oct 2004 WO
WO2005013143 Feb 2005 WO
WO2005017763 Feb 2005 WO
WO2005017783 Feb 2005 WO
WO2005089245 Sep 2005 WO
WO2005091136 Sep 2005 WO
WO2006036277 Apr 2006 WO
WO2006107531 Oct 2006 WO
WO2006108187 Oct 2006 WO
WO2006112981 Oct 2006 WO
WO2008000193 Jan 2008 WO
WO2011044271 Apr 2011 WO
WO2012037494 Mar 2012 WO
Non-Patent Literature Citations (601)
US 7,774,482 B1, 08/2010, Szeto et al. (withdrawn)
J. S. Chase, D. E. Irwin, L. E. Grit, J. D. Moore and S. E. Sprenkle, “Dynamic virtual clusters in a grid site manager,” High Performance Distributed Computing, 2003. Proceedings. 12th IEEE International Symposium on, 2003, pp. 90-100 (Year: 2003).
Liu, Simon: “Securing the Clouds: Methodologies and Practices.” Encyclopedia of Cloud Computing (2016): 220. (Year: 2016).
Notice of Allowance on U.S. Appl. No. 14/827,927 dated Apr. 25, 2022.
Notice of Allowance on U.S. Appl. No. 16/913,745, dated Jun. 9, 2022.
Notice of Allowance on U.S. Appl. No. 17/700,808, dated May 26, 2022 and Jun. 6, 2022.
Office Action on U.S. Appl. No. 16/913,745 dated Jan. 13, 2022.
Office Action on U.S. Appl. No. 17/089,207 dated Jan. 28, 2022.
Office Action on U.S. Appl. No. 17/201,245 dated Mar. 18, 2022.
Office Action on U.S. Appl. No. 17/697,235 dated May 25, 2022.
Office Action on U.S. Appl. No. 17/697,368 dated Jun. 7, 2022.
Office Action on U.S. Appl. No. 17/697,403 dated Jun. 7, 2022.
Office Action on U.S. Appl. No. 16/537,256 dated Dec. 23, 2021.
Office Action on U.S. Appl. No. 16/913,708 dated Jun. 7, 2022.
Office Action on U.S. Appl. No. 17/722,037 dated Jun. 13, 2022.
U.S. Appl. No. 11/279,007, filed Apr. 2006, Jackson.
U.S. Appl. No. 13/705,340, filed Apr. 2012, Davis et al.
U.S. Appl. No. 13/899,751, filed May 2013, Chandra.
U.S. Appl. No. 13/935,108, filed Jul. 2013, Davis.
U.S. Appl. No. 13/959,428, filed Aug. 2013, Chandra.
U.S. Appl. No. 60/662,240, filed Mar. 2005, Jackson.
U.S. Appl. No. 60/552,653, filed Apr. 2005, Jackson.
“Microsoft Computer Dictionary, 5th Ed.”; Microsoft Press; 3 pages; 2002.
“Random House Concise Dictionary of Science & Computers”; 3 pages; Helicon Publishing; 2004.
A Language Modeling Framework for Resource Selection and Results Merging, Si et al., CIKM 2002, Proceedings of the eleventh international conference on Information and Knowledge Management.
Alhusaini et al. “A framework for mapping with resource co-allocation in heterogeneous computing systems,” Proceedings 9th Heterogeneous Computing Workshop (HCW 2000) (Cat. No. PR00556), Cancun, Mexico, 2000, pp. 273-286. (Year: 2000).
Ali et al., “Task Execution Time Modeling for Heterogeneous Computing Systems”, IEEE, 2000, pp. 1-15.
Amiri et al., “Dynamic Function Placement for Data-Intensive Cluster Computing,” Jun. 2000.
Bader et al.; “Applications”; The International Journal of High Performance Computing Applications, vol. 15, No. ; pp. 181-185; Summer 2001.
Banicescu et al., “Competitive Resource Management in Distributed Computing Environments with Hectiling”, 1999, High Performance Computing Symposium, p. 1-7 (Year: 1999).
Banicescu et al., “Efficient Resource Management for Scientific Applications in Distributed Computing Environment” 1998, Mississippi State Univ. Dept. of Comp. Science, p. 45-54. (Year: 1998).
Buyya et al., “An Evaluation of Economy-based Resource Trading and Scheduling on Computational Power Grids for Parameter Sweep Applications,” Active Middleware Services, 2000, 10 pages.
Caesar et al., “Design and Implementation of a Routing Control Platform,” Usenix, NSDI '05 Paper, Technical Program, obtained from the Internet on Apr. 13, 2021, at URL <https://www.usenix.org/legacy/event/nsdi05/tech/full_papers/caesar/caesar_html/>, 23 pages.
Chase et al., “Dynamic Virtual Clusters in a Grid Site Manager”, Proceedings of the 12.sup.th IEEE International Symposium on High Performance Distributed Computing (HPDC'03), 2003.
Chen et al., “A flexible service model for advance reservation”, Computer Networks, Elsevier science publishers, vol. 37, No. 3-4, pp. 251-262. Nov. 5, 2001.
Coomer et al.; “Introduction to the Cluster Grid—Part 1”; Sun Microsystems White Paper; 19 pages; Aug. 2002.
Exhibit 1002, Declaration of Dr. Andrew Wolfe, Ph.D., document filed on behalf of Unified Patents, LLC, in Case No. IPR2022-00136, 110 pages, Declaration dated Nov. 29, 2021.
Exhibit 1008, Declaration of Kevin Jakel, document filed on behalf of Unified Patents, LLC, in Case No. IPR2022-00136, 7 pages, Declaration dated Nov. 4, 2021.
Foster et al., “A Distributed Resource Management Architecture that Supports Advance Reservations and Co-Allocation,” Seventh International Workshop on Quality of Service (IWQoS '99), 1999, pp. 27-36.
Furmento et al. “An Integrated Grid Environment for Component Applications”, Proceedings of the Second International Workshop on Grid Computing, 2001, pp. 26-37.
He XiaoShan; QoS Guided Min-Min Heuristic for Grid Task Scheduling; Jul. 2003, vol. 18, No. 4, pp. 442-451, J. Comput. Sci. & Technol.
Huy Tuong LE, “The Data-Aware Resource Broker”, Research Project Thesis, University of Adelaide, Nov. 2003, pp. 1-63.
IBM Tivoli “IBM Directory Integrator and Tivoli Identity Manager Integration” Apr. 2, 2003, pp. 1-13, online link “http://publib.boulder.ibm.com/tividd/td/ITIM/SC32-1683-00/en_US/HTML/idi_integration/index.html” (Year: 2003).
Intel, Architecture Guide: Intel® Active Management Technology, Intel.com, Oct. 10, 2008, pp. 1-23 (Year: 2008).
Joseph et al.; “Evolution of grid computing architecture and grid adoption models”; IBM Systems Journal, vol. 43, No. 4; 22 pages; 2004.
Kafil et al., “Optimal Task Assignment in Heterogeneous Computing Systems,” IEEE, 1997, pp. 135-146.
Kuan-Wei Cheng, Chao-Tung Yang, Chuan-Lin Lai and Shun-Chyi Chang, “A parallel loop self-scheduling on grid computing environments,” 7th International Symposium on Parallel Architectures, Algorithms and Networks, 2004. Proceedings. 2004, pp. 409-414 (Year: 2004).
Luo Si et al. “A Language Modeling Framework for Resource Selection and Results Merging”, Conference on Information and Knowledge Management. 2002 ACM pp. 391-397.
Maheswaran et al., “Dynamic Matching and Scheduling of a Class of Independent Tasks onto Heterogeneous Computing Systems,” IEEE, 2000, pp. 1-15.
Mateescu et al., “Quality of service on the grid via metascheduling with resource co-scheduling and co-reservation,” The International Journal of High Performance Computing Applications, 2003, 10 pages.
Notice of Allowance on U.S. Appl. No. 10/530,577, dated Oct. 15, 2015.
Notice of Allowance on U.S. Appl. No. 11/207,438 dated Jan. 3, 2012.
Notice of Allowance on U.S. Appl. No. 11/276,852 dated Nov. 26, 2014.
Notice of Allowance on U.S. Appl. No. 11/276,853, dated Apr. 5, 2016.
Notice of Allowance on U.S. Appl. No. 11/276,854, dated Mar. 6, 2014.
Notice of Allowance on U.S. Appl. No. 11/276,855, dated Sep. 13, 2013.
Notice of Allowance on U.S. Appl. No. 11/616,156, dated Mar. 25, 2014.
Notice of Allowance on U.S. Appl. No. 11/718,867 dated May 25, 2012.
Notice of Allowance on U.S. Appl. No. 12/573,967, dated Jul. 20, 2015.
Notice of Allowance on U.S. Appl. No. 13/234,054, dated Sep. 19, 2017.
Notice of Allowance on U.S. Appl. No. 13/284,855, dated Jul. 14, 2014.
Notice of Allowance on U.S. Appl. No. 13/453,086, dated Jul. 18, 2013.
Notice of Allowance on U.S. Appl. No. 13/475,713, dated Feb. 5, 2015.
Notice of Allowance on U.S. Appl. No. 13/475,722, dated Feb. 27, 2015.
Notice of Allowance on U.S. Appl. No. 13/527,498, dated Feb. 23, 2015.
Notice of Allowance on U.S. Appl. No. 13/527,505, dated Mar. 6, 2015.
Notice of Allowance on U.S. Appl. No. 13/621,987 dated Jun. 4, 2015.
Notice of Allowance on U.S. Appl. No. 13/624,725, dated Mar. 30, 2016.
Notice of Allowance on U.S. Appl. No. 13/624,731, dated Mar. 5, 2015.
Notice of Allowance on U.S. Appl. No. 13/662,759 dated May 10, 2016.
Notice of Allowance on U.S. Appl. No. 13/692,741 dated Dec. 4, 2015.
Notice of Allowance on U.S. Appl. No. 13/705,286 dated Feb. 24, 2016.
Notice of Allowance on U.S. Appl. No. 13/705,340, dated Dec. 3, 2014.
Notice of Allowance on U.S. Appl. No. 13/705,340, dated Mar. 16, 2015.
Notice of Allowance on U.S. Appl. No. 13/705,386, dated Jan. 24, 2014.
Notice of Allowance on U.S. Appl. No. 13/705,414, dated Nov. 4, 2013.
Notice of Allowance on U.S. Appl. No. 13/728,308 dated Oct. 7, 2015.
Notice of Allowance on U.S. Appl. No. 13/728,428 dated Jul. 18, 2016.
Notice of Allowance on U.S. Appl. No. 13/758,164, dated Apr. 15, 2015.
Notice of Allowance on U.S. Appl. No. 13/760,600 dated Feb. 26, 2018.
Notice of Allowance on U.S. Appl. No. 13/760,600 dated Jan. 9, 2018.
Notice of Allowance on U.S. Appl. No. 13/855,241, dated Oct. 27, 2020.
Notice of Allowance on U.S. Appl. No. 13/855,241, dated Sep. 14, 2020.
Notice of Allowance on U.S. Appl. No. 14/052,723 dated Feb. 8, 2017.
Notice of Allowance on U.S. Appl. No. 14/106,254 dated May 25, 2017.
Notice of Allowance on U.S. Appl. No. 14/106,697 dated Oct. 24, 2016.
Notice of Allowance on U.S. Appl. No. 14/137,921 dated Aug. 12, 2021 and Jul. 16, 2021.
Notice of Allowance on U.S. Appl. No. 14/137,940 dated Jan. 30, 2019.
Notice of Allowance on U.S. Appl. No. 14/154,912 dated Apr. 25, 2019.
Notice of Allowance on U.S. Appl. No. 14/154,912, dated Apr. 3, 2019.
Notice of Allowance on U.S. Appl. No. 14/154,912, dated Feb. 7, 2019.
Notice of Allowance on U.S. Appl. No. 14/331,718 dated Jun. 7, 2017.
Notice of Allowance on U.S. Appl. No. 14/331,772, dated Jan. 10, 2018.
Notice of Allowance on U.S. Appl. No. 14/334,178 dated Aug. 19, 2016.
Notice of Allowance on U.S. Appl. No. 14/334,178 dated Jun. 8, 2016.
Notice of Allowance on U.S. Appl. No. 14/334,931 dated May 20, 2016.
Notice of Allowance on U.S. Appl. No. 14/454,049, dated Jan. 20, 2015.
Notice of Allowance on U.S. Appl. No. 14/590,102, dated Jan. 22, 2018.
Notice of Allowance on U.S. Appl. No. 14/704,231, dated Sep. 2, 2015.
Notice of Allowance on U.S. Appl. No. 14/709,642 dated Mar. 19, 2019.
Notice of Allowance on U.S. Appl. No. 14/709,642, dated May 9, 2019.
Notice of Allowance on U.S. Appl. No. 14/725,543 dated Jul. 21, 2016.
Notice of Allowance on U.S. Appl. No. 14/753,948 dated Jun. 14, 2017.
Notice of Allowance on U.S. Appl. No. 14/791,873 dated Dec. 20, 2018.
Notice of Allowance on U.S. Appl. No. 14/809,723 dated Jan. 11, 2018.
Notice of Allowance on U.S. Appl. No. 14/827,927 dated Jan. 21, 2022 and Dec. 9, 2021.
Notice of Allowance on U.S. Appl. No. 14/833,673, dated Dec. 2, 2016.
Notice of Allowance on U.S. Appl. No. 14/842,916 dated Oct. 2, 2017.
Notice of Allowance on U.S. Appl. No. 14/872,645 dated Oct. 13, 2016.
Notice of Allowance on U.S. Appl. No. 14/987,059, dated Feb. 14, 2020.
Notice of Allowance on U.S. Appl. No. 14/987,059, dated Jul. 8, 2019.
Notice of Allowance on U.S. Appl. No. 14/987,059, dated Nov. 7, 2019.
Notice of Allowance on U.S. Appl. No. 15/042,489 dated Jul. 16, 2018.
Notice of Allowance on U.S. Appl. No. 15/049,542 dated Feb. 28, 2018.
Notice of Allowance on U.S. Appl. No. 15/049,542 dated Jan. 4, 2018.
Notice of Allowance on U.S. Appl. No. 15/078,115 dated Jan. 8, 2018.
Notice of Allowance on U.S. Appl. No. 15/254,111 dated Nov. 13, 2017.
Notice of Allowance on U.S. Appl. No. 15/254,111 dated Sep. 1, 2017.
Notice of Allowance on U.S. Appl. No. 15/270,418 dated Nov. 2, 2017.
Notice of Allowance on U.S. Appl. No. 15/345,017 dated Feb. 2, 2021.
Notice of Allowance on U.S. Appl. No. 15/357,332 dated Jul. 12, 2018.
Notice of Allowance on U.S. Appl. No. 15/360,668, dated May 5, 2017.
Notice of Allowance on U.S. Appl. No. 15/430,959 dated Mar. 15, 2018.
Notice of Allowance on U.S. Appl. No. 15/478,467 dated May 30, 2019.
Notice of Allowance on U.S. Appl. No. 15/672,418 dated Apr. 4, 2018.
Notice of Allowance on U.S. Appl. No. 15/717,392 dated Mar. 22, 2019.
Notice of Allowance on U.S. Appl. No. 15/726,509, dated Sep. 25, 2019.
Office Action issued on U.S. Appl. No. 11/276,855, dated Jul. 22, 2010.
Office Action on U.S. Appl. No. 10/530,577, dated May 29, 2015.
Office Action on U.S. Appl. No. 11/207,438 dated Aug. 31, 2010.
Office Action on U.S. Appl. No. 11/207,438 dated Mar. 15, 2010.
Office Action on U.S. Appl. No. 11/276,852, dated Feb. 10, 2009.
Office Action on U.S. Appl. No. 11/276,852, dated Jan. 16, 2014.
Office Action on U.S. Appl. No. 11/276,852, dated Jun. 26, 2012.
Office Action on U.S. Appl. No. 11/276,852, dated Mar. 17, 2011.
Office Action on U.S. Appl. No. 11/276,852, dated Mar. 4, 2010.
Office Action on U.S. Appl. No. 11/276,852, dated Mar. 5, 2013.
Office Action on U.S. Appl. No. 11/276,852, dated Oct. 4, 2010.
Office Action on U.S. Appl. No. 11/276,852, dated Oct. 5, 2011.
Office Action on U.S. Appl. No. 11/276,852, dated Oct. 16, 2009.
Office Action on U.S. Appl. No. 11/276,853, dated Apr. 4, 2014.
Office Action on U.S. Appl. No. 11/276,853, dated Aug. 7, 2009.
Office Action on U.S. Appl. No. 11/276,853, dated Dec. 28, 2009.
Office Action on U.S. Appl. No. 11/276,853, dated Dec. 8, 2008.
Office Action on U.S. Appl. No. 11/276,853, dated Jul. 12, 2010.
Office Action on U.S. Appl. No. 11/276,853, dated May 26, 2011.
Office Action on U.S. Appl. No. 11/276,853, dated Nov. 23, 2010.
Office Action on U.S. Appl. No. 11/276,853, dated Oct. 16, 2009.
Office Action on U.S. Appl. No. 11/276,854, dated Apr. 18, 2011.
Office Action on U.S. Appl. No. 11/276,854, dated Aug. 1, 2012.
Office Action on U.S. Appl. No. 11/276,854, dated Jun. 10, 2009.
Office Action on U.S. Appl. No. 11/276,854, dated Jun. 5, 2013.
Office Action on U.S. Appl. No. 11/276,854, dated Jun. 8, 2010.
Office Action on U.S. Appl. No. 11/276,854, dated Nov. 26, 2008.
Office Action on U.S. Appl. No. 11/276,854, dated Oct. 27, 2010.
Office Action on U.S. Appl. No. 11/276,855, dated Aug. 13, 2009.
Office Action on U.S. Appl. No. 11/276,855, dated Dec. 30, 2008.
Office Action on U.S. Appl. No. 11/276,855, dated Dec. 31, 2009.
Office Action on U.S. Appl. No. 11/276,855, dated Dec. 7, 2010.
Office Action on U.S. Appl. No. 11/276,855, dated Jan. 26, 2012.
Office Action on U.S. Appl. No. 11/276,855, dated Jun. 27, 2011.
Office Action on U.S. Appl. No. 11/616,156, dated Jan. 18, 2011.
Office Action on U.S. Appl. No. 11/616,156, dated Oct. 13, 2011.
Office Action on U.S. Appl. No. 11/616,156, dated Sep. 17, 2013.
Office Action on U.S. Appl. No. 11/718,867 dated Dec. 29, 2009.
Office Action on U.S. Appl. No. 11/718,867 dated Jan. 8, 2009.
Office Action on U.S. Appl. No. 11/718,867 dated Jul. 11, 2008.
Office Action on U.S. Appl. No. 11/718,867 dated Jun. 15, 2009.
Office Action on U.S. Appl. No. 12/573,967, dated Apr. 1, 2014.
Office Action on U.S. Appl. No. 12/573,967, dated Aug. 13, 2012.
Office Action on U.S. Appl. No. 12/573,967, dated Mar. 1, 2012.
Office Action on U.S. Appl. No. 12/573,967, dated Nov. 21, 2014.
Office Action on U.S. Appl. No. 12/573,967, dated Oct. 10, 2013.
Office Action on U.S. Appl. No. 12/794,996, dated Jun. 19, 2013.
Office Action on U.S. Appl. No. 12/794,996, dated Sep. 17, 2012.
Office Action on U.S. Appl. No. 12/889,721 dated Aug. 2, 2016.
Office Action on U.S. Appl. No. 12/889,721, dated Apr. 17, 2014.
Office Action on U.S. Appl. No. 12/889,721, dated Feb. 24, 2016.
Office Action on U.S. Appl. No. 12/889,721, dated Jul. 2, 2013.
Office Action on U.S. Appl. No. 12/889,721, dated May 22, 2015.
Office Action on U.S. Appl. No. 12/889,721, dated Oct. 11, 2012.
Office Action on U.S. Appl. No. 12/889,721, dated Sep. 29, 2014.
Office Action on U.S. Appl. No. 13/234,054 dated May 31, 2017.
Office Action on U.S. Appl. No. 13/234,054 dated Oct. 20, 2016.
Office Action on U.S. Appl. No. 13/234,054, dated Apr. 16, 2015.
Office Action on U.S. Appl. No. 13/234,054, dated Aug. 6, 2015.
Office Action on U.S. Appl. No. 13/234,054, dated Jan. 26, 2016.
Office Action on U.S. Appl. No. 13/234,054, dated Oct. 23, 2014.
Office Action on U.S. Appl. No. 13/284,855, dated Dec. 19, 2013.
Office Action on U.S. Appl. No. 13/453,086, dated Mar. 12, 2013.
Office Action on U.S. Appl. No. 13/475,713, dated Apr. 1, 2014.
Office Action on U.S. Appl. No. 13/475,713, dated Oct. 17, 2014.
Office Action on U.S. Appl. No. 13/475,722, dated Jan. 17, 2014.
Office Action on U.S. Appl. No. 13/475,722, dated Oct. 20, 2014.
Office Action on U.S. Appl. No. 13/527,498, dated May 8, 2014.
Office Action on U.S. Appl. No. 13/527,498, dated Nov. 17, 2014.
Office Action on U.S. Appl. No. 13/527,505, dated Dec. 5, 2014.
Office Action on U.S. Appl. No. 13/527,505, dated May 8, 2014.
Office Action on U.S. Appl. No. 13/621,987 dated Feb. 27, 2015.
Office Action on U.S. Appl. No. 13/621,987 dated Oct. 8, 2014.
Office Action on U.S. Appl. No. 13/624,725 dated Mar. 10, 2016.
Office Action on U.S. Appl. No. 13/624,725, dated Apr. 23, 2015.
Office Action on U.S. Appl. No. 13/624,725, dated Jan. 10, 2013.
Office Action on U.S. Appl. No. 13/624,725, dated Nov. 4, 2015.
Office Action on U.S. Appl. No. 13/624,725, dated Nov. 13, 2013.
Office action on U.S. Appl. No. 13/624,731 dated Jan. 29, 2013.
Office Action on U.S. Appl. No. 13/624,731, dated Jul. 25, 2014.
Office Action on U.S. Appl. No. 13/662,759, dated Feb. 22, 2016.
Office Action on U.S. Appl. No. 13/662,759, dated Nov. 6, 2014.
Office Action on U.S. Appl. No. 13/692,741, dated Jul. 1, 2015.
Office Action on U.S. Appl. No. 13/692,741, dated Mar. 11, 2015.
Office Action on U.S. Appl. No. 13/692,741, dated Sep. 4, 2014.
Office Action on U.S. Appl. No. 13/705,286, dated May 13, 2013.
Office Action on U.S. Appl. No. 13/705,340, dated Aug. 2, 2013.
Office Action on U.S. Appl. No. 13/705,340, dated Mar. 12, 2014.
Office Action on U.S. Appl. No. 13/705,340, dated Mar. 29, 2013.
Office Action on U.S. Appl. No. 13/705,386, dated May 13, 2013.
Office Action on U.S. Appl. No. 13/705,414, dated Apr. 9, 2013.
Office Action on U.S. Appl. No. 13/705,414, dated Aug. 9, 2013.
Office Action on U.S. Appl. No. 13/705,428, dated Jul. 10, 2013.
Office Action on U.S. Appl. No. 13/728,308, dated May 14, 2015.
Office Action on U.S. Appl. No. 13/728,428 dated May 6, 2016.
Office Action on U.S. Appl. No. 13/728,428, dated Jun. 12, 2015.
Office Action on U.S. Appl. No. 13/760,600 dated Aug. 30, 2016.
Office Action on U.S. Appl. No. 13/760,600 dated Jan. 23, 2017.
Office Action on U.S. Appl. No. 13/760,600 dated Jun. 15, 2017.
Office Action on U.S. Appl. No. 13/760,600 dated Mar. 15, 2016.
Office Action on U.S. Appl. No. 13/760,600 dated Oct. 19, 2015.
Office Action on U.S. Appl. No. 13/760,600, dated Apr. 10, 2015.
Office Action on U.S. Appl. No. 13/855,241, dated Jan. 13, 2016.
Office Action on U.S. Appl. No. 13/855,241, dated Jul. 6, 2015.
Office Action on U.S. Appl. No. 13/855,241, dated Jun. 27, 2019.
Office Action on U.S. Appl. No. 13/855,241, dated Mar. 30, 2020.
Office Action on U.S. Appl. No. 13/855,241, dated Sep. 15, 2016.
Office Action on U.S. Appl. No. 14/052,723, dated Dec. 3, 2015.
Office Action on U.S. Appl. No. 14/052,723, dated May 1, 2015.
Office Action on U.S. Appl. No. 14/106,254 dated Aug. 12, 2016.
Office Action on U.S. Appl. No. 14/106,254 dated Feb. 15, 2017.
Office Action on U.S. Appl. No. 14/106,254, dated May 2, 2016.
Office Action on U.S. Appl. No. 14/106,697 dated Feb. 2, 2016.
Office Action on U.S. Appl. No. 14/106,697, dated Aug. 17, 2015.
Office Action on U.S. Appl. No. 14/106,698, dated Aug. 19, 2015.
Office Action on U.S. Appl. No. 14/106,698, dated Feb. 12, 2015.
Office Action on U.S. Appl. No. 14/137,921 dated Feb. 4, 2021.
Office Action on U.S. Appl. No. 14/137,921 dated Jun. 25, 2020.
Office Action on U.S. Appl. No. 14/137,921 dated May 31, 2017.
Office Action on U.S. Appl. No. 14/137,921 dated May 6, 2016.
Office Action on U.S. Appl. No. 14/137,921 dated Oct. 6, 2016.
Office Action on U.S. Appl. No. 14/137,921 dated Oct. 8, 2015.
Office Action on U.S. Appl. No. 14/137,940 dated Aug. 10, 2018.
Office Action on U.S. Appl. No. 14/137,940 dated Jan. 25, 2018.
Office Action on U.S. Appl. No. 14/137,940 dated Jun. 3, 2016.
Office Action on U.S. Appl. No. 14/137,940 dated Jun. 9, 2017.
Office Action on U.S. Appl. No. 14/137,940 dated Nov. 3, 2016.
Office Action on U.S. Appl. No. 14/154,912, dated Dec. 7, 2017.
Office Action on U.S. Appl. No. 14/154,912, dated Jul. 20, 2017.
Office Action on U.S. Appl. No. 14/154,912, dated May 8, 2018.
Office Action on U.S. Appl. No. 14/154,912, dated Oct. 11, 2018.
Office Action on U.S. Appl. No. 14/331,718 dated Feb. 28, 2017.
Office Action on U.S. Appl. No. 14/331,772, dated Aug. 11, 2017.
Office Action on U.S. Appl. No. 14/334,178 dated Dec. 18, 2015.
Office Action on U.S. Appl. No. 14/334,178, dated Nov. 4, 2015.
Office Action on U.S. Appl. No. 14/334,931 dated Dec. 11, 2015.
Office Action on U.S. Appl. No. 14/334,931, dated Jan. 5, 2015.
Office Action on U.S. Appl. No. 14/334,931, dated Jul. 9, 2015.
Office Action on U.S. Appl. No. 14/590,102, dated Aug. 15, 2017.
Office Action on U.S. Appl. No. 14/691,120 dated Mar. 10, 2022.
Office Action on U.S. Appl. No. 14/691,120 dated Mar. 30, 2020.
Office Action on U.S. Appl. No. 14/691,120 dated Oct. 3, 2019.
Office Action on U.S. Appl. No. 14/691,120 dated Oct. 20, 2020.
Office Action on U.S. Appl. No. 14/691,120 dated Sep. 29, 2021.
Office Action on U.S. Appl. No. 14/691,120, dated Aug. 27, 2018.
Office Action on U.S. Appl. No. 14/691,120, dated Feb. 12, 2018.
Office Action on U.S. Appl. No. 14/691,120, dated Mar. 2, 2017.
Office Action on U.S. Appl. No. 14/691,120, dated Mar. 22, 2019.
Office Action on U.S. Appl. No. 14/691,120, dated Sep. 13, 2017.
Office Action on U.S. Appl. No. 14/709,642 dated Feb. 7, 2018.
Office Action on U.S. Appl. No. 14/709,642 dated Feb. 17, 2016.
Office Action on U.S. Appl. No. 14/709,642 dated Jul. 12, 2017.
Office Action on U.S. Appl. No. 14/709,642 dated Sep. 12, 2016.
Office Action on U.S. Appl. No. 14/725,543 dated Apr. 7, 2016.
Office Action on U.S. Appl. No. 14/751,529 dated Aug. 9, 2017.
Office Action on U.S. Appl. No. 14/751,529 dated Oct. 3, 2018.
Office Action on U.S. Appl. No. 14/751,529, dated Jun. 6, 2016.
Office Action on U.S. Appl. No. 14/751,529, dated Nov. 14, 2016.
Office Action on U.S. Appl. No. 14/753,948 dated Nov. 4, 2016.
Office Action on U.S. Appl. No. 14/791,873 dated May 14, 2018.
Office Action on U.S. Appl. No. 14/809,723 dated Aug. 25, 2017.
Office Action on U.S. Appl. No. 14/809,723 dated Dec. 30, 2016.
Office Action on U.S. Appl. No. 14/827,927 dated Jan. 19, 2021.
Office Action on U.S. Appl. No. 14/827,927 dated Jan. 31, 2020.
Office Action on U.S. Appl. No. 14/827,927 dated May 16, 2018.
Office Action on U.S. Appl. No. 14/827,927 dated May 16, 2019.
Office Action on U.S. Appl. No. 14/827,927 dated Sep. 9, 2019.
Office Action on U.S. Appl. No. 14/827,927, dated Aug. 28, 2018.
Office Action on U.S. Appl. No. 14/827,927, dated Jan. 31, 2019.
Office Action on U.S. Appl. No. 14/833,673 dated Aug. 11, 2017.
Office Action on U.S. Appl. No. 14/833,673, dated Feb. 11, 2016.
Office Action on U.S. Appl. No. 14/833,673, dated Jun. 10, 2016.
Office Action on U.S. Appl. No. 14/833,673, dated Sep. 24, 2015.
Office Action on U.S. Appl. No. 14/842,916 dated May 5, 2017.
Office Action on U.S. Appl. No. 14/872,645 dated Feb. 16, 2016.
Office Action on U.S. Appl. No. 14/872,645 dated Jun. 29, 2016.
Office Action on U.S. Appl. No. 14/987,059, dated Jan. 31, 2019.
Office Action on U.S. Appl. No. 14/987,059, dated May 11, 2018.
Office Action on U.S. Appl. No. 14/987,059, dated Oct. 11, 2018.
Office Action on U.S. Appl. No. 15/042,489 dated Jan. 9, 2018.
Office Action on U.S. Appl. No. 15/078,115 dated Sep. 5, 2017.
Office Action on U.S. Appl. No. 15/254,111 dated Jun. 20, 2017.
Office Action on U.S. Appl. No. 15/281,462 dated Apr. 6, 2018.
Office Action on U.S. Appl. No. 15/281,462 dated Dec. 15, 2017.
Office Action on U.S. Appl. No. 15/281,462 dated Feb. 10, 2017.
Office Action on U.S. Appl. No. 15/281,462 dated Jun. 13, 2017.
Office Action on U.S. Appl. No. 15/345,017 dated Aug. 24, 2020.
Office Action on U.S. Appl. No. 15/345,017 dated Aug. 9, 2019.
Office Action on U.S. Appl. No. 15/345,017 dated Jan. 31, 2019.
Office Action on U.S. Appl. No. 15/345,017 dated Jul. 11, 2018.
Office Action on U.S. Appl. No. 15/345,017 dated Mar. 20, 2020.
Office Action on U.S. Appl. No. 15/345,017 dated Nov. 29, 2019.
Office Action on U.S. Appl. No. 15/357,332 dated May 9, 2018.
Office Action on U.S. Appl. No. 15/357,332 dated Nov. 9, 2017.
Office Action on U.S. Appl. No. 15/478,467, dated Jan. 11, 2019.
Office Action on U.S. Appl. No. 15/478,467, dated Jul. 13, 2018.
Office Action on U.S. Appl. No. 15/717,392 dated Dec. 3, 2018.
Office Action on U.S. Appl. No. 15/717,392 dated Jul. 5, 2018.
Office Action on U.S. Appl. No. 15/726,509, dated Jun. 3, 2019.
Office Action on U.S. Appl. No. 13/624,731, dated Nov. 12, 2013.
Office Action on U.S. Appl. No. 15/270,418 dated Apr. 21, 2017.
PCT/US2005/008296—International Search Report dated Aug. 3, 2005 for PCT Application No. PCT/US2005/008296, 1 page.
PCT/US2005/008297—International Search Report for Application No. PCT/US2005/008297, dated Sep. 29, 2005.
PCT/US2005/040669—International Preliminary Examination Report for PCT/US2005/040669, dated Apr. 29, 2008.
PCT/US2005/040669—Written Opinion for PCT/US2005/040669, dated Sep. 13, 2006.
PCT/US2009/044200—International Preliminary Report on Patentability for PCT/US2009/044200, dated Nov. 17, 2010.
PCT/US2009/044200—International Search Report and Written Opinion on PCT/US2009/044200, dated Jul. 1, 2009.
PCT/US2010/053227—International Preliminary Report on Patentability for PCT/US2010/053227, dated May 10, 2012.
PCT/US2010/053227—International Search Report and Written Opinion for PCT/US2010/053227, dated Dec. 16, 2010.
PCT/US2011/051996—International Search Report and Written Opinion for PCT/US2011/051996, dated Jan. 19, 2012.
PCT/US2012/038986—International Preliminary Report on Patentability for PCT/US2012/038986 dated Nov. 26, 2013.
PCT/US2012/038986—International Search Report and Written Opinion on PCT/US2012/038986, dated Mar. 14, 2013.
PCT/US2012/038987—International Search Report and Written Opinion for PCT/US2012/038987, dated Aug. 16, 2012.
PCT/US2012/061747—International Preliminary Report on Patentability for PCT/US2012/061747, dated Apr. 29, 2014.
PCT/US2012/061747—International Search Report and Written Opinion for PCT/US2012/061747, dated Mar. 1, 2013.
PCT/US2012/062608—International Preliminary Report on Patentability issued on PCT/US2012/062608, dated May 6, 2014.
PCT/US2012/062608—International Search Report and Written Opinion for PCT/US2012/062608, dated Jan. 18, 2013.
Petition for Inter Partes Review of U.S. Pat. No. 8,271,980, Challenging Claims 1-5 and 14-15, document filed on behalf of Unified Patents, LLC, in Case No. IPR2022-00136, 92 pages, Petition document dated Nov. 29, 2021.
Roblitz et al., “Resource Reservations with Fuzzy Requests”, Concurrency and Computation: Practice and Experience, 2005.
Smith et al.; “Grid computing”; MIT Sloan Management Review, vol. 46, Iss. 1; 5 pages; Fall 2004.
Snell et al., “The Performance Impact of Advance Reservation Meta-Scheduling”, Springer-Verlag, Berlin, 2000, pp. 137-153.
Stankovic et al., “The Case for Feedback Control Real-Time Scheduling” 1999, IEEE pp. 1-13.
Takahashi et al. “A Programming Interface for Network Resource Management,” 1999 IEEE, pp. 34-44.
Tanaka et al. “Resource Manager for Globus-Based Wide-Area Cluster Computing,” 1999 IEEE, 8 pages.
U.S. Appl. No. 60/552,653, filed Apr. 19, 2005.
U.S. Appl. No. 60/662,240, filed Mar. 16, 2005, Jackson.
Abdelwahed, Sherif et al., “A Control-Based Framework for Self-Managing Distributed Computing Systems”, WOSS'04 Oct. 31-Nov. 1, 2004 Newport Beach, CA, USA. Copyright 2004 ACM 1-58113-989-6/04/0010.
Abdelzaher, Tarek, et al., “Performance Guarantees for Web Server End-Systems: A Control-Theoretical Approach”, IEEE Transactions on Parallel and Distributed Systems, vol. 13, No. 1, Jan. 2002.
Appleby, K., et al., “Oceano-SLA Based Management of a Computing Utility”, IBM T.J. Watson Research Center, P.O.Box 704, Yorktown Heights, New York 10598, USA. Proc. 7th IFIP/IEEE Int'l Symp. Integrated Network Management, IEEE Press 2001.
Aweya, James et al., “An adaptive load balancing scheme for web servers”, International Journal of Network Management 2002; 12: 3-39 (DOI: 10.1002/nem.421), Copyright 2002 John Wiley & Sons, Ltd.
Baentsch, Michael et al., “World Wide Web Caching: The Application-Level View of the Internet”, Communications Magazine, IEEE, vol. 35, Issue 6, pp. 170-178, Jun. 1997.
Banga, Gaurav et al., “Resource Containers: A New Facility for Resource Management in Server Systems”, Rice University, originally published in the Proceedings of the 3rd Symposium on Operating Systems Design and Implementation, New Orleans, Louisiana, Feb. 1999.
Belloum, A. et al., “A Scalable Web Server Architecture”, World Wide Web: Internet and Web Information Systems, 5, 5-23, 2002 Kluwer Academic Publishers. Manufactured in The Netherlands. 2000.
Benkner, Siegfried, et al., “VGE—A Service-Oriented Grid Environment for On-Demand Supercomputing”, Institute for Software Science, University of Vienna, Nordbergstrasse 15/C/3, A-1090 Vienna, Austria. Proceedings of the 5th IEEE/ACM International Workshop on Grid Computing. pp. 11-18. 2004.
Bent, Leeann et al., “Characterization of a Large Web Site Population with Implications for Content Delivery”, WWW2004, May 17-22, 2004, New York, New York, USA ACM 1-58113-844-X/04/0005, pp. 522-533.
Bian, Qiyong, et al., “Dynamic Flow Switching, A New Communication Service for ATM Networks”, 1997.
Braumandl, R. et al., “ObjectGlobe: Ubiquitous query processing on the Internet”, Universität Passau, Lehrstuhl für Informatik, 94030 Passau, Germany. Technische Universität München, Institut für Informatik, 81667 München, Germany. Edited by F. Casati, M.-C. Shan, D. Georgakopoulos. Published online Jun. 7, 2001, © Springer-Verlag 2001.
C. Huang, S. Sebastine and T. Abdelzaher, “An Architecture for Real-Time Active Content Distribution”, In Proceedings of the 16th Euromicro Conference on Real-Time Systems (ECRTS 04), pp. 271-280, 2004.
Cardellini, Valeria et al., “Geographic Load Balancing for Scalable Distributed Web Systems”, Proceedings of the 8th International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems, pp. 20-27. 2000.
Cardellini, Valeria et al., “The State of the Art in Locally Distributed Web-Server Systems”, ACM Computing Surveys, vol. 34, No. 2, Jun. 2002, pp. 263-311.
Casalicchio, Emiliano, et al., “Static and Dynamic Scheduling Algorithms for Scalable Web Server Farm”, University of Roma Tor Vergata, Roma, Italy, 00133, 2001.
Chandra, Abhishek et al., “Dynamic Resource Allocation for Shared Data Centers Using Online Measurements” Proceedings of the 11th international conference on Quality of service, Berkeley, CA, USA pp. 381-398. 2003.
Chandra, Abhishek et al., “Quantifying the Benefits of Resource Multiplexing in On-Demand Data Centers”, Department of Computer Science, University of Massachusetts Amherst, 2003.
Chawla, Hamesh et al., “HydraNet: Network Support for Scaling of Large-Scale Services”, Proceedings of 7th International Conference on Computer Communications and Networks, 1998. Oct. 1998.
Chellappa, Ramnath et al., “Managing Computing Resources in Active Intranets”, International Journal of Network Management, 2002, 12:117-128 (DOI:10.1002/nem.427).
Chen, Thomas, “Increasing the Observability of Internet Behavior”, Communications of the ACM, vol. 44, No. 1, pp. 93-98, Jan. 2001.
Chen, Xiangping et al., “Performance Evaluation of Service Differentiating Internet Servers”, IEEE Transactions on Computers, vol. 51, No. 11, pp. 1368-1375, Nov. 2002.
Chu, Wesley et al., “Task Allocation and Precedence Relations for Distributed Real-Time Systems”, IEEE Transactions on Computers, vol. C-36, No. 6, pp. 667-679. Jun. 1987.
Colajanni, Michele et al., “Analysis of Task Assignment Policies in Scalable Distributed Web-server Systems”, IEEE Transactions on Parallel and Distributed Systems, vol. 9, No. 6, Jun. 1998.
Conti, Marco et al., “Quality of Service Issues in Internet Web Services”, IEEE Transactions on Computers, vol. 51, No. 6, pp. 593-594, Jun. 2002.
Conti, Marco, et al., “Client-side content delivery policies in replicated web services: parallel access versus single server approach”, Istituto di Informatica e Telematica (IIT), Italian National Research Council (CNR), Via G. Moruzzi, I. 56124 Pisa, Italy, Performance Evaluation 59 (2005) 137-157, Available online Sep. 11, 2004.
D. Villela, P. Pradhan, and D. Rubenstein, “Provisioning Servers in the Application Tier for E-commerce Systems”, In Proceedings of the 12th IEEE International Workshop on Quality of Service (IWQoS '04), pp. 57-66, Jun. 2004.
D.P. Vidyarthi, A. K. Tripathi, B. K. Sarker, A. Dhawan, and L. T. Yang, “Cluster-Based Multiple Task Allocation in Distributed Computing System”, In Proceedings of the 18th International Parallel and Distributed Processing Symposium (IPDPS'04), p. 239, Santa Fe, New Mexico, Apr. 2004.
Dilley, John, et al., “Globally Distributed Content Delivery”, IEEE Internet Computing, 1089-7801/02/$17.00 © 2002 IEEE, pp. 50-58, Sep.-Oct. 2002.
Ercetin, Ozgur et al., “Market-Based Resource Allocation for Content Delivery in the Internet”, IEEE Transactions on Computers, vol. 52, No. 12, pp. 1573-1585, Dec. 2003.
Fan, Li, et al., “Summary Cache: A Scalable Wide-Area Web Cache Sharing Protocol”, IEEE/ACM Transactions on networking, vol. 8, No. 3, Jun. 2000.
Feldmann, Anja, et al., “Efficient Policies for Carrying Web Traffic Over Flow-Switched Networks”, IEEE/ACM Transactions on Networking, vol. 6, No. 6, Dec. 1998.
Feldmann, Anja, et al., “Reducing Overhead in Flow-Switched Networks: An Empirical Study of Web Traffic”, AT&T Labs-Research, Florham Park, NJ, 1998.
Feng, Chen, et al., “Replicated Servers Allocation for Multiple Information Sources in a Distributed Environment”, Department of Computer Science, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, Sep. 2000.
Fong, L.L. et al., “Dynamic Resource Management in an eUtility”, IBM T. J. Watson Research Center, 0-7803-7382-0/02/$17.00 © 2002 IEEE.
Foster, Ian et al., “The Anatomy of the Grid—Enabling Scalable Virtual Organizations”, To appear: Intl J. Supercomputer Applications, 2001.
Fox, Armando et al., “Cluster-Based Scalable Network Services”, University of California at Berkeley, SOSP—Oct. 16, 1997 Saint-Malo, France, ACM 1997.
Garg, Rahul, et al., “A SLA Framework for QoS Provisioning and Dynamic Capacity Allocation”, 2002.
Gayek, P., et al., “A Web Content Serving Utility”, IBM Systems Journal, vol. 43, No. 1, pp. 43-63. 2004.
Genova, Zornitza et al., “Challenges in URL Switching for Implementing Globally Distributed Web Sites”, Department of Computer Science and Engineering, University of South Florida, Tampa, Florida 33620. 0-7695-0771-9/00 $10.00 © 2000 IEEE.
Grajcar, Martin, “Genetic List Scheduling Algorithm for Scheduling and Allocation on a Loosely Coupled Heterogeneous Multiprocessor System”, Proceedings of the 36th annual ACM/IEEE Design Automation Conference, New Orleans, Louisiana, pp. 280-285. 1999.
Grimm, Robert et al., “System Support for Pervasive Applications”, ACM Transactions on Computer Systems, vol. 22, No. 4, Nov. 2004, pp. 421-486.
Gupta, A., Kleinberg, J., Kumar, A., Rastogi, R. & Yener, B. “Provisioning a virtual private network: a network design problem for multicommodity flow,” Proceedings of the thirty-third annual ACM symposium on Theory of computing [online], Jul. 2001, pp. 389-398, abstract [retrieved on Jun. 14, 2007]. Retrieved from the Internet: <URL:http://portal.acm.org/citation.cfm?id=380830&dl=ACM&coll=GUIDE>.
Hadjiefthymiades, Stathes et al., “Using Proxy Cache Relocation to Accelerate Web Browsing in Wireless/Mobile Communications”, University of Athens, Dept. of Informatics and Telecommunications, Panepistimioupolis, Ilisia, Athens, 15784, Greece. WWW10, May 1-5, 2001, Hong Kong.
Hu, E.C. et al., “Adaptive Fast Path Architecture”, Copyright 2001 by International Business Machines Corporation, pp. 191-206, IBM J. Res. & Dev. vol. 45 No. 2 Mar. 2001.
I. Haddad and E. Paquin, “MOSIX: A Cluster Load-Balancing Solution for Linux”, In Linux Journal, vol. 2001 Issue 85es, Article No. 6, May 2001.
J. Guo, L. Bhuyan, R. Kumar and S. Basu, “QoS Aware Job Scheduling in a Cluster-Based Web Server for Multimedia Applications”, In Proceedings of the 19th IEEE International Parallel and Distributed Processing Symposium (IPDPS'05), Apr. 2005.
J. Rolia, S. Singhal, and R. Friedrich, “Adaptive Internet data centers”, In Proceedings of the International Conference on Advances in Infrastructure for Electronic Business, Science, and Education on the Internet (SSGRR '00), Jul. 2000.
J. Rolia, X. Zhu, and M. Arlitt, “Resource Access Management for a Utility Hosting Enterprise Applications”, In Proceedings of the 8th IFIP/IEEE International Symposium on Integrated Network Management (IM), pp. 549-562, Colorado Springs, Colorado, USA, Mar. 2003.
Jann, Joefon et al., “Web Applications and Dynamic Reconfiguration in UNIX Servers”, IBM, Thomas J. Watson Research Center, Yorktown Heights, New York 10598, 0-7803-7756-7/03/$17.00 © 2003 IEEE, pp. 186-194.
Jeffrey S. Chase, David E. Irwin, Laura E. Grit, Justin D. Moore, Sara E. Sprenkle, “Dynamic Virtual Clusters in a Grid Site Manager”, In Proceedings of the 12th IEEE International Symposium on High Performance Distributed Computing (HPDC'03), p. 90, Jun. 2003.
Jiang, Xuxian et al., “SODA: a Service-On-Demand Architecture for Application Service Hosting Utility Platforms”, Proceedings of the 12th IEEE International Symposium on High Performance Distributed Computing (HPDC'03) 1082-8907/03 $17.00 © 2003 IEEE.
K. Azuma, T. Okamoto, G. Hasegawa, and M. Murata, “Design, Implementation and Evaluation of Resource Management System for Internet Servers”, IOS Press, Journal of High Speed Networks, vol. 14 Issue 4, pp. 301-316, Oct. 2005.
K. Shen, H. Tang, T. Yang, and L. Chu, “Integrated Resource Management for Cluster-based Internet Services”, In Proceedings of the 5th Symposium on Operating Systems Design and Implementation (OSDI '02), pp. 225-238, Dec. 2002.
K. Shen, L. Chu, and T. Yang, “Supporting Cluster-based Network Services on Functionally Symmetric Software Architecture”, In Proceedings of the ACM/IEEE SC2004 Conference, Nov. 2004.
Kant, Krishna et al., “Server Capacity Planning for Web Traffic Workload”, IEEE Transactions on Knowledge and Data Engineering, vol. 11, No. 5, Sep./Oct. 1999, pp. 731-747.
Koulopoulos, D. et al., “PLEIADES: An Internet-based parallel/distributed system”, Software-Practice and Experience 2002; 32:1035-1049 (DOI:10.1002/spe.468).
Kuz, Ihor et al., Delft University of Technology, Vrije Universiteit Delft, The Netherlands, 0-7695-0819-7/00 $10.00 © 2000 IEEE.
L. Amini, A. Shaikh, and H. Schulzrinne, “Effective Peering for Multi-provider Content Delivery Services”, In Proceedings of 23rd Annual IEEE Conference on Computer Communications (INFOCOM'04), pp. 850-861, 2004.
L. Bradford, S. Milliner, and M. Dumas, “Experience Using a Coordination-based Architecture for Adaptive Web Content Provision”, In Coordination, pp. 140-156. Springer, 2005.
Liao, Raymond, et al., “Dynamic Core Provisioning for Quantitative Differentiated Services”, IEEE/ACM Transactions on Networking, vol. 12, No. 3, pp. 429-442, Jun. 2004.
Lowell, David et al., “Devirtualizable Virtual Machines Enabling General, Single-Node, Online Maintenance”, ASPLOS'04, Oct. 9-13, 2004, Boston, Massachusetts, USA. pp. 211-223, Copyright 2004 ACM.
Lu, Chenyang et al., “A Feedback Control Approach for Guaranteeing Relative Delays in Web Servers”, Department of Computer Science, University of Virginia, Charlottesville, VA 22903, 0-7695-1134-1/01 $10.00 © 2001 IEEE.
M. Clarke and G. Coulson, “An Architecture for Dynamically Extensible Operating Systems”, In Proceedings of the 4th International Conference on Configurable Distributed Systems (ICCDS'98), Annapolis, MD, May 1998.
M. Colajanni, P. Yu, V. Cardellini, M. Papazoglou, M. Takizawa, B. Cramer and S. Chanson, “Dynamic Load Balancing in Geographically Distributed Heterogeneous Web Servers”, In Proceedings of the 18th International Conference on Distributed Computing Systems, pp. 295-302, May 1998.
M. Devarakonda, V.K. Naik, N. Rajamani, “Policy-based multi-datacenter resource management”, In 6th IEEE International Workshop on Policies for Distributed Systems and Networks, pp. 247-250, Jun. 2005.
Mahon, Rob et al., “Cooperative Design in Grid Services”, The 8th International Conference on Computer Supported Cooperative Work in Design Proceedings. pp. 406-412. IEEE 2003.
McCann, Julie, et al., “Patia: Adaptive Distributed Webserver (A Position Paper)”, Department of Computing, Imperial College London, SW1 2BZ, UK. 2003.
Montez, Carlos et al., “Implementing Quality of Service in Web Servers”, LCMI—Depto de Automacao e Sistemas—Univ. Fed. de Santa Catarina, Caixa Postal 476-88040-900—Florianopolis—SC—Brasil, 1060-9857/02 $17.00 © 2002 IEEE.
Pacifici, Giovanni et al., “Performance Management for Cluster Based Web Services”, IBM TJ Watson Research Center, May 13, 2003.
R. Doyle, J. Chase, O. Asad, W. Jin, and A. Vahdat, “Model-Based Resource Provisioning in a Web Service Utility”, In Proceedings of the Fourth USENIX Symposium on Internet Technologies and Systems (USITS), Mar. 2003.
R. Kapitza, F. J. Hauck, and H. P. Reiser, “Decentralized, Adaptive Services: The AspectIX Approach for a Flexible and Secure Grid Environment”, In Proceedings of the Grid Services Engineering and Management Conferences (GSEM, Erfurt, Germany, Nov. 2004), pp. 107-118, LNCS 3270, Springer, 2004.
Rashid, Mohammad, et al., “An Analytical Approach to Providing Controllable Differentiated Quality of Service in Web Servers”, IEEE Transactions on Parallel and Distributed Systems, vol. 16, No. 11, pp. 1022-1033, Nov. 2005.
Raunak, Mohammad et al., “Implications of Proxy Caching for Provisioning Networks and Servers”, IEEE Journal on Selected Areas in Communications, vol. 20, No. 7, pp. 1276-1289, Sep. 2002.
Reed, Daniel et al., “The Next Frontier: Interactive and Closed Loop Performance Steering”, Department of Computer Science, University of Illinois, Urbana, Illinois 61801, International Conference on Parallel Processing Workshop, 1996.
Reumann, John et al., “Virtual Services: A New Abstraction for Server Consolidation”, Proceedings of 2000 USENIX Annual Technical Conference, San Diego, California, Jun. 18-23, 2000.
Russell, Clark, et al., “Providing Scalable Web Service Using Multicast Delivery”, College of Computing, Georgia Institute of Technology, Atlanta, GA 30332-0280, 1995.
Ryu, Kyung Dong et al., “Resource Policing to Support Fine-Grain Cycle Stealing in Networks of Workstations”, IEEE Transactions on Parallel and Distributed Systems, vol. 15, No. 10, pp. 878-892, Oct. 2004.
S. Nakrani and C. Tovey, “On Honey Bees and Dynamic Server Allocation in Internet Hosting Centers”, Adaptive Behavior, vol. 12, No. 3-4, pp. 223-240, Dec. 2004.
S. Ranjan, J. Rolia, H. Fu, and E. Knightly, “QoS-driven Server Migration for Internet Data Centers”, In Proceedings of the Tenth International Workshop on Quality of Service (IWQoS 2002), May 2002.
S. Taylor, M. Surridge, and D. Marvin, “Grid Resources for Industrial Applications”, In Proceedings of the IEEE International Conference on Web Services (ICWS 04), pp. 402-409, San Diego, California, Jul. 2004.
Sacks, Lionel et al., “Active Robust Resource Management in Cluster Computing Using Policies”, Journal of Network and Systems Management, vol. 11, No. 3, pp. 329-350, Sep. 2003.
Shaikh, Anees et al., “Implementation of a Service Platform for Online Games”, Network Software and Services, IBM T.J. Watson Research Center, Hawthorne, NY 10532, SIGCOMM'04 Workshops, Aug. 30 & Sep. 3, 2004, Portland, Oregon, USA. Copyright 2004 ACM.
Sit, Yiu-Fai et al., “Cyclone: A High-Performance Cluster-Based Web Server with Socket Cloning”, Department of Computer Science and Information Systems, The University of Hong Kong, Cluster Computing 7, 21-37, 2004, Kluwer Academic Publishers.
Sit, Yiu-Fai et al., “Socket Cloning for Cluster-Based Web Servers”, Department of Computer Science and Information Systems, The University of Hong Kong, Proceedings of the IEEE International Conference on Cluster Computing, IEEE 2002.
Snell, Quinn et al., “An Enterprise-Based Grid Resource Management System”, Brigham Young University, Provo, Utah 84602, Proceedings of the 11th IEEE International Symposium on High Performance Distributed Computing, 2002.
Soldatos, John, et al., “On the Building Blocks of Quality of Service in Heterogeneous IP Networks”, IEEE Communications Surveys, The Electronic Magazine of Original Peer-Reviewed Survey Articles, vol. 7, No. 1. First Quarter 2005.
Tang, Wenting et al., “Load Distribution via Static Scheduling and Client Redirection for Replicated Web Servers”, Department of Computer Science and Engineering, 3115 Engineering Building, Michigan State University, East Lansing, MI 48824-1226, Proceedings of the 2000 International Workshop on Parallel Processing, pp. 127-133, IEEE 2000.
Urgaonkar, Bhuvan, et al., “Sharc: Managing CPU and Network Bandwidth in Shared Clusters”, IEEE Transactions on Parallel and Distributed Systems, vol. 15, No. 1, pp. 2-17, Jan. 2004.
V. K. Naik, S. Sivasubramanian and S. Krishnan, “Adaptive Resource Sharing in a Web Services Environment”, In Proceedings of the 5th ACM/IFIP/USENIX International Conference on Middleware (Middleware '04), pp. 311-330, Springer-Verlag New York, Inc. New York, NY, USA, 2004.
Wang, Z., et al., “Resource Allocation for Elastic Traffic: Architecture and Mechanisms”, Bell Laboratories, Lucent Technologies, Network Operations and Management Symposium, 2000. 2000 IEEE/IFIP, pp. 157-170. Apr. 2000.
Workshop on Performance and Architecture of Web Servers (PAWS-2000) Jun. 17-18, 2000, Santa Clara, CA (Held in conjunction with SIGMETRICS-2000).
Xu, Jun, et al., “Sustaining Availability of Web Services under Distributed Denial of Service Attacks”, IEEE Transactions on Computers, vol. 52, No. 2, pp. 195-208, Feb. 2003.
Xu, Zhiwei et al., “Cluster and Grid Superservers: The Dawning Experiences in China”, Institute of Computing Technology, Chinese Academy of Sciences, P.O. Box 2704, Beijing 100080, China. Proceedings of the 2001 IEEE International Conference on Cluster Computing. IEEE 2002.
Y. Amir and D. Shaw, “WALRUS—A Low Latency, High Throughput Web Service Using Internet-wide Replication”, In Proceedings of the 19th International Conference on Distributed Computing Systems Workshop, 1998.
Yang, Chu-Sing, et al., “Building an Adaptable, Fault Tolerant, and Highly Manageable Web Server on Clusters of Non-dedicated Workstations”, Department of Computer Science and Engineering, National Sun Yat-Sen University, Kaohsiung, Taiwan, R.O.C., 2000.
Zeng, Daniel et al., “Efficient Web Content Delivery Using Proxy Caching Techniques”, IEEE Transactions on Systems, Man, and Cybernetics—Part C: Applications and Reviews, vol. 34, No. 3, pp. 270-280, Aug. 2004.
Zhang, Qian et al., “Resource Allocation for Multimedia Streaming Over the Internet”, IEEE Transactions on Multimedia, vol. 3, No. 3, pp. 339-355, Sep. 2001.
Notice of Allowance on U.S. Appl. No. 16/913,708 dated Aug. 24, 2022.
Notice of Allowance on U.S. Appl. No. 17/089,207, dated Oct. 31, 2022.
Notice of Allowance on U.S. Appl. No. 17/700,847, dated Oct. 26, 2022.
Notice of Allowance on U.S. Appl. No. 17/722,037, dated Oct. 27, 2022.
Advanced Switching Technology Tech Brief, published 2005, 2 pages.
Chapter 1 Overview of the Origin Family Architecture from Origin and Onyx2 Theory of Operations Manual, published 1997, 18 pages.
Chen and G. Agrawal, “Resource Allocation in a Middleware for Streaming Data”, In Proceedings of the 2nd Workshop on Middleware for Grid Computing (MGC '04), pp. 5-10, Toronto, Canada, Oct. 2004.
Cisco MDS 9000 Family Multiprotocol Services Module, published 2006, 13 pages.
Comparing the I2C BUS to the SMBUS, Maxim Integrated, Dec. 1, 2000, p. 1.
Das et al., “Unifying Packet and Circuit Switched Networks,” IEEE Globecom Workshops 2009, Nov. 30, 2009, pp. 1-6.
Deering, “IP Multicast Extensions for 4.3BSD UNIX and related Systems,” Jun. 1999, 5 pages.
Elghany et al., “High Throughput High Performance NoC Switch,” NORCHIP 2008, Nov. 2008, pp. 237-240.
fpga4fun.com, “What is JTAG?”, 2 pages, Jan. 31, 2010.
From AT to BTX: Motherboard Form Factor, Webopedia, Apr. 29, 2005, p. 1.
Furmento et al., “Building computational communities from federated resources.” European Conference on Parallel Processing, Springer, Berlin, Heidelberg, pp. 855-863. (Year: 2001).
Grecu et al., “A Scalable Communication-Centric SoC Interconnect Architecture” Proceedings 5th International Symposium on Quality Electronic Design, 2005, pp. 343, 348 (full article included).
Hossain et al., “Extended Butterfly Fat Tree Interconnection (EFTI) Architecture for Network on CHIP,” 2005 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing, Aug. 2005, pp. 613-616.
HP “OpenView OS Manager using Radia software”, 5982-7478EN, Rev 1, Nov. 2005; (HP_Nov_2005.pdf; pp. 1-4).
HP ProLiant SL6500 Scalable System, Family data sheet, HP Technical sheet, Sep. 2010, 4 pages.
HP Virtual Connect Traffic Flow—Technology brief, Jan. 2012, 22 pages.
IBM Tivoli Workload Scheduler Job Scheduling Console User's Guide Feature Level 1.2 (Maintenance Release Oct. 2003), Oct. 2003, IBM Corporation, http://publib.boulder.ibm.com/tividd/td/TWS/SH19-4552-01/en_US/PDF/jsc_user.pdf.
J. Chase, D. Irwin, L. Grit, J. Moore and S. Sprenkle, “Dynamic Virtual Clusters in a Grid Site Manager”, In Proceedings of the 12th IEEE International Symposium on High Performance Distributed Computing, pp. 90-100, 2003.
Jansen et al., “SATA-IO to Develop Specification for Mini Interface Connector” Press Release Sep. 21, 2009, Serial ATA, 3 pages.
Kavas et al., "Comparing Windows NT, Linux, and QNX as the Basis for Cluster Systems", Concurrency and Computation: Practice & Experience, Wiley, UK, vol. 13, No. 15, pp. 1303-1332, Dec. 25, 2001.
Lars C. Wolf et al., "Concepts for Resource Reservation in Advance", Multimedia Tools and Applications, [Online] 1997, pp. 255-278, XP009102070, The Netherlands. Retrieved from the Internet: URL: http://www.springerlink.com/content/h25481221mu22451/fulltext.pdf [retrieved on Jun. 23, 2008].
Leinberger, W. et al., “Gang Scheduling for Distributed Memory Systems”, University of Minnesota—Computer Science and Engineering—Technical Report, Feb. 16, 2000, vol. TR 00-014.
Liu et al., "Design and Evaluation of a Resource Selection Framework for Grid Applications", High Performance Distributed Computing, 2002, HPDC-11 2002, Proceedings of the 11th IEEE International Symposium on Jul. 23-26, 2002, Piscataway, NJ, USA, IEEE, Jul. 23, 2002, pp. 63-72, XP010601162, ISBN: 978-0-7695-1686-8.
Nawathe et al., “Implementation of an 8-Core, 64-Thread, Power Efficient SPARC Server on a Chip”, IEEE Journal of Solid-State Circuits, vol. 43, No. 1, Jan. 2008, pp. 6-20.
Notice of Allowance on U.S. Appl. No. 17/089,207, dated Jul. 7, 2022.
Notice of Allowance on U.S. Appl. No. 17/700,847, dated Jul. 7, 2022.
Office Action on U.S. Appl. No. 13/728,362, dated Feb. 21, 2014.
Office Action on U.S. Appl. No. 16/537,256 dated Jul. 7, 2022.
Pande et al., “Design of a Switch for Network on Chip Applications,” May 25-28, 2003 Proceedings of the 2003 International Symposium on Circuits and Systems, vol. 5, pp. V217-V220.
Roy, Alain, “Advance Reservation API”, University of Wisconsin-Madison, GFD-E.5, Scheduling Working Group, May 23, 2002.
Si et al., "Language Modeling Framework for Resource Selection and Results Merging", CIKM 2002, Proceedings of the Eleventh International Conference on Information and Knowledge Management.
Stone et al., UNIX Fault Management: A Guide for System Administration, Dec. 1, 1999, ISBN 0-13-026525-X, http://www.informit.com/content/images/013026525X/samplechapter/013026525.pdf.
Supercluster Research and Development Group, “Maui Administrator's Guide”, Internet citation, 2002.
Venaas, "IPv4 Multicast Address Space Registry," 2013, http://www.iana.org/assignments/multicast-addresses/multicast-addresses.xhtml.
Wolf et al. “Concepts for Resource Reservation in Advance” Multimedia Tools and Applications, 1997.
Office Action on U.S. Appl. No. 17/412,832, dated Oct. 14, 2022.
Office Action on U.S. Appl. No. 17/697,368 dated Oct. 18, 2022.
Office Action on U.S. Appl. No. 17/697,403 dated Oct. 18, 2022.
Notice of Allowance on U.S. Appl. No. 17/700,767 dated Jul. 11, 2022.
Notice of Allowance on U.S. Appl. No. 17/700,767 dated Oct. 14, 2022.
Notice of Allowance on U.S. Appl. No. 17/722,062 dated Jun. 15, 2022.
IQSearchText-202206090108.txt, publication dated Apr. 6, 2005, 2 pages.
Office Action on U.S. Appl. No. 17/711,242, dated Jul. 28, 2022.
Office Action on U.S. Appl. No. 17/171,152 dated Aug. 16, 2022.
Office Action on U.S. Appl. No. 17/711,214, dated Nov. 16, 2022.
Notice of Allowance on U.S. Appl. No. 16/913,745, dated Sep. 27, 2022.
Notice of Allowance on U.S. Appl. No. 17/201,245, dated Sep. 22, 2022.
Notice of Allowance on U.S. Appl. No. 17/700,808, dated Sep. 26, 2022.
Office Action on U.S. Appl. No. 17/835,159 dated Aug. 31, 2022.
Office Action on U.S. Appl. No. 17/088,954, dated Sep. 13, 2022.
Notice of Allowance on U.S. Appl. No. 17/201,245 dated Sep. 14, 2022.
Office Action on U.S. Appl. No. 17/697,235 dated Sep. 20, 2022.
Notice of Allowance on U.S. Appl. No. 17/700,808, dated Sep. 14, 2022.
Notice of Allowance on U.S. Appl. No. 17/700,767 dated Jun. 27, 2022.
Office Action on U.S. Appl. No. 17/722,076 dated Jun. 22, 2022.
William Gropp, Ewing Lusk and Thomas Sterling (eds.), "Beowulf Cluster Computing with Linux," MIT Press, 2003.
Jarek Nabrzyski, Jennifer M. Schopf and Jan Weglarz (eds.), "Grid Resource Management: State of the Art and Future Trends," Kluwer Academic Publishers, 2004.
Notice of Allowance on U.S. Appl. No. 17/722,037, dated Jul. 18, 2022.
Office Action on U.S. Appl. No. 17/201,231 dated Oct. 5, 2022.
Notice of Allowance on U.S. Appl. No. 17/722,062 dated Oct. 7, 2022.
Chen, Liang et al., "Resource Allocation in a Middleware for Streaming Data", 2nd Workshop on Middleware for Grid Computing, Toronto, Canada, pp. 5-10, Copyright 2004 ACM.
Extended European Search Report for EP 10827330.1, dated Jun. 5, 2013.
Jackson et al., "Grid Computing: Beyond Enablement", Cluster Resources, Inc., Jan. 21, 2005.
Office Action on Taiwan Application 101139729, dated May 25, 2015 (English translation not available).
Office Action on U.S. Appl. No. 17/711,214, dated Jul. 8, 2022.
Reexamination Report on Japanese Application 2012-536877, dated Jan. 22, 2015, including English Translation.
Search Report on EP Application 10827330.1, dated Feb. 12, 2015.
Office Action on U.S. Appl. No. 14/691,120, dated Sep. 8, 2022.
Office Action on U.S. Appl. No. 17/088,954, dated Mar. 15, 2023.
Office Action, Advisory Action, on U.S. Appl. No. 17/711,242, dated Mar. 3, 2023.
Office Action on U.S. Appl. No. 14/691,120, dated Nov. 18, 2022.
Office Action on U.S. Appl. No. 14/691,120, dated Feb. 9, 2023.
Notice of Allowance on U.S. Appl. No. 16/537,256 dated Jan. 12, 2023.
Office Action on U.S. Appl. No. 17/171,152 dated Dec. 21, 2022.
Notice of Allowance on U.S. Appl. No. 17/171,152 dated Feb. 6, 2023.
Notice of Allowance on U.S. Appl. No. 17/171,152 dated Feb. 27, 2023.
Notice of Allowance on U.S. Appl. No. 17/201,231 dated Feb. 6, 2023.
Office Action on U.S. Appl. No. 17/508,661 dated Feb. 27, 2023.
Advisory Action on U.S. Appl. No. 17/697,235 dated Dec. 5, 2022.
Office Action on U.S. Appl. No. 17/697,235 dated Feb. 28, 2023.
Advisory Action on U.S. Appl. No. 17/697,368 dated Jan. 13, 2023.
Advisory Action on U.S. Appl. No. 17/697,403 dated Jan. 13, 2023.
Office Action on U.S. Appl. No. 17/697,403 dated Feb. 28, 2023.
Office Action, Advisory Action, on U.S. Appl. No. 17/711,214, dated Feb. 14, 2023.
Office Action on U.S. Appl. No. 17/711,242, dated Dec. 12, 2022.
Office Action on U.S. Appl. No. 17/722,076, dated Nov. 28, 2022.
Office Action, Advisory Action, on U.S. Appl. No. 17/722,076, dated Feb. 17, 2023.
Office Action on U.S. Appl. No. 17/835,159 dated Jan. 13, 2023.
Office Action in U.S. Appl. No. 17/508,661 dated Jul. 27, 2023.
Office Action in U.S. Appl. No. 17/960,251 dated Aug. 2, 2023.
Notice of Allowance in U.S. Appl. No. 17/985,252 dated Jul. 31, 2023.
Office Action in U.S. Appl. No. 17/697,368 dated Dec. 19, 2023.
Notice of Allowance in U.S. Appl. No. 17/697,403 dated Dec. 18, 2023.
Office Action in U.S. Appl. No. 18/133,048 dated Dec. 18, 2023.
Office Action in U.S. Appl. No. 17/088,954, dated Sep. 19, 2023.
Office Action, Advisory Action, in U.S. Appl. No. 17/697,235 dated Sep. 26, 2023.
Office Action, Advisory Action, in U.S. Appl. No. 17/697,403 dated Sep. 26, 2023.
Office Action in U.S. Appl. No. 18/120,123 dated Sep. 27, 2023.
Office Action on U.S. Appl. No. 17/697,368 dated Aug. 8, 2023.
Office Action on U.S. Appl. No. 17/697,368 dated Mar. 29, 2023.
Office Action on U.S. Appl. No. 17/697,235 dated Jul. 14, 2023.
Office Action on U.S. Appl. No. 17/697,403 dated Jul. 14, 2023.
Notice of Allowance in U.S. Appl. No. 17/980,865, dated Jul. 18, 2023.
Notice of Allowance in U.S. Appl. No. 17/532,667, dated Apr. 26, 2023.
Office Action in U.S. Appl. No. 18/194,783 dated Nov. 14, 2023.
Notice of Allowance on U.S. Appl. No. 17/470,209, dated Mar. 21, 2023.
Office Action on U.S. Appl. No. 17/722,076, dated Mar. 21, 2023.
Office Action, Advisory Action, on U.S. Appl. No. 17/835,159 dated Mar. 22, 2023.
Notice of Allowance in U.S. Appl. No. 17/411,616, dated Mar. 29, 2023.
Notice of Allowance in U.S. Appl. No. 17/985,241, dated Apr. 3, 2023.
Office Action in U.S. Appl. No. 17/508,661 dated Jan. 26, 2024.
Office Action, Advisory Action, on U.S. Appl. No. 17/711,242, dated Dec. 20, 2023.
Office Action in U.S. Appl. No. 17/835,159 dated Jan. 12, 2024.
Notice of Allowance (Corrected) in U.S. Appl. No. 18/132,507, dated Feb. 27, 2024.
Notice of Allowance in U.S. Appl. No. 18/132,507, dated Feb. 12, 2024.
Office Action in U.S. Appl. No. 18/295,344 dated Feb. 12, 2024.
Office Action in U.S. Appl. No. 17/960,244 dated Oct. 23, 2023.
Office Action in U.S. Appl. No. 18/295,344 dated Oct. 23, 2023.
Notice of Allowance in U.S. Appl. No. 17/960,228, dated Sep. 12, 2023.
Notice of Allowance (Corrected NOA) in U.S. Appl. No. 17/411,616, dated Apr. 6, 2023.
Office Action on U.S. Appl. No. 17/412,832, dated Apr. 20, 2023.
Office Action in U.S. Appl. No. 17/697,235 dated Nov. 7, 2023.
Notice of Allowance in U.S. Appl. No. 17/980,844, dated Jul. 5, 2023.
Office Action on U.S. Appl. No. 17/412,832, dated Dec. 5, 2023.
Office Action on U.S. Appl. No. 17/711,214, dated Dec. 4, 2023.
Office Action on U.S. Appl. No. 17/960,251 dated Dec. 11, 2023.
Office Action on U.S. Appl. No. 17/711,214, dated Apr. 25, 2023.
Notice of Allowance, Corrected NOA, in U.S. Appl. No. 17/532,667, dated May 9, 2023.
Office Action on U.S. Appl. No. 17/711,242, dated Jun. 7, 2023.
Office Action in U.S. Appl. No. 14/691,120, dated Aug. 18, 2023.
Office Action in U.S. Appl. No. 17/835,159 dated Aug. 22, 2023.
Notice of Allowance in U.S. Appl. No. 17/985,267 dated Aug. 18, 2023.
Advisory Action on U.S. Appl. No. 17/697,368 dated Oct. 12, 2023.
Office Action on U.S. Appl. No. 17/711,242, dated Oct. 12, 2023.
Notice of Allowance in U.S. Appl. No. 18/194,783 dated Mar. 15, 2024.
Notice of Allowance in U.S. Appl. No. 18/232,512 dated Mar. 15, 2024.
Office Action in U.S. Appl. No. 14/691,120, dated Mar. 27, 2024.
Office Action in U.S. Appl. No. 17/088,954, dated Apr. 9, 2024.
Office Action in U.S. Appl. No. 17/697,235 dated Mar. 18, 2024.
Office Action in U.S. Appl. No. 17/711,242, dated Feb. 27, 2024.
Office Action in U.S. Appl. No. 17/902,525 dated Mar. 26, 2024.
Office Action in U.S. Appl. No. 17/960,244 dated May 20, 2024.
Office Action in U.S. Appl. No. 18/120,123 dated Apr. 9, 2024.
Office Action in U.S. Appl. No. 18/234,021 dated Apr. 19, 2024.
Office Action in U.S. Appl. No. 18/234,045 dated Apr. 19, 2024.
Related Publications (1)
  - Number: 20220247694 A1; Date: Aug. 2022; Country: US

Provisional Applications (1)
  - Number: 60/662,240; Date: Mar. 2005; Country: US

Continuations (4)
  - Parent: 14/827,927 (Aug. 2015, US); Child: 17/722,076 (US)
  - Parent: 13/758,164 (Feb. 2013, US); Child: 14/827,927 (US)
  - Parent: 12/752,622 (Apr. 2010, US); Child: 13/758,164 (US)
  - Parent: 11/276,856 (Mar. 2006, US); Child: 12/752,622 (US)