Data centers—including virtualized data centers—are a core foundation of modern information technology (IT) infrastructure. Virtualization provides several advantages. One advantage is that virtualization can significantly improve efficiency: physical machines have become sufficiently powerful with the advent of multicore architectures offering a large number of cores per physical CPU, and memory has become inexpensive. One can therefore consolidate a large number of virtual machines onto one physical machine. A second advantage is that virtualization provides significant control over the infrastructure. As computing resources become fungible, such as in the cloud model, provisioning and management of the compute infrastructure become easier. Thus, enterprise IT staff prefer virtualized clusters in data centers for their management advantages, in addition to the efficiency and improved return on investment (ROI) that virtualization provides.
Various kinds of virtual machines exist, each with different functions. System virtual machines (also known as full virtualization VMs) provide a complete substitute for the targeted real machine and a level of functionality required for the execution of a complete operating system. A hypervisor uses native execution to share and manage hardware, allowing multiple different environments, isolated from each other, to be executed on the same physical machine. Modern hypervisors use hardware-assisted virtualization, which provides efficient and full virtualization by using virtualization-specific hardware capabilities, primarily from the host CPUs. Process virtual machines are designed to execute a single computer program by providing an abstracted and platform-independent program execution environment. Some virtual machines are also designed to emulate different architectures, allowing execution of software applications and operating systems written for another CPU or architecture. Operating-system-level virtualization allows the resources of a computer to be partitioned via the kernel's support for multiple isolated user space instances, which are usually called containers and may look and feel like real machines to the end users.
In some examples, this disclosure describes operations performed by a policy controller, host device, or other network device in accordance with one or more aspects of this disclosure. In one specific example, this disclosure describes a method comprising monitoring, by a policy engine included within a network, operation of an application across each of a plurality of host devices, including a first host device and a second host device; pushing, by the policy engine, a policy associated with the application to a first probe agent executing on the first host device, and to a second probe agent executing on the second host device; monitoring, by the first probe agent, a metric associated with the first host device, wherein monitoring the metric is performed across each of a plurality of virtual computing instances executing on the first host device; analyzing the metric, by the first probe agent, to determine if conditions of a rule are met; taking an action, by the first probe agent and on the first host device, to implement the policy on the first host device in response to determining that the conditions of the rule are met, wherein taking the action includes performing an adjustment to the plurality of virtual computing instances executing on the first host device; and monitoring, by the first probe agent and after taking the action, the metric across each of the plurality of virtual computing instances resulting from performing the adjustment to the plurality of virtual computing instances executing on the first host device.
In another example, this disclosure describes a system comprising a storage system; and processing circuitry having access to the storage system and configured to: monitor, within a network, operation of an application across each of a plurality of host devices, including a first host device and a second host device, push a policy associated with the application to a first probe agent executing on the first host device, and to a second probe agent executing on the second host device, monitor a metric associated with the first host device, wherein monitoring the metric is performed across each of a plurality of virtual computing instances executing on the first host device, analyze the metric to determine if conditions of a rule are met, take an action, on the first host device, to implement the policy on the first host device in response to determining that the conditions of the rule are met, wherein taking the action includes performing an adjustment to the plurality of virtual computing instances executing on the first host device, and monitor, after taking the action, the metric across each of the plurality of virtual computing instances resulting from performing the adjustment to the plurality of virtual computing instances executing on the first host device.
In another example, this disclosure describes a computer-readable storage medium comprising instructions that, when executed, configure processing circuitry of a computing system to: monitor, within a network, operation of an application across each of a plurality of host devices, including a first host device and a second host device; push a policy associated with the application to a first probe agent executing on the first host device, and to a second probe agent executing on the second host device; monitor a metric associated with the first host device, wherein monitoring the metric is performed across each of a plurality of virtual computing instances executing on the first host device; analyze the metric to determine if conditions of a rule are met; take an action, on the first host device, to implement the policy on the first host device in response to determining that the conditions of the rule are met, wherein taking the action includes performing an adjustment to the plurality of virtual computing instances executing on the first host device; and monitor, after taking the action, the metric across each of the plurality of virtual computing instances resulting from performing the adjustment to the plurality of virtual computing instances executing on the first host device.
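For purposes of illustration only, the following minimal sketch shows one possible shape of the flow summarized above. It is written in Python, and the names (Policy, ProbeAgent, PolicyEngine) are assumptions of this sketch rather than elements defined by this disclosure.

```python
# Illustrative sketch only, not a definitive implementation of the
# claimed method; all names here are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Policy:
    application: str
    metric: str                                  # e.g., "cpu_usage"
    rule: Callable[[float], bool]                # conditions of the rule
    action: Callable[["ProbeAgent"], None]       # adjustment to perform

@dataclass
class ProbeAgent:
    host: str
    instances: List[str]                         # VMs/containers on this host
    policies: List[Policy] = field(default_factory=list)

    def receive(self, policy: Policy) -> None:
        self.policies.append(policy)             # policy pushed by the engine

    def sample(self, instance: str, metric: str) -> float:
        return 0.0                               # placeholder: read locally

    def evaluate(self) -> None:
        for policy in self.policies:
            for instance in list(self.instances):
                value = self.sample(instance, policy.metric)
                if policy.rule(value):           # conditions of the rule met
                    policy.action(self)          # act directly on this host
        # monitoring continues over the (possibly adjusted) instances

class PolicyEngine:
    """Tracks where each application runs and pushes its policy there."""
    def __init__(self) -> None:
        self.deployment: Dict[str, List[ProbeAgent]] = {}

    def push(self, policy: Policy) -> None:
        for agent in self.deployment.get(policy.application, []):
            agent.receive(policy)
```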
The present invention relates to systems and methods for cloud infrastructure policy implementation and management in order to allow real-time monitoring and optimization of virtualized resources.
The accompanying drawings are an integral part of the disclosure and are incorporated in and constitute a part of this specification. The drawings illustrate example, non-limiting embodiments and, in conjunction with the description and claims set forth herein, serve to explain at least some of the principles of this disclosure.
The present invention addresses the need for improved cloud infrastructure policy implementation and management in order to allow real-time monitoring and optimization of hosted resources. In conventional approaches for monitoring hosted resources, a static rule would be deployed to host servers that would cause an entity on the host to capture the metrics specified in the rule and export the metrics to some external data source. From there, the exported data would be used for analysis and implementation of a policy based on the analysis of the exported data. This paradigm of deploying a static rule, capturing data, storing the data, processing the data, analyzing the data, and then displaying results to the user has several shortcomings that the present invention addresses.
In the present invention, instead of a static rule that is implemented in one place, a policy is set at a high level for an application. As demand for an application increases, instances of the application scale and are spun up on the hosting devices, for example through additional virtual machines (VMs) and/or containers. Then, as that application is deployed across many VMs and/or containers (which can be distributed across many servers), the systems and methods of the present invention help determine the appropriate policy and ensure that the policy is available on the right set of servers for the application. The systems and methods of the present invention treat the analysis and control aspect as a policy that follows the application no matter where it goes. Accordingly, even though application VMs and/or containers can move, the present invention ensures that the right policy moves automatically with the application across VMs and/or containers. This provides a policy at the application level so that, for a given application, if a condition occurs anywhere in the infrastructure, the appropriate action is taken.
Moreover, where conventional approaches can introduce latency and can be limited with respect to the richness of the monitored performance metrics, the present invention allows generation of real-time or nearly real-time events and/or alarms based at least on an operational state of a host device. In the present invention, the policy is implemented on the host, directly at the source of the data, and the analysis itself is treated as part of the policy. The present invention detects a rule violation directly at the host and takes the appropriate action, including (if appropriate) action directly on the host.
In an environment in which the VMs 101, containers 201, or non-virtualized applications 302 share the host device, the real-time probe agent 420, 520, 620 can monitor and analyze resource utilization attributed to each of the VMs 101, containers 201, and/or applications 302, thus providing a stream of real-time metrics of resource consumption organized by the computing component that consumes the resource. Analysis of the monitored information can be utilized to update first control information indicative of occurrence of an event and/or second control information indicative of presence or absence of an alarm condition. The control information can be sent to a remote device to update information and/or intelligence related to performance conditions of the host device. In each case, the source of the data is a host. For example, a physical server contains hardware and other components, including many CPUs, memory banks, hard drives, network cards, a motherboard, operating systems, VMs, and containers. The present invention may collect information from any of these components at the host, such as what is happening to the physical hardware (e.g., temperature and errors), the system layer (operating system layer), how much memory is being used, how the memory is being shared, or the swap consumption.
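As a concrete, hypothetical illustration of this host-local collection (assuming a Linux host with cgroup v2 accounting; the group names are placeholders), a probe agent could attribute CPU time and memory consumption to each container or VM directly at the source:

```python
# Hypothetical sketch: per-instance resource attribution read directly
# on the host from cgroup v2 accounting files.
from pathlib import Path

CGROUP_ROOT = Path("/sys/fs/cgroup")

def cpu_usage_usec(group: str) -> int:
    # cpu.stat holds "key value" lines such as "usage_usec 1234567"
    for line in (CGROUP_ROOT / group / "cpu.stat").read_text().splitlines():
        key, value = line.split()
        if key == "usage_usec":
            return int(value)
    return 0

def memory_bytes(group: str) -> int:
    return int((CGROUP_ROOT / group / "memory.current").read_text())

def snapshot(groups: list) -> dict:
    """One real-time sample of consumption, keyed by computing component."""
    return {g: {"cpu_usec": cpu_usage_usec(g), "mem_bytes": memory_bytes(g)}
            for g in groups}
```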
The system 500 includes a policy engine 555 communicatively coupled to the host devices 550, 551, 552. The policy engine 555 contains a policy 560 associated with the specific application 502. The policy engine 555 is programmed to determine on which of the host devices 550, 551, 552 the specific application 502 is deployed as well as to monitor changes in deployment of the application 502 across the host devices 550, 551, 552 and to push the policy 560 to the real-time probe agent 520 on each of the host devices 550, 551 on which the application is deployed.
The system 800 includes a policy engine 555 communicatively coupled to the host devices 650, 651, 652. The policy engine 555 contains a policy 560 associated with the specific application 502. The policy engine 555 is programmed to determine on which of the host devices 650, 651, 652 the specific application 502 is deployed as well as to monitor changes in deployment of the application 502 across the host devices 650, 651, 652 and to push the policy 560 to the real-time probe agent 720 on each of the host devices 650, 651, 652 on which the application is deployed.
The analytics engine 580 may be programmed to receive the information about the one or more metrics from each of the host devices 550, 551 and to determine if conditions of a rule for the one or more metrics are met. The analytics engine 580 may be further programmed to report information about whether the conditions of the rule are met to a client interface 590 that is communicatively coupled to the analytics engine. In addition or alternatively, the analytics engine 580 may be further programmed to report information about whether the conditions of the rule are met to a notification service 610 communicatively coupled to the analytics engine or the policy engine 555.
In another embodiment, the policy 560 includes instructions to cause the real-time probe agent 520 in each of the host devices 550, 551 to monitor one or more metrics generated by each of the host devices 550, 551 on which the application 502 is deployed, to cause the real-time probe agent 520 to analyze the one or more metrics to determine if conditions of a rule for the one or more metrics are met, and to cause the real-time probe agent 520 to report information about whether the conditions of the rule are met to the data manager 570.
The analytics engine 580 may be programmed to receive the information about whether the conditions of the rule are met from each of the host devices 550, 551 and to determine if conditions of a second rule for the one or more metrics are met. The analytics engine 580 may be programmed to report information about whether the conditions of the second rule are met to the client interface 590, a notification service 610, or the policy engine 555.
The system 1000 includes a policy engine 755 communicatively coupled to the host devices 750. The policy engine 755 contains a policy 760 associated with the specific application 702. The policy engine 755 is programmed to determine on which of the host devices 750 the specific application 702 is deployed as well as to monitor changes in deployment of the application 702 across the host devices 750 and to push the policy 760 to the real-time probe agent 1020 on each of the host devices 750 on which the application is deployed. The policy engine 755 is also programmed to retract the policy 760 if it determines that a host device is no longer running the specific application 702 associated with the policy 760.
In one example, one or more of the host devices 750 provide full virtualization virtual machines and the policy engine 755 comprises a virtual machine adapter 761 to monitor the changes in the deployment of the application 702 across the virtual machines 701 in the host devices 750.
In another example, one or more of the host devices 750 provide operating system level virtualization and the policy engine 755 comprises a container adapter 762 to monitor the changes in the deployment of the application 702 across the containers 701 in the host devices 750.
As the infrastructure changes, the system 1000 keeps the mapping true and automatically adapts to changes in the location of the application 702 including changes due to usage growth, usage reduction, transitions between hosts, and crashes. The policy engine 755 may also include a database 765 such as a NoSQL database.
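A minimal sketch of this reconciliation, assuming hypothetical receive and retract operations on the per-host probe agents, might diff the previous and current deployment of the application:

```python
# Illustrative only: keep the application-to-host mapping true by
# pushing the policy to new hosts and retracting it from hosts that no
# longer run the application. The agent methods are hypothetical.
def reconcile(policy, previous_hosts: set, current_hosts: set, agents: dict) -> set:
    for host in current_hosts - previous_hosts:
        agents[host].receive(policy)    # application now runs here
    for host in previous_hosts - current_hosts:
        agents[host].retract(policy)    # application left; retract policy
    return current_hosts                # becomes "previous" next time
```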
The system 700 may include a data platform section 730 that includes the real-time probe agents 1020 and the data manager 770. In some embodiments, the policy 760 includes instructions to cause the real-time probe agents 1020 in each of the host devices 750 to monitor one or more metrics generated by each of the plurality of host devices 750 on which the application 702 is deployed and to cause the real-time probe agent 1020 to report information about the one or more metrics to the data manager 770. The distributed data platform section 730 includes a message bus 731 used to communicate information received from the real-time probe agent 1020 (including metrics and alarms) to the policy engine 755, the analytics engine 780, and/or the client interface 790. The data platform section 730 may also include a database 732 such as a NoSQL database.
To transfer information to the policy engine 755, the analytics engine 780, and/or the client interface 790, the data manager 770 may cause the information to be placed on the message bus 731 and then communicate to the policy engine 755, the analytics engine 780, and/or the client interface 790 that the information is available for retrieval on the message bus 731. The analytics engine 780 is communicatively coupled to the host devices 750, the data platform 730, and the policy engine 755. The analytics engine 780 may aggregate data from a plurality of the host devices 750 to determine if an applicable rule is met. Accordingly, the system 700 can run a second order analysis of all signals from all hosts 750 to capture the broader picture across all hosts. The analytics engine 780 may include a reports module 781 and a health SLA module 782 to enable capacity planning and health monitoring for the servers.
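One hypothetical rendering of this handoff (the class and method names below are assumptions of this sketch, not elements of the disclosed system) is a topic queue on which the data manager places information before notifying consumers:

```python
# Hypothetical sketch of the data-manager/message-bus handoff.
import queue
from collections import defaultdict

class MessageBus:
    def __init__(self) -> None:
        self.topics = defaultdict(queue.Queue)

    def publish(self, topic: str, message: dict) -> None:
        self.topics[topic].put(message)         # place info on the bus

    def retrieve(self, topic: str) -> dict:
        return self.topics[topic].get()         # consumer pulls when ready

class DataManager:
    def __init__(self, bus: MessageBus, consumers: list) -> None:
        self.bus, self.consumers = bus, consumers

    def forward(self, message: dict) -> None:
        self.bus.publish("metrics", message)
        for consumer in self.consumers:         # policy engine, analytics
            consumer.notify("metrics")          # engine, client interface
```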
Alternatively, the method 1200 may include monitoring with the real-time probe agent in each of the plurality of host devices one or more metrics generated by each of the plurality of host devices on which the application is deployed 1203, analyzing with the real-time probe agent the one or more metrics to determine if conditions of a rule for the one or more metrics are met 1204, and reporting with the real-time probe agent information about whether the conditions of the rule are met to a data manager communicatively coupled to the plurality of host devices 1205.
The systems and methods of the present invention may also include a TCP accelerator (or vTCP) as disclosed in related patent applications U.S. patent application Ser. No. 14/149,621, entitled “SYSTEM AND METHOD FOR IMPROVING TCP PERFORMANCE IN VIRTUALIZED ENVIRONMENTS,” filed on Jan. 7, 2014, and U.S. patent application Ser. No. 14/290,509, entitled “SYSTEM AND METHOD FOR IMPROVING TCP PERFORMANCE IN VIRTUALIZED ENVIRONMENTS,” filed on May 29, 2014. The vTCP (1) makes available metrics that are not otherwise available and (2) allows modification of the TCP parameters in real time. That is, the vTCP enables novel monitoring and novel control of the TCP parameters according to the appropriate policy. The monitoring and control parameters include:
The monitoring and control application-level parameters include:
As described in greater detail in related application U.S. patent application Ser. No. 14/811,957, entitled “ASSESSMENT OF OPERATIONAL STATES OF A COMPUTING ENVIRONMENT,” filed on Jul. 29, 2015, embodiments of the disclosure can permit or otherwise facilitate monitoring locally at a host device a diverse group of performance metrics associated with the host device according to the appropriate policy. In addition, information generated from the monitoring can be analyzed locally at the host device in order to determine (at the host device) an operational state of the host device. In view of the localized nature of the monitoring and analysis of this disclosure, the assessment of operational conditions of the host device can be performed in real-time or nearly real-time. In addition, such an assessment can permit or otherwise facilitate detecting events and/or transitions between alarm conditions without the latency commonly present in conventional monitoring systems. The assessment in accordance with this disclosure can be based on rich, yet flexible, test conditions that can be applied to information indicative of performance metrics. In certain implementations, the test condition can be applied to a defined computing component, e.g., a host device, an application executing in the host device, a virtual machine instantiated in the host device, or a container instantiated in the host device or in a virtual machine. Thus, embodiments of the disclosure can permit monitoring resource utilization attributed to each of the virtual machines or containers that shares resources of a host device. As such, a stream of real-time or nearly real-time metrics of resource consumption ordered by the computing component can be analyzed. Such specificity in the testing associated with assessment of operational states of a host device can permit or otherwise facilitate the detection of performance bottlenecks and/or determination of root-cause(s) of the bottleneck.
Implementation of aspects of this disclosure can provide, in at least certain embodiments, improvements over conventional technologies for monitoring operational conditions of a computing device (e.g., a host device, such as a server device) in a computing environment. In one example, assessment of an operational condition of the computing device is implemented locally at the computing device. Therefore, performance metrics associated with the assessment can be accessed at a higher frequency, which can permit or otherwise facilitate performing the assessment faster. Implementing the assessment locally avoids the transmission of information indicative of performance metrics associated with the assessment to a remote computing device for analysis. As such, latency related to the transmission of such information can be mitigated or avoided entirely, which can result in substantial performance improvement in scenarios in which the number of performance metrics included in the assessment increases. In another example, the amount of information that is sent from the computing device can be significantly reduced, because only information indicative or otherwise representative of alarms and/or occurrence of an event is sent, as opposed to raw data obtained during the assessment of operational conditions. In yet another example, the time it takes to generate the alarm can be reduced in view of efficiency gains related to latency mitigation.
The policies of the present invention may include input information indicative or otherwise representative of a selection of performance metrics to be analyzed at the one or more host devices. The input information also can be indicative or otherwise representative of one or more rules associated with a test that can be utilized to perform or otherwise facilitate the analysis at the host device. The test can be associated with the selection of performance metrics in that the test can be applied to at least one of the performance metrics. The input information can be received from an end-user or from a computing device operationally coupled to the data manager.
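By way of a purely hypothetical example, such input information might be expressed as a document like the following (the field names and values are assumptions of this sketch):

```python
# Hypothetical shape of the input information: a selection of
# performance metrics plus the rules of the test to apply at the host.
policy_input = {
    "application": "web-frontend",
    "metrics": ["cpu.usage", "memory.usage", "network.in_packets"],
    "rules": [
        {"metric": "cpu.usage", "operator": ">", "threshold": 50,
         "mode": "alert"},       # raise/clear an alarm condition
        {"metric": "memory.usage", "operator": ">", "threshold": 75,
         "mode": "event"},       # record occurrence of an event
    ],
}
```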
In some embodiments, the host device(s) can embody or can constitute a server farm. For instance, the host device(s) can embody a cluster of 10 server devices separated in two groups. One or more of the host devices can be configured to execute an application, a virtual machine, and/or a containerized application (or a container). As such, the performance metrics that can be conveyed according to the policy include one or more of the following: (a) performance metrics associated with a computing component (e.g., a host device, an instance of a virtual machine executing in the host device, an instance of a container executing in the host device, or the like), such as one or more of hard disk drive (HDD) space usage (expressed in percentage or in absolute magnitude); input/output (I/O) rate; memory space usage (expressed as a percentage or in absolute magnitude); network incoming bandwidth available, network outgoing bandwidth available, number of incoming packets, number of outgoing packets, packet size distribution, number of incoming packets lost, number of outgoing packets lost; round trip time (RTT) of all flows for an Instance; flow duration for an Instance; number of TCP Sessions Requested (SYN); number of TCP Sessions Confirmed (SYN-ACK); number of TCP Sessions Rejected (RST); central processing unit (CPU) usage (expressed as a percentage or as usage time interval); or I/O wait time, which includes the time the CPU is waiting on I/O requests; and (b) performance metrics associated with execution of an application at a host device, such as one or more of number of packets reordered; number of packets dropped or lost; response time (e.g., time taken by the application to respond to a request); request rate (e.g., number of requests that the application receives); response rate (e.g., number of responses performed or otherwise facilitated by the application); latency (e.g., RTT of some or all flows or threads for the application); flow size (e.g., total number of bytes transferred); or flow duration for the application (e.g., total time of a flow), or the like.
Further or in other embodiments, a rule associated with a test can specify one or more matching criteria that can be utilized to determine if a computing component (e.g., a host device, a virtual machine, a container, or the like) under assessment satisfies at least one condition for (a) generating information indicative of occurrence of an event or (b) generating an alarm or information related thereto (e.g., alarm is in active state or an alarm is in an inactive state). A matching criterion can include a non-empty set of parameters and/or a non-empty set of operators. At least one operator of the non-empty set of operators can operate on at least one of the non-empty set of parameters. In addition or in one implementation, the at least one operator can operate on information indicative of a performance metric associated with the computing component. In some embodiments, the non-empty set of operators can include a function having a domain that can include one or more of the parameters and/or other parameter(s) (such as time).
A parameter included in a matching criterion can be a specific number (e.g., an integer or real number) indicative or otherwise representative of a threshold. Application of a rule associated with a test can include a comparison between the threshold and information indicative of a performance metric. For example, for CPU usage (one of several performance metrics contemplated in this disclosure), a rule can specify application of a relational operator (e.g., “greater than,” “less than,” “equal to”) to the CPU usage and a numeric threshold (e.g., a defined percentage): If Host CPU usage>50% then raise Alert.
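A minimal sketch of applying such a rule, assuming a small table of relational operators, follows:

```python
# Sketch: apply a relational operator to a performance metric and a
# numeric threshold, e.g. "If Host CPU usage > 50% then raise Alert."
import operator

OPERATORS = {">": operator.gt, "<": operator.lt, "==": operator.eq}

def rule_met(metric_value: float, op: str, threshold: float) -> bool:
    return OPERATORS[op](metric_value, threshold)

if rule_met(metric_value=63.0, op=">", threshold=50.0):
    print("raise Alert")
```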
In certain scenarios, rather than being a predetermined parameter, a threshold can be a result of application of a function to information indicative of a performance metric. The function can be a scalar operator of a non-empty set of operators of a matching criterion. As such, in some implementations, the threshold can adopt a value that is an output of a defined algorithm. In one example, the function can represent the baseline standard deviation σ of the first N samples of the performance metric:

$$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i - \mu\right)^2}$$

Here, x_i is a real number (i = 1, 2, . . . N, where N is a natural number that defines a sampling interval) and μ is the mean of the first N samples of the performance metric (e.g., CPU usage). Therefore, the value of σ computed for a specific sampling of information conveying CPU usage can be utilized to define a threshold associated with a rule, for example: If Host CPU Usage > 2σ then raise Alert.
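A short sketch of this computed threshold, using Python's statistics module (the sample values are illustrative only):

```python
# Sketch: derive the threshold from the baseline standard deviation of
# the first N samples, e.g. "If Host CPU Usage > 2*sigma then raise Alert."
from statistics import pstdev

cpu_samples = [41.0, 44.5, 39.8, 47.2, 43.1]   # first N samples
sigma = pstdev(cpu_samples)                    # population std dev, per the formula

new_sample = 12.4                              # next observed CPU usage
if new_sample > 2 * sigma:
    print("raise Alert")
```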
It is noted that σ is one example presented for the sake of illustration, and other functions and/or operators can be utilized to define certain thresholds. For example, Min({•}) and Max({•}) of a sampling can be utilized. In addition or in the alternative, one or more of the moments of a sampling, or a function thereof, can be utilized as a function to determine a threshold value. For instance, the average (or first non-centered moment) of a sampling can be utilized as a threshold. It is noted that one of the parameters included in a rule can determine the interval duration (ΔTs, which can be expressed in seconds or another unit of time) for collection (or sampling) of information indicative of a performance metric (e.g., CPU usage or other metrics).
Two types of rules can be configured: singleton rules and compound rules. A singleton rule tracks a single performance metric and compares the performance metric to a matching criterion. Control information associated with an event or an alarm can be generated in response to the outcome of such a comparison. Multiple singleton rules can be defined based on different performance metrics for a given resource (e.g., a host device, an instance of a virtual machine, an instance of a container, an instance of an application in execution). In addition, multiple singleton rules can be implemented concurrently or nearly concurrently for different instances. As an illustration, an Instance level alert can be generated based at least on the outcome of the application of the singleton rules. For instance, four singleton rules can be defined for two different instances (e.g., Instance 1 and Instance 2):
A compound rule is a collection of two or more singleton rules. An order of the singleton rules also defines the compound rule. Control information associated with an event or an alarm can be generated in response to outcomes of the two or more rules and, optionally, an order in which the outcomes occur. More specifically, example compound rules can be formed from the following two singleton rules: (A) Singleton Rule 1: if Host CPU Usage > 50%; and (B) Singleton Rule 2: if Memory Usage > 75%, then raise Alert. A first compound rule can be the following:
A second compound rule can be the following:
Concurrency of the rules also can serve in place of an order: the singleton rules can be applied nearly simultaneously and can be determined to be satisfied independently. Therefore, a third compound rule can be the following:
Other example compound rules can be formed using singleton rules for different instances of virtual machines configured to execute in a host device: (I) Singleton Rule 1: If Instance 1 Disk Usage>80% then raise Alert; and (II) Singleton Rule 2: If Instance 2 Disk Usage>80% then raise Alert. Example compound rules can be the following:
Compound Rule 3=When (Instance 1 CPU Usage>50%) AND (Instance 2 CPU Usage>50%) then raise Alert.
It is noted that such Compound Rule 2 correlates across two different metrics while measuring one on a host device and the second within an Instance (e.g., an instantiated VM or an instantiated container).
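The following sketch, with hypothetical helper names, illustrates the two flavors of compound rule: one requiring a specific order of singleton outcomes and one requiring only that both outcomes hold:

```python
# Hypothetical sketch of compound rules built from singleton outcomes.
from typing import List

def ordered_compound(events: List[str]) -> bool:
    """Met only if rule1 fires and rule2 fires at or after it."""
    if "rule1" not in events:
        return False
    return "rule2" in events[events.index("rule1"):]

def unordered_compound(rule1_met: bool, rule2_met: bool) -> bool:
    """Met when both singleton rules hold; order is irrelevant."""
    return rule1_met and rule2_met

# e.g., chronological singleton firings observed at the host:
print(ordered_compound(["rule2", "rule1"]))   # False: wrong order
print(ordered_compound(["rule1", "rule2"]))   # True
```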
While for illustration purposes in the foregoing rule examples described herein a single operator is applied to information indicative of a performance metric and a predetermined threshold is relied upon as a matching criterion, the disclosure is not so limited. In some embodiments, parameters and functions associated with a rule can permit applying rich tests to information indicative of a performance metric. As an example, a rule can include an aggregation function that can generate information indicative of a performance metric (e.g., HDD usage) over a sampling period. The sampling period can be a configurable parameter included in the rule. In addition, the rule can include a relational operator (e.g., “greater than,” “less than,” “equal to,” or the like) that can compare output of the aggregation function over the sampling period to a threshold (predetermined or computed from sampled information). Based on an outcome of the comparison, the rule can generate a cumulative value indicative of a number of outcomes that satisfy a condition defined by the relational operator. In addition, the rule can stipulate that an event is deemed to have occurred or that an alarm is to be generated in response to the determined cumulative value satisfying a defined criterion. Specifically, in one example, the test can be specified as follows:
If the output of the aggregation function over the sampling period satisfies the relational operator with respect to the threshold, then the sampling interval is marked as satisfying an exception condition. In addition, when it is ascertained that the number of marked sampling intervals in a predetermined number of intervals is greater than or equal to a second threshold, then control information can be updated (e.g., generated or modified). For example, in event mode, updating the information can include generating control information indicative of an event having occurred. In another example, in alert mode, updating the information can include generating control information indicative of an alarm condition being active. It is noted that in alert mode, in case the alarm condition is active prior to ascertaining that the number of marked sampling intervals in the predetermined number of intervals is greater than or equal to the second threshold, an update of control information can be bypassed.
In addition, in event mode and in a scenario in which the number of marked sampling intervals in the predetermined number of intervals is ascertained to be less than the second threshold, updating the control information can include generating control information indicative of an event not having occurred. In view that the assessment described herein can be performed continually or nearly continually, updating the control information can include generating information that the event has ceased to occur. In alert mode, when it is ascertained that the number of marked sampling intervals in the predetermined number of intervals is less than the second threshold, updating the control information can include generating control information indicative of an alarm condition being inactive.
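A compact sketch of this interval-marking logic in both modes (all names are hypothetical; the aggregation function, thresholds, and window size stand in for the rule's configurable parameters):

```python
# Hypothetical sketch: mark sampling intervals that violate the test,
# then update control information based on the count of marked
# intervals within the last K intervals (the second threshold).
from collections import deque
from typing import Optional

class IntervalTest:
    def __init__(self, threshold: float, k: int, second_threshold: int,
                 aggregate=max, mode: str = "alert") -> None:
        self.threshold, self.second_threshold = threshold, second_threshold
        self.aggregate, self.mode = aggregate, mode
        self.marks = deque(maxlen=k)      # outcomes of the last K intervals
        self.active = False               # current alarm state (alert mode)

    def end_interval(self, samples: list) -> Optional[str]:
        self.marks.append(self.aggregate(samples) > self.threshold)
        met = sum(self.marks) >= self.second_threshold
        if self.mode == "event":
            return "event occurred" if met else "event not occurred"
        if met and not self.active:       # alert mode: report transitions
            self.active = True
            return "alarm active"
        if not met and self.active:
            self.active = False
            return "alarm inactive"
        return None                       # alarm already active: bypass update
```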
In some implementations, as described herein, a test in accordance with aspects of this disclosure can specify a group of computing components associated with one or more of the host devices on which the test is to be implemented. Such a subset can be referred to as the scope of the test. A computing component can be embodied in or can include a host device, an application executing in the host device, a virtual machine executing in the host device, or a containerized application (or container) executing in the host device. Implementation of the test at a host device associated with a computing component specified in the scope of the test can permit or otherwise facilitate assessment of performance state of the computing component. Therefore, it is noted that the scope of the test can mitigate or avoid operational overhead at the host device associated with the computing component by focusing the implementation of the test on a pertinent computing component.
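A small sketch of scope filtering (identifiers hypothetical): the test is applied only to the computing components named in its scope, avoiding overhead elsewhere on the host:

```python
# Hypothetical sketch: focus a test on the computing components named
# in its scope, skipping all others on the host.
test_scope = {"host-7", "vm-7-2", "container-7-9"}

components_on_host = ["host-7", "vm-3-1", "container-7-9", "vm-7-4"]
targets = [c for c in components_on_host if c in test_scope]
# -> ["host-7", "container-7-9"]: only these are assessed by this test
```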
In the present description, for purposes of explanation, numerous specific details have been set forth in order to provide a thorough understanding of the disclosure. It may be evident, however, that the subject disclosure may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing the subject disclosure.
As used in this disclosure, including the annexed drawings, the terms “component,” “system,” “platform,” “environment,” “unit,” “interface,” and the like are intended to refer to a computer-related entity or an entity related to an operational apparatus with one or more specific functionalities, wherein the entity can be either hardware, a combination of hardware and software, software, or software in execution. One or more of such entities are also referred to as “functional elements.” As an example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server or network controller, and the server or network controller can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers. Also, these components can execute from various computer readable media having various data structures stored thereon. The components can communicate via local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems via the signal). As another example, a component can be an apparatus with specific functionality provided by mechanical parts operated by electric or electronic circuitry, which is operated by a software or firmware application executed by a processor, wherein the processor can be internal or external to the apparatus and executes at least a part of the software or firmware application. As yet another example, a component can be an apparatus that provides specific functionality through electronic components without mechanical parts; the electronic components can include a processor therein to execute software or firmware that provides at least in part the functionality of the electronic components. As still another example, interface(s) can include I/O components as well as associated processor, application, or Application Programming Interface (API) components. While the foregoing examples are directed to aspects of a component, the exemplified aspects or features also apply to a system, platform, interface, node, coder, decoder, and the like.
In addition, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or.” That is, unless specified otherwise, or clear from context, “X employs A or B” is intended to mean any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then “X employs A or B” is satisfied under any of the foregoing instances. Moreover, articles “a” and “an” as used in the subject specification and annexed drawings should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form.
The term “processor,” as utilized in this disclosure, can refer to any computing processing unit or device comprising, but not limited to comprising, single-core processors; single-processors with software multithread execution capability; multi-core processors; multicore processors with software multithread execution capability; multi-core processors with hardware multithread technology; parallel platforms; and parallel platforms with distributed shared memory. Additionally, a processor can refer to an integrated circuit, an application specific integrated circuit (ASIC), a digital signal processor (DSP), a field programmable gate array (FPGA), a programmable logic controller (PLC), a complex programmable logic device (CPLD), a discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. Processors can exploit nano-scale architectures such as, but not limited to, molecular and quantum-dot based transistors, switches and gates, in order to optimize space usage or enhance performance of user equipment. A processor may also be implemented as a combination of computing processing units.
In addition, terms such as “store,” “data store,” “data storage,” “database,” and substantially any other information storage component relevant to operation and functionality of a component, refer to “memory components,” or entities embodied in a “memory” or components comprising the memory. It will be appreciated that the memory components described herein can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. Moreover, a memory component can be removable or affixed to a functional element (e.g., device, server).
By way of illustration, and not limitation, nonvolatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms such as synchronous RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). Additionally, the disclosed memory components of systems or methods herein are intended to comprise, without being limited to comprising, these and any other suitable types of memory.
Various embodiments described herein can be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques. In addition, various aspects disclosed herein also can be implemented by means of program modules or other types of computer program instructions stored in a memory device and executed by a processor, or other combination of hardware and software, or hardware and firmware. Such program modules or computer program instructions can be loaded onto a general purpose computer, a special purpose computer, or another type of programmable data processing apparatus to produce a machine, such that the instructions, which execute on the computer or other programmable data processing apparatus, create a means for implementing the functionality disclosed herein.
The term “article of manufacture” as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. For example, computer readable media can include but are not limited to magnetic storage devices (e.g., hard drive disk, floppy disk, magnetic strips . . . ), optical discs (e.g., compact disc (CD), digital versatile disc (DVD), Blu-ray disc (BD) . . . ), smart cards, and flash memory devices (e.g., card, stick, key drive . . . ).
What has been described above includes examples of systems and methods that provide advantages of the subject disclosure. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the subject disclosure, but one of ordinary skill in the art may recognize that many further combinations and permutations of the claimed subject matter are possible. Furthermore, to the extent that the terms “includes,” “has,” “possesses,” and the like are used in the detailed description, claims, appendices and drawings such terms are intended to be inclusive in a manner similar to the term “comprising” as “comprising” is interpreted when employed as a transitional word in a claim.
This application is a continuation application of and claims priority to U.S. patent application Ser. No. 15/084,927, entitled “REAL-TIME CLOUD-INFRASTRUCTURE POLICY IMPLEMENTATION AND MANAGEMENT,” filed on Mar. 30, 2016, which is a continuation-in-part of U.S. patent application Ser. No. 14/811,957, entitled “ASSESSMENT OF OPERATIONAL STATES OF A COMPUTING ENVIRONMENT,” filed on Jul. 29, 2015 and is also a continuation-in-part of U.S. patent application Ser. No. 14/149,621, entitled “SYSTEM AND METHOD FOR IMPROVING TCP PERFORMANCE IN VIRTUALIZED ENVIRONMENTS,” filed on Jan. 7, 2014, and U.S. patent application Ser. No. 14/290,509, entitled “SYSTEM AND METHOD FOR IMPROVING TCP PERFORMANCE IN VIRTUALIZED ENVIRONMENTS,” filed on May 29, 2014 (which both claim the benefit of priority under 35 U.S.C. § 119(e) of U.S. Provisional Patent Application Ser. No. 61/882,768, entitled “SYSTEM AND METHOD FOR IMPROVING TCP PERFORMANCE IN VIRTUALIZED ENVIRONMENTS,” filed on Sep. 26, 2013). All of these applications are hereby incorporated by reference.