Modern networking equipment typically has the capability to monitor network traffic and collect metrics on traffic data. For example, routers, switches, load balancers, etc. often have certain built-in counters for collecting in-line traffic data as it arrives at or exits from the networking equipment. Typically, the built-in counters, such as the number of packets received, are predefined on the system. In practice, operators frequently want to specify their own counters to collect specific types of data, such as the number of connections originating from a certain country, the number of connections destined for a specific address, the average number of packets received per hour, etc. To configure such counters, additional logic is required on the part of the equipment. Specifically, additional code needs to be added to the software (usually by the third-party equipment maker) to compute the desired metrics. The updated software is then recompiled, tested, and reinstalled on the equipment. Such a process, however, is cumbersome, inflexible, and expensive to implement. It is also difficult to implement on a distributed system where multiple devices can collect the same type of data.
Another approach is to collect raw data and provide the collected data to the equipment maker for off-line analysis. This approach, however, is often infeasible because operators may not wish to expose sensitive network data to a third party. Furthermore, because the analysis is restricted to off-line use, the results cannot easily be applied to real-time data or used to effect desired changes on the network in response.
It is therefore desirable to have a more flexible technique to provide desired metrics data to operators. It is also useful for the technique to be able to provide metrics data for in-line traffic and for distributed systems.
Various embodiments of the invention are disclosed in the following detailed description and the accompanying drawings.
The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention. Unless stated otherwise, a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task. As used herein, the term ‘processor’ refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions.
A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured.
Providing user-specified custom metrics in a distributed networking environment is described. In some embodiments, a packet is accessed and processed using a packet processing pipeline of a service engine in a distributed network service platform, including: reaching a pre-specified point in the packet processing pipeline; inserting, in the packet processing pipeline, script code that corresponds to the pre-specified point; executing the script code to collect at least metric-related data associated with a user-specified metric object; and executing the remainder of the packet processing pipeline.
Processor 102 is coupled bi-directionally with memory 110, which can include a first primary storage, typically a random access memory (RAM), and a second primary storage area, typically a read-only memory (ROM). As is well known in the art, primary storage can be used as a general storage area and as scratch-pad memory, and can also be used to store input data and processed data. Primary storage can also store programming instructions and data, in the form of data objects and text objects, in addition to other data and instructions for processes operating on processor 102. Also as is well known in the art, primary storage typically includes basic operating instructions, program code, data, and objects used by the processor 102 to perform its functions (e.g., programmed instructions). For example, memory 110 can include any suitable computer-readable storage media, described below, depending on whether, for example, data access needs to be bi-directional or uni-directional. For example, processor 102 can also directly and very rapidly retrieve and store frequently needed data in a cache memory (not shown).
A removable mass storage device 112 provides additional data storage capacity for the computer system 100, and is coupled either bi-directionally (read/write) or uni-directionally (read only) to processor 102. For example, storage 112 can also include computer-readable media such as magnetic tape, flash memory, PC-CARDS, portable mass storage devices, holographic storage devices, and other storage devices. A fixed mass storage 120 can also, for example, provide additional data storage capacity. The most common example of mass storage 120 is a hard disk drive. Mass storages 112, 120 generally store additional programming instructions, data, and the like that typically are not in active use by the processor 102. It will be appreciated that the information retained within mass storages 112 and 120 can be incorporated, if needed, in standard fashion as part of memory 110 (e.g., RAM) as virtual memory.
In addition to providing processor 102 access to storage subsystems, bus 114 can also be used to provide access to other subsystems and devices. As shown, these can include a display monitor 118, a network interface 116, a keyboard 104, and a pointing device 106, as well as an auxiliary input/output device interface, a sound card, speakers, and other subsystems as needed. For example, the pointing device 106 can be a mouse, stylus, track ball, or tablet, and is useful for interacting with a graphical user interface.
The network interface 116 allows processor 102 to be coupled to another computer, computer network, or telecommunications network using a network connection as shown. For example, through the network interface 116, the processor 102 can receive information (e.g., data objects or program instructions) from another network or output information to another network in the course of performing method/process steps. Information, often represented as a sequence of instructions to be executed on a processor, can be received from and outputted to another network. An interface card or similar device and appropriate software implemented by (e.g., executed/performed on) processor 102 can be used to connect the computer system 100 to an external network and transfer data according to standard protocols. For example, various process embodiments disclosed herein can be executed on processor 102, or can be performed across a network such as the Internet, intranet networks, or local area networks, in conjunction with a remote processor that shares a portion of the processing. Additional mass storage devices (not shown) can also be connected to processor 102 through network interface 116.
An auxiliary I/O device interface (not shown) can be used in conjunction with computer system 100. The auxiliary I/O device interface can include general and customized interfaces that allow the processor 102 to send and, more typically, receive data from other devices such as microphones, touch-sensitive displays, transducer card readers, tape readers, voice or handwriting recognizers, biometrics readers, cameras, portable mass storage devices, and other computers.
In addition, various embodiments disclosed herein further relate to computer storage products with a computer-readable medium that includes program code for performing various computer-implemented operations. The computer-readable medium is any data storage device that can store data which can thereafter be read by a computer system. Examples of computer-readable media include, but are not limited to, all the media mentioned above: magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM disks; magneto-optical media such as optical disks; and specially configured hardware devices such as application-specific integrated circuits (ASICs), programmable logic devices (PLDs), and ROM and RAM devices. Examples of program code include machine code, as produced, for example, by a compiler, and files containing higher-level code (e.g., script) that can be executed using an interpreter.
The computer system shown in
In the example shown, a networking layer 255 comprising networking devices such as routers, switches, etc. forwards requests from client devices 252 to a distributed network service platform 204. In this example, distributed network service platform 204 includes a number of servers configured to provide a distributed network service. A physical server (e.g., 202, 204, 206, etc.) has hardware components and software components, and can be implemented using a device such as 100. In this example, hardware (e.g., 208) of the server supports operating system software in which a number of virtual machines (VMs) (e.g., 218, 219, 220, etc.) are configured to execute. A VM is a software implementation of a machine (e.g., a computer) that simulates the way a physical machine executes programs. The part of the server's operating system that manages the VMs is referred to as the hypervisor. The hypervisor interfaces between the physical hardware and the VMs, providing a layer of abstraction to the VMs. Through its management of the VMs' sharing of the physical hardware resources, the hypervisor makes it appear as though each VM were running on its own dedicated hardware. Examples of hypervisors include the VMware Workstation® and Oracle VM VirtualBox®. Although physical servers supporting VM architecture are shown and discussed extensively for purposes of example, physical servers supporting other architectures such as container-based architecture (e.g., Kubernetes®, Docker®, Mesos®), standard operating systems, etc., can also be used and techniques described herein are also applicable. In a container-based architecture, for example, the applications are executed in special containers rather than virtual machines.
In some embodiments, instances of applications are configured to execute on the VMs. In some embodiments, a single application corresponds to a single virtual service. Examples of such virtual services include web applications such as shopping cart, user authentication, credit card authentication, email, file sharing, virtual desktops, voice/video streaming, online collaboration, and many others. In some embodiments, a set of applications is collectively referred to as a virtual service. For example, a web merchant can offer shopping cart, user authentication, credit card authentication, product recommendation, and a variety of other applications in a virtual service. Multiple instances of the same virtual service can be instantiated on different devices. For example, the same shopping virtual service can be instantiated on VM 218 and VM 220. The actual distribution of the virtual services depends on system configuration, run-time conditions, etc. Running multiple instances of the virtual service on separate VMs provides better reliability and more efficient use of system resources.
One or more service engines (e.g., 214, 224, etc.) are instantiated on a physical device. In some embodiments, a service engine is implemented as software executing in a virtual machine. The service engine is executed to provide distributed network services for applications executing on the same physical server as the service engine, and/or for applications executing on different physical servers. In some embodiments, the service engine is configured to enable appropriate service components. For example, a load balancer component is executed to provide load balancing logic to distribute traffic load amongst instances of applications executing on the local physical device as well as other physical devices; a firewall component is executed to provide firewall logic to instances of the applications on various devices; a metrics agent component is executed to gather metrics associated with traffic, performance, etc. associated with the instances of the applications, etc. Many other service components may be implemented and enabled as appropriate. When a specific service is desired, a corresponding service component is configured and invoked by the service engine to execute in a VM. In some embodiments, the service engine also implements a packet processing pipeline that processes packets between the clients and the virtual services. Details of the packet processing pipeline are described below.
In the example shown, traffic received on a physical port of a server (e.g., a communications interface such as Ethernet port 215) is sent to a virtual switch (e.g., 212). In some embodiments, the virtual switch is configured to use an API provided by the hypervisor to intercept incoming traffic designated for the application(s) in an inline mode, and send the traffic to an appropriate service engine. In inline mode, packets are forwarded on without being replicated. As shown, the virtual switch passes the traffic to a service engine in the distributed network service layer (e.g., the service engine on the same physical device), which transforms the packets if needed and redirects the packets to the appropriate application. The service engine, based on factors such as configured rules and operating conditions, redirects the traffic to an appropriate application executing in a VM on a server.
Controller 290 is configured to control, monitor, program, and/or provision the distributed network services and virtual machines. In particular, the controller includes a metrics manager 292 configured to collect traffic-related metrics and perform analytical operations. In some embodiments, the controller also performs anomaly detection based on the metrics analysis. The controller can be implemented as software, hardware, firmware, or any combination thereof. In some embodiments, the controller is implemented on a system such as 100. In some cases, the controller is implemented as a single entity logically, but multiple instances of the controller are installed and executed on multiple physical devices to provide high availability and increased capacity. In embodiments implementing multiple controllers, known techniques such as those used in distributed databases are applied to synchronize and maintain coherency of data among the controller instances.
In this example, user-specified metrics are configured and collected by the service engines. As will be described in greater detail below, a service engine collects user-specified metrics that correspond to respective virtual services, using its packet processing pipeline. The metrics data is sent to controller 290 to be aggregated, fed back to the service engines and/or output to a requesting application.
Within data center 250, one or more controllers 290 gather metrics data from various nodes operating in the data center. As used herein, a node refers to a computing element that is a source of metrics information. Examples of nodes include virtual machines, networking devices, service engines, or any other appropriate elements within the data center.
Many different types of metrics can be collected by the controller. For example, since traffic (e.g., hypertext transfer protocol (HTTP) connection requests and responses, etc.) to and from an application will pass through a corresponding service engine, metrics relating to the performance of the application and/or the VM executing the application can be directly collected by the corresponding service engine. Additionally, infrastructure metrics relating to the performance of other components of the service platform (e.g., metrics relating to the networking devices, metrics relating to the performance of the service engines themselves, metrics relating to the host devices such as data storage as well as operating system performance, etc.) can be collected by the controller. Specific examples of the metrics include round trip time, latency, bandwidth, number of connections, etc.
The components and arrangement of distributed network service platform 204 described above are for purposes of illustration only. The technique described herein is applicable to network service platforms having different components and/or arrangements.
The packet processing pipeline typically includes multiple stages.
The functions in the pipeline are marked according to certain pre-specified points. In this example, the pre-specified points correspond to events that occur during packet processing, such as completing the parsing of HTTP request headers, completing the parsing of a DNS request, receiving data from the back-end server, etc. These points serve as hooks for inserting user-specified script code. When no special user configuration is present, pipeline 300 processes packets normally.
The following is an example list of pre-specified events. Other events can be specified in different embodiments.
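As an illustrative sketch, such a list may include entries along the following lines; only VS_DATASCRIPT_EVT_HTTP_REQ is taken from the configuration example below, and the other names are hypothetical placeholders for the events described above:

VS_DATASCRIPT_EVT_HTTP_REQ: the client-side HTTP request headers are fully parsed
VS_DATASCRIPT_EVT_DNS_REQ: a DNS request is fully parsed (hypothetical name, shown for illustration)
VS_DATASCRIPT_EVT_HTTP_RESP_DATA: data is received from the back-end server (hypothetical name, shown for illustration)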
A user or administrator can use an editor to create or modify a configuration file, which specifies, among other things, objects such as virtual services, script sets, etc. APIs associated with the objects are provided to support the creation and operations of the objects. For example, a script set can define script code that is invoked when a particular event is triggered. The script set can include logic operations (e.g., determine whether certain conditions are met) and obtain associated metrics information. Additional operations such as outputting the data can also be performed. The configuration file includes such API calls.
In some embodiments, custom metrics APIs that create and maintain metric objects such as counters, status variables, etc. are implemented and provided for the user to incorporate into the script. For example, avi.vs.analytics.counter provides the following API call:
In this function, “counter-name” is the identifier for the counter object. If the counter object does not already exist, the API will create a counter object with counter-name and initialize its default value to 0. “Operation” specifies the type of operation to be performed on the counter, such as incrementing the counter value, reading the counter value, etc. “Value” is an optional parameter that specifies the step size for incrementing the counter value; if this parameter is omitted, the default step size of 1 is used. Thus, the first call using the API, avi.vs.analytics.counter(“Foo”, USERDEFINED_COUNTER_OP_INCREMENT, 42), will create a counter named “Foo”, set its value to 42, and return the value of 42. The next call using the API, avi.vs.analytics.counter(“Foo”, USERDEFINED_COUNTER_OP_INCREMENT), will increment the counter by 1 and return the value of 43. Many other APIs are implemented in various embodiments to support gathering metrics information.
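As a minimal sketch, the two counter calls described above may appear in script code as follows (the surrounding Lua context, such as the local variable, is illustrative only):

-- create counter "Foo" if it does not exist (default value 0), increment it by a step size of 42, and return its value (42)
local value = avi.vs.analytics.counter("Foo", USERDEFINED_COUNTER_OP_INCREMENT, 42)
-- increment "Foo" by the default step size of 1 and return its value (43)
value = avi.vs.analytics.counter("Foo", USERDEFINED_COUNTER_OP_INCREMENT)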
The script object specifies the event and the corresponding script that is executed when the event is detected. For example, in the following configuration file, when an event VS_DATASCRIPT_EVT_HTTP_REQ (i.e., the client-side HTTP request headers are fully parsed for a packet) is detected, script code is invoked. In this example, the script code is in DataScript™, which is a Lua™-based scripting language and environment. The script collects metrics data by using custom metrics API calls. In particular, the script makes an API call (avi.http.getpath) that gets the path in the request, determines whether the path string matches a counter name, and if so, makes another API call (avi.vs.analytics.absolute) to get the counter value for the counter named “counter_1”. The counter value is returned in an HTTP response via another API call (avi.http.response). The script execution environment provides the mechanisms to locate the entry points for API calls, manage memory, and perform operations, as well as to connect the script code/API calls to the packet processing pipeline code.
vsdatascriptset_object {
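As a minimal sketch of the script described above, a Lua-based DataScript body for the VS_DATASCRIPT_EVT_HTTP_REQ event might look like the following, where the path-matching rule and the exact signatures of avi.vs.analytics.absolute and avi.http.response are assumed for illustration:

-- runs when the client-side HTTP request headers have been fully parsed
local path = avi.http.getpath()                            -- get the path in the request
if path == "/counter_1" then                               -- assumed rule: the path string matches the counter name
   local count = avi.vs.analytics.absolute("counter_1")    -- read the current value of counter_1 (signature assumed)
   avi.http.response(200, {content_type = "text/plain"}, tostring(count))  -- return the counter value in an HTTP response (signature assumed)
end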
At 402, a packet is accessed. The packet can be a packet that is received from a client and to be forwarded by the service engine to a destination virtual service. The packet can also be a packet that is transmitted by a virtual service and to be forwarded by the service engine to a client. In both cases, the packet is intercepted and processed by the service engine before being forwarded to its destination.
The packet is processed using the packet processing pipeline in the service engine. In this example, a packet processing pipeline with a custom configuration such as 350 is used. At 404, a pre-specified point in the packet processing pipeline is reached. Specifically, an event is detected. Such an event can be any event enumerated in the VSDataScriptEvent example above.
At 406, in response to the pre-specified event being detected, script code that corresponds to the pre-specified event is inserted into the pipeline. In particular, the script code includes API calls to access user-specified metric objects such as counters, status variables (e.g., measurements), etc. The APIs will create and initialize the metric objects as needed. In some embodiments, the insertion includes pointing to the address of the script code.
At 408, the script code is executed to collect metric-related data associated with one or more user-specified metric objects. According to the script, metrics-related operations (including logic operations) are performed. In some instances the operations are performed using API calls as described above. For example, an API can be invoked to update a counter object that counts the number of HTTP requests with a particular path and return the counter value; another API can be invoked for a status object, such as the percentage of CPU usage, to take the measurement and return the measurement value; etc.
At 410, the process returns to the packet processing pipeline to continue executing the remaining portion of the pipeline. In particular, the process returns to the point in the pipeline where the script code was inserted.
Process 402-410 can be repeated for multiple packets.
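As a minimal sketch of how 404-410 fit together, the hook mechanism can be illustrated as follows; the table mapping events to script code and the function names are hypothetical and are shown in the same Lua-style notation as the script examples above:

-- hypothetical table built from the configuration file: event name -> script code registered for that event
local scripts_for_event = {
   VS_DATASCRIPT_EVT_HTTP_REQ = function(ctx) --[[ script code from the script set object ]] end,
}

-- hypothetical function called by the packet processing pipeline when a pre-specified point is reached (404)
function on_pipeline_event(event_name, packet_context)
   local script = scripts_for_event[event_name]
   if script ~= nil then
      script(packet_context)   -- 406, 408: insert and execute the script code, which collects metric-related data
   end
   -- 410: control returns here and the remaining portion of the pipeline executes
end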
In some embodiments, a custom metric object such as a counter, a measurement, etc. includes a timestamp used to facilitate garbage collection. The timestamp is created when the object is created, and updated each time the object is accessed. A garbage collector is periodically invoked by the service engine to check the custom metric objects, identify the ones that have not been used for at least a pre-specified amount of time (e.g., at least two hours), and automatically destroy those stale objects to free up memory.
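A minimal sketch of this garbage collection logic, with hypothetical object and field names, is the following:

-- hypothetical garbage collector, invoked periodically by the service engine
function collect_garbage(metric_objects, max_idle_seconds)   -- e.g., max_idle_seconds = 2 * 3600
   local now = os.time()
   for name, obj in pairs(metric_objects) do
      -- obj.last_access is updated each time the metric object is accessed
      if now - obj.last_access >= max_idle_seconds then
         metric_objects[name] = nil                           -- destroy the unused object to free up memory
      end
   end
end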
In some embodiments, the collected metric-related data is sent to a controller of the distributed network service platform, to be aggregated and outputted as appropriate. The metrics data can be sent as it is collected, or sent in batches periodically. In some embodiments, the metric-related data is formatted and sent to the controller according to a pre-established service engine-controller communication protocol.
In this example, service engines 502 and 504 are on separate devices. The service engines are configured to collect metrics data for virtual services running on the respective devices, and to send the collected data to controller 506. Each service engine is configured to collect metrics data associated with a counter named C1. The counters, however, have different associated contexts which relate to different virtual services. This is because the same script set object can be used by multiple virtual services according to the configuration file. For example, the configuration file can specify that virtual service VS1 includes a counter object C1, and virtual service VS2 also includes C1. Since the counter objects C1 have different contexts, they are considered to be separate objects, and the collected values are stored separately in the controller's database to allow context-specific queries (e.g., the total number of packets for VS1 and the total number of packets for VS2).
In some embodiments, the metrics are output to the service engine, according to the format of an existing protocol between the service engine and the controller. For example, the script executed by the service engine may require the service engine to check the sum for a counter across all the service engines, and send an alert if the sum exceeds a certain threshold. Thus, the service engine queries the controller (periodically or as needed) to obtain the sum of the counter.
In some cases, the metrics are output in response to a query. For example, a user can make a query to the controller to obtain data pertaining to a counter, such as the sum, the average, the maximum, the minimum, etc. The query can be made via a metrics application 510 that provides a user interface configured to invoke a metrics API. The metrics application can be implemented as a browser-based application or a standalone application. In response, the controller determines the query result based on the values in the database and returns the query result to the user. A predefined metrics API is used to perform the query and obtain a response.
In some embodiments, the API functions are implemented using Representational State Transfer (REST) APIs, which is an HTTP-based API architecture. The user can invoke the APIs via a metrics application (which can be a web browser or other user interface application). The API calls are in the form of HTTP requests to the controller. The controller, which runs as a web server, responds to the request by invoking an appropriate function to obtain the requested data and sending the result back in an HTTP response. The response can be stored, displayed, or otherwise output to the metrics application. In one example, the API has the following format:
http://<controller-ip>/api/analytics/metrics/virtualservice/<vs_uuid>?metric_id=<metric_id>&obj_id=<obj_id>&step=<step>&limit=<limit>
where <controller-ip> is the IP address of the controller; <vs_uuid> is the identifier of the virtual service (if omitted, all matching objects are returned); <metric_id> is the identifier for the type of the metric data (e.g., sum, average, minimum, maximum, etc.); <obj_id> is the identifier of the metric object (e.g., the counter name); <step> is the granularity of the data to be obtained (e.g., at a 5-second or 5-minute interval); and <limit> is the number of values to be obtained.
For purposes of illustration, the following specific APIs for four types of metric data are described, although other APIs for accessing metrics data can also be implemented:
user.sum_counter: this is an aggregated counter value in a given time period across all service engines. For example, to obtain the latest 5-minute (300 seconds) sample of aggregated counter value for a counter with the identifier of “Foo”, the following API call is used:
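http://<controller-ip>/api/analytics/metrics/virtualservice/<vs_uuid>?metric_id=user.sum_counter&obj_id=Foo&step=300&limit=1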
user.avg_counter: this is an average rate of the counter value with respect to time. For example, to obtain the latest 5-minute sample of the average rate of counter Foo per second, the following API call is used:
http://<controller-ip>/api/analytics/metrics/virtualservice/<vs_uuid>?metric_id=user.avg_counter&obj_id=Foo&step=300&limit=1
user.max_counter and user.min_counter: these counter values represent the maximum and minimum counter values for a given time period. For example, to obtain the maximum and minimum values of counter Foo for the latest 5-minute interval, the following API calls are used:
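http://<controller-ip>/api/analytics/metrics/virtualservice/<vs_uuid>?metric_id=user.max_counter&obj_id=Foo&step=300&limit=1
http://<controller-ip>/api/analytics/metrics/virtualservice/<vs_uuid>?metric_id=user.min_counter&obj_id=Foo&step=300&limit=1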
In response to the API request, the controller locates the appropriate object, computes the value, and returns the data in a REST API-based response.
In some embodiments, the controller implements multiple levels of aggregated storage based on a time-series database.
Providing custom metrics in a distributed network service platform has been disclosed. The use of a configuration file with script code to handle custom metrics is highly flexible and allows custom metrics to be created and managed easily, without having to recompile service engine code. Further, the technique is highly scalable. For example, when a service engine and its associated virtual services have exceeded capacity and a new service engine and virtual services are launched, the new service engine can import the same configuration file and execute the same script as the existing service engine, such that counters for the same virtual services are created and handled in the same way.
Although the foregoing embodiments have been described in some detail for purposes of clarity of understanding, the invention is not limited to the details provided. There are many alternative ways of implementing the invention. The disclosed embodiments are illustrative and not restrictive.