This application claims priority to Spanish Application No. P 201631645, filed Dec. 21, 2016.
Datacenters are large clusters of components (e.g., hardware and/or software resources) that are connected to perform operations using massive amounts of data. Keeping these components working efficiently is a complex task as many incidents may occur during the execution of the processes. In order to detect anomalies, problems, and/or failures, or to otherwise assess the health of the system, tools are utilized that extract and gather metrics from the datacenter components. Metrics may include, by way of example only, the temperature of datacenter components, workload, network usage, processor capacity, and the like. The set of metrics at a given timestamp forms the state of the datacenter at the point in time represented by the timestamp. While much may be gleaned from a datacenter state, the information is indicative of conditions that have already taken place and, accordingly, does little to facilitate mitigation of problems or failures before they occur.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor should it be used as an aid in determining the scope of the claimed subject matter.
Embodiments of the present disclosure relate to predicting future states of nodes of a datacenter (each node representing a component of the datacenter in a context graph) utilizing a trained future state predictor. The evolution of recent states of the datacenter is analyzed by examining historical metrics collected from the datacenter nodes, as well as historical metrics of neighboring nodes. The metrics are aggregated into historical metric summary vector representations of the nodes which are utilized to train a future state predictor to predict future datacenter states. Once trained, metrics may be input into the future state predictor and the future state predictor may be utilized to predict a future state of one or more of the nodes of the datacenter.
The present invention is described in detail below with reference to the attached drawing figures.
The subject matter of the present disclosure is described with specificity herein to meet statutory requirements. However, the description itself is not intended to limit the scope of this patent. Rather, the inventors have contemplated that the claimed subject matter might also be embodied in other ways, to include different steps or combinations of steps similar to the ones described in this document, in conjunction with other present or future technologies. For example, although this disclosure refers to generating context graphs that represent datacenters in illustrative examples, aspects of this disclosure can be applied to generating context graphs that represent relationships between components in a local hardware or software system, such as a storage system or distributed software application. Moreover, although the terms “step” and/or “block” may be used herein to connote different elements of methods employed, the terms should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly described. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise.
The term “component” as used in the description below encompasses both hardware and software resources. The term component may refer to a physical device such as a computer, server, router, etc., a virtualized device such as a virtual machine or virtualized network function, or software such as an application, a process of an application, a database management system, etc. A component may include other components. For example, a server component may include a web service component which may include a web application component.
The term “context graph” refers to a data structure that depicts connections or relationships between components. A context graph consists of nodes (vertices, points) and edges (arcs, lines) that connect them. A node represents a component, and an edge represents a relationship between the corresponding components. Nodes and edges may be labeled or enriched with data or properties. For example, a node may include an identifier for a component, and an edge may be labeled to represent different types of relationships, such as a hierarchical relationship or a cause-and-effect type relationship. In embodiments where nodes and edges are enriched with data, nodes and edges may be indicated with data structures that allow for the additional information, such as JavaScript Object Notation (“JSON”) objects, extensible markup language (“XML”) files, etc. Context graphs also may be referred to in related literature as a triage map, relationship diagram/chart, causality graph, etc.
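By way of illustration only and not limitation, the following is a minimal sketch of how nodes and edges enriched with data might be encoded as JSON-style objects; the component names, metric names, and values are hypothetical:

```python
import json

# A minimal, hypothetical context graph: nodes carry component metrics,
# edges carry a relationship type. All names and values are illustrative.
context_graph = {
    "nodes": [
        {"id": "server-01", "type": "server", "metrics": {"cpu": 0.72, "temp_c": 41}},
        {"id": "web-app-01", "type": "application", "metrics": {"requests_per_s": 350}},
    ],
    "edges": [
        {"source": "server-01", "target": "web-app-01", "relationship": "hosts"},
    ],
}

print(json.dumps(context_graph, indent=2))
```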
The term “subgraph” refers to a portion of a context graph. Subgraphs may be stored in a historical database as training data and may be aggregated to facilitate data imputation for missing data in a context graph. Subgraphs may additionally be utilized to diagnose particular problems in a datacenter. For example, if a particular problem occurs, the subgraphs utilized to generate a particular hash for the particular problem may be provided to help identify a source of the problem in the datacenter.
Properties of a subgraph or context graph may be described by a “hash.” A hash may be determined based on a particular property or properties of a node. The properties may be metrics of the node itself or information related to one or more neighbors of the node. If related to multiple node neighbors, the information may be aggregated. The aggregated neighbor-related information may include, by way of example only, a number of neighbors of the node, an absolute number of neighbors having a particular condition, a relative number of neighbors having a particular condition, a sum/maximum/minimum/average of one or more node properties, and the like. For clarity, a “neighbor” is a node that is directly connected to the subject node by an edge. The edge may correspond to a relationship between the hardware and/or software components represented by the nodes.
A hash may additionally be computed through a predetermined number of iterations which may be based on a diameter of the subgraph or context graph, desired input size, etc. For example, at iteration 0, the hash may include a hash of the particular node. At iteration 1, the hash may include a hash of the hash of the particular node and the hash of neighbor nodes. At iteration 2, the hash may include a hash of the hash of the particular node, the hash of the neighbor nodes, and the hash of the neighbors of the neighbor nodes. In this way, the hash provides a fingerprint or identifying characteristics of the context graph or subgraph corresponding to properties of nodes of the context graph or subgraph that can be utilized to identify similar context graphs or subgraphs. Subgraphs and/or context graphs may be “similar” to one another when a threshold level of similarity (as measured by similar nodes, similar node properties, similar connections between nodes, and the like) exists between items being measured. The threshold necessary for a similarity determination may be configured to suit particular use cases, as desired, and embodiments of the present disclosure are not intended to be limited to any particular threshold similarity.
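By way of illustration only, the following is a minimal sketch of such iterative hashing, assuming the hashed property is the node's neighbor count; the particular hash function and digest length are arbitrary choices for the sketch, not prescribed by this disclosure:

```python
import hashlib

def node_hash(value):
    """Hash a string property into a short hex digest."""
    return hashlib.sha256(value.encode()).hexdigest()[:8]

def iterative_hash(graph, node, iterations):
    """Iteratively hash a node's property together with its neighbors' hashes.

    `graph` maps each node to the set of its neighbors. Iteration 0 hashes
    the node's own property (here, its neighbor count); each further
    iteration folds in the previous-iteration hashes of its neighbors.
    """
    if iterations == 0:
        return node_hash(str(len(graph[node])))
    own = iterative_hash(graph, node, iterations - 1)
    neighbor_hashes = sorted(
        iterative_hash(graph, n, iterations - 1) for n in graph[node]
    )
    return node_hash(own + "".join(neighbor_hashes))
```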
A “vector representation” or “graph embedding” may correspond to the hash itself or a string of hashes being considered (e.g., hashes of multiple properties or for multiple nodes). In embodiments, a vector representation corresponds to a context graph or subgraph as it evolves over time. For example, as a particular property or node changes over time, a vector representation may represent the hash of the particular node as it changes over time, which may help diagnose a root cause of an anomalous condition, predict a future state of a datacenter (e.g., a particular property or particular node), identify missing properties, summarize a state of the datacenter, compare states of the datacenter, and the like.
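As a minimal sketch, one such temporal vector representation might simply collect a node's hash at each successive timestamp; `hash_snapshots` below is assumed to be a time-ordered list of per-node hash mappings:

```python
def temporal_vector(hash_snapshots, node):
    """Vector representation of a node: its hash at each successive timestamp.

    `hash_snapshots` is a time-ordered list of dicts mapping node -> hash.
    """
    return [snapshot[node] for snapshot in hash_snapshots]
```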
As previously set forth in the Background, datacenters are large clusters of components (e.g., hardware and/or software resources) that are connected to perform operations using massive amounts of data. Keeping these components working efficiently is a complex task as many incidents may occur during the execution of the processes. In order to detect anomalies, problems, and/or failures, or to otherwise assess the health of the system, tools are utilized that extract and gather metrics from the datacenter components. Metrics may include, by way of example only, the temperature of datacenter components, workload, network usage, processor capacity, and the like. The set of metrics at a given timestamp forms the state of the datacenter at the point in time represented by the timestamp. While much may be gleaned from a datacenter state, the information is indicative of conditions that have already taken place and, accordingly, does little to facilitate mitigation of problems or failures before they occur.
Embodiments of the present disclosure are generally directed to predicting future states of nodes of a datacenter (each node representing a component of the datacenter in a context graph) utilizing a trained future state predictor. The evolution of recent states of the datacenter is analyzed by examining historical metrics collected from the datacenter nodes, as well as historical metrics of neighboring nodes. The metrics are aggregated into historical metric summary vector representations of the nodes which are utilized to train a future state predictor to predict future datacenter states. Once trained, metrics may be input into the future state predictor and the future state predictor may be utilized to predict a future state of one or more of the nodes of the datacenter.
In practice, historical metrics are collected from a plurality of historical nodes of a historical datacenter, that is, a datacenter for which the historical metrics have been collected on the historical nodes for a threshold period of time. Such threshold period of time may be configurable as desired and any particular threshold time period is not intended to limit the scope of embodiments of the present disclosure. Collected metrics may include, by way of example only and not limitation, the temperature of the components represented by the historical nodes, workload of the historical nodes, network usage of the historical nodes, input/output (I/O) operations of the historical nodes, functionality of the historical nodes, processor capacity, and the like. In embodiments, this historical data is assimilated and utilized to train a future state predictor to predict future states of datacenters, as more fully described below. Each of the historical nodes corresponds to a historical context graph or subgraph that represents the components of the historical datacenter and the relationships that exist between those components.
In addition to the local metrics of the historical nodes discussed above, information and metrics related to each node's neighborhood are also collected. By way of example and not limitation, such neighborhood information and metrics may include connections between historical nodes, traffic between historical nodes, the number or percentage of neighboring nodes of a historical node having a particular condition, and the like. In embodiments, the historical information and metrics collected do not include any global property related to the historical datacenter. For instance, in such embodiments, the historical information and metrics collected would not include the geolocation, humidity, temperature outside the historical nodes, and the like for the historical datacenter. In this way, the collected historical metrics and information may include historical metrics from a plurality of datacenters and the trained future state predictor may be utilized to predict future states of a datacenter that was not considered during the training phase.
Historical metrics and information collected are aggregated into historical metric summary vector representations for the plurality of historical nodes in the historical datacenter, the historical metric summary vector representations summarizing the collected historical metrics and the topology of the datacenter. The historical metric summary vector representations may include information derived from the historical metrics of neighboring nodes of the plurality of historical nodes as well as metrics and information regarding the historical nodes themselves.
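By way of illustration only, a historical metric summary vector for a single node might be assembled as follows; the choice of local metric and of neighbor aggregates is an assumption made for the sketch:

```python
import statistics

def metric_summary_vector(graph, metrics, node):
    """Summarize a node's own metrics together with its neighborhood.

    `graph` maps node -> set of neighbors; `metrics` maps node -> dict of
    metric name -> value. The aggregates chosen (neighbor count, mean and
    maximum neighbor CPU load) are illustrative only.
    """
    neighbor_cpu = [metrics[n]["cpu"] for n in graph[node]]
    return [
        metrics[node]["cpu"],                           # local metric
        len(graph[node]),                               # topology: neighbor count
        statistics.mean(neighbor_cpu) if neighbor_cpu else 0.0,
        max(neighbor_cpu, default=0.0),
    ]
```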
In embodiments, concatenated to the historical metric summary vector is a vector indicating whether particular node state patterns are satisfied by the corresponding node. A node state pattern is a subgraph with particular conditions on the state of the nodes. In embodiments, in addition to using the aggregated metric information, a list of node state patterns may be consulted and a node state pattern vector indicating whether such node state patterns are satisfied may be appended, providing a more global overview of the state of the datacenter.
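A minimal sketch of such a node state pattern vector, assuming each pattern is expressed as a predicate over a node and its neighborhood; the example pattern is hypothetical:

```python
def node_state_pattern_vector(graph, metrics, node, patterns):
    """One binary entry per pattern: 1 if the node satisfies it, else 0."""
    return [1 if pattern(graph, metrics, node) else 0 for pattern in patterns]

def overloaded_neighborhood(graph, metrics, node):
    """Hypothetical pattern: the node and at least half of its neighbors
    report CPU load above 0.9."""
    hot = [n for n in graph[node] if metrics[n]["cpu"] > 0.9]
    return metrics[node]["cpu"] > 0.9 and len(hot) * 2 >= len(graph[node])

# The enhanced vector concatenates the metric summary and pattern vectors:
# enhanced = metric_summary_vector(graph, metrics, node) \
#     + node_state_pattern_vector(graph, metrics, node, [overloaded_neighborhood])
```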
After aggregation of the historical metric information (and node state pattern data, if desired) into enhanced historical vector representations for the plurality of historical nodes in the historical datacenter, such enhanced historical vector representations are utilized to train a future state predictor that approximates the known evolution of the datacenter based on the information contained in a time-window prior to a given state. The trained future state predictor may then be utilized to predict future states of nodes of an input datacenter. It will be understood and appreciated by those having ordinary skill in the art that historical metric summary vector representations may be utilized to train the future state predictor without having the node state pattern vector concatenated thereto. It will be further understood that vectors other than node state pattern vectors may be concatenated to the metric summary vectors, as desired. Any and all such variations, and any combination thereof, are contemplated to be within the scope of embodiments of the present disclosure.
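By way of illustration only, training might pair each node's time-window of enhanced vectors with the state observed immediately afterwards; scikit-learn's RandomForestClassifier is used below purely as a stand-in, as the disclosure does not prescribe any particular model:

```python
from sklearn.ensemble import RandomForestClassifier

def build_training_set(vector_history, state_history, window):
    """Pair each time-window of enhanced vectors with the state that followed.

    `vector_history[t][node]` is the enhanced vector of `node` at time t;
    `state_history[t][node]` is the state label observed at time t.
    """
    X, y = [], []
    for t in range(window, len(vector_history)):
        for node in vector_history[t]:
            # Flatten the window of vectors preceding time t into one sample.
            X.append([v for s in range(t - window, t)
                      for v in vector_history[s][node]])
            y.append(state_history[t][node])
    return X, y

# Hypothetical usage, given collected histories:
# X, y = build_training_set(vector_history, state_history, window=5)
# future_state_predictor = RandomForestClassifier().fit(X, y)
```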
Accordingly, one embodiment of the present disclosure is directed to a method that facilitates training a future state predictor to predict a future state of a node of an input datacenter. The method includes receiving historical metrics from a plurality of historical nodes of a historical datacenter, the plurality of historical nodes corresponding to a historical context graph indicating a plurality of relationships among the plurality of historical nodes. Each historical node corresponds to a component of the historical datacenter. The method further includes aggregating the historical metrics into historical metric summary vector representations for the plurality of historical nodes in the historical datacenter. The historical metric summary vector representations include information derived from the historical metrics of neighbors of the plurality of historical nodes. Still further, the method includes training a future state predictor with the historical metric summary vector representations to predict a future state of a node of an input datacenter.
In another embodiment, the present disclosure is directed to a method that facilitates predicting a future state of a node in a datacenter. The method includes receiving metrics from a plurality of nodes in a datacenter, the plurality of nodes corresponding to a context graph indicating a plurality of relationships among the plurality of nodes. Each node corresponds to a component of the datacenter. The method further includes aggregating the metrics into a metric summary vector representation for a node of the plurality of nodes in the datacenter. The metric summary vector representation corresponds to a time-window prior to a future state and includes information derived from the metrics of neighbors of the node. Still further, the method includes predicting a future state of the node utilizing a future state predictor that has been trained to predict future states of the node.
In yet another embodiment, the present disclosure is directed to a computerized system that utilizes a future state predictor to predict a future state of a node in a datacenter. The system includes a processor and a non-transitory computer storage medium storing computer-useable instructions that, when used by the processor, cause the processor to receive historical metrics from a plurality of historical nodes in a historical datacenter corresponding to a historical context graph indicating a plurality of relationships among the plurality of historical nodes corresponding to components of the historical datacenter. When used by the processor, the computer-useable instructions further cause the processor to aggregate the historical metrics into historical metric summary vector representations for the plurality of historical nodes in the historical datacenter. The historical metric summary vector representations include information derived from the historical metrics of neighbors of the plurality of historical nodes. Still further, when used by the processor, the computer-useable instructions cause the processor to train a future state predictor with the historical metric summary vector representations to predict a future state of an input datacenter, receive metrics from a plurality of nodes in the datacenter corresponding to a time-window prior to the future state, and aggregate the metrics into a metric summary vector representation for a node of the plurality of nodes in the datacenter. The metric summary vector representation corresponds to the time-window prior to the future state and includes information derived from the metrics of neighbors of the node. When used by the processor, the computer-useable instructions additionally cause the processor, utilizing the future state predictor, to predict the future state of the node.
Referring now to
In some embodiments, one or more of the illustrated components/modules may be implemented as stand-alone applications. In other embodiments, one or more of the illustrated components/modules may be implemented via a server or as an Internet-based service. It will be understood by those having ordinary skill in the art that the components/modules illustrated in
It should be understood that this and other arrangements described herein are set forth only as examples. Other arrangements and elements (e.g., machines, interfaces, functions, orders, and groupings of functions, etc.) can be used in addition to or instead of those shown, and some elements may be omitted altogether. Further, many of the elements described herein are functional entities that may be implemented as discrete or distributed components or in conjunction with other components, and in any suitable combination and location. Various functions described herein as being performed by one or more entities may be carried out by hardware, firmware, and/or software. For instance, various functions may be carried out by a processor executing instructions stored in memory. The future state prediction system 100 may be implemented via any type of computing device, such as computing device 1200 described below with reference to
The future state prediction system 100 generally operates to predict future states of nodes in a datacenter. It should be understood that the future state prediction system 100 shown in
The future state prediction system 100 of
Metric collection is illustrated in the schematic diagram of
With reference back to
For example, using the number of neighbors as the selected property, at iteration 0, the hash of node A is represented by H(1) because its only neighbor is node B. Using the same property, the hashes of nodes B, C, D, and E are represented by H(3), H(2), H(1), and H(1), respectively, because nodes B, C, D, and E have 3, 2, 1, and 1 neighbors. In some embodiments, the direction of the edges is ignored. In other embodiments, the direction of the edges is utilized as a property.
In the same example, and still referring to
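Continuing the example in code, one topology consistent with the neighbor counts above can be run through the `iterative_hash` sketch given earlier; the concrete edge set is an assumption for illustration:

```python
# A hypothetical topology matching the neighbor counts 1, 3, 2, 1, 1
# for nodes A, B, C, D, E described above.
graph = {
    "A": {"B"},
    "B": {"A", "C", "D"},
    "C": {"B", "E"},
    "D": {"B"},
    "E": {"C"},
}

# Iteration 0 depends only on each node's own neighbor count:
# H(1), H(3), H(2), H(1), H(1) for nodes A, B, C, D, E respectively.
for node in "ABCDE":
    print(node, iterative_hash(graph, node, 0))

# Iteration 1 folds in the neighbors' iteration-0 hashes, so nodes with
# equal counts but different neighborhoods (e.g., D and E) now differ.
for node in "ABCDE":
    print(node, iterative_hash(graph, node, 1))
```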
Turning now to
With reference back to
In embodiments, and as illustrated in
With reference back to
Once trained, the future state predictor 126 may be utilized to predict future states for input metrics. As previously described with respect to training the future state predictor 126, metrics are collected. This time, however, metrics are collected from a plurality of nodes in a datacenter for which future state prediction is desired (as opposed to historical metrics). The metrics may be received, for instance, utilizing the metric collection component 122 of
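A minimal sketch of this inference step, assuming the stand-in model from the training sketch above:

```python
def predict_future_state(predictor, recent_vectors):
    """Predict a node's next state from its time-window of recent vectors.

    `recent_vectors` is the list of the node's enhanced vectors over the
    window preceding the state to be predicted.
    """
    flattened = [v for vec in recent_vectors for v in vec]
    return predictor.predict([flattened])[0]
```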
Use of the trained future state predictor 126 is depicted in the schematic diagrams of
As illustrated in
Turning now to
The historical metrics are aggregated (e.g., utilizing the state discovery component 124 of
A future state predictor (e.g., future state predictor 126 of
Referring now to
At step 1112, the metrics are aggregated (e.g., utilizing the state discovery component 124 of
Having described embodiments of the present disclosure, an exemplary operating environment in which embodiments of the present disclosure may be implemented is described below in order to provide a general context for various aspects of the present disclosure. Referring to
The inventive embodiments may be described in the general context of computer code or machine-useable instructions, including computer-executable instructions such as program modules, being executed by a computer or other machine, such as a personal data assistant or other handheld device. Generally, program modules, including routines, programs, objects, components, data structures, etc., refer to code that performs particular tasks or implements particular abstract data types. The inventive embodiments may be practiced in a variety of system configurations, including handheld devices, consumer electronics, general-purpose computers, more specialized computing devices, etc. The inventive embodiments may also be practiced in distributed computing environments where tasks are performed by remote-processing devices that are linked through a communications network.
With reference to
Computing device 1200 typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by computing device 1200 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 1200. Computer storage media does not comprise signals per se. Communication media typically embodies computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared, and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
Memory 1212 includes computer-storage media in the form of volatile and/or nonvolatile memory. The memory may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid-state memory, hard drives, optical-disc drives, etc. Computing device 1200 includes one or more processors that read data from various entities such as memory 1212 or I/O components 1220. Presentation component(s) 1216 present data indications to a user or other device. Exemplary presentation components include a display device, speaker, printing component, vibrating component, etc.
I/O ports 1218 allow computing device 1200 to be logically coupled to other devices including I/O components 1220, some of which may be built in. Illustrative components include a microphone, joystick, game pad, satellite dish, scanner, printer, wireless device, etc. The I/O components 1220 may provide a natural user interface (NUI) that processes air gestures, voice, or other physiological inputs generated by a user. In some instances, inputs may be transmitted to an appropriate network element for further processing. An NUI may implement any combination of speech recognition, touch and stylus recognition, facial recognition, biometric recognition, gesture recognition both on screen and adjacent to the screen, air gestures, head and eye tracking, and touch recognition associated with displays on the computing device 1200. The computing device 1200 may be equipped with depth cameras, such as stereoscopic camera systems, infrared camera systems, RGB camera systems, and combinations of these, for gesture detection and recognition. Additionally, the computing device 1200 may be equipped with accelerometers or gyroscopes that enable detection of motion. The output of the accelerometers or gyroscopes may be provided to the display of the computing device 1200 to render immersive augmented reality or virtual reality.
As can be understood, embodiments of the present disclosure provide for predicting future states of a datacenter utilizing a trained future state predictor. The present disclosure has been described in relation to particular embodiments, which are intended in all respects to be illustrative rather than restrictive. Alternative embodiments will become apparent to those of ordinary skill in the art to which the present disclosure pertains without departing from its scope.
From the foregoing, it will be seen that this disclosure is one well adapted to attain all the ends and objects set forth above, together with other advantages which are obvious and inherent to the system and method. It will be understood that certain features and sub-combinations are of utility and may be employed without reference to other features and sub-combinations. This is contemplated by and is within the scope of the claims.
Number | Date | Country | Kind
---|---|---|---
P 201631645 | Dec 2016 | ES | national