The present invention relates to an arrangement, an arrangement comprising computer software modules, an arrangement comprising circuits, and a method for providing an improved manner of autonomous adaptation of software monitoring of real-time systems, and in particular to an arrangement, an arrangement comprising computer software modules, an arrangement comprising circuits, and a method for providing an improved manner of autonomous adaptation of monitoring (surveillance and control) of a cloud-based system which supports real-time services (RTS).
As computer applications grow in complexity, so does the requirement for processing power. As a result, more and more services are becoming cloud based, relying on networked resources offered by a provider of computational resources. As is known, cloud computing is the delivery of computing services, such as servers, storage, databases, and networking, over a network such as the Internet (“the cloud”) to offer flexible resources where a user typically only pays for the cloud services used. The services are thus provided as use of computational resources such as processing power (for example processor cores and processing time), memory access (space and access speed) and bandwidth, to mention a few examples.
One example field where cloud computing is growing is real-time services. As the name suggests, the timing of such services is important: the services require a response in real time and are thus sensitive to delays. Typical acceptable response times are measured in milliseconds or even microseconds. It is thus important that a service or a system of services, especially real-time services, has access to sufficient computational resources to provide the required response times.
In order to ensure that a service has access to sufficient computational resources, several monitoring systems (including surveillance and/or control) have been proposed. The monitoring systems monitor the performance of one or more services and, based on the performance, provide the required computational resources.
The state-of-the-art solutions for providing such monitoring involve developing mechanistic models, both statistical and dynamic ones, which allow a system to predict the performance of the real-time services. For instance, such a model might take as input the CPU load, the memory usage, and the rate at which traffic enters the service in order to predict the processing capacity of the node, and in turn predict whether any violation of a real-time requirement might occur. These models would then be used in conjunction with a control unit which decides whether to change some parameters or functionality of the real-time services, in order to affect their real-time performance and thereby mitigate a risk of violating a real-time requirement.
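Purely as a non-limiting illustration of such a mechanistic model, the following sketch (in Python, with hypothetical coefficients and thresholds that are not taken from any particular prior art system) shows how CPU load, memory usage and traffic ingress rate could be combined to predict remaining capacity and flag a potential real-time violation.

```python
# Illustrative sketch only; the coefficients and thresholds are hypothetical.

def predict_remaining_capacity(cpu_load: float, mem_usage: float, ingress_rate: float) -> float:
    """Estimate remaining processing capacity (requests/s) of a node."""
    nominal_rate = 10_000.0                       # assumed nominal capacity of the node
    headroom = (1.0 - cpu_load) * (1.0 - mem_usage)
    return nominal_rate * headroom - ingress_rate

def violates_realtime_requirement(cpu_load: float, mem_usage: float, ingress_rate: float) -> bool:
    """Flag a potential violation when the predicted headroom drops below a margin."""
    safety_margin = 500.0                         # requests/s, hypothetical
    return predict_remaining_capacity(cpu_load, mem_usage, ingress_rate) < safety_margin
```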
One example of such monitoring is given in the patent application published as US2015286507A1 which discloses a method, node and computer program for a resource controller node for enabling automatic adaptation of the size of a pool of resource units, the resource units needed for operation of an application in a computer environment.
As in that system, contemporary networks of real-time services can be huge, so the usually simple step of collecting the data is no longer simple. Oftentimes, the services grow in complexity over time, simply by being scaled up.
A key feature of prior art surveillance and control systems is that they are based on a deep learning framework for predicting future real-time performance of the real-time system(s). As is the case with any deep learning framework, good data is needed in order to train the network to achieve an acceptable level of accuracy for the predictions. A fundamental problem with the existing solutions is that whenever the configuration of the underlying system is controlled or changed (e.g., scaling the CPU, storage, VNFs, etc.), this needs to be accounted for while generating the prediction model. The reason is that scaling the configuration also changes the real-time performance (e.g., a queue will be processed more quickly). Hence, in the previously proposed solutions there has been a need to generate this data and train the prediction model a priori and off-line.
There is thus a need for a more efficient manner of both providing software monitoring of real-time systems and for adapting the models for such monitoring.
The inventors have realized that automated monitoring of the actual monitoring of the real-time services would indeed be beneficial, as it would allow for detecting that a utilized model is not providing good predictions and, based on this, retraining the model dynamically to adapt to any changes in the underlying system or its use.
The inventors have also realized that, as the monitoring is to be automated, the exact nature of the real-time data to be collected need not be readable to a human monitor. The inventors have further realized that insightfully utilizing opaque services, that is services that report performance metrics in an unknown format or alternatively in a normalized format, wherein the performance metrics have no real or at least no clear meaning to the monitoring, provides for faster data collection that is easier and/or faster to adapt to changes.
The inventors have also realized that there is an inherent problem in prior art monitoring systems in that, as systems grow larger while maintaining the same timing requirements, especially for real-time services, any adaptation of the computational resources for a service (or services) may be too slow, thereby affecting the performance of the service. In this regard, it can again be pointed out that real-time services operate on timing requirements of milliseconds and microseconds, whereas resource allocations operate on timings of hundreds of seconds. The inventors have realized that in order to preempt such allocations, the monitoring may, based on the prediction, reserve resources ahead of time into a standby pool from which any changes in allocations may be drawn.
As will be shown below, this has provided an extremely simple and elegant manner for providing automated monitoring and also training of the monitoring.
The inventors have further realized that traditional analysis tools are not advanced enough to handle the sizes of real-time service systems, which may be vast indeed, especially if implemented as cloud services. The inventors have also realized that the most advanced analysis tools available are aimed at image analysis, and are thus proposing to insightfully provide the performance metrics as an image (or images), which image(s) may be analyzed using image analysis tools, such as the many various neural network tools and techniques commonly used for image analysis, for example Convolutional Neural Networks (CNN).
According to one aspect there is provided a software monitoring arrangement arranged to monitor a software system comprising one or more computational resources, wherein the software system is configured to execute one or more services each utilizing a portion of the one or more computational resources, and the software system further comprises a live capacity controller configured to receive one or more first performance metrics from the one or more services and to assign the portion of the computational resources to the one or more services based on the first performance metrics, the software monitoring arrangement comprising a controller configured to: receive second performance metrics; execute a state predictor to determine a predicted state of the software system based on the second performance metrics; execute a standby capacity calculator to determine a standby capacity based on the predicted state; and reserve computational resources according to the standby capacity to a standby pool of computational resources, enabling the live capacity controller to assign a change in the portion of the computational resources to the one or more services from the standby pool.
The solution may be implemented as a software solution, a hardware solution or a mix of software and hardware components.
In some embodiments the controller is further configured to: execute a performance calculator to determine a prediction performance; execute a compensator calculator to determine a compensator based on the prediction performance; and determine the standby capacity based on the compensator.
In some embodiments the controller is further configured to execute an accuracy calculator to determine a prediction accuracy by comparing the second performance metrics for a specific time period to the previously stored predicted state of the software system for the specific time period, and to determine the prediction performance based on the prediction accuracy.
In some embodiments the controller is further configured to determine the compensator based on a safety factor (k).
In some embodiments the controller is further configured to execute a minimum standby pool calculator to determine a minimum standby pool size based on the predicted state and in response thereto execute the standby capacity calculator to determine the standby capacity based also on the minimum standby pool size.
In some embodiments the controller is further configured to determine the predicted state based on a system model.
In some embodiments the controller is further configured to determine the predicted state based on the system model utilizing a neural network.
In some embodiments the controller is further configured to store the predicted state; store the received performance metrics; and execute a model trainer to train the system model based on the stored predicted states and the stored received performance metrics.
In some embodiments the controller is further configured to determine that the prediction performance falls below a threshold and in response thereto train the system model.
In some embodiments the second performance metrics comprises one or more images representing one or more current states of the one or more services of the software system and wherein the controller is further configured to determine the predicted state of the software system based on image analysis of the one or more images.
In some embodiments the controller is further configured to provide said image analysis to recognize a pattern in the one or more images, which pattern is associated with a known state, wherein the predicted state is determined to be the known state.
In some embodiments the second performance metrics comprises at least one performance metric from a first service of the one or more services, wherein the performance metric from the first service comprises an opaque data entity.
In some embodiments the first performance metrics is at least a subset of the second performance metrics.
In some embodiments the controller is further configured to reserve computational resources according to the standby capacity to the standby pool of computational resources enabling the live capacity controller to assign an increase in the portion of the computational resources to the one or more services.
These embodiments have several benefits including, but not limited to the ones disclosed herein. A few examples of benefits are disclosed below.
The solution allows the system to autonomously learn and adapt the software surveillance and control in a safe way.
The solution reduces the need to collect and train the deep learning models a priori.
The solution allows the system to autonomously adapt to new characteristics of the real-time system.
By continuously monitoring the prediction performance and re-training the prediction models when necessary, the solution reduces the amount of necessary standby capacity for the system, thereby reducing the cost of running the system.
The solution reduces the amount of resources needed in hot-standby and thereby amounts to significant cost- and resource savings.
The solution thus allows the system to continuously adapt and update the prediction models in a safe way. Along with this, as the model is improved it continuously reduces the amount of resources needed in hot standby.
This invention may be used both within and outside a cloud environment. The intended target use-case is the real-time cloud; however, it can just as well be used in a closed multi-service environment deployed on local stand-alone hardware. Examples include the infotainment system in autonomous cars, actuators and sensors in a manufacturing plant, or a radio base-station.
According to one aspect a method is provided for automated software monitoring of a software system comprising one or more computational resources configured to execute one or more services and a live capacity controller configured to receive performance metrics from the one or more services and to assign a portion of the computational resources to the one or more services based on the performance metrics, wherein the method comprises: receiving the performance metrics; determining a predicted state of the software system; determining a standby capacity based on the predicted state; and reserving computational resources according to the standby capacity to a standby pool of computational resources, enabling the live capacity controller to assign a change in the portion of the computational resources to the one or more services from the standby pool.
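Purely as a non-limiting illustration, the method steps above could be sketched as a monitoring loop as follows (in Python; all class and function names are hypothetical and merely illustrate the flow of the method, not a claimed implementation).

```python
# Illustrative sketch of the method: receive metrics, predict, size and reserve the standby pool.

def monitoring_step(performance_metrics, state_predictor, standby_calculator, standby_pool):
    predicted_state = state_predictor.predict(performance_metrics)     # predicted state of the software system
    standby_capacity = standby_calculator.calculate(predicted_state)   # capacity to hold in standby
    standby_pool.reserve(standby_capacity)                             # pool the live capacity controller draws from
    return predicted_state, standby_capacity
```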
According to one aspect there is provided a computer-readable medium carrying computer instructions that when loaded into and executed by a controller of a software monitoring of real-time services arrangement enables the software monitoring of real-time services arrangement to implement a method according to herein.
According to one aspect there is provided a software component (or module) arrangement according to an embodiment of the teachings herein. The software component arrangement is adapted to be used in a software monitoring of real-time services arrangement as taught herein for automated software monitoring of real-time services of a software system comprising one or more computational resources configured to execute one or more services and a live capacity controller configured to receive performance metrics from the one or more services and to assign a portion of the computational resources to the one or more services based on the performance metrics, wherein the software component arrangement comprises: a software component for receiving the performance metrics; a software component for determining a predicted state of the software system; a software component for determining a standby capacity based on the predicted state; and a software component for reserving computational resources according to the standby capacity to a standby pool of computational resources, enabling the live capacity controller to assign an increase in the portion of the computational resources to the one or more services from the standby pool.
According to one aspect there is provided an arrangement comprising circuitry for software monitoring of real-time services according to an embodiment of the teachings herein. The arrangement comprising circuitry for software monitoring of real-time services is adapted to be used in a software monitoring of real-time services arrangement as taught herein for automated software monitoring of a software system comprising one or more computational resources configured to execute one or more services and a live capacity controller configured to receive performance metrics from the one or more services and to assign a portion of the computational resources to the one or more services based on the performance metrics, wherein the software monitoring arrangement comprises: circuitry arranged to receive the performance metrics; circuitry arranged to determine a predicted state of the software system; circuitry arranged to determine a standby capacity based on the predicted state; and circuitry arranged to reserve computational resources according to the standby capacity to a standby pool of computational resources, enabling the live capacity controller to assign an increase in the portion of the computational resources to the one or more services from the standby pool.
The teachings herein find use both within and outside a cloud environment. One intended target use-case is the real-time cloud; however, the teachings herein may equally well be used in a closed multi-service environment deployed on local stand-alone hardware. Examples include the infotainment system in autonomous cars, or actuators and sensors in a manufacturing plant.
Further embodiments and advantages of the present invention will be given in the detailed description. It should be noted that the teachings herein find use in monitoring arrangements in many areas of software monitoring of real-time service systems.
Embodiments of the invention will be described in the following, reference being made to the appended drawings which illustrate non-limiting examples of how the inventive concept can be reduced to practice.
The controller 101 is configured to control the overall operation of the software monitoring of real-time services arrangement 100. In one embodiment, the controller 101 is one or more general purpose processors. As a skilled person would understand, there are many alternatives for how to implement a controller, such as using Field-Programmable Gate Array (FPGA) circuits, ASICs, GPUs, etc. in addition or as an alternative. For the purpose of this application, all such possibilities and alternatives will be referred to simply as the controller 101.
The memory 102 is configured to store data and computer-readable instructions that when loaded into the controller 101 indicates how the software monitoring of real-time services arrangement 100 is to be controlled. The memory 102 may comprise one or several memory modules, partitions, allocations, units or devices, but they will be perceived as being part of the same overall memory 102. There may be one memory unit for storing data, one memory for the communications interface (see below) for storing settings, and so on. As a skilled person would understand there are many possibilities of how to select where data should be stored and a general memory 102 for the software monitoring of real-time services arrangement 100 is therefore seen to comprise any and all such memory units for the purpose of this application. As a skilled person would understand there are many alternatives of how to implement a memory, for example using non-volatile memory circuits, such as EEPROM memory circuits, or using volatile memory circuits, such as RAM memory circuits. The memory 102 may also be an allocation of a larger memory partition. For the purpose of this application all such alternatives will be referred to simply as the memory 102.
In one embodiment the software monitoring of real-time services arrangement 100 further comprises a communication interface 103. The communication interface may be wired and/or wireless. The communication interface may comprise several interfaces.
In one embodiment the communication interface comprises a USB (Universal Serial Bus) interface. In one embodiment the communication interface comprises an analog interface, a CAN (Controller Area Network) bus interface, an I2C (Inter-Integrated Circuit) interface, or other interface.
In one embodiment the communication interface comprises a radio frequency (RF) communications interface. In one such embodiment the communication interface comprises a Bluetooth™ interface, a WiFi™ interface, a ZigBee™ interface, an RFID™ (Radio Frequency IDentifier) interface, a Wireless Display (WiDi) interface, a Miracast interface, and/or another RF interface commonly used for short range RF communication. In an alternative or supplemental such embodiment the communication interface comprises a cellular communications interface such as a fifth generation (5G) cellular communication interface, an LTE (Long Term Evolution) interface, a GSM (Global System for Mobile communications) interface and/or another interface commonly used for cellular communication. In one embodiment the communications interface is configured to communicate using the UPnP (Universal Plug and Play) protocol. In one embodiment the communications interface is configured to communicate using the DLNA (Digital Living Network Alliance) protocol.
In one embodiment, the communications interface 103 is configured to enable communication through more than one of the example technologies given above.
The communications interface 103 may be configured to enable the software monitoring of real-time services arrangement 100 to communicate with services that are part of the (real-time) services for receiving data on (real-time) performance from such (real-time) services. The communications interface 103 may be configured to enable the software monitoring of real-time services arrangement 100 to communicate with a cloud service or other alternative for providing the computational resources to a software system (not shown in
As is indicated in
In the following the software monitoring of real-time services arrangement 100 will also be referred to simply as the monitoring arrangement 100. It should be noted that even though the teachings herein are focused on the monitoring of real-time services, the monitoring arrangement 100 may also be used for monitoring other services.
In some embodiments the software system 200 comprises a plurality of services 215 to be monitored. In some embodiments at least some or all of the services 215 are real-time services 215. In this example, three groupings of services are shown, but it should be noted that any number of services may be monitored and that the number of groupings and of services in each grouping may well be higher than illustrated in
The real-time services 215 may be connected for sharing data, for cooperating on a task or to be part of a process flow, to mention a few reasons, as indicated by the dotted arrows. As is also shown in
The live controller 210 is, in some embodiments, arranged to operate as per any prior art solution and its internal operations will therefore not be discussed in great detail herein. Suffice it to state that the live controller receives performance metrics, analyzes these metrics, possibly based on neural networks or other machine learning algorithms, and allocates computational resources accordingly.
In embodiments where real-time services are monitored the live-capacity controller may be referred to as a real-time capacity controller 210.
In
As mentioned above, the software monitoring arrangement 220 receives performance metrics from one or more of the services 215. It could be noted that the performance metrics received by the live controller 210 are, in some embodiments, different from the performance metrics received by the software monitoring arrangement 220. The live controller 210 is thus seen to receive a first set of performance metrics and the software monitoring arrangement 220 is seen to receive a second set of performance metrics. In some embodiments the first set is a subset of the second set. In some embodiments the first set is the same as the second set. In some embodiments the second set includes performance metrics for all services 215. In some embodiments the second set includes performance metrics for some of the services 215. In some embodiments the second set includes performance metrics for one of the services 215.
The software monitoring arrangement 220 includes a state predictor 221 that is configured to determine a predicted state 221′ of the software system 200 based on the received performance metrics, when executed. The state predictor is, in some embodiments, configured to operate based on a model, which model is based on machine learning, such as utilizing neural networks. More details on the state predictor will be given with reference to
The software monitoring arrangement 220 includes a standby capacity calculator 222 that is configured to determine a standby capacity 222′ based on the predicted state 221′, when executed.
By monitoring the performance of the software system 200 and predicting a (future) state of the software system 200 representing the (future) requirements of at least some of the services 215 of the software system 200, it is possible to determine a (future) required capacity of computational resources. This in turn, as the inventors have realized through inventive and insightful reasoning providing a simple and elegant solution, enables reserving the needed capacity ahead of time so that, as the need for the capacity arises, the capacity is already available in a standby pool 206, having been reserved ahead of time. The software monitoring arrangement 220 is thus also configured to reserve computational resources 205 into a standby pool of computational resources 206. In
The live controller 210 is thereby enabled to assign any change in the assigned portion of computational resources 205 for any service(s) 215 quickly and effectively from a standby pool 206 of computational resources.
In some embodiments the change in the assigned portion of computational resources 205 for a service 215 relates to an increase in required resources. In some such embodiments such a change causes the software monitoring arrangement 220 to reserve additional and/or more powerful resources. In one such embodiment the software monitoring arrangement 220 is configured to reserve computational resources according to the standby capacity to the standby pool 206 of computational resources enabling the real-time capacity controller 210 to assign an increase in the portion of the computational resources 205 to the one or more services 215.
In some embodiments the change in the assigned portion of computational resources 205 for a service 215 relates to an adaptation of the requirements on the required resources (for example an increased requirement on bandwidth combined with a decreased requirement on processing power, such as when streaming huge data loads). In some such embodiments such a change causes the software monitoring arrangement 220 to reserve other, more suitable, resources (as per the actual case).
In some embodiments the change in the assigned portion of computational resources 205 for a service 215 relates to a decrease in the requirements on the required resources (for example a decreased timing requirement). In some such embodiments such a change causes the software monitoring arrangement 220 to reserve other, less powerful, resources, which may save on the overall costs.
In some embodiments, the performance metrics represent a current measured state of the software system 200.
As discussed in the background and in the summary, the inventors have realized several problems regarding errors or inconsistencies in the prediction and scaling of systems. The inventors have further realized that, due to such changes in the system 200 and/or errors or inconsistencies in the prediction, the predicted state 221′ may not always be (highly) accurate, which will lead to an incorrectly apportioned standby pool 206 being maintained, leading to too few or the wrong computational resources being available and/or computational resources being reserved but never used, thus increasing the cost of the system.
The software monitoring arrangement 220 therefore includes, in some embodiments, a performance calculator 225 configured to determine a prediction performance 225′, that is, a performance of the state predictor 221. The software monitoring arrangement 220 also includes, in some embodiments, a compensator calculator 226 configured to determine a compensator 226′ based on the prediction performance 225′. This allows for any prediction errors in the predicted state 221′ or bad performance of the state predictor 221 to be compensated for, based on the compensator 226′, when determining the standby capacity. The software monitoring arrangement 220 is configured, in some embodiments, to determine the standby capacity based on the compensator 226′ as well as on the predicted state 221′.
In some embodiments the software monitoring arrangement 220 is further configured to store a predicted state 221′ of the software system 200 for later determining whether the predicted state 221′ was accurate or not. In this regard, the software monitoring arrangement 220 includes an accuracy calculator 224 configured to determine a prediction accuracy 224′ of the state predictor. In some embodiments the accuracy calculator 224 determines the prediction accuracy 224′ based on a stored predicted state for a specific time period and the received performance metrics for the specific time period (same, adjacent or overlapping).
In some such embodiments the accuracy calculator 224 determines the prediction accuracy 224′ based on the number of times the predicted state has been stored, where a high number gives a higher accuracy.
The software monitoring arrangement 220 in such embodiments determines the prediction performance 225′ based on the prediction accuracy 224′. In cases where a low prediction accuracy is determined, a low prediction performance will also be determined. In some embodiments the prediction performance 225′ is or corresponds directly to the prediction accuracy 224′. In some such embodiments the prediction performance 225′ is determined as the prediction accuracy 224′ over time (such as an average over a past time period). In some such embodiments, the accuracy calculator 224 is comprised in the performance calculator 225.
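Purely as a non-limiting illustration of the accuracy calculator 224 and the performance calculator 225, the following Python sketch compares a stored predicted state with the state later observed for the same time period and averages the resulting accuracies over a past time period; representing states as scalars and the chosen window size are hypothetical simplifications.

```python
from collections import deque

class AccuracyCalculator:
    """Illustrative sketch: states are simplified to scalar capacity needs."""

    def __init__(self, window: int = 100):
        self.history = deque(maxlen=window)        # recent prediction accuracies

    def accuracy(self, predicted_state: float, observed_state: float) -> float:
        """1.0 for a perfect prediction, approaching 0.0 for a poor one."""
        error = abs(predicted_state - observed_state) / max(abs(observed_state), 1e-9)
        acc = max(0.0, 1.0 - error)
        self.history.append(acc)
        return acc

    def prediction_performance(self) -> float:
        """Prediction performance as the accuracy averaged over a past time period."""
        return sum(self.history) / len(self.history) if self.history else 0.0
```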
In order to provide adaptability of the standby pool, for example for satisfying higher requirements on time constraints and/or higher requirements on cost efficiency, the software monitoring arrangement 220 is further configured to determine the compensator 226′ based on a safety factor (k). The safety factor k is, in some embodiments, set by the software system 200. In some embodiments the safety factor k is set by a user of a service 215. In some embodiments the safety factor k is set by a system designer of the software system 200 or of a service 215.
In some embodiments the safety factor is set dynamically. In some such embodiments the safety factor is set as a function of the prediction performance, where a high (in some aspect) prediction performance results in a lower (in some aspect) safety factor. In some such embodiments the safety factor is set as a function of the training of the prediction model. In some such embodiments a safety factor k is set according to how well a model is trained. The models used for the state predictors have been touched upon briefly in the above with reference to
In some embodiments the safety factor k is arranged to provide a greater margin of error for the standby pool (i.e. a larger size and/or more powerful resources) in order to ensure or at least increase the likelihood that resources are available by reserving possibly unnecessary resources.
In some embodiments the safety factor k is arranged to provide a lower cost for the standby pool (i.e. a smaller size and/or cheaper resources) in order to ensure or at least decrease the cost of resources by reserving only (absolutely) necessary resources.
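Purely as a non-limiting illustration, one hypothetical way to derive the compensator 226′ from the prediction performance 225′ and the safety factor k is sketched below; the linear form is an assumption and not the only possibility.

```python
def compensator(prediction_performance: float, k: float = 1.0) -> float:
    """Multiplicative compensator: lower performance and/or higher k give a larger margin."""
    performance = min(max(prediction_performance, 0.0), 1.0)
    return 1.0 + k * (1.0 - performance)          # a perfect prediction yields no extra margin
```

With this hypothetical form, a prediction performance of 0.9 and k = 2 would give a compensator of 1.2, i.e. 20% extra standby capacity, whereas a k below 1 would shrink the margin and thereby the cost.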
In order to provide additional security as regards uninterrupted execution and to ensure that sufficient resources are always available, the software monitoring arrangement 220 further includes, in some embodiments, a minimum standby pool calculator 223 configured to determine a minimum standby pool size 223′ when executed. The minimum standby pool size 223′ is determined based on the predicted state 221′. In some embodiments the minimum standby pool size 223′ is determined to ensure that some constraints are indeed satisfied.
The software monitoring arrangement 220 is further configured to execute the standby capacity calculator 222 to determine the standby capacity 222′ based also on the minimum standby pool size 223′ ensuring that the minimum size 223′ is available.
It could be noted that the time-horizon for scaling the amount of standby resources is larger than the time-horizon for what resources are needed for the live capacity control. The standby capacity calculator or controller 222 must therefore rely on the prediction performance 225′ of the state predictor 221. The state predictor 221 generates the predicted state, which results in a (minimum) amount of standby resources in the standby pool 206. In other words, this is based on the minimum amount of resources that the system will need in order to meet the predicted performance levels. The actual size of the standby pool 206 used enables a more adaptable allocation of resources from the standby pool, as more resources than the bare minimum are made available. In some embodiments the minimum standby pool size is the standby pool size and the minimum standby pool size calculator 223 is comprised in the standby capacity calculator 222.
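Purely as a non-limiting illustration, the standby capacity calculator 222 could combine the predicted resource need, the compensator 226′ and the minimum standby pool size 223′ as sketched below; representing capacities as single numbers is a hypothetical simplification of the resource matrix discussed next.

```python
def standby_capacity(predicted_need: float, compensator_value: float, minimum_pool_size: float) -> float:
    """Compensated predicted need, never smaller than the minimum standby pool size."""
    return max(predicted_need * compensator_value, minimum_pool_size)
```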
It should be noted that even though terminology such as “size” is used, the size is not to be taken as a simple reference to a total number of resources. The size of the standby pool is to be regarded as a matrix of computational resources of varying kind, performance and availability.
As discussed in the above with regards to
In some embodiments the system model 228′ is based on machine learning such as neural networks, and the system model 228′ is in some such embodiments represented by a neural network. In some such embodiments the software monitoring arrangement 220 is configured (when executed by the controller 101) to determine the predicted state 221′ through the state predictor 221 utilizing such a neural network. In some embodiments, the neural network is based on deep learning. In some embodiments, the neural network is based on convolutional neural networks (CNN).
This allows for adapting to a vast number of different possibilities which may differ only in small increments but have hugely different effects, and which would be very complicated indeed to provide a mechanistic performance model for.
In order to enable the software monitoring arrangement 220 of the teachings herein to autonomously adapt to changes in the software system 200 or to changes or errors in the system model 228′ or the prediction(s) 221′ made based on the model 228′, the software monitoring arrangement 220 is configured to (re)train the system model 228′. The software monitoring arrangement 220 therefore stores some (for example every other, every tenth, or any such interval) or all state predictions 221′ made, at least over a time period of, for example, 10 ms, 100 ms, 1 s, 10 s, 100 s, 10 minutes, 1 hour, or 24 hours. The corresponding performance metrics for the predicted states are also stored, possibly along with the predicted states 221′. This allows a model trainer 228 that is comprised in the software monitoring arrangement 220 to train the system model 228′ based on the stored (or at least a subset of the stored) predicted states and the corresponding performance metrics.
In order to increase the accuracy of the training as well as reducing the needed storage capacity of the system model trainer 228, the software monitoring arrangement 220 is, in some embodiments, configured to only store predicted states that show a prediction accuracy exceeding a quality level, for example 70%, 75%, 80%, 85%, 90%, 95% or 99%.
In some embodiments the software monitoring arrangement 220 is configured to perform such (re)training when the performance 225′ of the state predictor 221 falls below a performance threshold. The threshold is dependent on the representation of the accuracy, but may generally be referred to in percentages, and the performance threshold may thus be, or lie in a range defined by, for example 70%, 75%, 80%, 85%, 90%, 95% or 99%.
The threshold is in some embodiments set by a system designer. The threshold is in some embodiments set dynamically by the software monitoring arrangement 220.
In some embodiments the threshold is set as a level of accuracy, thereby being able to adapt to a low accuracy. In some embodiments the threshold is set as a number of mispredictions or predictions of a low level of accuracy, thereby being able to adapt to a repeatedly low accuracy. In some embodiments the threshold is set as a rate of change in the level of accuracy, thereby being able to adapt to a change in accuracy.
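Purely as a non-limiting illustration, the three threshold variants above could be combined into a single retraining trigger as sketched below; the threshold values and the accuracy window are hypothetical.

```python
def should_retrain(recent_accuracies: list[float],
                   level_threshold: float = 0.85,
                   miss_threshold: int = 5,
                   drop_threshold: float = 0.10) -> bool:
    """Trigger retraining on low accuracy, repeated mispredictions, or a rapid drop in accuracy."""
    if not recent_accuracies:
        return False
    latest = recent_accuracies[-1]
    misses = sum(1 for a in recent_accuracies if a < level_threshold)
    drop = recent_accuracies[0] - latest if len(recent_accuracies) > 1 else 0.0
    return latest < level_threshold or misses >= miss_threshold or drop > drop_threshold
```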
As mentioned in the above, the performance metrics reflect or represent a measured state of a service 215, some services 215 or all services 215.
In some embodiments the performance metrics are representations of performances such as processing times, response times, response accuracy, memory read successes, queue sizes, worker threads, compute units, fan speeds, radio connections, execution frequency, cache allocations and so on, to mention a few examples. In some such embodiments the live-capacity controller 210 receives the performance metrics (or rather the first (subset of) performance metrics) in a format that the live-capacity controller 210 is arranged to understand, and determines what computational resources to assign based on the received performance metrics. For example, if the response times increase, the live-controller may respond by assigning more CPU time.
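Purely as a non-limiting illustration of the example above, a live-capacity control rule reacting to increasing response times could be sketched as follows; the target and step values are hypothetical.

```python
def assign_cpu_share(current_share: float, response_time_ms: float,
                     target_ms: float = 5.0, step: float = 0.1, max_share: float = 1.0) -> float:
    """Grant more CPU time when the measured response time exceeds the target."""
    if response_time_ms > target_ms:
        return min(current_share + step, max_share)
    return current_share
```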
In some further such embodiments the software monitoring arrangement 220 receives the performance metrics (or rather the second (subset of) performance metrics) in a format that the software monitoring arrangement 220 is arranged to understand, and predicts a future state based on the received performance metrics.
However, such embodiments, especially as relates to the state predictor 221, are complicated to set up, as there are so many different services, all likely to provide the performance metrics in different formats, which then all need to be handled and understood in order to provide an accurate prediction.
The inventors have, however, realized after insightful and inventive reasoning, that this long-standing problem can be overcome by transforming all incoming performance metrics into a format where the meaning of the performance metrics is unknown to the state predictor. In one such embodiment, the performance metrics are all (possibly individually) normalized, for example to a value between 0 and 1. The transformation may be performed by the providing service 215, the live-controller 210, the state predictor 221 or any component or entity in between. It is not of importance exactly where the performance metrics are formatted or how, and variations are possible.
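Purely as a non-limiting illustration, the normalization to values between 0 and 1 could be performed as sketched below, wherever in the chain the transformation happens to take place; the per-metric bounds are hypothetical.

```python
def normalize_metrics(raw_metrics: dict[str, float],
                      bounds: dict[str, tuple[float, float]]) -> list[float]:
    """Map each raw metric into [0, 1] so its exact meaning is opaque to the state predictor."""
    normalized = []
    for name, value in raw_metrics.items():
        lo, hi = bounds[name]
        span = hi - lo if hi > lo else 1.0
        normalized.append(min(max((value - lo) / span, 0.0), 1.0))
    return normalized
```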
Alternatively, no transformation is performed, but the software monitoring arrangement 220 and especially the state predictor does not put any particular emphasis on the exact meaning of a performance metric.
One or more of the services 215 are thus opaque services 215 in some embodiments and provide performance metrics comprising at least one opaque data entity. As would be understood an opaque data entity is a data entity whose representation may be known to the service providing the data entity, but is unknown to the software monitoring arrangement 220.
It is thereafter possible to recognize and associate patterns of performance metrics, irrespective of the actual meaning of such patterns, with states of the service(s) 215 of the software system 200.
The system model 228′ is thus, in some embodiments, trained to associate such patterns of performance metrics with states, wherein the states represent or correspond to computational requirements, which may then be fulfilled by the computational resources 205. Based on the predicted state, the software monitoring arrangement 220 is thus configured to determine whether a change (such as an increase, a decrease or another change) is to be expected and in response thereto reserve computational resources 205 to the standby pool accordingly. It should be noted that in this context the terminology of “reserve” or “making reservations” includes cancelling any already existing reserved computational resources, a cancellation thus being treated as a negative reservation. Similarly, a cancellation of a computational resource may be seen as a negative assignment of a resource. And a negatively assigned resource (i.e. a cancelled resource) may thus be seen as being freed up from the used computational resources and no longer part of the standby pool, or returned to the standby pool, depending on the implementation and the case at hand.
It should be mentioned that the terminology of opaque data or opaque services is well-known and would be understood by a skilled person in the field of software services.
As mentioned in the above, the inventors have realized that providing the performance metrics as images to the state predictor 221 has a great advantage in that already existing powerful tools may be utilized, namely neural networks arranged for image recognition, object classification or object detection, to mention a few examples. In particular, there are many very powerful Convolutional Neural Networks available that may be trained to associate the patterns of the performance metrics with objects being expressed as states of the service(s) 215 of the software system 200.
As expressed above, the performance metrics may be received as opaque data entities, and in one such embodiment, the data entity may be one or more images. In some such embodiments, an image may be generated so that a (group of) pixel(s) is associated with a performance aspect and is assigned a value regarding how well that aspect performs. This may provide an image with different color or grey scales depending on the performance of the service(s) 215. In such an image an actual pattern may be recognized and associated with a state. It should be noted that the images discussed herein need not be visible or comprehensible to a human as they are only processed by computer processors.
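Purely as a non-limiting illustration, normalized performance metrics could be packed into such a grayscale image as sketched below, one pixel per performance aspect; the image width is a hypothetical choice.

```python
import numpy as np

def metrics_to_image(normalized_metrics: list[float], width: int = 16) -> np.ndarray:
    """Return a 2D uint8 grayscale image where pixel intensity encodes how well an aspect performs."""
    values = np.asarray(normalized_metrics, dtype=np.float32)
    height = int(np.ceil(values.size / width))
    padded = np.zeros(height * width, dtype=np.float32)
    padded[:values.size] = values
    return (padded.reshape(height, width) * 255).astype(np.uint8)
```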
It should be noted that the image(s) may be generated based on opaque data entities in some embodiments, and based on non-opaque data entities in other embodiments.
In some embodiments the second (subset of) performance metrics thus comprises one or more images representing one or more current states of the one or more services of the software system. And in such embodiments, the software monitoring arrangement 220 is configured to determine the predicted state 221′ of the software system 200 based on image analysis of the one or more images. In some such embodiments, the software monitoring arrangement 220 is further configured to provide said image analysis to recognize a pattern in the one or more images, which pattern is associated with a known state, wherein the predicted state is determined to be the known state, the known state being a previously experienced state that was experienced when the same performance metrics were received.
In some embodiments, the performance metrics present a moment in time. In such embodiments, the state predictor is trained to associate a moment in time with a predicted state. For embodiments based on image analysis, the state predictor 221 is configured to determine the predicted state based on one image for one or more services 215.
In some embodiments, the performance metrics present a series of moments (i.e. a time period) in time. In such embodiments, the state predictor is trained to associate a time period with a predicted state. The pattern may in such embodiments also be a pattern over time. For embodiments based on image analysis, the state predictor 221 is configured to determine the predicted state based on a series (or stream) of images for one or more services 215.
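Purely as a non-limiting illustration of image analysis over a series of metric images, the sketch below stacks the images as input channels of a small convolutional neural network that classifies them into one of a number of known states; PyTorch and all layer sizes are hypothetical choices and not part of the teachings herein.

```python
import torch
import torch.nn as nn

class StatePredictorCNN(nn.Module):
    """Illustrative CNN mapping a series of metric images to logits over known states."""

    def __init__(self, num_images: int = 4, num_states: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(num_images, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_states)        # one logit per known state

    def forward(self, image_series: torch.Tensor) -> torch.Tensor:
        # image_series: (batch, num_images, height, width) of normalized metric images
        x = self.features(image_series).flatten(1)
        return self.classifier(x)
```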
It should be noted that since the software monitoring arrangement 220 is executed by the controller 101 of the software monitoring arrangement 100, and as no difference is made between the two, no difference is made either between the software monitoring arrangement 220 comprising some functionality and the controller 101 of the software monitoring arrangement 100 being configured to execute such functionality.
It should be noted that the method may comprise further functionalities as discussed in relation to
As is indicated in
It should be noted that a software component may be replaced or supplemented by a software module.
As is indicated in
As has been discussed in the above, the live capacity controller may be regarded as a real-time controller, especially when controlling real-time services 215.
The computer-readable medium 120 may be tangible such as a hard drive or a flash memory, for example a USB memory stick or a cloud server. Alternatively, the computer-readable medium 120 may be intangible such as a signal carrying the computer instructions enabling the computer instructions to be downloaded through a network connection, such as an internet connection.
In the example of
The computer disc reader 122 may also or alternatively be connected to (or possibly inserted into) a software monitoring of real-time services arrangement 100 for transferring the computer-readable computer instructions 121 to a controller of the software monitoring of real-time services arrangement (presumably via a memory of the software monitoring of real-time services arrangement 100).