Aspects of the disclosure relate generally to establishing severity designations for associating with a potential occurrence of an incident of an entity. More specifically, aspects of the disclosure provide techniques for using a machine learning model to predict relationships between new development operations tools metric data and existing severity matrix data within a data store of an entity.
An incident severity matrix is a tool used by an entity to determine the severity of an incident. Such a tool is used during risk assessment to define the level of risk of occurrence of an incident by considering the category of probability, or likelihood, against the category of consequence severity. This is a tool used to increase the visibility of risks and assist management decision making. Risk of the occurrence of an incident is the lack of certainty about the outcome of making a particular choice. The level of downside risk can be calculated as the product of the probability that harm occurs (that an incident happens) multiplied by the severity of that harm (the average amount of harm or, more conservatively, the maximum credible amount of harm). In practice, an incident severity matrix is a useful approach where either the probability or the harm severity cannot be estimated with accuracy and precision.
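By way of illustration only, the product relationship described above may be sketched as follows; the numeric scales and example values are illustrative assumptions, not part of the disclosure:

```python
def downside_risk(probability: float, harm_severity: float) -> float:
    """Downside risk as the product of incident probability and harm severity.

    probability: estimated chance the incident occurs (0.0-1.0).
    harm_severity: average (or maximum credible) harm on an assumed 1-5 scale.
    """
    return probability * harm_severity

# Example: a 20% chance of an incident whose harm rates 4 on a 5-point scale.
print(downside_risk(0.2, 4))  # 0.8
```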
Severity on an incident severity matrix represents the severity of the most likely consequence of a particular incident occurrence. Accordingly, if an incident occurs and is not mitigated, the severity indicates the most likely problem that will follow. Some entities may use different criteria to define severity within their incident severity matrices. Different criteria provide a plurality of justifications for each risk assessment's severity. Each level of severity may utilize the same criteria but reflect increasing damage/effect at each rising level of severity. When defining likelihood, criteria may be defined by either a quantitative approach (a number of expected incident occurrences, or a number of incident occurrences per resolution time period) or a qualitative approach (the relative chances of an incident occurring).
Thus, an incident severity matrix is based on the likelihood that the incident will occur and the potential impact that the incident will have on the entity. It is a tool that helps an entity visualize the probability versus the severity of a potential incident. Depending on likelihood and severity, incidents may be categorized as high, moderate, or low. As part of the severity management process, entities may use incident severity matrices to help them prioritize different incidents and develop an appropriate mitigation strategy. Incidents come in many forms, including strategic, operational, financial, and external. An incident severity matrix works by presenting various incidents by severity designations. An incident severity matrix also may include two axes: one that measures the likelihood of an incident, and another that measures its impact.
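By way of illustration only, the two-axis structure described above may be sketched as a lookup table; the particular likelihood and impact labels are illustrative assumptions, not a prescribed standard:

```python
# A minimal incident severity matrix: likelihood on one axis, impact on the other.
# The cell labels (high/moderate/low) are illustrative only.
SEVERITY_MATRIX = {
    ("likely",   "major"): "high",
    ("likely",   "minor"): "moderate",
    ("unlikely", "major"): "moderate",
    ("unlikely", "minor"): "low",
}

def categorize_incident(likelihood: str, impact: str) -> str:
    """Return the severity category for a (likelihood, impact) pair."""
    return SEVERITY_MATRIX[(likelihood, impact)]

print(categorize_incident("likely", "major"))    # high
print(categorize_incident("unlikely", "minor"))  # low
```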
Although standard incident severity matrices may exist in certain contexts, individual projects and/or entities may need to create their own or tailor an existing incident severity matrix. The entity may calculate what levels of risk the entity can take with different events. This may be done by weighing the risk of an incident occurring against the cost to implement safety and the benefit gained from it. As entities develop more applications and entity tools, they must update the incident severity matrix for each one.
Operational efficiency often is sought by entities. Many entities want their business to operate with as few incidents requiring mitigation as possible. For example, cybersecurity is a sector of an entity's business that has grown substantially in recent years. Attacks from hackers and other nefarious individuals place an entity under constant siege on a daily basis. An entity must manage these and other types of incidents constantly. Yet, when new applications for an entity are to be introduced and added to business functions of the entity, conventional systems for incident severity matrix creation and updating are slow and hampered by wasted time and resources.
In step 105, the severity matrix manager manually determines severity designations for incidents that may occur upon implementation of the new application. For example, in the case of the new application being associated with a service for using reward points of the entity to donate to a local charity, the severity matrix manager may arbitrarily set severity designations to fit within a severity matrix tool of the entity based upon default criteria. In such a case, the severity matrix manager may determine the number of people that have to be affected by occurrence of an incident and/or the amount of time that an incident affecting a customer may have to meet different thresholds for the different severity designations. However, manual implementation by human interaction often leads to very long lead times for entry, inconsistent severity designations for potentially similar incidents and/or similar applications, and resistance to change when necessary.
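By way of illustration only, the threshold criteria a severity matrix manager might set in this step may be sketched as follows; the specific thresholds and designation names are illustrative assumptions:

```python
def severity_designation(customers_affected: int, minutes_of_impact: int) -> str:
    """Map incident impact to a severity designation using illustrative thresholds."""
    if customers_affected >= 10_000 or minutes_of_impact >= 240:
        return "critical"
    if customers_affected >= 1_000 or minutes_of_impact >= 60:
        return "high"
    if customers_affected >= 100 or minutes_of_impact >= 15:
        return "moderate"
    return "low"

print(severity_designation(customers_affected=2_500, minutes_of_impact=10))  # high
```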
In step 107, an incident occurs that is associated with the new application. For example, in the case of the new application being associated with a service for using reward points of the entity to donate to a local charity, a server that implements the new application may have a technical issue occur that causes the server to go offline. One or more customers may then not be able to access the service associated with the new application. As part of this step, an individual associated with the entity may review the incident severity matrix to determine the severity designation associated with the current number of customers affected and/or the amount of time of impact to customers.
Proceeding to step 109, because of one or more inaccurate severity designations within the incident severity matrix, the response time to mitigate the incident may be delayed. For example, due to an inaccurate designation, an individual reviewing the incident severity matrix may see that the severity designation for a particular incident is only low urgency and thus falls behind other incidents in priority when it comes to mitigating the occurrence of the incident. Because of this inaccurate entry in the incident severity matrix, any mitigation to handle reoccurrence of such an incident is further delayed.
In step 111, when the priority of the occurrence of the incident meets the severity designation of the incident severity matrix that warrants mitigation, one or more remediation actions may be performed to mitigate the incident. One or more individuals responsible for the entity resources affected by the new application perform the remediation actions. These remediation actions may be assigned to help make sure that the issues that caused the incident to occur do not occur again or are at least less likely to occur again. Thereafter in step 113, the severity matrix manager may manually determine adjustments needed to severity designations for incidents that may occur upon implementation of the new application. However, such manual adjustments only are made some time later when the severity matrix manager has the time and resources to perform the necessary manual act.
Aspects described herein may address these and other problems, and generally enable predicting relationships between new development operations tools metric data and existing severity matrix data within a data store of an entity. Such a prediction reduces the likelihood that an occurrence of an incident affects the entity, an unallowable number of its customers, or its operations for an unallowable amount of time, and reduces the time and resources spent mitigating the occurrence of such an incident, as the system operates proactively as opposed to reactively.
The following presents a simplified summary of various aspects described herein. This summary is not an extensive overview, and is not intended to identify key or critical elements or to delineate the scope of the claims. The following summary merely presents some concepts in a simplified form as an introductory prelude to the more detailed description provided below.
Aspects described herein may allow for the prediction and assignment of a new entry to add to a severity matrix data store of an entity. The new entry comprises data representative of a severity of a consequence of a particular incident occurrence affecting new metric data representative of a new development operations tools metric data of the entity. This may have the effect of significantly improving the ability of entities to ensure appropriate mitigation of occurrence of an incident affecting the entity or its customers, ensure individuals likely to be suited for mitigating incidents based upon a plurality of incidents are spending their time and resources mitigating incidents in an order based upon a priority scheme of the entity for mitigating incidents, and improve incident management experiences for future incidents. According to some aspects, these and other benefits may be achieved by compiling ownership data, metric data, and severity matrix data and analyzing the compiled data, using one or more machine learning models, to predict a new entry to add to the severity matrix data. The ownership data may be representative of assets of an entity and data representative of relationships between the assets of the entity; the metric data may be representative of development operations tools metric data of the assets; and the severity matrix data may comprise a plurality of entries. Each entry of the plurality of entries of severity matrix data may comprise data representative of a severity of a consequence of a particular incident occurrence affecting the metric data. The one or more machine learning models may be trained to recognize one or more relationships between the compiled data and new metric data representative of a new development operations tools metric data of the assets. The new entry may comprise data representative of a severity of a consequence of a particular incident occurrence affecting the new metric data.
Such a prediction then may be used to accurately manage an incident severity matrix of an entity and efficiently and correctly prioritize mitigation of various incidents as they occur.
Aspects discussed herein may provide a computer-implemented method for the prediction and assignment of a new entry to add to a severity matrix data store. For example, in at least one implementation, a computing device may compile ownership data, metric data, and severity matrix data as input data to a machine learning model data store. The ownership data may be data representative of assets of an entity and data representative of relationships between the assets. The metric data may be data representative of development operations tools metric data of the assets of the entity. The severity matrix data may comprise a plurality of entries, where each entry comprises data representative of a severity of a consequence of a particular incident occurrence affecting the metric data.
The same computing device or a different computing device may recognize one or more relationships between the compiled data and new metric data representative of a new development operations tools metric data of the assets, to predict a new entry to add to the severity matrix data. Such a new entry comprises data representative of a severity of a consequence of a particular incident occurrence affecting the new metric data. A computing device may output a notification of the predicted new entry.
Corresponding apparatus, systems, and computer-readable media are also within the scope of the disclosure.
These features, along with many others, are discussed in greater detail below.
The present disclosure is illustrated by way of example and not limited in the accompanying figures in which like reference numerals indicate similar elements and in which:
In the following description of the various embodiments, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration various embodiments in which aspects of the disclosure may be practiced. It is to be understood that other embodiments may be utilized and structural and functional modifications may be made without departing from the scope of the present disclosure. Aspects of the disclosure are capable of other embodiments and of being practiced or being carried out in various ways. Also, it is to be understood that the phraseology and terminology used herein are for the purpose of description and should not be regarded as limiting. Rather, the phrases and terms used herein are to be given their broadest interpretation and meaning. The use of “including” and “comprising” and variations thereof is meant to encompass the items listed thereafter and equivalents thereof as well as additional items and equivalents thereof.
By way of introduction, aspects discussed herein may relate to methods and techniques for prediction and assignment of a new entry to add to a severity matrix data store. The new entry comprises data representative of a severity of a consequence of a particular incident occurrence affecting new metric data representative of a new development operations tools metric data of the entity. Illustrative examples include applications for ordering groceries, checking financial data, uploading photos as part of a social media application, and/or other uses. Upon implementation, the present disclosure describes receiving ownership data. The ownership data may be data representative of assets of an entity and data representative of relationships between the assets. The present disclosure further describes receiving metric data, which may be data representative of development operations tools metric data of the assets, and receiving severity matrix data, comprising a plurality of entries, where each entry comprises data representative of a severity of a consequence of a particular incident occurrence affecting the metric data.
A first computing device may compile the ownership data, the metric data, and the severity matrix data as input data to a machine learning model data store. As part of the compiling of such data, natural language processing may be utilized in order to account for textual and/or other data entries that do not consistently identify the same or similar data in the same way. The natural language processing may be utilized to identify text in data of various types and in various formats.
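By way of illustration only, the normalization role natural language processing may play during compilation can be sketched simply; real systems would use richer language models, and the alias table and asset names below are illustrative assumptions:

```python
import re

# Illustrative alias table: different free-text spellings of the same asset.
CANONICAL_NAMES = {
    "payments api": "payments-api",
    "payments-api": "payments-api",
    "payment service": "payments-api",
}

def normalize_asset_name(raw: str) -> str:
    """Collapse whitespace/case variants so inconsistent entries match."""
    cleaned = re.sub(r"\s+", " ", raw.strip().lower())
    return CANONICAL_NAMES.get(cleaned, cleaned)

print(normalize_asset_name("  Payments   API "))  # payments-api
```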
Training data for a first machine learning model may be received, and the first machine learning model may be trained to recognize one or more relationships between the input data in the machine learning model data store. The same, or a second, computing device may receive new metric data. The new metric data may be representative of a new development operations tools metric data of the assets. The new metric data may be used as refinement data to further train the first machine learning model. The refinement data may update the input data in the machine learning model data store based upon the new metric data. One or more specific characteristics of entries within the severity matrix data and the new metric data may be identified by one of the same or different computing devices. The one or more specific characteristics may include one or more of cloud infrastructure, physical infrastructure, a recovery time objective, or a customer base.
The present disclosure further describes a second machine learning model. Any of the same or a different computing device may predict, via the second machine learning model trained to recognize one or more relationships between the input data in the machine learning model data store and the new metric data, a new entry to add to the severity matrix data. The new entry may comprise data representative of a severity of a consequence of a particular incident occurrence affecting the new metric data.
The present disclosure further describes outputting a notification of the predicted new entry based upon the predicted new entry. After the output of the notification, a user input representative of a confirmation of adding the new entry to the severity matrix data, or a user input representative of a modification to the new entry, may then be received. Thereafter, the new entry may be added to the severity matrix data and the second machine learning model may be modified based on the received user input.
Aspects described herein improve the functioning of computers by improving the ability of computing devices to identify and predict severity designations as part of a new entry to an existing severity matrix. Conventional systems are susceptible to failure or repetition of occurrence of a previous incident—for example, an inaccurate severity designation for the occurrence of an incident associated with a new application of an entity may lead to wasted time and resources to properly address the occurrence of an incident. As such, these conventional techniques leave entities exposed to the possibility of a constant reoccurrence of the incident on the operation of the entity as well as delayed response times to mitigating an incident to begin with. By providing prediction techniques—for example, based on predicting the likely severity designations to assign to occurrence of an incident for a new application—a proper remediation action scheme can be implemented more accurately and in a more time-efficient manner. Over time, the processes described herein can save processing time, network bandwidth, and other computing resources. Moreover, such improvement cannot be performed by a human being with the level of accuracy obtainable by computer-implemented techniques to ensure accurate prediction of the severity designations.
Before discussing these concepts in greater detail, however, several examples of a computing device and environment that may be used in implementing and/or otherwise providing various aspects of the disclosure will first be discussed with respect to
Computing device 201 may, in some embodiments, operate in a standalone environment. In others, computing device 201 may operate in a networked environment, including network 203 and network 381 in
As seen in
I/O interfaces 219 may include a variety of interface units and drives for reading, writing, displaying, and/or printing data or files. I/O interfaces 219 may be coupled with a display such as display 220. I/O interfaces 219 can include a microphone, keypad, touch screen, and/or stylus through which a user of the computing device 201 can provide input, and can also include one or more of a speaker for providing audio output and a video display device for providing textual, audiovisual, and/or graphical output.
Network interface 217 can include one or more transceivers, digital signal processors, and/or additional circuitry and software for communicating via any network, wired or wireless, using any protocol as described herein. It will be appreciated that the network connections shown are illustrative and any means of establishing a communications link between the computers or other devices can be used. The existence of any of various network protocols such as TCP/IP, Ethernet, FTP, Hypertext Transfer Protocol (HTTP) and the like, and various wireless communication technologies such as Global System for Mobile Communications (GSM), Code Division Multiple Access (CDMA), Wi-Fi, and Long-Term Evolution (LTE), is presumed, and the various computing devices described herein can be configured to communicate using any of these network protocols or technologies.
Memory 221 may store software for configuring computing device 201 into a special purpose computing device in order to perform one or more of the various functions discussed herein. Memory 221 may store operating system software 223 for controlling overall operation of computing device 201, control logic 225 for instructing computing device 201 to perform aspects discussed herein, software 227, data 229, and other applications 231. Control logic 225 may be incorporated in and may be a part of software 227. In other embodiments, computing device 201 may include two or more of any and/or all of these components (e.g., two or more processors, two or more memories, etc.) and/or other components and/or subsystems not illustrated here.
Devices 205, 207, 209 may have similar or different architecture as described with respect to computing device 201. Those of skill in the art will appreciate that the functionality of computing device 201 (or device 205, 207, 209) as described herein may be spread across multiple data processing devices, for example, to distribute processing load across multiple computers, to segregate transactions based on geographic location, user access level, quality of service (QoS), etc. For example, devices 201, 205, 207, 209, and others may operate in concert to provide parallel computing features in support of the operation of control logic 225 and/or software 227.
Although not shown in
One or more aspects discussed herein may be embodied in computer-usable or readable data and/or computer-executable instructions, such as in one or more program modules, executed by one or more computers or other devices as described herein. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types when executed by a processor in a computer or other device. The modules may be written in a source code programming language that is subsequently compiled for execution, or may be written in a scripting language such as (but not limited to) Python, Perl, or an equivalent thereof. The computer executable instructions may be stored on a computer readable medium such as a hard disk, optical disk, removable storage media, solid state memory, RAM, etc. As will be appreciated by one of skill in the art, the functionality of the program modules may be combined or distributed as desired in various embodiments. In addition, the functionality may be embodied in whole or in part in firmware or hardware equivalents such as integrated circuits, field programmable gate arrays (FPGA), and the like. Particular data structures may be used to more effectively implement one or more aspects discussed herein, and such data structures are contemplated within the scope of computer executable instructions and computer-usable data described herein. Various aspects discussed herein may be embodied as a method, a computing device, a data processing system, or a computer program product.
Although various components of computing device 201 are described separately, functionality of the various components can be combined and/or performed by a single component and/or multiple computing devices in communication without departing from the invention. Having discussed several examples of computing devices that may be used to implement some aspects as discussed further below, discussion will now turn to various examples for predicting a new entry for a severity matrix.
As shown in
Illustrative examples of development operations tool metric data include:
New development operations data 301 further may be used by refinement model 321 trained to recognize one or more relationships between the input data in a machine learning model data store 311. As described below, the refinement model 321 updates the input data in the machine learning model data store 311 based upon the new development operations metric data 301.
The system 300 may include one or more memories or databases that maintain entity data 303. A computing device utilizing natural language processing 313 may be configured to access the one or more memories or databases that maintain entity data 303. The entity data 303 may include data representative of assets of an entity. Assets of an entity may include computing devices, databases, servers, facilities, software, firmware, and/or other equipment of the entity. The entity data 303 also may include data representative of associations between the assets of the entity. In some embodiments, entity data 303 may include data representative of support team ownership data and/or line of business ownership data, e.g., data for one or more members of a support team and/or line of business of the entity that is responsible for operation, implementation, and/or development of one or more pieces of equipment of the entity, including software and/or firmware operating on a physical piece of equipment and/or software and/or firmware implementing specific code of the entity, such as an application.
The system 300 may include one or more memories or databases that maintain development operations data 305. A computing device utilizing natural language processing 313 may be configured to access the one or more memories or databases that maintain development operations data 305. The development operations data 305 may include data representative of development operations tools metric data, as described above, that are already in implementation by the entity.
The system 300 may include one or more memories or databases that maintain severity matrix data 307. A computing device utilizing natural language processing 313 may be configured to access the one or more memories or databases that maintain severity matrix data 307. The severity matrix data 307 may include a plurality of entries, where each entry includes data representative of a severity of a consequence of a particular incident occurrence affecting the metric data. Severity in an entry within the severity matrix 307 may represent the severity of the most likely consequence of a particular incident occurrence. Thus, severity matrix data may be based on the likelihood that incidents with respect to an application will occur, and the potential impact that the incidents will have on the entity. It is a tool that helps an entity visualize the probability versus the severity of a potential incident.
System 300 may include one or more computing devices as a compiler 309 for compiling the entity data 303, the development operations tools metric data 305, and/or the severity matrix data 307. Compiler 309 may bring together the entity data 303, the development operations tools metric data 305, and/or the severity matrix data 307 for use as input data to a machine learning model data store 311. Compiler 309 may utilize natural language processing 313 in order to modify data for storage in the machine learning model data store 311. Compiler 309 may be configured to load various data from the entity data 303, development operations tools metric data 305, and/or severity matrix data 307, in order to create one or more derived fields for use in the machine learning model data store 311. Derived fields may include data entries that do not exist in the machine learning model data store 311 itself. Rather, they are calculated from one or more existing numeric fields via basic arithmetic expressions and non-aggregate numeric functions.
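By way of illustration only, the derived-field calculation described above (basic arithmetic over one or more existing numeric fields) may be sketched as follows; the field names and values are illustrative assumptions:

```python
# Source records with existing numeric fields (field names are illustrative).
records = [
    {"deploys_per_week": 14, "failed_deploys_per_week": 2},
    {"deploys_per_week": 5,  "failed_deploys_per_week": 0},
]

def with_derived_fields(record: dict) -> dict:
    """Add a derived field computed arithmetically from existing numeric fields."""
    derived = dict(record)
    derived["change_failure_rate"] = (
        record["failed_deploys_per_week"] / record["deploys_per_week"]
    )
    return derived

for rec in records:
    print(with_derived_fields(rec)["change_failure_rate"])
```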
System 300 may include one or more computing devices utilizing natural language processing 313. The one or more computing devices utilizing natural language processing 313 may receive data and/or access data from one or more of memories or databases 301, 303, 305, 307, and 311. Natural language processing 313 may be utilized in order to account for textual and/or other data entries that do not consistently identify the same or similar data in the same way. The natural language processing 313 may be utilized to identify text in data of various types and in various formats.
The system 300 may include one or more memories or databases storing a machine learning model data store 311 that maintains data as input to a refinement model 321 and/or a prediction model 331. Machine learning model data store 311 may be configured to maintain data elements used in refinement model 321 and prediction model 331 that may not be stored elsewhere, or for which runtime calculation is either too cumbersome or otherwise not feasible. Examples include point-in-time historical values of development operations attribute values, development operations attribute values as of time of production change, and historical production asset ownership information. Any derived fields related to rates of change of these attributes, historical trend information that might be predictive, as well as model specifications may be maintained here as well.
System 300 may include one or more computing devices implementing a refinement model 321. Refinement model 321 may be a machine learning model. The machine learning model may comprise a neural network, such as a convolutional neural network (CNN), a recurrent neural network, a recursive neural network, a long short-term memory (LSTM), a gated recurrent unit (GRU), an unsupervised pre-trained network, a space invariant artificial neural network, a generative adversarial network (GAN), or a consistent adversarial network (CAN), such as a cyclic generative adversarial network (C-GAN), a deep convolutional GAN (DC-GAN), GAN interpolation (GAN-INT), GAN-CLS, a cyclic-CAN (e.g., C-CAN), or any equivalent thereof. Additionally or alternatively, the machine learning model may comprise one or more decision trees. Refinement model 321 may be trained to recognize one or more relationships between the input data in the machine learning model data store 311. The machine learning model may be trained using supervised learning, unsupervised learning, back propagation, transfer learning, stochastic gradient descent, learning rate decay, dropout, max pooling, batch normalization, long short-term memory, skip-gram, or any equivalent deep learning technique. Once trained, the refinement model may update the input data in the machine learning model data store 311. Specifically, refinement model 321 may be configured to discern an objective relationship between the data captured for production assets in the machine learning model data store 311. The output of refinement model 321 may include refined model data that is then maintained in the machine learning model data store 311. The refined model data thereafter may be used as input to prediction model 331.
System 300 may include one or more computing devices implementing a prediction model 331. Prediction model 331 may be a machine learning model. The machine learning model may be any of the machine learning models described above with respect to the refinement model 321. Prediction model 331 may be trained, using the techniques described above, to recognize one or more relationships between the input data in the machine learning model data store 311 and new development operations metric data 301. In addition, prediction model 331 utilizes the body of attributes maintained in the machine learning model data store 311. Prediction model 331 may identify one or more specific characteristics of entries within the severity matrix data 307 and the new development operations data 301. The one or more characteristics may include any one or more of cloud infrastructure, physical infrastructure, a recovery time objective, or a customer base.
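By way of illustration only, one simple way to realize relationship recognition over the characteristics named above is a nearest-match comparison against existing entries; this sketch is an illustrative assumption and not the disclosed prediction model, whose architecture is described above:

```python
# Each existing severity matrix entry carries the characteristics named above:
# cloud infrastructure, physical infrastructure, recovery time objective, customer base.
EXISTING_ENTRIES = [
    {"cloud": True,  "physical": False, "rto_hours": 1,  "customers": 50_000, "severity": "high"},
    {"cloud": False, "physical": True,  "rto_hours": 24, "customers": 200,    "severity": "low"},
]

def predict_severity(new_metric: dict) -> str:
    """Predict a severity by matching characteristics against existing entries."""
    def distance(entry: dict) -> float:
        # Weighted sum of characteristic differences (weights are illustrative).
        return (
            abs(entry["cloud"] - new_metric["cloud"])
            + abs(entry["physical"] - new_metric["physical"])
            + abs(entry["rto_hours"] - new_metric["rto_hours"]) / 24
            + abs(entry["customers"] - new_metric["customers"]) / 100_000
        )
    return min(EXISTING_ENTRIES, key=distance)["severity"]

print(predict_severity({"cloud": True, "physical": False, "rto_hours": 2, "customers": 30_000}))  # high
```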
Prediction model 331 may predict a new entry to add to the severity matrix data 307 based upon the input data from the machine learning model data store 311. Once implemented, prediction model 331 may output to machine learning model data store 311. In addition, prediction model 331 may output to a notification system 351 to output a notification of the predicted new entry. Illustrative notifications include an alert of some type, an email, an instant message, a phone call, and/or some other type of notification.
Prediction model 331 may be trained to output a score representative of a confidence of the basis for the severity, within the new entry, of a consequence of a particular incident occurrence affecting the new development operations data 301. Such a score may be generated based on the predicted relationship. A score may be a numerical value associated with a designated scale, with a higher value corresponding to a higher confidence determination. In some embodiments, each score may be compared to a threshold value. The threshold value may be a score requirement for providing a score to a user. When a score satisfies the threshold value, the predicted new entry may be outputted via a notification to a user. In some embodiments, a notification may not be outputted unless the score satisfies a threshold. In some embodiments, additional scores representative of other confidence determinations may be outputted. Such additional scores may be generated based on the predicted relationships. In embodiments in which there are multiple scores, the prediction model 331 may output a notification based on one or more of the plurality of scores. In embodiments of multiple scores, different incidents, development operations data 301, and/or applications may have different thresholds to satisfy. In some embodiments, the prediction model 331 may compare a plurality of scores with each other and output a notification based on the comparison, such as one score being higher in value than a second score.
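The scoring and threshold gating described above can be sketched as follows; the distance-to-confidence mapping and the threshold value are assumptions for illustration only:

```python
def confidence_score(distance, scale=1.0):
    """Map the distance of a predicted relationship to a 0..1 score; a
    smaller distance yields a higher confidence. The particular mapping
    (an inverse-distance curve) is an assumption, not the claimed method."""
    return 1.0 / (1.0 + distance / scale)

def should_notify(score, threshold=0.7):
    """Gate the notification: output only when the score satisfies the
    threshold value."""
    return score >= threshold

close_match = confidence_score(0.2)  # small distance -> high confidence
weak_match = confidence_score(1.5)   # large distance -> low confidence
```

Per-incident or per-application thresholds, as described above, would simply pass different `threshold` values into `should_notify`.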
As described herein, system 300 includes a notification system 351 configured to output a notification of the predicted new entry. The notification system 351 may be configured to receive a plurality of new entry options based upon scores and may determine which of the plurality to output as part of the notification. Alternatively, notification system 351 may be configured to output all possible new entry options based upon a score meeting a threshold. Further, notification system 351 may be configured to output a notification of all possible new entries that were determined with the corresponding score for each included in the notification.
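A minimal sketch of the selection logic in notification system 351 (the candidate format and threshold are hypothetical) might be:

```python
def select_notifications(candidates, threshold=0.7, best_only=False):
    """candidates: list of (entry_name, score) pairs. Return the entries to
    include in the notification: either every entry whose score meets the
    threshold (highest score first), or only the single best one."""
    passing = [c for c in candidates if c[1] >= threshold]
    passing.sort(key=lambda c: c[1], reverse=True)
    return passing[:1] if best_only else passing

# Hypothetical predicted new entry options with their scores.
candidates = [("cloud outage entry", 0.9),
              ("RTO breach entry", 0.75),
              ("minor lag entry", 0.5)]
```

Returning the scores alongside each entry supports the variant described above in which the notification includes the corresponding score for each new entry.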
System 300 also includes confirmation and modification system 361. Confirmation and modification system 361 may be configured to receive user input representative of a confirmation of adding the new entry to the severity matrix data 307. System 300 may be configured to be completely autonomous, where predicted new entries are automatically added to the severity matrix data 307. Alternatively, system 300 may be configured to require a confirmation by a user prior to adding the new entry to the severity matrix data 307. The user may confirm all, some, or no portion of the new entry that the system has predicted. In some occurrences, the user may want to modify the predicted new entry prior to updating the severity matrix data 307. Confirmation and modification system 361 may be configured to receive a user input representative of a modification to the predicted new entry to the severity matrix data 307. This user confirmation and/or user override may be provided as feedback data to the machine learning model data store 311, refinement model 321, and/or prediction model 331. Such an update may include creating, in the database maintaining the severity matrix data 307, a new database entry comprising data representative of a severity of a consequence of a particular incident occurrence affecting the development operations data 301 that is based upon a change made by the user.
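The confirm/modify/reject handling, including the feedback record returned to the models, can be sketched as below (the entry and feedback shapes are assumptions for illustration):

```python
def apply_user_decision(predicted_entry, decision, changes=None):
    """Resolve a predicted new entry against user input: 'confirm' keeps it
    as-is, 'modify' overlays the user's field changes, and anything else
    drops the entry. Also returns a feedback record suitable for the data
    store and models."""
    if decision == "confirm":
        entry = dict(predicted_entry)
    elif decision == "modify":
        entry = {**predicted_entry, **(changes or {})}
    else:
        entry = None
    feedback = {"predicted": predicted_entry, "decision": decision,
                "changes": changes}
    return entry, feedback

# Hypothetical example: the user narrows the affected-customer range.
predicted = {"incident": "cache outage", "severity": 4, "customers": "1-100"}
entry, feedback = apply_user_decision(predicted, "modify", {"customers": "1-500"})
```

In the fully autonomous configuration described above, the caller would simply pass `decision="confirm"` for every prediction.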
At step 402, one or more computing devices may receive ownership data. Ownership data may be maintained in a memory of a computing device and/or as part of a database or other memory location accessible by a computing device, such as entity data 303 in
At step 404, one or more computing devices may receive development operations data. Development operations data may be maintained in a memory of a computing device and/or as part of a database or other memory location accessible by a computing device, such as development operations data 305 in
At step 406, one or more computing devices may receive severity matrix data. Severity matrix data may be maintained in a memory of a computing device and/or as part of a database or other memory location accessible by a computing device, such as severity matrix data 307 in
At step 408, one or more computing devices may compile the ownership data, the development operations tools metric data, and the severity matrix data for use as input data to one or more machine learning model data stores. Compiling of data may be implemented by compiler 309 as described for
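The compiling of step 408 can be sketched as a join of the three sources on a shared asset key; the key and record shapes are assumptions, not the claimed data layout:

```python
def compile_input_data(ownership, devops_metrics, severity_matrix):
    """Join the three sources on a shared asset key into one record per
    asset, the shape fed as input data to the machine learning model data
    store (sketch only)."""
    keys = set(ownership) | set(devops_metrics) | set(severity_matrix)
    return {k: {"ownership": ownership.get(k),
                "metrics": devops_metrics.get(k),
                "severity": severity_matrix.get(k)} for k in keys}

# Hypothetical inputs keyed by asset name.
compiled = compile_input_data(
    {"payments-api": "team-a"},
    {"payments-api": {"deploys_per_week": 12}},
    {"payments-api": 4})
```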
At step 410, one or more computing devices may receive new development operations tools metric data. New development operations tools metric data may be maintained in a memory of a computing device and/or as part of a database or other memory location accessible by a computing device, such as new development operations data 301 in
Moving to step 412, input data may be inputted to a refinement model to recognize one or more relationships among the input data in a machine learning model data store. As described herein, the refinement model may update the input data in the machine learning model data store. Such a refinement model may be refinement model 321 described in
Moving to step 416, input data from the machine learning model data store, which may include refined model data, may be inputted to a machine learning model trained to recognize one or more relationships between the input data in the machine learning model data store and new development operations data. The machine learning model may operate on one or more computing devices, such as the one or more computing devices in
Proceeding to step 420 in
In step 422, the machine learning model may output a notification of the predicted new entry to a user. In some embodiments, a notification may not be outputted unless the score satisfies a threshold. In some embodiments, additional scores representative of other confidence determinations may be outputted. Such additional scores may be generated based on the predicted relationships. In embodiments in which there are multiple scores, the machine learning model may output a notification based on one or more of the plurality of scores. In embodiments of multiple scores, different incidents, development operations data, and/or applications may each have different thresholds to satisfy. In some embodiments, the machine learning model may compare a plurality of scores with each other and output a notification based on the comparison, such as one score being higher in value than a second score. Illustrative notifications include an alert of some type, an email, an instant message, a phone call, and/or some other type of notification. Accordingly, an individual may receive an email message indicating a predicted new entry to add to the entity's severity matrix based upon the new development operations data. The notification may include a request to confirm the predicted entry or to modify the predicted entry.
Proceeding to step 424, a determination may be made as to whether the predicted new entry is confirmed or modified by a user. For step 424, the system may receive a user input representative of a confirmation of adding the predicted new entry to the severity matrix data and follow to step 426. Alternatively, the system may receive a user input representative of a modification to the predicted new entry to the severity matrix data and follow to step 428. If the system receives, in step 426, a user input representative of a confirmation of adding the predicted new entry to the severity matrix data, the system may determine that the predicted new entry from the machine learning model predicted in step 418, scored in step 420, and notified to a user in step 422 is to be added to the severity matrix data of the entity. In the alternative, if the system receives, in step 428, a user input representative of a modification to the predicted new entry to the severity matrix data, the system may determine that the predicted new entry from the machine learning model predicted in step 418, scored in step 420, and notified to a user in step 422 is to be added to the severity matrix data of the entity per one or more modifications by the user. For example, the user may change a portion of the predicted new entry to have a different number range for a number of customers affected by a specific incident for a specific severity designation. Accordingly, any modifications to the predicted new entry by the user may be received as part of step 428 as well. An individual may accept or reject any particular portion of the predicted new entry before proceeding to step 430. In alternative embodiments, no user confirmation may be needed. This may be a situation in which the system operates autonomously and merely creates new database entries automatically without user confirmation before proceeding to step 432. 
Steps 424-428 may be implemented by receiving such confirmation data at a confirmation and modification system 361 as described in
In step 430, a new database entry in the severity matrix data and/or the machine learning model data store may be created. The new database entry may include the predicted new entry automatically or the user confirmed predicted new entry, whether modified by a user or not. Accordingly, the severity matrix data and/or machine learning model data store now has been updated to account for the new development operations data. Again, this process may occur separately or concurrently for many incidents and/or new development operations data. Finally, in step 432, the machine learning model, such as prediction model 331 described in
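As a hypothetical sketch of step 430 (the table name, columns, and SQLite backing are assumptions for illustration, not the claimed schema), creating the new database entry might look like:

```python
import sqlite3

def create_severity_entry(conn, incident, severity, customers_affected):
    """Create a new database entry in the severity matrix data."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS severity_matrix "
        "(incident TEXT PRIMARY KEY, severity INTEGER, customers_affected TEXT)")
    conn.execute("INSERT INTO severity_matrix VALUES (?, ?, ?)",
                 (incident, severity, customers_affected))
    conn.commit()

# Hypothetical in-memory database for illustration only.
conn = sqlite3.connect(":memory:")
create_severity_entry(conn, "payment API outage", 4, "1-500")
```

The same call would be made with either the automatically predicted entry or the user-confirmed (and possibly user-modified) entry.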
At step 502, one or more computing devices may receive ownership data. Ownership data may be maintained in a memory of a computing device and/or as part of a database or other memory location accessible by a computing device, such as entity data 303 in
At step 504, one or more computing devices may receive development operations data. Development operations data may be maintained in a memory of a computing device and/or as part of a database or other memory location accessible by a computing device, such as development operations data 305 in
At step 506, one or more computing devices may receive severity matrix data. Severity matrix data may be maintained in a memory of a computing device and/or as part of a database or other memory location accessible by a computing device, such as severity matrix data 307 in
At step 508, one or more computing devices may compile the ownership data, the development operations tools metric data, and/or the severity matrix data for use as input data to one or more machine learning model data stores. Compiling of data may be implemented by compiler 309 as described for
At step 510, one or more computing devices may identify one entry of the development operations tools metric data. The entry of the development operations tools metric data may be maintained in a memory of a computing device and/or as part of a database or other memory location accessible by a computing device, such as development operations data 305 in
Moving to step 512, input data may be inputted to a refinement model to recognize one or more relationships among the input data in a machine learning model data store. As described herein, the refinement model may update the input data in the machine learning model data store. Such a refinement model may be refinement model 321 described in
Moving to step 514, input data from the machine learning model data store, which may include refined model data, may be inputted to a machine learning model trained to recognize one or more relationships between the input data in the machine learning model data store and the identified entry in the development operations data. The machine learning model may operate on one or more computing devices, such as the one or more computing devices in
Proceeding to step 518 in
In step 520, the machine learning model may output a notification of the predicted modification to a user. In some embodiments, a notification may not be outputted unless the score satisfies a threshold. In some embodiments, additional scores representative of other confidence determinations may be outputted. Such additional scores may be generated based on the relationships. In embodiments in which there are multiple scores, the machine learning model may output a notification based on one or more of the plurality of scores. In embodiments of multiple scores, different incidents, development operations data, and/or applications may have different thresholds to satisfy. In some embodiments, the machine learning model may compare a plurality of scores with each other and output a notification based on the comparison, such as one score being higher in value than a second score. The notification may include a request to confirm the predicted modification or to change the predicted modification.
Proceeding to step 522, a determination may be made as to whether the predicted modification is confirmed or changed by a user. For step 522, the system may receive a user input representative of a confirmation of the predicted modification to an identified entry in the severity matrix data and follow to step 524. Alternatively, the system may receive a user input representative of a change to the predicted modification of an identified entry in the severity matrix data and follow to step 526. If the system receives, in step 524, a user input representative of a confirmation of the predicted modification to an identified entry in the severity matrix data, the system may determine that the identified entry in the severity matrix data of the entity is to be modified. In the alternative, if the system receives, in step 526, a user input representative of a change to the predicted modification to an identified entry in the severity matrix data, the system may determine that the predicted modification is to be modified prior to modifying the identified entry in the severity matrix data of the entity per one or more modifications by the user. An individual may accept or reject any particular portion of the predicted modification before proceeding to step 528. In alternative embodiments, no user confirmation may be needed. This may be a situation in which the system operates autonomously and merely updates database entries automatically without user confirmation before proceeding to step 530. Steps 522-526 may be implemented by receiving such confirmation data at a confirmation and modification system 361 as described in
In step 528, the identified database entry in the severity matrix data and/or the machine learning model data store may be updated. The database entry may include the predicted modification automatically or the user confirmed modification, whether changed by a user or not. Accordingly, the severity matrix data and/or machine learning model data store now has been updated to account for the existing development operations data based upon changes over time to the overall system. Again, this process may occur separately or concurrently for many incidents and/or development operations data. Finally, in step 530, the machine learning model, such as prediction model 331 described in
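The in-place update of step 528 can be sketched as below; as before, the SQLite backing and schema are assumptions for illustration only:

```python
import sqlite3

def update_severity_entry(conn, incident, new_severity):
    """Update the identified severity-matrix entry in place. Returns True
    when exactly one row was modified."""
    cur = conn.execute(
        "UPDATE severity_matrix SET severity = ? WHERE incident = ?",
        (new_severity, incident))
    conn.commit()
    return cur.rowcount == 1

# Hypothetical in-memory setup for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE severity_matrix (incident TEXT PRIMARY KEY, severity INTEGER)")
conn.execute("INSERT INTO severity_matrix VALUES ('database outage', 3)")
```

Checking the row count gives the caller a simple signal that the identified entry actually existed before reporting the update back to the models.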
One or more steps of the example may be rearranged, omitted, and/or otherwise modified, and/or other steps may be added.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.