During a software development lifecycle, software applications are built and packaged for delivery. In some cases, build packages may encounter runtime errors causing an installation to fail. Often, these runtime errors are never identified, saved, or scanned at any stage in development, so even an otherwise well-managed project may encounter runtime errors many years after product launch.
The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosure. The summary is not an extensive overview of the disclosure. It is neither intended to identify key or critical elements of the disclosure nor to delineate the scope of the disclosure. The following summary merely presents some concepts of the disclosure in a simplified form as a prelude to the description below.
Aspects of the disclosure provide effective, efficient, scalable, and convenient technical solutions that address and overcome the technical problems associated with predicting and resolving runtime errors encountered during installation of software application build packages.
Aspects of the disclosure relate to intelligent prediction of runtime errors that may be encountered during an installation of a software application. One or more aspects of the disclosure relate to a system-agnostic artificial intelligence (AI) computing system to predict production-based runtime errors at a pre-build stage, determine a solution to the runtime error, and automatically repackage the software application build.
During a software development lifecycle of a software application, the application code is modified and multiple versions are built and packaged to be installed on different computing systems, such as on a software development computing system, a software testing computing system, and/or production or end-user computing systems. Build packages may encounter runtime errors causing an installation to fail, where the causes of the failures may not be identified or resolved. As such, a need has been recognized for an artificial intelligence computing system capable of predicting runtime errors of software application build packages, resolving the predicted runtime errors, and automatically repackaging the build to significantly reduce the probability that a runtime error will occur during installation.
These features, along with many others, are discussed in greater detail below.
The present disclosure is illustrated by way of example and not limited in the accompanying figures in which like reference numerals indicate similar elements and in which:
In the following description of various illustrative embodiments, reference is made to the accompanying drawings, which form a part hereof, and in which is shown, by way of illustration, various embodiments in which aspects of the disclosure may be practiced. It is to be understood that other embodiments may be utilized, and structural and functional modifications may be made, without departing from the scope of the present disclosure.
It is noted that various connections between elements are discussed in the following description. These connections are general and, unless specified otherwise, may be direct or indirect, wired or wireless, and the specification is not intended to be limiting in this respect.
The above-described examples and arrangements are merely some example arrangements in which the systems described herein may be used. Various other arrangements employing aspects described herein may be used without departing from the invention.
The runtime error prediction system 150, as discussed in greater detail with respect to
The model training engine 210 may collect historical production version information and/or test framework code from one or more different software development tool computing systems and/or software test computing systems, such as from the versioning server 140. Additionally, the model training engine 210 may collect historical production version runtime error information and/or fixed solution information, such as from the deployed computing environments (e.g., installation log information, system log information, and the like). The model training engine 210 may train the model based on the gathered historical source code information and/or the runtime error information and fixed solution information, such as by utilizing a deep learning recurrent neural network algorithm.
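The historical collection step described above can be illustrated as a simple data-assembly routine that pairs each historical build with its recorded runtime outcome and any fixed solution. This is a minimal sketch; the record fields and log shapes are hypothetical stand-ins for the versioning and log data the model training engine 210 would gather:

```python
def assemble_training_records(version_history, error_logs, fix_log):
    """Pair each historical build with its runtime outcome and any recorded fix."""
    records = []
    for version in version_history:
        # Installation/system log entries recorded against this version, if any.
        errors = [e for e in error_logs if e["version"] == version["id"]]
        fix = fix_log.get(version["id"])  # None when the build installed cleanly
        records.append({
            "features": {
                "components": version["components"],
                "config": version["build_config"],
            },
            "label": "runtime_error" if errors else "clean_install",
            "errors": errors,
            "fix": fix,
        })
    return records
```

Records of this shape could then serve as labeled examples for training a model such as the recurrent neural network mentioned above.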
The runtime error optimization engine 220 may utilize the AI learning model (e.g., a supervised model, an unsupervised model, and the like), such as a classic neural network model, a convolutional neural network model, a recurrent neural network model, a self-organizing map model, a Boltzmann machine model, an autoencoder model, and the like. In some cases, the runtime error optimization engine 220 may monitor for new inputs, such as a new production version release indication, a new test version release indication, and/or a revised version release indication, which may be received from the versioning server 140 during a code push. The runtime error optimization engine 220, upon receiving an indication of a new version push, may automatically analyze the new version, using the trained model, to identify and/or predict possible runtime errors. In some cases, the runtime error optimization engine 220 may associate prediction information, such as an indication of successful runtime operation, a prediction of runtime errors, or an indication of an indeterminate runtime result (e.g., when a prediction that the build will encounter runtime errors or will install correctly is not determinative), with a version, such as in a data entry in one or more of the code repositories 130 or other data storage associated with the versioning server 140 or the runtime error prediction system 150. Version information may include software component information, build configuration information, prediction information, and the like. In some cases, prediction information may be stored in metadata that may be associated with versioning information and stored in a data repository associated with the runtime error prediction system 150.
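The three-way prediction outcome described above (successful runtime, predicted runtime error, or indeterminate) can be sketched as a thresholded mapping from the model's error probability to metadata stored with the version. The threshold values and field names here are illustrative assumptions, not part of the disclosure:

```python
def prediction_metadata(version_id, p_error, lo=0.2, hi=0.8):
    """Map a model's error probability to one of three prediction outcomes.

    Probabilities between the two thresholds are treated as indeterminate,
    i.e. the model cannot say whether the build will install correctly.
    """
    if p_error >= hi:
        outcome = "runtime_error_predicted"
    elif p_error <= lo:
        outcome = "successful_runtime_predicted"
    else:
        outcome = "indeterminate"
    # Stored as metadata alongside the versioning information.
    return {"version": version_id, "outcome": outcome, "p_error": p_error}
```

Metadata of this shape could be written to a data repository associated with the runtime error prediction system 150 alongside the version entry.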
In some cases, the runtime error prediction system 150 may predict a runtime error during a pre-build stage. For example, the runtime error prediction system 150 may receive an input identifying a new production build, a new development build, or a new test build from the versioning server 140. The runtime error optimization engine 220 may identify a runtime error or may predict a runtime error based on processing of the input using the trained AI deep learning model trained using information of historical successful builds and historical runtime errors encountered in one or more production environments. If the runtime error optimization engine 220 predicts no errors (e.g., no runtime errors are predicted to be encountered on the production servers with the build), the runtime error optimization engine 220 may clear a flag (e.g., a runtime error flag=0). Based on this flag, the runtime error optimization engine 220 indicates to the build server 160 that a particular build is good to deploy to production, and the build server 160 may be triggered by the flag to build the version release package via a release pipeline computing system 170 to be deployed to one or more applicable deployed computing environments, such as a production computing environment, a testing computing environment, and/or a development computing environment. Status information from the deployment that may indicate a successful deployment and/or whether one or more errors were encountered during deployment may be saved and/or communicated (e.g., pushed, pulled) to the model training engine 210 and/or the runtime error orchestration engine 230.
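The flag-based gating described above can be sketched as follows. This is a minimal illustration of the control decision only; the flag value semantics match the example above (flag=0 clears the build for deployment), while the action names are hypothetical:

```python
def gate_build(p_error, threshold=0.5):
    """Clear the runtime error flag when no error is predicted, which
    signals the build server that the version is good to deploy."""
    runtime_error_flag = 0 if p_error < threshold else 1
    if runtime_error_flag == 0:
        # Flag cleared: build server 160 may package the release via
        # the release pipeline computing system 170.
        return {"runtime_error_flag": 0, "action": "trigger_release_pipeline"}
    # Flag set: the version is held back for error resolution.
    return {"runtime_error_flag": 1, "action": "hold_for_resolution"}
```

Deployment status information returned from the pipeline would then feed back to the model training engine 210, as noted above.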
In some cases, the runtime error optimization engine 220 may analyze new inputs, such as new production version information and/or test framework code components that may be received from the versioning server 140. The runtime error optimization engine 220 may process the new inputs using the trained model to detect and/or predict production-based runtime errors and may set a flag identifying a description of a predicted runtime error. For example, the runtime error optimization engine 220 may process new production version information using the trained model to predict a likelihood that, if built and deployed, the version will crash with runtime errors when installed on production servers. In such cases (e.g., when a probability that a version will crash meets or exceeds a threshold condition), the runtime error optimization engine 220 may route the version information to a runtime error orchestration engine 230 to be analyzed and processed to automatically resolve the identified errors and may route the version for repackaging.
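The threshold-based routing above can be illustrated with a short sketch. The prediction record fields and the 0.8 threshold are illustrative assumptions:

```python
def route_flagged_versions(version_info, predictions, threshold=0.8):
    """Attach a descriptive error flag to each prediction that meets the
    threshold condition and route the version to the orchestration engine."""
    routed = []
    for pred in predictions:
        if pred["p_crash"] >= threshold:  # threshold met or exceeded
            routed.append({
                "version": version_info["id"],
                "error_description": pred["description"],
                "route_to": "runtime_error_orchestration_engine",
            })
    return routed
```

Each routed entry carries the predicted error description that the orchestration engine would later analyze.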
The runtime error orchestration engine 230 may include one or more artificial intelligence models (e.g., an autoregressive language model) such as a generative pre-trained transformer (GPT-3) model. Such models may allow the runtime error orchestration engine 230 to analyze queries and/or source code and produce human-like text or database queries to automatically reconfigure the build code for the versioning server. For example, the runtime error orchestration engine 230 may generate SQL code to configure a build process for a version of the software application. The runtime error orchestration engine 230 may apply the artificial intelligence model to the code to resolve identified runtime errors based on the trained model's learning from historical production runtime errors and/or solutions to the historical runtime errors. When the runtime error orchestration engine 230 receives information regarding a predicted runtime error input (e.g., text information) from the runtime error optimization engine 220, the artificial intelligence model (e.g., the GPT-3 model) may process the predicted runtime error input text, generate new code with an identified solution, and push the resolved version build code to the data repositories 130 for repackaging and deployment. In some cases, the runtime error orchestration engine 230's analysis by the AI model may result in an unresolved runtime error condition. In such cases, the runtime error orchestration engine 230 may send a notification to another system to virtually deploy the version predicted to have an error for further analysis. For example, the runtime error orchestration engine may trigger a deployment of the subject version of the software application to a virtual environment via a virtual build server 240, so that the virtual deployment may be analyzed via an extended reality engine 250.
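The resolve-or-escalate behavior above can be sketched with a stand-in for the generative model. The `generate` callable below is a placeholder assumption for whatever GPT-style model the orchestration engine would invoke; the prompt format and action names are illustrative:

```python
def build_fix_prompt(error_text, build_code):
    """Compose the text the generative model receives for a predicted error."""
    return ("Predicted runtime error:\n" + error_text +
            "\n\nBuild code:\n" + build_code +
            "\n\nRewrite the build code so the predicted error is resolved.")

def resolve_or_escalate(generate, error_text, build_code):
    """Ask the model for resolved build code; escalate to a virtual
    deployment for extended reality analysis when no fix is produced."""
    resolved_code = generate(build_fix_prompt(error_text, build_code))
    if resolved_code:
        # Push the resolved build code for repackaging and deployment.
        return {"status": "resolved", "code": resolved_code,
                "action": "push_to_repositories_for_repackaging"}
    # Unresolved: trigger a virtual deployment via the virtual build server.
    return {"status": "unresolved", "action": "deploy_to_virtual_build_server"}
```

In practice, `generate` would wrap a call to the autoregressive language model; a stub returning canned text is enough to exercise the routing logic.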
The extended reality engine 250 may utilize extended reality (XR) environment(s) that combine real computing environments, virtual computing environments, and human-machine interactions via remote computing technology and/or wearable computing components. For example, the extended reality engine may allow humans (e.g., developers, engineers, business users, subject matter experts, support team members, and the like) to interact with the virtual build environment to analyze and collaborate to identify a solution for use in future deployments. Once a solution is found, the results may be automatically sent to the runtime error prediction system 150 for use in training one or more AI models.
If, at 335, the runtime error optimization engine 220 predicts an error may likely occur for the new version, the runtime error orchestration engine 230 may analyze the version build code using an AI model (e.g., the GPT-3 model) to identify and automatically resolve the error at 370. If a resolution is found, the runtime error orchestration engine 230 may automatically generate build code, store the build code in the repositories 130, and trigger a new build of the version at 340. If, at 325, the runtime error orchestration engine's analysis by the AI model was unsuccessful in determining a solution, the unresolved error may be forwarded to the extended reality engine 250 for further analysis at 390. Results of the analysis via the extended reality engine 250 may be provided back to the runtime error prediction system 150 for use in training the AI models at 362.
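The decision flow above can be summarized in a short sketch, with the numbered steps noted in comments. The three callables stand in for the engines described above and are hypothetical:

```python
def handle_new_version(predict_error, resolve_error, escalate_to_xr, version):
    """Sketch of the flow: predict, attempt a fix, rebuild, or escalate."""
    if not predict_error(version):      # no error predicted at 335
        return "build_and_deploy"       # trigger a build at 340
    fix = resolve_error(version)        # AI model attempts a fix at 370
    if fix is not None:
        return "rebuild_with_fix"       # store build code, rebuild at 340
    escalate_to_xr(version)             # forward for XR analysis at 390
    return "await_xr_results"           # results feed model training at 362
```

Each branch ends either in a (re)build trigger or in escalation to the extended reality engine, matching the paths described above.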
The computing system environment 400 may include an illustrative runtime error prediction device 401 having a processor 403 for controlling overall operation of the runtime error prediction device 401 and its associated components, including a Random Access Memory (RAM) 405, a Read-Only Memory (ROM) 407, a communications module 409, and a memory 415. The runtime error prediction device 401 may include a variety of computer readable media. Computer readable media may be any available media that may be accessed by the runtime error prediction device 401, may be non-transitory, and may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, object code, data structures, program modules, or other data. Examples of computer readable media may include Random Access Memory (RAM), Read Only Memory (ROM), Electronically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disk Read-Only Memory (CD-ROM), Digital Versatile Disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by the runtime error prediction device 401.
Although not required, various aspects described herein may be embodied as a method, a data transfer system, or as a computer-readable medium storing computer-executable instructions. For example, a computer-readable medium storing instructions to cause a processor to perform steps of a method in accordance with aspects of the disclosed embodiments is contemplated. For example, aspects of method steps disclosed herein may be executed by the processor 403 of the runtime error prediction device 401. Such a processor may execute computer-executable instructions stored on a computer-readable medium.
Software may be stored within the memory 415 and/or other digital storage to provide instructions to the processor 403 for enabling the runtime error prediction device 401 to perform various functions as discussed herein. For example, the memory 415 may store software used by the runtime error prediction device 401, such as an operating system 417, one or more application programs 419, and/or an associated database 421. In addition, some or all of the computer executable instructions for the runtime error prediction device 401 may be embodied in hardware or firmware. Although not shown, the RAM 405 may include one or more applications representing the application data stored in the RAM 405 while the runtime error prediction device 401 is on and corresponding software applications (e.g., software tasks) are running on the runtime error prediction device 401.
The communications module 409 may include a microphone, a keypad, a touch screen, and/or a stylus through which a user of the runtime error prediction device 401 may provide input, and may include one or more of a speaker for providing audio output and a video display device for providing textual, audiovisual and/or graphical output. The computing system environment 400 may also include optical scanners (not shown).
The runtime error prediction device 401 may operate in a networked environment supporting connections to one or more remote computing devices, such as the computing devices 441 and 451. The computing devices 441 and 451 may be personal computing devices or servers that include any or all of the elements described above relative to the runtime error prediction device 401.
The network connections depicted in
The disclosure is operational with numerous other computing system environments or configurations. Examples of computing systems, environments, and/or configurations that may be suitable for use with the disclosed embodiments include, but are not limited to, personal computers (PCs), server computers, hand-held or laptop devices, smart phones, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like that are configured to perform the functions described herein.
The computer network 503 may be any suitable computer network including the Internet, an intranet, a Wide-Area Network (WAN), a Local-Area Network (LAN), a wireless network, a Digital Subscriber Line (DSL) network, a frame relay network, an Asynchronous Transfer Mode network, a Virtual Private Network (VPN), or any combination of any of the same. The communications links 502 and 505 may be communications links suitable for communicating between the workstations 501 and the runtime error prediction server 504, such as network links, dial-up links, wireless links, hard-wired links, as well as network types developed in the future, and the like.
One or more aspects of the disclosure may be embodied in computer-usable data or computer-executable instructions, such as in one or more program modules, executed by one or more computers or other devices to perform the operations described herein. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types when executed by one or more processors in a computer or other data processing device. The computer-executable instructions may be stored as computer-readable instructions on a computer-readable medium such as a hard disk, optical disk, removable storage media, solid-state memory, RAM, and the like. The functionality of the program modules may be combined or distributed as desired in various embodiments. In addition, the functionality may be embodied in whole or in part in firmware or hardware equivalents, such as integrated circuits, Application-Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGA), and the like. Particular data structures may be used to more effectively implement one or more aspects of the disclosure, and such data structures are contemplated to be within the scope of computer executable instructions and computer-usable data described herein.
Various aspects described herein may be embodied as a method, an apparatus, or as one or more computer-readable media storing computer-executable instructions. Accordingly, those aspects may take the form of an entirely hardware embodiment, an entirely software embodiment, an entirely firmware embodiment, or an embodiment combining software, hardware, and firmware aspects in any combination. In addition, various signals representing data or events as described herein may be transferred between a source and a destination in the form of light or electromagnetic waves traveling through signal-conducting media such as metal wires, optical fibers, or wireless transmission media (e.g., air or space). In general, the one or more computer-readable media may be and/or include one or more non-transitory computer-readable media.
As described herein, the various methods and acts may be operative across one or more computing servers and one or more networks. The functionality may be distributed in any manner, or may be located in a single computing device (e.g., a server, a client computer, and the like). For example, in alternative embodiments, one or more of the computing platforms discussed above may be combined into a single computing platform, and the various functions of each computing platform may be performed by the single computing platform. In such arrangements, any and/or all of the above-discussed communications between computing platforms may correspond to data being accessed, moved, modified, updated, and/or otherwise used by the single computing platform. Additionally or alternatively, one or more of the computing platforms discussed above may be implemented in one or more virtual machines that are provided by one or more physical computing devices. In such arrangements, the various functions of each computing platform may be performed by the one or more virtual machines, and any and/or all of the above-discussed communications between computing platforms may correspond to data being accessed, moved, modified, updated, and/or otherwise used by the one or more virtual machines.
Aspects of the disclosure have been described in terms of illustrative embodiments thereof. Numerous other embodiments, modifications, and variations within the scope and spirit of the appended claims will occur to persons of ordinary skill in the art from a review of this disclosure. For example, one or more of the steps depicted in the illustrative figures may be performed in other than the recited order, one or more steps described with respect to one figure may be used in combination with one or more steps described with respect to another figure, and/or one or more depicted steps may be optional in accordance with aspects of the disclosure.