The present disclosure relates to methods, apparatus, and products for validating code generated by artificial intelligence using abstract syntax trees. Migrating the functionality of legacy source code to a more modern programming language can increase the maintainability and readability of the source code as well as improve system performance. However, such a migration is an arduous task that can include writing, testing, validating, and debugging massive amounts of code.
According to embodiments of the present disclosure, various methods, apparatuses and products for validating code generated by artificial intelligence using abstract syntax trees are described herein. In some aspects, an artificial intelligence (AI) language model is used to remap application source code from an original codebase to a target codebase while maintaining the same functionality. In some aspects, abstract syntax trees (ASTs) are used to validate the translation of the original application source code to the AI-generated source code. An equivalency mapping between the AST of the input source code and the AST of the output source code indicates the translation accuracy of the AI-generated code. A validation result can be identified from the degree of equivalency exhibited in the equivalency mapping. In some aspects, when the ASTs are found to include non-equivalencies, the AI language model can be prompted to regenerate the code. In some aspects, improvements to the language model can be measured using AST comparison as a validation metric. Thus, the AST comparison facilitates code validation when migrating from an original codebase to a new codebase, such as from a first programming language to a second programming language or from a legacy system to a modernized system, using AI-generated code.
In a particular embodiment, a method of validating code generated by artificial intelligence using abstract syntax trees includes generating, by an artificial intelligence (AI) language model, output source code based on input source code. The method also includes determining an equivalency mapping between a first abstract syntax tree (AST) constructed for the input source code and a second AST constructed for the output source code. The method further includes indicating, based on the equivalency mapping, a validation result for the output source code. In some examples, the input source code is implemented in a first programming language and the output source code is implemented in a second programming language that is different from the first programming language. In this way, it can be determined whether the ASTs of the input source code and the output source code are functionally equivalent, even though the ASTs may exhibit a different structure. All permutations of core functions can be relationally mapped for AST comparison.
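By way of illustration only, the following Python sketch outlines this overall flow, using Python's built-in ast module as a stand-in for the language-specific parsers a real migration would require; the helper names and the crude node-type comparison are assumptions made for illustration, not requirements of the embodiments.

```python
# Minimal sketch of the validation flow, assuming both code versions can be
# parsed into ASTs. Python's ast module stands in for language-specific parsers.
import ast

def validate_translation(input_source: str, output_source: str) -> dict:
    """Build both ASTs, map elements between them, and report a validation result."""
    first_ast = ast.parse(input_source)      # AST of the original source code
    second_ast = ast.parse(output_source)    # AST of the AI-generated source code

    # A crude equivalency mapping: compare the multiset of node types on each
    # side. Matching counts are necessary, though not sufficient, for equivalence.
    def node_type_counts(tree):
        counts = {}
        for node in ast.walk(tree):
            counts[type(node).__name__] = counts.get(type(node).__name__, 0) + 1
        return counts

    first, second = node_type_counts(first_ast), node_type_counts(second_ast)
    nonequivalent = {t for t in first.keys() | second.keys()
                     if first.get(t, 0) != second.get(t, 0)}
    return {"valid": not nonequivalent, "nonequivalent_types": sorted(nonequivalent)}
```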
In some variations, the validation result indicates a validation failure when the equivalency mapping indicates at least one nonequivalent element in at least one of the first AST and the second AST, and the validation result indicates a validation success when the equivalency mapping indicates an equivalency for all elements of the first AST and the second AST. In other variations, the validation result indicates a degree of equivalency. The aim of the code translation is that the AI-generated output source code has an AST that is equivalent to the AST of the input source code; thus, the validation result may be pass/fail. However, the degree of equivalency is useful in assessing the progress of the AI language model before and after retraining, and in determining whether the identified non-equivalencies impart a change in functionality.
In some variations, determining an equivalency mapping between a first AST constructed for the input source code and a second AST constructed for the output source code includes partitioning the first AST and the second AST into subtrees and identifying equivalencies between the subtrees of the first AST and the second AST. In this way, complex ASTs constructed from hundreds of thousands, if not millions, of lines of code are unitized for comparison such that independent subtrees of one AST are reordered to match the structure of the other AST to facilitate comparison and determine equivalency.
In some variations, identifying equivalencies between the subtrees of the first AST and the second AST includes identifying one or more equivalent permutations of a first subtree of one AST and determining that a second subtree in another AST matches one of the one or more equivalent permutations. In some examples, the one or more equivalent permutations of the first subtree are generated by one or more of changing variable names and reordering independent statements. In this way, subtrees of different shapes but identical paths can be used to determine whether one subtree is functionally equivalent to another. This allows a rearrangement of elements in one AST to match the shape of the other AST.
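As an illustration of the variable-renaming permutation, the following Python sketch rewrites every identifier to a canonical placeholder in order of first appearance before comparing the trees; the class and function names are illustrative only and the normalization shown covers only simple variable references.

```python
import ast

class CanonicalNames(ast.NodeTransformer):
    """Rename identifiers to v0, v1, ... in order of first appearance."""
    def __init__(self):
        self.mapping = {}

    def visit_Name(self, node):
        canonical = self.mapping.setdefault(node.id, f"v{len(self.mapping)}")
        return ast.copy_location(ast.Name(id=canonical, ctx=node.ctx), node)

def same_up_to_renaming(source_a: str, source_b: str) -> bool:
    normalize = lambda src: ast.dump(CanonicalNames().visit(ast.parse(src)))
    return normalize(source_a) == normalize(source_b)

# Example: same_up_to_renaming("total = a + b", "sum_ = x + y") returns True.
```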
In some variations the method also includes indicating a location of a nonequivalent element found in at least one of the input source code and the output source code. In this way, software engineers can analyze the translation error and determine whether the error (i.e., indicated by a non-equivalency in the AST) is a critical error. Engineers can further use the information to identify aspects for retraining the AI language model.
In some variations the method also includes regenerating, by the AI language model in response to the validation result, new output source code from the input source code. In this way, the AI language model can iteratively regenerate the output source code until an acceptable validation score is achieved.
In some variations the method also includes determining, subsequent to retraining the AI language model, a second validation result for regenerated output source code. In these variations, the method further includes quantifying an improvement of the AI language model based on at least the validation result and the second validation result. In this way, the accuracy and reliability of the AI language model can be assessed and the result of retraining the AI language model can be measured.
In some aspects, an apparatus may include a processing device; and memory operatively coupled to the processing device, wherein the memory stores computer program instructions that, when executed, configure the processing device to perform the above-described operations. In some aspects, a computer program product comprising a computer readable storage medium may store computer program instructions that, when executed, configure a computer to perform the above-described operations.
In the world of software development, the need for modernizing a codebase from one programming language to another has become increasingly prevalent. For example, the source code for an application may be migrated from a legacy programming language (e.g., COBOL) to a modern programming language (e.g., Java). The motivation for such a migration may be to facilitate easier maintenance and readability of the source code, increase security and error handling, improve software and/or hardware performance, and other advantages that will be recognized by those of skill in the art.
In accordance with the present disclosure, artificial intelligence (AI) is used to port or migrate source code of an application to a different programming language. A large language model (LLM) is trained on datasets that include massive amounts of source code to develop generative AI that can output source code based on an input or prompt. That is, the AI language model is used to generate new source code based on an input of original source code. For example, an AI language model may be given a prompt such as “Generate Java code that achieves the same objectives as the following COBOL code,” where the legacy COBOL source code is provided as an input. In response, the AI language model may output, ideally, AI-generated Java source code that performs the same functions and produces the same output as the legacy COBOL source code.
However, migrating a codebase to a new language introduces a significant challenge in ensuring the accuracy and functionality of the translated code, especially when leveraging AI for automated translations. The difficulty lies in the validation of AI-generated code translations and ascertaining whether the translated code preserves the intended logic, functionality, and structure of the original code. The inherent complexity of programming languages, coupled with the ways in which developers express their logic, poses a challenge in reliably validating the correctness and similarity of AI-generated translations. Further, validating output source code translated from input source code requires an analysis of hundreds of thousands if not millions of lines of code.
The present disclosure addresses the challenges associated with validating the accuracy of AI-generated code translations, with a specific focus on enhancing the reliability and maintainability of automated code translation through the comparison of abstract syntax trees (ASTs). In accordance with the present disclosure, ASTs are leveraged to validate AI-translated code by breaking input source code and AI-generated source code down into elements such as if statements, loops, function declarations, variable declarations, etc. that are represented by an AST. Both ASTs are then traversed and the types and properties of each node are compared to identify functional equivalence. Programmatic fuzzing and complexity blurring are applied to shapes in the ASTs so that all permutations of core functions in the program can be relationally mapped when doing an AST comparison. In some examples, the ASTs are unitized by breaking the ASTs into subtrees for direct comparison. In some examples, a validation result identifies overfitting and underfitting in the AI-generated code and this metric is used for an analysis of translation accuracy.
With reference now to
Computer 101 may take the form of a desktop computer, laptop computer, tablet computer, smart phone, smart watch or other wearable computer, mainframe computer, quantum computer or any other form of computer or mobile device now known or to be developed in the future that is capable of running a program, accessing a network or querying a database, such as remote database 130. As is well understood in the art of computer technology, and depending upon the technology, performance of a computer-implemented method may be distributed among multiple computers and/or between multiple locations. On the other hand, in this presentation of computing environment 100, detailed discussion is focused on a single computer, specifically computer 101, to keep the presentation as simple as possible. Computer 101 may be located in a cloud, even though it is not shown in a cloud in
Processor set 110 includes one, or more, computer processors of any type now known or to be developed in the future. Processing circuitry 120 may be distributed over multiple packages, for example, multiple, coordinated integrated circuit chips. Processing circuitry 120 may implement multiple processor threads and/or multiple processor cores. Cache 121 is memory that is located in the processor chip package(s) and is typically used for data or code that should be available for rapid access by the threads or cores running on processor set 110. Cache memories are typically organized into multiple levels depending upon relative proximity to the processing circuitry. Alternatively, some, or all, of the cache for the processor set may be located “off chip.” In some computing environments, processor set 110 may be designed for working with qubits and performing quantum computing.
Computer readable program instructions are typically loaded onto computer 101 to cause a series of operational steps to be performed by processor set 110 of computer 101 and thereby effect a computer-implemented method, such that the instructions thus executed will instantiate the methods specified in flowcharts and/or narrative descriptions of computer-implemented methods included in this document. These computer readable program instructions are stored in various types of computer readable storage media, such as cache 121 and the other storage media discussed below. The program instructions, and associated data, are accessed by processor set 110 to control and direct performance of the computer-implemented methods. In computing environment 100, at least some of the instructions for performing the computer-implemented methods may be stored in code analysis module 107 in persistent storage 113.
Communication fabric 111 is the signal conduction path that allows the various components of computer 101 to communicate with each other. Typically, this fabric is made of switches and electrically conductive paths, such as the switches and electrically conductive paths that make up buses, bridges, physical input/output ports and the like. Other types of signal communication paths may be used, such as fiber optic communication paths and/or wireless communication paths.
Volatile memory 112 is any type of volatile memory now known or to be developed in the future. Examples include dynamic type random access memory (RAM) or static type RAM. Typically, volatile memory 112 is characterized by random access, but this is not required unless affirmatively indicated. In computer 101, the volatile memory 112 is located in a single package and is internal to computer 101, but, alternatively or additionally, the volatile memory may be distributed over multiple packages and/or located externally with respect to computer 101.
Persistent storage 113 is any form of non-volatile storage for computers that is now known or to be developed in the future. The non-volatility of this storage means that the stored data is maintained regardless of whether power is being supplied to computer 101 and/or directly to persistent storage 113. Persistent storage 113 may be a read only memory (ROM), but typically at least a portion of the persistent storage allows writing of data, deletion of data and re-writing of data. Some familiar forms of persistent storage include magnetic disks and solid state storage devices. Operating system 122 may take several forms, such as various known proprietary operating systems or open source Portable Operating System Interface-type operating systems that employ a kernel. The code included in code analysis module 107 typically includes at least some of the computer code involved in performing the computer-implemented methods described herein.
Peripheral device set 114 includes the set of peripheral devices of computer 101. Data communication connections between the peripheral devices and the other components of computer 101 may be implemented in various ways, such as Bluetooth connections, Near-Field Communication (NFC) connections, connections made by cables (such as universal serial bus (USB) type cables), insertion-type connections (for example, secure digital (SD) card), connections made through local area communication networks and even connections made through wide area networks such as the internet. In various embodiments, UI device set 123 may include components such as a display screen, speaker, microphone, wearable devices (such as goggles and smart watches), keyboard, mouse, printer, touchpad, game controllers, and haptic devices. Storage 124 is external storage, such as an external hard drive, or insertable storage, such as an SD card. Storage 124 may be persistent and/or volatile. In some embodiments, storage 124 may take the form of a quantum computing storage device for storing data in the form of qubits. In embodiments where computer 101 is required to have a large amount of storage (for example, where computer 101 locally stores and manages a large database), this storage may be provided by peripheral storage devices designed for storing very large amounts of data, such as a storage area network (SAN) that is shared by multiple, geographically distributed computers. IoT sensor set 125 is made up of sensors that can be used in Internet of Things applications. For example, one sensor may be a thermometer and another sensor may be a motion detector.
Network module 115 is the collection of computer software, hardware, and firmware that allows computer 101 to communicate with other computers through WAN 102. Network module 115 may include hardware, such as modems or Wi-Fi signal transceivers, software for packetizing and/or de-packetizing data for communication network transmission, and/or web browser software for communicating data over the internet. In some embodiments, network control functions and network forwarding functions of network module 115 are performed on the same physical hardware device. In other embodiments (for example, embodiments that utilize software-defined networking (SDN)), the control functions and the forwarding functions of network module 115 are performed on physically separate devices, such that the control functions manage several different network hardware devices. Computer readable program instructions for performing the computer-implemented methods can typically be downloaded to computer 101 from an external computer or external storage device through a network adapter card or network interface included in network module 115.
WAN 102 is any wide area network (for example, the internet) capable of communicating computer data over non-local distances by any technology for communicating computer data, now known or to be developed in the future. In some embodiments, the WAN 102 may be replaced and/or supplemented by local area networks (LANs) designed to communicate data between devices located in a local area, such as a Wi-Fi network. The WAN and/or LANs typically include computer hardware such as copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and edge servers.
End user device (EUD) 103 is any computer system that is used and controlled by an end user (for example, a customer of an enterprise that operates computer 101), and may take any of the forms discussed above in connection with computer 101. EUD 103 typically receives helpful and useful data from the operations of computer 101. For example, in a hypothetical case where computer 101 is designed to provide a recommendation to an end user, this recommendation would typically be communicated from network module 115 of computer 101 through WAN 102 to EUD 103. In this way, EUD 103 can display, or otherwise present, the recommendation to an end user. In some embodiments, EUD 103 may be a client device, such as thin client, heavy client, mainframe computer, desktop computer and so on.
Remote server 104 is any computer system that serves at least some data and/or functionality to computer 101. Remote server 104 may be controlled and used by the same entity that operates computer 101. Remote server 104 represents the machine(s) that collect and store helpful and useful data for use by other computers, such as computer 101. For example, in a hypothetical case where computer 101 is designed and programmed to provide a recommendation based on historical data, then this historical data may be provided to computer 101 from remote database 130 of remote server 104.
Public cloud 105 is any computer system available for use by multiple entities that provides on-demand availability of computer system resources and/or other computer capabilities, especially data storage (cloud storage) and computing power, without direct active management by the user. Cloud computing typically leverages sharing of resources to achieve coherence and economies of scale. The direct and active management of the computing resources of public cloud 105 is performed by the computer hardware and/or software of cloud orchestration module 141. The computing resources provided by public cloud 105 are typically implemented by virtual computing environments that run on various computers making up the computers of host physical machine set 142, which is the universe of physical computers in and/or available to public cloud 105. The virtual computing environments (VCEs) typically take the form of virtual machines from virtual machine set 143 and/or containers from container set 144. It is understood that these VCEs may be stored as images and may be transferred among and between the various physical machine hosts, either as images or after instantiation of the VCE. Cloud orchestration module 141 manages the transfer and storage of images, deploys new instantiations of VCEs and manages active instantiations of VCE deployments. Gateway 140 is the collection of computer software, hardware, and firmware that allows public cloud 105 to communicate through WAN 102.
Some further explanation of virtualized computing environments (VCEs) will now be provided. VCEs can be stored as “images.” A new active instance of the VCE can be instantiated from the image. Two familiar types of VCEs are virtual machines and containers. A container is a VCE that uses operating-system-level virtualization. This refers to an operating system feature in which the kernel allows the existence of multiple isolated user-space instances, called containers. These isolated user-space instances typically behave as real computers from the point of view of programs running in them. A computer program running on an ordinary operating system can utilize all resources of that computer, such as connected devices, files and folders, network shares, CPU power, and quantifiable hardware capabilities. However, programs running inside a container can only use the contents of the container and devices assigned to the container, a feature which is known as containerization.
Private cloud 106 is similar to public cloud 105, except that the computing resources are only available for use by a single enterprise. While private cloud 106 is depicted as being in communication with WAN 102, in other embodiments a private cloud may be disconnected from the internet entirely and only accessible through a local/private network. A hybrid cloud is a composition of multiple clouds of different types (for example, private, community or public cloud types), often respectively implemented by different vendors. Each of the multiple clouds remains a separate and discrete entity, but the larger hybrid cloud architecture is bound together by standardized or proprietary technology that enables orchestration, management, and/or data/application portability between the multiple constituent clouds. In this embodiment, public cloud 105 and private cloud 106 are both part of a larger hybrid cloud.
For further explanation,
The method of
In some examples, the output source code 205 is generated by prompting the AI language model to generate output source code based on the input source code 203. For example, the AI language model may be prompted “Generate Java source code from block A of COBOL source code” where block A is provided as the input source code. In response, the AI language model generates Java source code that is intended to provide the same interfaces, perform the same functions, and generate the same outputs as the original COBOL source code.
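The prompting step may be assembled programmatically, for example as in the following sketch; the prompt wording and the generate() call are placeholders for whatever AI language model and interface are actually used.

```python
def build_translation_prompt(input_source: str,
                             source_language: str = "COBOL",
                             target_language: str = "Java") -> str:
    # Assemble a translation prompt around the original source code block.
    return (
        f"Generate {target_language} source code that achieves the same objectives "
        f"as the following {source_language} source code. Preserve all interfaces, "
        f"functions, and outputs.\n\n{input_source}"
    )

# output_source = ai_language_model.generate(build_translation_prompt(block_a))
```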
The method of
An AST is a hierarchical tree data structure that represents the syntactic structure of source code in a programming language. The AST abstracts the details of the concrete syntax, which contains information not necessary for understanding the program's semantics. An AST focuses on the relationships between language elements, such as expressions, statements, and declarations, while also abstracting away details like parentheses, braces, and other punctuation. An AST is implemented as a tree of nodes that represent the different syntactic constructs in the source code (e.g., if statements, loops, function and variable declarations, assignments). These nodes are organized in a hierarchical structure, reflecting the nested and hierarchical nature of the source code. ASTs provide a concise and abstract view of code that facilitates automated static code analysis. In accordance with the present disclosure, a comparison of the ASTs of the input source code and the AI-generated output source code is used to validate the output source code as functionally equivalent to the input source code.
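For illustration, the following snippet shows how a short piece of source code decomposes into such nodes, again using Python's built-in ast module as a stand-in for the parser of whatever language is being analyzed.

```python
import ast

source = """
def total(values):
    result = 0
    for v in values:
        if v > 0:
            result += v
    return result
"""

tree = ast.parse(source)
print(ast.dump(tree, indent=2))  # FunctionDef, For, If, AugAssign, Return, ...
```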
Here, equivalency between two ASTs means that for every node in the first AST there is an equivalent node in the second AST, and for every edge between two nodes in the first AST there is an edge between the two equivalent nodes in the second AST. The equivalency of two ASTs does not require that the trees exhibit the same order where the order of independent statements, or the order of independent paths, does not alter the functionality of the code. In this disclosure, the terms ‘node’ and ‘element’ of an AST may be used interchangeably.
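One conservative way to honor this order-independence is sketched below: two statement lists are treated as equivalent when they contain the same statements and those statements neither read nor write one another's variables, so that any ordering is functionally identical. The dependency test shown is deliberately simplified and is an assumption made for illustration.

```python
import ast

def reads_writes(stmt: ast.stmt):
    reads, writes = set(), set()
    for node in ast.walk(stmt):
        if isinstance(node, ast.Name):
            (writes if isinstance(node.ctx, ast.Store) else reads).add(node.id)
    return reads, writes

def independent(a: ast.stmt, b: ast.stmt) -> bool:
    # Neither statement touches a variable that the other statement writes.
    reads_a, writes_a = reads_writes(a)
    reads_b, writes_b = reads_writes(b)
    return not (writes_a & (reads_b | writes_b)) and not (writes_b & reads_a)

def equivalent_up_to_reordering(source_a: str, source_b: str) -> bool:
    body_a, body_b = ast.parse(source_a).body, ast.parse(source_b).body
    # Conservative check: require pairwise independence so that every
    # reordering of the statements preserves functionality.
    if not all(independent(x, y) for i, x in enumerate(body_a) for y in body_a[i + 1:]):
        return False
    dumps = lambda body: sorted(ast.dump(s) for s in body)
    return dumps(body_a) == dumps(body_b)   # same statements, order ignored

# Example: equivalent_up_to_reordering("x = 1\ny = 2", "y = 2\nx = 1") returns True.
```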
In some examples, the code analysis module 201 determines 204 an equivalency mapping 207 by identifying whether there is, for every node and edge in the first AST, an equivalent node and edge in the second AST. In some implementations, the code analysis module 201 identifies equivalent nodes by walking each AST 213, 215 and comparing the type and properties of each node. For example, the type may be an abstract type that is not syntax-dependent, such as an if statement, a loop (e.g., ‘for’ or ‘do . . . while’), a function declaration, a variable declaration, and so on. The properties of each node can include the degree of the node and the set of nodes to which that node is connected. In some cases, the nodes of the AST may not be labeled with information indicating a type or other non-observable properties. In such cases, determining an equivalency mapping can be carried out by matching patterns within one AST to patterns within the other AST.
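A simplified version of this node-by-node comparison is sketched below: node types and child structure are compared recursively while source locations are ignored. A production implementation would layer the permutation handling and pattern matching described herein on top of such a routine; the function name is illustrative.

```python
import ast

def nodes_equivalent(a: ast.AST, b: ast.AST) -> bool:
    if type(a) is not type(b):          # same abstract node type (If, For, Assign, ...)?
        return False
    for field_name in a._fields:        # same properties and child structure?
        value_a, value_b = getattr(a, field_name, None), getattr(b, field_name, None)
        if isinstance(value_a, ast.AST) and isinstance(value_b, ast.AST):
            if not nodes_equivalent(value_a, value_b):
                return False
        elif isinstance(value_a, list) and isinstance(value_b, list):
            if len(value_a) != len(value_b):
                return False
            for child_a, child_b in zip(value_a, value_b):
                if isinstance(child_a, ast.AST):
                    if not isinstance(child_b, ast.AST) or not nodes_equivalent(child_a, child_b):
                        return False
                elif child_a != child_b:
                    return False
        elif value_a != value_b:        # plain properties, e.g., names or constants
            return False
    return True
```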
In some examples, the code analysis module 201 unitizes each AST 213, 215 by decomposing the AST into subtrees for comparison, as will be explained in more detail below. In these examples, the code analysis module 201 determines 204 an equivalency mapping 207 by identifying subtrees in one AST that match to subtrees in the other AST. Once equivalent subtrees are identified, independent subtrees can be rearranged as needed to reconstruct the AST 215 of the output source code 205 to match the AST 213 of the input source code 203.
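The following sketch shows one possible unitization at function-level granularity; the choice of granularity and the signature used as a matching key are assumptions made for illustration.

```python
import ast

def unitize(tree: ast.Module) -> dict:
    """Break an AST into subtrees rooted at function definitions."""
    subtrees = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            signature = (node.name, len(node.args.args))  # name and arity as a key
            subtrees[signature] = node
    return subtrees

# Subtrees from the two ASTs can then be matched by signature and compared in
# any order, independent of how the functions are ordered in the source files.
```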
In some examples, the code analysis module 201 determines 204 the equivalency mapping 207 by creating a data structure that indicates an equivalency determination for each element. An equivalency is recorded when it is identified that an element of the second AST is equivalent to an element of the first AST. A non-equivalency is recorded for an element in an AST when no equivalent element was identified in the other AST. That is, a non-equivalent element may be an element of the second AST that has no correspondence in the first AST, or an element of the first AST that is missing from the second AST. In some implementations, the code analysis module 201 only records non-equivalencies. The equivalency mapping 207 may be used for validating the output source code 205.
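The equivalency mapping may be realized, for example, as a simple record per element, as in the following sketch; the field names and the derived degree-of-equivalency score are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class EquivalencyMapping:
    equivalent: list = field(default_factory=list)          # matched element pairs
    missing_in_output: list = field(default_factory=list)   # elements only in the first AST
    extra_in_output: list = field(default_factory=list)     # elements only in the second AST

    @property
    def is_valid(self) -> bool:
        return not self.missing_in_output and not self.extra_in_output

    @property
    def degree_of_equivalency(self) -> float:
        total = (len(self.equivalent) + len(self.missing_in_output)
                 + len(self.extra_in_output))
        return len(self.equivalent) / total if total else 1.0
```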
The method of
For further explanation,
As part of determining 204 the equivalency mapping, the code analysis module 201 also identifies 304 equivalencies between the subtrees of the first AST 213 and the second AST 215. In some examples, the code analysis module 201 identifies 304 equivalencies between the subtrees by iteratively selecting a subtree in one AST (e.g., AST 213) and mapping the selected subtree to an equivalent subtree in another AST (e.g., the second AST 215). For example, the code analysis module 201 can walk the selected subtree, evaluate the type and/or properties of each node, and determine whether there is an equivalent subtree with equivalent nodes and structure in the other AST (e.g., the second AST 215). In some implementations, the code analysis module 201 further scores the equivalency, or otherwise flags the equivalency, between subtrees based on the presence of any non-equivalent elements (e.g., extra or missing nodes). Using these subtrees as building blocks, and by changing the positioning of subtrees, the code analysis module 201 can reconstruct, through iteration and recursion, the second AST 215 to be structurally identical to the first AST 213 (or the reverse) such that any missing or additional element in either AST will be apparent.
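One way to score a candidate subtree match is sketched below, where the score is the share of nodes the two subtrees have in common; the specific scoring function is an assumption and is not prescribed by the embodiments.

```python
import ast
from collections import Counter

def subtree_match_score(subtree_a: ast.AST, subtree_b: ast.AST) -> float:
    types_a = Counter(type(n).__name__ for n in ast.walk(subtree_a))
    types_b = Counter(type(n).__name__ for n in ast.walk(subtree_b))
    shared = sum((types_a & types_b).values())       # nodes present on both sides
    total = max(sum(types_a.values()), sum(types_b.values()))
    return shared / total if total else 1.0          # 1.0 means no extra or missing nodes
```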
For further explanation,
As part of identifying 304 equivalencies between the subtrees of the first AST 213 and the second AST 215, the method of
For further explanation,
For further explanation,
In some implementations, the code analysis module 201 adjusts one or more parameters of the AI language model before regenerating the output source code. The AI language model can include configurable parameters that influence the creativity of the model's response to a prompt. For example, a temperature parameter adjusts the distribution of probabilities that can be used to select the next token for an output stream. In selecting the next token for an output stream, a lower temperature causes the language model to select tokens whose probabilities are within a narrower range, tending to more deterministic output, while a higher temperature causes the language model to select tokens whose probabilities are within a wider range, tending to more random output. Another example parameter is a top k parameter that controls the randomness of selecting the next token by telling the language model that it must select from the top k highest probability tokens. Yet another example parameter is a top p parameter that controls the randomness of selecting the next token by telling the language model that it must select from the highest probability tokens whose probabilities sum to or exceed the p value.
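The effect of these parameters on next-token selection is illustrated numerically in the following sketch; the logits are invented for illustration and the routine is not tied to any particular AI language model.

```python
import math, random

def sample_next_token(logits: dict, temperature=1.0, top_k=None, top_p=None):
    # Temperature rescales the logits: lower -> sharper, more deterministic output.
    scaled = {tok: lg / max(temperature, 1e-6) for tok, lg in logits.items()}
    peak = max(scaled.values())
    weights = {tok: math.exp(v - peak) for tok, v in scaled.items()}   # stable softmax
    z = sum(weights.values())
    probs = sorted(((tok, w / z) for tok, w in weights.items()),
                   key=lambda kv: kv[1], reverse=True)
    if top_k is not None:                  # keep only the k most probable tokens
        probs = probs[:top_k]
    if top_p is not None:                  # keep the smallest set whose mass reaches p
        kept, mass = [], 0.0
        for tok, p in probs:
            kept.append((tok, p))
            mass += p
            if mass >= top_p:
                break
        probs = kept
    tokens, token_probs = zip(*probs)
    return random.choices(tokens, weights=token_probs, k=1)[0]

# sample_next_token({"return": 2.0, "print": 1.0, "raise": 0.2}, temperature=0.3, top_k=2)
```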
In some examples, the code analysis module 201 adjusts one or more parameters of the AI language model in response to determining that one or more iterations of generating the output source code failed validation. For example, as the number of iterations increases, the parameters that control the creativity of the AI language model may be adjusted to increase the randomness of the output. In this way, the AI language model can be induced to generate a solution that is dissimilar to the failed solutions presented in previous iterations. In some examples, adjusting one or more parameters is carried out by including a statement in a prompt to adjust the parameter, such as “Set temperature to 0.8.”
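A regeneration loop along these lines is sketched below; ai_language_model, build_translation_prompt, and validate_translation refer to the illustrative helpers sketched earlier and do not denote a prescribed interface.

```python
def translate_with_retries(input_source: str, max_iterations: int = 5):
    temperature = 0.2
    for attempt in range(max_iterations):
        prompt = build_translation_prompt(input_source)
        output_source = ai_language_model.generate(prompt, temperature=temperature)
        result = validate_translation(input_source, output_source)
        if result["valid"]:
            return output_source, result
        temperature = min(temperature + 0.2, 1.0)   # nudge toward more creative output
    return output_source, result                     # best effort after max_iterations
```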
For further explanation,
The method of
While embodiments are useful in migrating or porting an application from one programming language to a different programming language, and from a legacy programming language to a more modernized programming language, it will be appreciated that in some examples the original source code and the new source code may be written in the same programming language.
In view of the foregoing, validating code generated by artificial intelligence using abstract syntax trees in accordance with the present disclosure provides a number of advantages. Embodiments of the present disclosure improve the accuracy and quality of automated code generation, and further improve the reliability and maintainability of the source code generated through automated code generation, thus providing mechanisms to meet the challenges of validating AI-generated source code against the original source code. This provides a technical advantage to the field of automated code translation and to the fields of software development and maintenance as a whole. Consistent with these advantages, the comparison of the ASTs of the input source code and the output source code determines whether the input source code and output source code are functionally equivalent, even though the ASTs may exhibit a different structure. All permutations of core functions can be relationally mapped for AST comparison. Complex ASTs constructed from hundreds of thousands, if not millions, of lines of code can be unitized for comparison, such that independent subtrees of one AST can be reordered to match the structure of the other AST to facilitate comparison and determine equivalency. Subtrees of different shapes but identical paths can be used to determine whether one subtree is functionally equivalent to another. This allows a rearrangement of elements in one AST to match the shape of the other AST. Based on the validation result of an initial generation of output source code, the AI language model can iteratively regenerate the output source code until an acceptable validation result is achieved. The validation result allows software engineers to analyze the translation error and determine whether the error (i.e., indicated by a non-equivalency in the AST) is a critical error. Engineers can further use the information to identify aspects for retraining the AI language model. Further, the accuracy and reliability of the AI language model can be quantified using the validation result, and the effect of retraining the AI language model can be measured.
Various aspects of the present disclosure are described by narrative text, flowcharts, block diagrams of computer systems and/or block diagrams of the machine logic included in computer program product (CPP) embodiments. With respect to any flowcharts, depending upon the technology involved, the operations can be performed in a different order than what is shown in a given flowchart. For example, again depending upon the technology involved, two operations shown in successive flowchart blocks may be performed in reverse order, as a single integrated step, concurrently, or in a manner at least partially overlapping in time.
A computer program product embodiment (“CPP embodiment” or “CPP”) is a term used in the present disclosure to describe any set of one, or more, storage media (also called “mediums”) collectively included in a set of one, or more, storage devices that collectively include machine readable code corresponding to instructions and/or data for performing computer operations specified in a given CPP claim. A “storage device” is any tangible device that can retain and store instructions for use by a computer processor. Without limitation, the computer readable storage medium may be an electronic storage medium, a magnetic storage medium, an optical storage medium, an electromagnetic storage medium, a semiconductor storage medium, a mechanical storage medium, or any suitable combination of the foregoing. Some known types of storage devices that include these mediums include: diskette, hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or Flash memory), static random access memory (SRAM), compact disc read-only memory (CD-ROM), digital versatile disk (DVD), memory stick, floppy disk, mechanically encoded device (such as punch cards or pits/lands formed in a major surface of a disc) or any suitable combination of the foregoing. A computer readable storage medium, as that term is used in the present disclosure, is not to be construed as storage in the form of transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide, light pulses passing through a fiber optic cable, electrical signals communicated through a wire, and/or other transmission media. As will be understood by those of skill in the art, data is typically moved at some occasional points in time during normal operations of a storage device, such as during access, de-fragmentation or garbage collection, but this does not render the storage device as transitory because the data is not transitory while it is stored.
The descriptions of the various embodiments of the present disclosure have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.