This disclosure relates to migration systems and, more particularly, to migration systems that make predictions concerning the difficulty of a migration.
Migrating a computing platform from on-premises to the cloud offers numerous advantages, including scalability and cost-efficiency, but it also comes with a set of significant challenges. One of the primary difficulties is data transfer and bandwidth constraints. Moving large volumes of data to the cloud can be time-consuming and expensive, especially when dealing with limited network bandwidth.
Data security and compliance are critical concerns during migration. Ensuring the protection of sensitive information throughout the process requires robust encryption, access controls, and compliance measures. Additionally, application compatibility poses a challenge, as not all on-premises applications seamlessly transition to the cloud, often necessitating modifications or updates.
Performance and latency issues can arise, particularly for applications heavily reliant on on-premises resources. Organizations must optimize their applications for cloud infrastructure. Cost management is another challenge, as unexpected expenses can occur if cloud usage is not closely monitored and controlled.
Moreover, the shift to the cloud may require staff training or hiring cloud-savvy talent, and organizations can face vendor lock-in, making it difficult to switch providers once committed. Downtime and business continuity concerns are crucial, and integrating cloud-based and on-premises systems can be complex. Cultural resistance to change among employees and teams can further complicate the transition.
Managing cloud resources, establishing governance policies, addressing network complexity, and ensuring backup and disaster recovery are also challenging tasks. Additionally, regulatory and compliance requirements can vary across industries and regions, necessitating adjustments in policies and practices. To mitigate these challenges, organizations must conduct thorough planning, risk assessments, and comprehensive testing while considering the expertise of cloud service providers or third-party consultants to facilitate a successful transition to the cloud.
In one implementation, a computer-implemented method is executed on a computer device and includes: defining a migration pathway for a current migration project, wherein the migration pathway includes one or more migration portions; and assigning a complexity score to each of the one or more migration portions, thus defining one or more complexity scores.
One or more of the following features may be included. The current migration project may include an on-premise to cloud IT migration project. Each of the one or more migration portions may concern one or more of: an application migration task; a data migration task; and a general migration task. Each of the one or more complexity scores may define the relative complexity of each of the one or more migration portions. A project index may be assigned to the current migration project that defines the relative complexity of the current migration project based, at least in part, upon the one or more complexity scores. The project index may have a normal value of one and the deviation of the project index above/below the normal value of one may be indicative of the increased/decreased level of complexity of the current migration project with respect to a normal migration project. Staffing levels may be defined for the current migration project based, at least in part, upon the project index. Defining staffing levels for the current migration project based, at least in part, upon the project index may include: defining staffing levels for a plurality of phases of the current migration project based, at least in part, upon the project index. The current migration project may be effectuated via one or more agents. The one or more agents may include one or more of: a project agent; a file systems agent; and an operational agent. The project agent may be executed on a customer's network associated with the current migration project. The file systems agent may be executed on a customer's network associated with the current migration project. The operational agent may be executed on a service provider's network associated with the current migration project.
In another implementation, a computer program product resides on a computer readable medium and has a plurality of instructions stored on it. When executed by a processor, the instructions cause the processor to perform operations including: defining a migration pathway for a current migration project, wherein the migration pathway includes one or more migration portions; and assigning a complexity score to each of the one or more migration portions, thus defining one or more complexity scores.
One or more of the following features may be included. The current migration project may include an on-premise to cloud IT migration project. Each of the one or more migration portions may concern one or more of: an application migration task; a data migration task; and a general migration task. Each of the one or more complexity scores may define the relative complexity of each of the one or more migration portions. A project index may be assigned to the current migration project that defines the relative complexity of the current migration project based, at least in part, upon the one or more complexity scores. The project index may have a normal value of one and the deviation of the project index above/below the normal value of one may be indicative of the increased/decreased level of complexity of the current migration project with respect to a normal migration project. Staffing levels may be defined for the current migration project based, at least in part, upon the project index. Defining staffing levels for the current migration project based, at least in part, upon the project index may include: defining staffing levels for a plurality of phases of the current migration project based, at least in part, upon the project index. The current migration project may be effectuated via one or more agents. The one or more agents may include one or more of: a project agent; a file systems agent; and an operational agent. The project agent may be executed on a customer's network associated with the current migration project. The file systems agent may be executed on a customer's network associated with the current migration project. The operational agent may be executed on a service provider's network associated with the current migration project.
In another implementation, a computing system includes a processor and a memory system configured to perform operations including: defining a migration pathway for a current migration project, wherein the migration pathway includes one or more migration portions; and assigning a complexity score to each of the one or more migration portions, thus defining one or more complexity scores.
One or more of the following features may be included. The current migration project may include an on-premise to cloud IT migration project. Each of the one or more migration portions may concern one or more of: an application migration task; a data migration task; and a general migration task. Each of the one or more complexity scores may define the relative complexity of each of the one or more migration portions. A project index may be assigned to the current migration project that defines the relative complexity of the current migration project based, at least in part, upon the one or more complexity scores. The project index may have a normal value of one and the deviation of the project index above/below the normal value of one may be indicative of the increased/decreased level of complexity of the current migration project with respect to a normal migration project. Staffing levels may be defined for the current migration project based, at least in part, upon the project index. Defining staffing levels for the current migration project based, at least in part, upon the project index may include: defining staffing levels for a plurality of phases of the current migration project based, at least in part, upon the project index. The current migration project may be effectuated via one or more agents. The one or more agents may include one or more of: a project agent; a file systems agent; and an operational agent. The project agent may be executed on a customer's network associated with the current migration project. The file systems agent may be executed on a customer's network associated with the current migration project. The operational agent may be executed on a service provider's network associated with the current migration project.
The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features and advantages will become apparent from the description, the drawings, and the claims.
Like reference symbols in the various drawings indicate like elements.
Referring to
Migration management process 10s may be a server application and may reside on and may be executed by computing device 12, which may be connected to network 14 (e.g., the Internet or a local area network). Examples of computing device 12 may include, but are not limited to: a personal computer, a server computer, a series of server computers, a mini computer, a mainframe computer, a smartphone, or a cloud-based computing platform.
The instruction sets and subroutines of migration management process 10s, which may be stored on storage device 16 coupled to computing device 12, may be executed by one or more processors (not shown) and one or more memory architectures (not shown) included within computing device 12. Examples of storage device 16 may include but are not limited to: a hard disk drive; a RAID device; a random access memory (RAM); a read-only memory (ROM); and all forms of flash memory storage devices.
Network 14 may be connected to one or more secondary networks (e.g., network 18), examples of which may include but are not limited to: a local area network; a wide area network; or an intranet, for example.
Examples of migration management processes 10c1, 10c2, 10c3, 10c4 may include but are not limited to a web browser, a game console user interface, a mobile device user interface, or a specialized application (e.g., an application running on e.g., the Android™ platform, the iOS™ platform, the Windows™ platform, the Linux platform or the UNIX™ platform). The instruction sets and subroutines of migration management processes 10c1, 10c2, 10c3, 10c4, which may be stored on storage devices 20, 22, 24, 26 (respectively) coupled to client electronic devices 28, 30, 32, 34 (respectively), may be executed by one or more processors (not shown) and one or more memory architectures (not shown) incorporated into client electronic devices 28, 30, 32, 34 (respectively). Examples of storage devices 20, 22, 24, 26 may include but are not limited to: hard disk drives; RAID devices; random access memories (RAM); read-only memories (ROM), and all forms of flash memory storage devices.
Examples of client electronic devices 28, 30, 32, 34 may include, but are not limited to a personal digital assistant (not shown), a tablet computer (not shown), laptop computer 28, smart phone 30, smart phone 32, personal computer 34, a notebook computer (not shown), a server computer (not shown), a gaming console (not shown), and a dedicated network device (not shown). Client electronic devices 28, 30, 32, 34 may each execute an operating system, examples of which may include but are not limited to Microsoft Windows™, Android™, iOS™, Linux™, or a custom operating system.
Users 36, 38, 40, 42 may access migration management process 10 directly through network 14 or through secondary network 18. Further, migration management process 10 may be connected to network 14 through secondary network 18, as illustrated with link line 44.
The various client electronic devices (e.g., client electronic devices 28, 30, 32, 34) may be directly or indirectly coupled to network 14 (or network 18). For example, laptop computer 28 and smart phone 30 are shown wirelessly coupled to network 14 via wireless communication channels 44, 46 (respectively) established between laptop computer 28, smart phone 30 (respectively) and cellular network/bridge 48, which is shown directly coupled to network 14. Further, smart phone 32 is shown wirelessly coupled to network 14 via wireless communication channel 50 established between smart phone 32 and wireless access point (i.e., WAP) 52, which is shown directly coupled to network 14. Additionally, personal computer 34 is shown directly coupled to network 18 via a hardwired network connection.
WAP 52 may be, for example, an IEEE 802.11a, 802.11b, 802.11g, 802.11n, Wi-Fi, and/or Bluetooth device that is capable of establishing wireless communication channel 50 between smart phone 32 and WAP 52. As is known in the art, IEEE 802.11x specifications may use Ethernet protocol and carrier sense multiple access with collision avoidance (i.e., CSMA/CA) for path sharing. As is known in the art, Bluetooth is a telecommunications industry specification that allows e.g., mobile phones, computers, and personal digital assistants to be interconnected using a short-range wireless connection.
Referring also to
As is known in the art, an on-premises computing platform (e.g., on-premise computing platform 100), often known as “on-prem,” represents the traditional model of computing where organizations maintain and manage all their hardware, software, and networking resources within their own physical data centers or facilities. In this setup, the computing infrastructure is located on-site or at a dedicated organizational location. It grants complete ownership and control over every aspect of the infrastructure to the organization, including the purchase, maintenance, and upgrades of servers, storage, and networking equipment. However, it involves significant capital expenditures, as organizations must invest upfront in hardware and bear the ongoing operational costs. Scalability is limited, and scaling up or down can be time-consuming and costly. Routine maintenance, security, and compliance are entirely the organization's responsibility, with physical security measures also falling under its jurisdiction.
In contrast, a cloud-based computing platform (e.g., cloud-based computing platform 102) operates on infrastructure provided by cloud service providers like AWS, Azure, or Google Cloud, located in data centers distributed globally. Users access these resources and services over the internet, and while they have control over configuring and managing the resources they use, they do not own or manage the underlying infrastructure. Cloud computing operates on an operational expense model, where users pay only for the resources they consume, converting capital expenditures into operational ones. Scalability is a key advantage, with the ability to quickly and efficiently scale resources up or down based on demand. Cloud providers handle routine maintenance, hardware upgrades, and software updates, reducing operational burden and ensuring services remain up-to-date and secure. These providers also invest significantly in security measures and hold various compliance certifications, with users responsible for configuring security settings within their own cloud environments. The global reach of cloud platforms allows for resource deployment in multiple regions worldwide, improving availability and reducing latency for end-users. Ultimately, the choice between on-premises and cloud-based computing platforms depends on an organization's specific needs, budget, and strategic goals.
Referring also to
Below is an example of such a migration pathway:
Local Source=(App1+App2+App3+Data1+Encrypt1)Cloud Target
Generally speaking, a migration pathway (such as the one above) may define a migration project. In all but the simplest of migrations, a migration pathway (such as the one above) may include one or more migration portions. For the above-shown migration pathway, the migration pathway is shown to include five migration portions, as follows:
Local Source=(App1)Cloud Target
Local Source=(App2)Cloud Target
Local Source=(App3)Cloud Target
Local Source=(Data1)Cloud Target
Local Source=(Encrypt1)Cloud Target
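One possible (purely illustrative) software representation of this decomposition is sketched below. The string format, function name, and parsing scheme are assumptions for illustration only and are not part of this disclosure:

```python
# Illustrative sketch only: the disclosure does not specify how a migration
# pathway is represented in software. Here a pathway is modeled as a string
# of the form "Local Source=(App1+App2+...)Cloud Target", and each
# "+"-separated component becomes one single-component migration portion.

def split_migration_pathway(pathway: str) -> list[str]:
    """Split a combined migration pathway into one portion per component."""
    # Extract the component list between the parentheses.
    inner = pathway[pathway.index("(") + 1 : pathway.index(")")]
    components = [c.strip() for c in inner.split("+")]
    # Emit one single-component pathway (i.e., migration portion) per component.
    return [f"Local Source=({c})Cloud Target" for c in components]

portions = split_migration_pathway(
    "Local Source=(App1+App2+App3+Data1+Encrypt1)Cloud Target"
)
# portions -> five migration portions, one each for App1, App2, App3,
# Data1 and Encrypt1
```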
Each of the one or more migration portions (e.g., the five migration portions shown above) may concern one or more of: an application migration task; a data migration task; and a general migration task.
With respect to the five migration portions, the following may be Application Migration Tasks:
Local Source=(App1)Cloud Target
Local Source=(App2)Cloud Target
Local Source=(App3)Cloud Target
An application migration task, in the context of cloud migration, refers to the process of moving an existing on-premises or legacy application to a cloud-based environment. This task is a crucial component of a broader cloud migration strategy and involves several steps and considerations.
Examples of application migration tasks may include but are not limited to:
With respect to the five migration portions, the following may be Data Migration Tasks:
Local Source=(Data1)Cloud Target
A data migration task, in the context of cloud migration, refers to the process of moving data from an organization's on-premises or legacy systems to a cloud-based environment. Data migration is a fundamental component of most cloud migration projects because it involves transferring critical data assets, such as databases, files, and application data, to the cloud.
Examples of data migration tasks may include but are not limited to:
With respect to the five migration portions, the following may be General Migration Tasks:
Local Source=(Encrypt1)Cloud Target
In a cloud migration project, in addition to application and data migration tasks, there are several general tasks that are essential for the successful transition from on-premises or legacy systems to the cloud. These general tasks encompass various aspects of the migration process and play a crucial role in ensuring a smooth and efficient migration.
Examples of common general tasks may include but are not limited to:
Migration management process 10 may assign 202 a complexity score to each of the one or more migration portions, thus defining one or more complexity scores (e.g., complexity scores 106). Continuing with the above-stated example, the migration pathway is as follows:
Local Source=(App1+App2+App3+Data1+Encrypt1)Cloud Target
As also discussed above, this migration pathway may generally define a migration project. For the above-shown migration pathway, the migration pathway includes five migration portions, as follows:
Local Source=(App1)Cloud Target
Local Source=(App2)Cloud Target
Local Source=(App3)Cloud Target
Local Source=(Data1)Cloud Target
Local Source=(Encrypt1)Cloud Target
Accordingly, migration management process 10 may assign 202 a complexity score to each of these five migration portions, thus defining five complexity scores (e.g., complexity scores 106), wherein each of these complexity scores (e.g., complexity scores 106) may define the relative complexity of each of these migration portions.
Generally speaking and as will be discussed below in greater detail, historical information concerning such complexities assigned 202 to these five migration portions may be stored within a complexity prediction model (e.g., complexity prediction model 54). For example, complexity prediction model 54 may define the historical complexities associated with various application migration tasks; various data migration tasks; and various general migration tasks. Accordingly, complexity prediction model 54 may define a complexity score (e.g., one of complexity scores 106) for various applications that may be migrated to the cloud, various types of data that may be migrated to the cloud, and various general tasks that may be performed during a migration to the cloud.
For example and with respect to the "App1" migration portion, the complexity score (e.g., one of complexity scores 106) for this migration portion may be calculated as follows:
The above-illustrated complexity calculation template for “App1” (which may be stored within complexity prediction model 54) is for illustrative purposes only and is not intended to be a limitation of this disclosure. Accordingly, the above-illustrated complexity calculation template is simply provided to show one manner in which a complexity score (e.g., one of complexity scores 106) may be calculated for a migration portion that concerns “App1”. Assume for this example that a similar complexity calculation template (for “App2”, “App3”, “Data1” and “Encrypt1”) may be stored within (and available from) complexity prediction model 54, wherein such similar complexity calculation templates (for “App2”, “App3”, “Data1” and “Encrypt1”) may be utilized by migration management process 10 to assign 202 a complexity score (e.g., one of complexity scores 106) to the migration portions that concern “App2”, “App3”, “Data1” and “Encrypt1”.
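One way such a complexity calculation template might be applied is sketched below. Since the "App1" template itself is illustrative, the factor names and weights in this sketch are invented for illustration only and do not reflect any actual template stored within complexity prediction model 54:

```python
# Hypothetical sketch of applying a complexity calculation template.
# The factor names and weights below are invented for illustration only;
# an actual template would come from complexity prediction model 54.

APP1_TEMPLATE = {               # factor -> weight (assumed values)
    "integration_count": 0.40,  # number of systems App1 integrates with
    "custom_code":       0.35,  # amount of customization in App1
    "data_volume_gb":    0.25,  # relative data footprint of App1
}

def complexity_score(template: dict[str, float],
                     factors: dict[str, float]) -> float:
    """Weighted sum of factor values (one possible scoring scheme)."""
    return sum(weight * factors.get(name, 0.0)
               for name, weight in template.items())

# Invented factor values for a hypothetical App1 installation:
score = complexity_score(
    APP1_TEMPLATE,
    {"integration_count": 3, "custom_code": 1, "data_volume_gb": 2},
)
```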
Migration management process 10 may assign 204 a project index (e.g., project index 108) to the current migration project (e.g., an on-premise to cloud IT migration project from on-premise computing platform 100 to cloud-based computing platform 102) that defines the relative complexity of the current migration project (e.g., an on-premise to cloud IT migration project from on-premise computing platform 100 to cloud-based computing platform 102) based, at least in part, upon the one or more complexity scores (e.g., complexity scores 106).
As discussed above, this migration pathway may generally define a migration project, wherein the migration pathway (in this example) includes five migration portions, as follows:
Local Source=(App1)Cloud Target
Local Source=(App2)Cloud Target
Local Source=(App3)Cloud Target
Local Source=(Data1)Cloud Target
Local Source=(Encrypt1)Cloud Target
As discussed above, the above-illustrated complexity calculation templates stored within (and available from) complexity prediction model 54 may be utilized by migration management process 10 to assign 202 a complexity score (e.g., one of complexity scores 106) to each of the migration portions that concern “App1”, “App2”, “App3”, “Data1” and “Encrypt1”.
Accordingly, migration management process 10 may assign 204 a project index (e.g., project index 108) to the current migration project (e.g., an on-premise to cloud IT migration project from on-premise computing platform 100 to cloud-based computing platform 102) that defines the relative complexity of the current migration project based, at least in part, upon these complexity scores (e.g., complexity scores 106), which correspond to the five migration portions that concern "App1", "App2", "App3", "Data1" and "Encrypt1".
An example of the manner in which the project index (e.g., project index 108) may be calculated by the migration management process 10 is as follows:
((ComplexityScore*0.01)+(User Count*0.01))*sum(Journey Contribution)
The above-illustrated project index calculation template is for illustrative purposes only and is not intended to be a limitation of this disclosure. Accordingly, the above-illustrated project index template is simply provided to show one manner in which a project index (e.g., project index 108) may be calculated for a migration project (e.g., an on-premise to cloud IT migration project from on-premise computing platform 100 to cloud-based computing platform 102) that includes migration portions for “App1”, “App2”, “App3”, “Data1” and “Encrypt1”.
The “ComplexityScore” referenced above may represent the complexity scores (e.g., complexity scores 106) of the five migration portions associated with “App1”, “App2”, “App3”, “Data1” and “Encrypt1”. For example, the “ComplexityScore” referenced above may be e.g., an unweighted average of the five complexity scores (e.g., complexity scores 106), a weighted average of the five complexity scores (e.g., complexity scores 106), a sum of the five complexity scores (e.g., complexity scores 106), etc.
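The three aggregation options described above (unweighted average, weighted average, and sum) may be sketched as follows, using invented score and weight values for illustration only:

```python
# Illustrative sketch of the three aggregation options for combining the
# five per-portion complexity scores into a single "ComplexityScore".
# The score and weight values below are invented for illustration only.

scores = [2.05, 1.40, 0.90, 3.10, 0.55]   # hypothetical complexity scores 106
weights = [0.30, 0.20, 0.10, 0.30, 0.10]  # hypothetical per-portion weights

# Option 1: unweighted average of the five complexity scores.
unweighted = sum(scores) / len(scores)

# Option 2: weighted average of the five complexity scores.
weighted = sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Option 3: sum of the five complexity scores.
total = sum(scores)
```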
The “sum(Journey Contribution)” referenced above may represent the sum of weights of the various components that make up the current migration project (e.g., an on-premise to cloud IT migration project from on-premise computing platform 100 to cloud-based computing platform 102).
Examples of such weights and how they pertain to tasks within a migration are as follows:
For example, if the current migration project (e.g., an on-premise to cloud IT migration project from on-premise computing platform 100 to cloud-based computing platform 102) concerns the migration of an Abacus™ installation, a Quickbooks™ installation and an Ontos™ installation, the "sum(Journey Contribution)" would be 2.90 (1.25 for Abacus™ + 0.15 for Quickbooks™ + 1.50 for Ontos™, respectively).
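The project index calculation template above may be sketched as follows. The journey contributions are the Abacus™/Quickbooks™/Ontos™ weights from the example, while the ComplexityScore and User Count inputs are invented for illustration only:

```python
# Sketch of the project index calculation template shown above:
#   ((ComplexityScore*0.01)+(User Count*0.01))*sum(Journey Contribution)
# The complexity_score and user_count values passed in below are invented
# for illustration only.

JOURNEY_CONTRIBUTIONS = {   # component -> weight, per the example above
    "Abacus": 1.25,
    "Quickbooks": 0.15,
    "Ontos": 1.50,
}

def project_index(complexity_score: float, user_count: float,
                  contributions: dict[str, float]) -> float:
    """Apply the illustrative project index calculation template."""
    journey_sum = sum(contributions.values())   # 2.90 for this example
    return ((complexity_score * 0.01) + (user_count * 0.01)) * journey_sum

index = project_index(complexity_score=25.0, user_count=10.0,
                      contributions=JOURNEY_CONTRIBUTIONS)
```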
The project index (e.g., project index 108) may have a normal value of one and the deviation of the project index (e.g., project index 108) above/below the normal value of one may be indicative of the increased/decreased level of complexity of the current migration project (e.g., an on-premise to cloud IT migration project from on-premise computing platform 100 to cloud-based computing platform 102) with respect to a normal migration project.
Accordingly:
Migration management process 10 may define 206 staffing levels for the current migration project (e.g., an on-premise to cloud IT migration project from on-premise computing platform 100 to cloud-based computing platform 102) based, at least in part, upon the project index (e.g., project index 108). For example and when defining 206 staffing levels for the current migration project (e.g., an on-premise to cloud IT migration project from on-premise computing platform 100 to cloud-based computing platform 102) based, at least in part, upon the project index (e.g., project index 108), migration management process 10 may define 208 staffing levels for a plurality of phases of the current migration project (e.g., an on-premise to cloud IT migration project from on-premise computing platform 100 to cloud-based computing platform 102) based, at least in part, upon the project index (e.g., project index 108).
For example and referring also to
While this particular staffing chart (e.g., staffing chart 300) is shown to define four types of professionals (e.g., Project Manager, Engineer, Professional Services Provider, Trainer), this is for illustrative purposes only and is not intended to be a limitation of this disclosure. For example, the number of professional types defined may be increased/decreased depending upon the desired level of staffing granularity (when defining 206 staffing levels for the current migration project based, at least in part, upon project index 108).
Additionally, while this particular staffing chart (e.g., staffing chart 300) is shown to define seven phases (e.g., backlog, kickoff, discovery, conversion, PPE-UAT, GoLive, Stabilization), this is for illustrative purposes only and is not intended to be a limitation of this disclosure. For example, the number of phases may be increased or decreased depending upon the desired level of staffing granularity (when defining 208 staffing levels for a plurality of phases of the current migration project based, at least in part, upon project index 108).
Further, while this particular staffing chart (e.g., staffing chart 300) is shown for a project index (e.g., project index 108) of 1.00, this is for illustrative purposes only and is not intended to be a limitation of this disclosure. For example, this particular staffing chart (e.g., staffing chart 300) may be scaled upward if project index 108 is higher (e.g., for a project index of 1.50), while this particular staffing chart (e.g., staffing chart 300) may be scaled downward if project index 108 is lower (e.g., for a project index of 0.70).
Specifically, staffing chart 300 indicates that the current migration project (e.g., an on-premise to cloud IT migration project from on-premise computing platform 100 to cloud-based computing platform 102) requires 0.35 Project Managers/0.23 Engineers/0.30 Professional Service Providers/0.75 Trainers for the PPE-UAT phase of the current migration project having a project index (e.g., project index 108) of 1.00.
However, if the current migration project (e.g., an on-premise to cloud IT migration project from on-premise computing platform 100 to cloud-based computing platform 102) has a project index (e.g., project index 108) of 1.50, the staffing requirements for the current migration project may be scaled up accordingly (e.g., to 0.52 Project Managers/0.34 Engineers/0.45 Professional Service Providers/1.12 Trainers during the PPE-UAT phase).
Conversely, if the current migration project (e.g., an on-premise to cloud IT migration project from on-premise computing platform 100 to cloud-based computing platform 102) has a project index (e.g., project index 108) of 0.70, the staffing requirements for the current migration project may be scaled down accordingly (e.g., to 0.24 Project Managers/0.16 Engineers/0.21 Professional Service Providers/0.52 Trainers during the PPE-UAT phase).
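The scaling described above may be sketched as follows. Truncating each scaled value to two decimal places (rather than rounding) is an assumption inferred from the example figures (e.g., 0.35 × 1.50 = 0.525 → 0.52), not something the disclosure states:

```python
# Sketch of scaling the PPE-UAT staffing levels by the project index.
# ASSUMPTION: scaled values are truncated (rounded down) to two decimal
# places, inferred from the example figures above; Decimal is used so the
# truncation is exact rather than subject to binary floating-point error.

from decimal import Decimal, ROUND_DOWN

BASE_PPE_UAT = {   # staffing chart 300, PPE-UAT phase, project index 1.00
    "Project Manager": Decimal("0.35"),
    "Engineer": Decimal("0.23"),
    "Professional Services Provider": Decimal("0.30"),
    "Trainer": Decimal("0.75"),
}

def scale_staffing(base: dict[str, Decimal], index: str) -> dict[str, Decimal]:
    """Scale per-role staffing by the project index, truncated to 2 places."""
    factor = Decimal(index)
    return {role: (n * factor).quantize(Decimal("0.01"), rounding=ROUND_DOWN)
            for role, n in base.items()}

high = scale_staffing(BASE_PPE_UAT, "1.50")   # 0.52 / 0.34 / 0.45 / 1.12
low = scale_staffing(BASE_PPE_UAT, "0.70")    # 0.24 / 0.16 / 0.21 / 0.52
```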
Migration management process 10 may effectuate 210 the current migration project (e.g., an on-premise to cloud IT migration project from on-premise computing platform 100 to cloud-based computing platform 102) via one or more agents (e.g., one or more of: a project agent; a file systems agent; and an operational agent).
The project agent may perform various operations, examples of which may include but are not limited to:
The file systems agent may perform various operations, examples of which may include but are not limited to:
The operational agent may perform various operations, examples of which may include but are not limited to:
Each pathway may have specific tests and events that migration management process 10 may perform as part of completing a phase (e.g., one of the above-referenced seven phases, namely backlog, kickoff, discovery, conversion, PPE-UAT, GoLive, Stabilization). These specific tests and events (which may be defined within complexity prediction model 54) may be learned/informed/extended as new items are identified. Any new learnings may be immediately deployed by migration management process 10 and complexity prediction model 54 may be updated to define the same.
Examples of such specific tests and events may include but are not limited to:
The following discussion concerns the manner in which complexity prediction model 54 may be trained/updated.
Referring also to
Generally speaking, this complexity prediction may concern one or more of: a complexity score assigned to each of the one or more migration portions of the current migration project (thus defining one or more complexity scores); and a project index assigned to the current migration project that defines the relative complexity of the current migration project based, at least in part, upon the one or more complexity scores.
For example and when generating 400 such a complexity prediction for the current migration project using complexity prediction model 54, migration management process 10 may assign 202 a complexity score to each of these five migration portions (thus defining complexity scores 106) and may assign 204 project index 108 that defines the relative complexity of the current migration project based, at least in part, upon these complexity scores (e.g., complexity scores 106).
Migration management process 10 may effectuate 402 the current migration project (e.g., an on-premise to cloud IT migration project from on-premise computing platform 100 to cloud-based computing platform 102), thus resulting in an effectuated migration project. Once the migration project is completed, migration management process 10 may define 404 an actual complexity for the effectuated migration project. Migration management process 10 may then compare 406 the actual complexity of the effectuated migration project to the complexity prediction of the current migration project to identify a complexity delta.
Accordingly, assume that the complexity prediction being scrutinized is the project index (e.g., project index 108). Further assume that the complexity prediction (that was predicted by migration management process 10 before the migration project began) was a project index of 1.60; while the actual complexity (that was realized by migration management process 10 after the migration project was completed) was a project index of 1.80. In this example, migration management process 10 may identify a complexity delta of +0.20 (or +12.50%).
Migration management process 10 may revise 408 the complexity prediction model (e.g., complexity prediction model 54) based, at least in part, upon the complexity delta (e.g., +0.20 or +12.50%). For example, migration management process 10 may revise 408 complexity prediction model 54 by e.g., revising one or more of the complexity calculation templates (as discussed above), component weights (as discussed above) and/or various other formulas/methodologies defined within complexity prediction model 54 to achieve the desired revision of complexity prediction model 54. As discussed above, the complexity prediction model (e.g., complexity prediction model 54) may be based, at least in part, upon prior-completed migration projects and may be generated by processing completion information associated with the prior-completed migration projects.
As is known in the art, artificial intelligence (AI) utilizes historical data to make predictions about future outcomes through a process known as machine learning. This process involves several key steps. Initially, relevant historical data is collected from various sources, such as sensors, databases, or external datasets. Once collected, this data undergoes preprocessing to ensure cleanliness and suitability for analysis. Feature selection comes next, where the most pertinent attributes are chosen for the prediction task.
To train a predictive model (e.g., complexity prediction model 54), the historical data is divided into a training set and a testing set. The model selection process involves choosing an appropriate machine learning algorithm based on the nature of the data and the specific prediction problem. Training the model (e.g., complexity prediction model 54) is a crucial step where it learns patterns and relationships within the historical data, adjusting its internal parameters to minimize the difference between its predictions and actual outcomes.
Validation and tuning follow, as the model's performance is assessed using the testing set. Hyperparameters may be fine-tuned to optimize accuracy and generalization. Once the model (e.g., complexity prediction model 54) is ready, it is deployed in a production environment, where it processes new data (e.g., data concerning a new migration project) to make predictions about future outcomes. These predictions can take various forms, such as numeric values or class labels.
Continuous monitoring is essential to ensure the model's accuracy and relevance over time, as patterns in data may change. Feedback and retraining might be necessary to adapt the model to new information. Ultimately, AI leverages historical data, advanced algorithms, and ongoing learning to make predictions about future events, offering valuable insights and decision-making support across various domains.
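The train/test workflow described above can be sketched with a minimal example. The disclosure does not name a particular algorithm or library, so this assumes a simple least-squares linear model over synthetic project features; every feature, weight, and dataset below is fabricated for illustration:

```python
# Minimal sketch of the train/validate workflow described above, using a
# least-squares linear model on SYNTHETIC data (all values illustrative).
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Illustrative features for prior-completed migration projects
# (e.g., counts of databases, applications, integrations, environments).
X = rng.uniform(0, 10, size=(n, 4))
true_w = np.array([0.05, 0.10, 0.15, 0.02])      # hidden "true" relationship
y = X @ true_w + rng.normal(0, 0.05, n)          # project indexes, with noise

# Divide the historical data into a training set and a testing set.
split = int(0.75 * n)
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

# "Train" the model: fit weights by least squares on the training set.
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# Validate: assess performance on the held-out testing set.
mae = float(np.mean(np.abs(X_test @ w - y_test)))
print(f"test-set mean absolute error: {mae:.3f}")
```

In a production setting the fitted model would then score new migration projects, with the held-out error guiding any hyperparameter tuning before deployment.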
Accordingly and when revising 408 the complexity prediction model (e.g., complexity prediction model 54) based, at least in part, upon the complexity delta (e.g., +0.20 or +12.50%), migration management process 10 may utilize 410 the complexity delta (e.g., +0.20 or +12.50%) as training data (e.g., training data 56) for the complexity prediction model (e.g., complexity prediction model 54).
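The feedback loop of step 410 can be sketched as follows; the function and variable names are hypothetical, and the mechanism shown (appending the realized outcome to the training corpus for later refitting) is one plausible way to use the delta as training data:

```python
# Hypothetical sketch: folding a completed project's realized complexity
# back into the training corpus so the model can later be refit/revised.
training_features = []   # feature vectors of prior-completed projects
training_targets = []    # their actual (realized) project indexes

def record_completed_project(features, predicted_index, actual_index):
    """Log the prediction error and store the realized outcome for retraining."""
    delta = actual_index - predicted_index
    print(f"complexity delta: {delta:+.2f} ({delta / predicted_index * 100:+.2f}%)")
    training_features.append(features)
    training_targets.append(actual_index)

# Illustrative feature vector for the example project discussed above.
record_completed_project([3, 7, 2, 5], predicted_index=1.60, actual_index=1.80)
```

Each completed migration thus enlarges the training set, so subsequent refits of the complexity prediction model are informed by realized outcomes rather than predictions alone.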
As will be appreciated by one skilled in the art, the present disclosure may be embodied as a method, a system, or a computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, the present disclosure may take the form of a computer program product on a computer-usable storage medium having computer-usable program code embodied in the medium.
Any suitable computer usable or computer readable medium may be utilized. The computer-usable or computer-readable medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a non-exhaustive list) of the computer-readable medium may include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a transmission media such as those supporting the Internet or an intranet, or a magnetic storage device. The computer-usable or computer-readable medium may also be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory. In the context of this document, a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer-usable medium may include a propagated data signal with the computer-usable program code embodied therewith, either in baseband or as part of a carrier wave. The computer usable program code may be transmitted using any appropriate medium, including but not limited to the Internet, wireline, optical fiber cable, RF, etc.
Computer program code for carrying out operations of the present disclosure may be written in an object oriented programming language such as Java, Smalltalk, C++ or the like. However, the computer program code for carrying out operations of the present disclosure may also be written in conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through a local area network/a wide area network/the Internet (e.g., network 14).
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, may be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer/special purpose computer/other programmable data processing apparatus, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that may direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures may illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, may be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The embodiment was chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure for various embodiments with various modifications as are suited to the particular use contemplated.
A number of implementations have been described. Having thus described the disclosure of the present application in detail and by reference to embodiments thereof, it will be apparent that modifications and variations are possible without departing from the scope of the disclosure defined in the appended claims.
This application claims the benefit of U.S. Provisional Application No. 63/377,665, filed on 29 Sep. 2022, the entire contents of which are incorporated herein by reference.