The present invention, in some embodiments thereof, relates to a method and system for improving an organization's Agile working model based on operational data collected (or sampled, possibly in real time and/or periodically, e.g. daily) by various "agents" or "sensors" from various business operational systems (e.g. an emailing system, a task management/workflow system, a finance/budget control system, etc.).
In full operation, the system can be used for controlling and assisting in enforcing an organization's Agile policies. Alternatively, it can be used for predicting the organization's execution in the next quarter (Super-sprint or Program Increment), using what-if scenarios, in order to ensure a higher degree of predictability and risk reduction.
A third usage of the system is as a synthetic simulator, running a sufficiently close model of the organization.
When used as a control system, the system can reveal the 'manufacturing constraints' and bottlenecks of the existing organization. It can then be used in various ways: automatically altering and controlling the utilization of bottleneck services; recommending team-forming changes; automatically prioritizing projects and tasks; adding scope to, or removing scope from, a plan; and serving as a training environment for a team of professional managers, either preparing for a Large Scale Agile transformation or considering organizational changes or other improvements during the course of the scaled Agile journey.
The data collected/sampled by the sensors/agents can be very detailed and arrive in large volumes (i.e., big data), and is used as input for automatic analysis of patterns, learning, and improvement of the organization's operating model. The data is further analyzed by a learning engine, which tracks correlations and uses previously acquired data in order to detect, suggest, and, if required, automatically enforce the improved operational models. In the context of a training environment, these alternative models are provided as probable suggestions for managers during their simulation or training sessions. Alternatively, the reporting systems which are part of the system, or third-party reporting tools, can expose the new, optimized models.
As used herein, the terms/phrases Agile, Lean, LeSS, SAFe, Scrum, and Kanban refer to various methods for implementing the Agile methodology within an organization. Agile is a widely adopted methodology, a way of thinking, and a common practice for working in complex environments such as are common in modern workplaces. It assumes that processes and goals must be rapidly adjusted in order to ensure the survival and competitive edge of the business or organization.
In its simple version, Agile is adopted at the team level, aiming at improving most of the common Key Performance Indicators (KPIs), including: predictability, quality, coordination between Business and IT, and the team's morale.
For the team level, various simulation techniques have evolved, including board games, computerized simulations, and more. These are typically used to show the nature of the decisions required from the team and its immediate leadership to achieve the desired impacts. These implementations were configured as synthetic, theoretical cases, not related to the actual operational systems. Further, they had no way of intervening in the executed process.
Those simulations relied first on a mathematical model representing the various parameters which impact a successful application of the Agile methodology at the team level. These parameters typically include: the team size and specialized skills of each member, the Timebox (referred to as the Sprint length), and the size distribution and quantity of the work items selected during the simulation.
Those simulations had a static, fixed model, limited interaction with the actual events taking place in the organization, and no feedback capability to impact the organization's actual operation.
Forming a model of a large-scale Agile implementation is much more complex, and relies on abstracting beyond the team level. At this level, the KPIs are different, and typically include: Time to Market (TTM), customer satisfaction, alignment with organizational initiatives, meeting regulatory requirements, and more. Further, forming a model relies, on top of the team-level parameters, on additional parameters: a. modeling several teams; b. the cross-team relationships/dependencies; c. the size, partition, and distribution of larger work items (features or projects; in Agile language this is referred to as the backlog) between the teams; and d. the size of the "super-sprint", i.e., the major release or larger Timebox at which all the teams aim to synchronize.
To the best of our knowledge, such formal modeling has not been done so far, for various reasons: simulating a larger organization is harder since, inherent in Agile thinking, teams have a higher degree of ownership, and while predictability is achieved, the synchronization mechanisms are subtler and thus harder to model. Attempting to form such a model from a formula or a fixed process seems futile; instead, the system should use the gathered sensor data to provide the learning needed to continuously improve the model.
Any attempt to fully formalize such a model up front is likely to fail. Instead, we propose to form an initial model and rely on machine-learning techniques which refine it and: a. provide a more accurate model of the real organization and processes; b. provide alternative, improved models; and c. evolve over time, as new scenarios and constraints are gathered or the reality of the organization changes. The newly acquired models can be shared with managers to experience, discuss, and learn from, and, naturally, also to control the degree of change relative to the current organization model.
In order to simulate a larger organization's performance, an initial scaling-up operational model needs to be selected, based on one of the common scaling-up practices. The most common are SAFe (Scaled Agile Framework), Scrum of Scrums, Disciplined Agile Delivery, LeSS (Large Scale Scrum), the Spotify Model, Nexus, and some other models. The simulation is based on an abstraction model which summarizes the common principles of the selected scaling techniques.
It is expected that, based on the selected model, the simulation would expose some control 'knobs' to the user, allowing for modification of the model parameters. These modifications can be made either manually or based on actual operational data collected by the agents/sensors from the operational systems (e.g. statistical characteristics of requests coming from customers: timing, types, sizes . . . ). Furthermore, the model should be refined, and learning performed, as more data is gathered from the sensors.
As mentioned before, a few simulations exist for the team level, relying mostly on the Kanban flow model. These simulations typically do not even refer to a Timebox. To the best of our knowledge, there is no Scrum team simulator, let alone a multiple-team simulator which "learns" and adjusts its configuration parameters based on actual operational data.
Since the Scaling-Agile methodology generates multiple constraints, non-standard organizational structures and processes, and a high degree of fluidity and change, the common organizational simulations provide very little support, if any, in analyzing, forming, and understanding the impacts of managerial decisions within the Agile framework, let alone the ability to actually direct the organization toward a newly derived model.
A simulation system is provided which offers the participants (typically managers or trainees considering an implementation of Agile in their organization, or considering organizational changes or other improvements as part of their existing Agile journey) the ability to control the managed organization, using actual data related to the organization's operation. The system lets managers experience the impact of possible managerial decisions on the Agile organization's performance. The system can automatically enforce many of the model's operational activities, or alternatively turn them into alerts or recommendations.
Typically, the simulator has an initial organization Agile model, and may use various sensors to acquire real-time behavioral data on the organization, as well as a learning engine, which keeps refining the internal model and suggesting possible improvements to it (translated into managerial decisions/actions). Managerial decisions can be fed into the simulation system either manually (reflecting managers' thoughts and beliefs as to how the system would better be modeled) and/or combined with simulation parameters generated automatically from actual performance data collected by the various agents/sensors from the operational systems. For example, the agents/sensors can detect, from the task management system, that the average number of work items a team is working on in parallel is high, and suggest reducing that number (i.e., imposing a WIP limit). As another example, the agents/sensors can detect, from the finance/budgeting control system, that a project has just been approved, and, based on previously learnt patterns, predict/forecast an increase in architecture work and recommend adding capacity to the architecture team within a known time.
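The first example above, detecting a high average of parallel work items and suggesting a WIP limit, can be sketched as follows. This is a minimal illustration only: the function name, the sampling format, and the 0.8 reduction ratio are assumptions, not part of the described system or of any task-management tool's API.

```python
from statistics import mean

def suggest_wip_limit(daily_parallel_items, target_ratio=0.8):
    """Given daily sensor samples of how many work items a team had in
    progress simultaneously, suggest a lower WIP limit when the observed
    average is high (hypothetical heuristic for illustration)."""
    avg = mean(daily_parallel_items)
    suggested = max(1, round(avg * target_ratio))
    return {
        "observed_average": avg,
        "observed_peak": max(daily_parallel_items),
        "suggested_wip_limit": suggested,
        "recommend_reduction": suggested < avg,
    }

# A team averaging ~6.3 parallel items is nudged toward a WIP limit of 5.
report = suggest_wip_limit([5, 7, 6, 8, 6, 5, 7])
```

In the full system, such a recommendation would flow through the learning engine rather than a fixed ratio; the fixed ratio merely keeps the sketch self-contained.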
In general, the system can detect and predict bottlenecks in the Agile production flow, and thus can either suggest or enforce alternative paths, or better utilization of the bottlenecks. It is important to note that, unlike standard manufacturing, where the operation of a machine, its throughput, and its availability, including faults, are well analyzed and formalized, an Agile organization behaves like a manufacturing plant with 'many lines' and little regularity. Therefore, the formal manufacturing management tools and processes (Production Management tools) cannot be used for formalizing an Agile organization.
The system is based on an initial configuration of the organization. This configuration is, optionally, a model for multiple Agile teams, represented by a configuration parameter set describing the organizational structure: number of teams, grouping of teams, supporting teams, and mode of operation (e.g. Scrum, Kanban, service teams, non-Agile teams, etc.). Often, several basic information flows are already embedded in the project management tools, reflecting the contributors and owners adding value to the complete product. Using the agents/sensors which monitor the various communication systems (e.g. emailing systems, messaging systems, etc.), the system can analyze the actual communication traffic, acquire the needed flows, identify clusters of people who communicate intensively, and make configuration recommendations, or form an alternative model, such that the required people are included as early as needed, and specifically in the Periodic (typically quarterly) planning session.
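The clustering of intensive communicators mentioned above can be illustrated with a simple union-find over message traffic. This is a sketch under stated assumptions: the `(sender, recipient)` pair format and the exchange threshold are illustrative, and a production system would likely apply a proper graph-clustering algorithm to the sensor data.

```python
from collections import Counter

def communication_clusters(messages, min_exchanges=3):
    """Group people who exchange at least `min_exchanges` messages
    (in either direction) into clusters via union-find.
    `messages` is a list of (sender, recipient) pairs, e.g. as
    sampled by an email/messaging sensor (hypothetical format)."""
    traffic = Counter(frozenset(p) for p in messages if p[0] != p[1])
    people = {x for p in messages for x in p}
    parent = {x: x for x in people}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for pair, count in traffic.items():
        if count >= min_exchanges:
            a, b = tuple(pair)
            parent[find(a)] = find(b)      # merge the two clusters

    clusters = {}
    for x in people:
        clusters.setdefault(find(x), set()).add(x)
    return sorted(clusters.values(), key=len, reverse=True)

msgs = [("alice", "bob")] * 3 + [("bob", "carol")] * 3 + [("dave", "erin")]
clusters = communication_clusters(msgs)
# → largest cluster {alice, bob, carol}; dave and erin remain singletons
```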
An additional set of parameters required for the model includes the organizational synchronization clocks, known in Agile as the Sprint (interval) and the Program Increment (a sequence of several Sprints; a Super-Sprint).
Optionally, an additional parameter set exposes the quantitative decisions that the leadership group is often required to make during the development process or ongoing operation. These decisions are exposed through a User Interface, and impact the progress of the development process in the successive time interval(s). Typically, these decisions include:
Reducing cross-team dependencies, by restructuring/splitting service teams
Reducing cross-team dependencies, by differently splitting or modifying the tasks
Re-balancing teams, to better fit new requirement and business needs
Adding synchronization processes, to ensure better alignment
Adding reporting dashboards, to enable better management control during the various time intervals.
All of the above decisions can be automatically governed by the system, semi-automatically applied, or exposed to managers so that they can consider them when necessary, e.g. during the quarterly planning process or during a training session.
The challenge a manager faces once an Agile transformation is in place is that, although the flow and the mechanics are better understood, where to invest managerial attention and budget is not always intuitive. A sample of such challenges is:
Some embodiments of the invention are herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of embodiments of the invention. In this regard, the description taken with the drawings makes apparent to those skilled in the art how embodiments of the invention may be practiced.
Once the organization's configuration is set, data acquisition takes place. This can be achieved either by connecting to the relevant information systems' Application Program Interfaces (APIs) or through other gates (e.g. a mail database). This data is used as an initial training set for defining the machine-learning-based model. The result of running the first iteration of learning is the initial model, which we will refer to as M0. M0 is used by the simulation engine. It naturally refers to the organization's goals which, in Agile, are described as a backlog, and are presented to the users. The goal of the simulation is identical to the main goal of the organization, namely to complete all of the initiatives/projects/features which are stored in the backlog (No. 7).
In order to achieve this goal, the system or the managers need to make several types of decisions. When using the system as a simulator/game, at each time interval (Sprint or Program Increment) the participants are allowed to make some decisions. When used as a semi-automatic control system, some of the required decisions may be carried out by the system itself, based on the understandings acquired by the improved model (referred to as Mi) which the Model Learning Engine (No. 6) has generated.
Based on the new model, new decisions are required: prioritization of tasks to meet goals; investment in preparation for tasks; investment in monitoring & governance; and additional alterations to team structures or work processes. These decisions are passed through the Simulation and Control Engine (No. 2) to the Model DB (No. 3), where they are stored.
Based on these parameters, the next interval's events are executed and recorded. These can be either generated by the Simulation & Control Engine (No. 2) or based on events gathered from the operational systems (No. 5) via the various sensors/agents. The gathering and recording is done by the Simulation & Control Engine (No. 2). When used as a simulator, the system then may feed timely events into Module No. 4, the optional third-party event & reporting engine. When used as a control system, the required events may be fed into the Operational Systems (No. 5) control agents. These may cause activities, raise flags, or initiate reviewing processes within the organization.
NOTE: in a preferred implementation, Module No. 4 is a standard Agile management tool, such as Jira, Rally, TFS, Monday, VersionOne, or the like. These tools provide programming APIs which record low-level events (e.g. a commitment by a team, moving a task during a Sprint, or completion of a task). They also provide a variety of reporting graphs & dashboards for the teams and for managers. By using such a tool, much of the recording and reporting becomes standard, and the main challenges remain within the other modules.
Module No. 5 is an additional set of agents/sensors which monitor the various operational systems and collect data, which is further used for automatic learning and improvement of the work model by the Model Learning Engine (No. 6).
Module No. 6 is an ongoing learning engine which uses common machine-learning techniques in order to find correlations, patterns, and predictors among the various events. Additionally, this learning engine may use common machine-learning techniques (e.g. genetic algorithms, classification methodologies, and predictive methods) in order to provide an improved model M1 which performs better than the initial model M0, and, later, to keep building new models M(i+1) which outperform the previous model Mi. Eventually, this learning engine should provide an alternative operational model, to be stored in the Model Database (No. 3), which is used in the following sequence of time-box executions, or the sequence of simulation stages, by the Simulation & Control Engine (No. 2).
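The progression M0 → M1 → ... → M(i+1), in which each model must outperform its predecessor, can be illustrated with a simple hill-climbing loop. This is a toy sketch: the parameter names are hypothetical, and the fitness function stands in for replaying recorded operational events against a candidate model; a real learning engine would use the richer techniques named above (genetic algorithms, classification, prediction).

```python
import random

def refine_model(model, fitness, generations=50, seed=0):
    """Hill-climbing sketch of the learning engine: starting from M0,
    repeatedly perturb one parameter and keep the mutant M(i+1) only
    if it outperforms Mi under `fitness`."""
    rng = random.Random(seed)
    best, best_score = dict(model), fitness(model)
    for _ in range(generations):
        candidate = dict(best)
        key = rng.choice(list(candidate))
        candidate[key] *= rng.uniform(0.8, 1.2)  # small random perturbation
        score = fitness(candidate)
        if score > best_score:                   # accept only improvements
            best, best_score = candidate, score
    return best, best_score

# Toy fitness: prefer a WIP limit near 5 and a Sprint length near 2 weeks
# (illustrative targets, not values prescribed by the system).
def fitness(m):
    return -(m["wip_limit"] - 5) ** 2 - (m["sprint_weeks"] - 2) ** 2

m0 = {"wip_limit": 9.0, "sprint_weeks": 4.0}
m1, score1 = refine_model(m0, fitness)
```

By construction, the returned model never scores worse than M0, mirroring the requirement that each M(i+1) outperform Mi.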
Note that the Backlog Database (No. 7) may include either: a. manually configured work items (initiatives, projects, and features), which must be the case when the system is used as a control system; b. artificially generated items, derived from the profiling generated by the learning module (No. 6); or c. a combination of real work items and generated ones (the last two options are relevant only when the system is used as a simulation engine).
Move to Stage B and use the Simulation & Control Engine (No. 2), the Model DB (No. 3), and the User Interface to set the initial system parameters.
The parameters that need to be set are stated in stages S.1 and S.2, and include the team-level parameters (number of teams, structure, and specialization) and the Sprint duration. Additional parameters are at the scaling-up level: the structure of an ART, the duration of an Increment (the super-sprint), etc.
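For illustration, these team-level and scaling-level parameters can be represented as a small configuration structure. The field names and default values here are assumptions made for the sketch, not a schema prescribed by the system.

```python
from dataclasses import dataclass

@dataclass
class TeamConfig:
    name: str
    size: int               # members per team
    specialization: str     # e.g. "frontend", "platform", "data"

@dataclass
class ScalingConfig:
    teams: list
    sprint_weeks: int = 2            # team-level cadence (S.1)
    sprints_per_increment: int = 5   # Program Increment length (S.2)

    @property
    def increment_weeks(self):
        # The super-sprint spans a whole number of Sprints.
        return self.sprint_weeks * self.sprints_per_increment

art = ScalingConfig(
    teams=[TeamConfig("alpha", 7, "frontend"),
           TeamConfig("beta", 8, "platform")],
)
```

With the defaults above, a 2-week Sprint and 5 Sprints per Increment yield a 10-week super-sprint.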
This report is vital for a manager who needs to know and monitor the progress towards a planned delivery. Typically, this report allows for predicting the time-scope convergence within about a third of the time span of the project.
Three sets of parameters can be defined:
The present invention, in some embodiments thereof, relates to improving the work model of an Agile organization and, more particularly, but not exclusively, to automatically (or semi-automatically) improving that work model using a learning model and a simulation. The system generates predictions of the outcome of the behavior of an organization going through the Agile scaling transformation. This Agile transformation may include a structural change but, mostly, a culture change. The simulation lets team leaders and managers observe the impact of the various decision alternatives they normally face. The current invention describes a computer program that, according to embodiments of the invention, helps managers analyze, modify, control, and predict the impacts of their decisions. Further, it allows them to optimize their organization's performance automatically, semi-automatically, or manually, based on recommendations derived from the system.
It will be understood that each block of the illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. Reference is now made to
Reference is now made to
Stage A: Configuration
General note: the specific numbers used in this section (and this applies also to other places in the document), e.g. the number of teams, the number of team members in a team, the number of weeks in a Sprint, etc., are just examples representing typical sizes/quantities used in the industry, and are not meant to limit the scope of the patent in any way.
Cross-service teams are common in a large organization, representing key skills and capabilities which are subject to organizational policies and are typically not fully distributed to the development teams. These often include: database administration, security, infrastructure, networking, and the like (legal, risk management, and procurement).
The Configuration Stage happens once, at the beginning of the simulation. The participants, typically a management team, select a configuration which best describes the situation in their organization:
Stage B: Program Increment (Periodic/Quarterly) Planning
Most of the Agile scaling methodologies assume that a periodic planning event takes place, which involves multiple teams. This event is the peak of a 'pre-planning' process in which the leadership of the group (multiple teams) reviews the business objectives and initiatives, and turns them into a prioritized backlog. This backlog is the input for the periodic planning event. The output is an agreed plan and an agreed set of common goals for the next period (time box, quarter, or PI in SAFe).
During the Periodic Planning (Stage B), the participating team performs the common steps and decisions taken by the teams:
These operations can be performed in the System UI, or externally, using the organization's preferred tools. In the latter case, the resulting Backlog may be loaded into the Backlog module (No. 7 in
During the setup of the system, the participants can choose a “backlog scenario”. In this context, “backlog scenario” means the parameters describing the backlog: The number of work items of various types (e.g. epics, features, projects, initiatives . . . ) and/or sizes (e.g. small work items which might take a few hours/days to complete, or bigger work items which might take weeks/months to complete), and/or with various dependencies on other teams and/or on other work items, as well as the maturity of the breakdown of each work-item.
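A "backlog scenario" as described can be sketched as a small generator that draws work items of various sizes and wires acyclic dependencies between them. All names, size labels, and distributions here are illustrative assumptions, not parameters defined by the system.

```python
import random

def generate_backlog(n_items, size_weights, max_dependencies, seed=0):
    """Synthetic backlog-scenario sketch: draws `n_items` work items
    whose sizes follow the weighted distribution `size_weights`, and
    adds random dependencies on earlier items only (so no cycles)."""
    rng = random.Random(seed)
    sizes = list(size_weights)
    weights = list(size_weights.values())
    backlog = []
    for i in range(n_items):
        n_deps = min(i, rng.randint(0, max_dependencies))
        backlog.append({
            "id": i,
            "size": rng.choices(sizes, weights=weights)[0],
            "depends_on": rng.sample(range(i), n_deps),
        })
    return backlog

# Mostly small items, a few medium, rare large ones; at most 2 dependencies.
items = generate_backlog(20, {"small": 5, "medium": 3, "large": 1}, 2)
```

The maturity of each item's breakdown, mentioned above, could be modeled as an extra field on each item; it is omitted here to keep the sketch short.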
Upon completion of the initialization and data gathering stages, the system is “activated” and the learning engine builds the first Model, M0.
Upon Completion of this Stage, the System is Ready to Start, Hence, it is ‘Operational’.
In the planning stage, management may consider using past information in order to allow for better planning towards predictability. For example, by using the Velocity view (
By looking at the Kanban Board (
Stage C: Simulate or Control a Program Increment.
During a simulation session, Stages B and C are repeated multiple times, representing the Program Increment Planning process (Stage B) followed by the Program Execution simulation, (Stage C). The output and actual simulation results are presented using the various dashboards mentioned above, in order to support better planning and decision making in the following Program Increment.
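The repeated Stage B / Stage C cycle can be sketched as a loop that plans items up to a team capacity and then simulates execution with random slippage. The capacity model and the roughly 20% slippage rate are assumptions made for illustration; in the full system these would come from the learned model and the recorded operational events.

```python
import random

def run_program_increments(backlog, capacity, n_increments, seed=0):
    """Sketch of repeated Program Increments: each iteration plans
    items up to `capacity` (Stage B), then simulates execution where
    some items slip to the next Increment (Stage C)."""
    rng = random.Random(seed)
    remaining = list(backlog)
    history = []
    for pi in range(n_increments):
        planned, load = [], 0
        for item in remaining:              # Stage B: fill up to capacity
            if load + item["effort"] <= capacity:
                planned.append(item)
                load += item["effort"]
        done = [it for it in planned        # Stage C: execute with slippage
                if rng.random() > 0.2]      # ~20% of items slip
        remaining = [it for it in remaining if it not in done]
        history.append({"increment": pi,
                        "planned": len(planned),
                        "completed": len(done)})
    return history, remaining

demo_backlog = [{"id": i, "effort": 1} for i in range(10)]
history, remaining = run_program_increments(demo_backlog, 4, 3)
```

Dashboards such as the Velocity view discussed above would be driven by exactly this kind of planned-versus-completed history.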
The two most meaningful Key Performance Indicators (KPIs) for measuring the improvement of the control/simulator are:
At each stage of the execution, management can review a variety of dashboards. Establishing these dashboards, configuring them, and ensuring that the group generates the data required for them are time-consuming, and reflect some of the costs that management has to pay for future improvements.
Reference is now made to the model database (number 3, in
When used as a simulator, during the simulation stages, the system exposes events derived from the Model Database, according to the timing of these events. The management team can perform one of several actions:
Obviously, the backlog items may be associated with business value or monetary value. Similarly, each backlog item may be associated with a relative effort estimate. The goal of the management team (both in simulation mode and in controller mode) can be to earn as much money, or complete as many backlog items, as possible during a fixed number of Program Increments. (Note that in real life, long-term goals may yield high value, and this may require low income in early Program Increments.)
Further, each operation in the simulation, such as backlog refinement or dashboard tuning, can also be associated with a cost, to reflect the effort required in order to improve overall performance.
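Ordering backlog items by business value per unit of effort is one simple way to pursue the value-maximization goal described above (SAFe's WSJF prioritization uses a similar cost-of-delay to job-size ratio). A minimal sketch, with hypothetical item fields:

```python
def prioritize_by_value_density(backlog):
    """Order backlog items by value per unit of effort, a simple
    stand-in for maximizing earned value over a fixed number of
    Program Increments (field names are illustrative)."""
    return sorted(backlog,
                  key=lambda it: it["value"] / it["effort"],
                  reverse=True)

items = [{"id": "A", "value": 10, "effort": 5},
         {"id": "B", "value": 8,  "effort": 2},
         {"id": "C", "value": 3,  "effort": 3}]
ordered = prioritize_by_value_density(items)
# → B (density 4.0), then A (2.0), then C (1.0)
```

Note that this greedy ordering ignores the long-term-value caveat mentioned above; a fuller model would weigh deferred value across Increments.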
As stated before, using the system, the management team of a large group (50-500 people) can overcome some of the challenges they face during an Agile transformation, or even after some experience has been gathered: the flow and the mechanics of manufacturing the backlog items are understood, but practical questions, such as where to invest attention and budget, are not answered. Some of these challenges are:
When the control system is activated, the user can use the various User Interface knobs in order to control the desired focus of the execution:
During the execution of the Program Increment, the various views, specifically the Portfolio view (Kanban board
While the system can automatically generate these controls, it can also be used as a 'decision support' system or as a simulator. In either case, alerts and indicators for issues during the Program Increment are directed to the relevant managers.
Number | Date | Country | |
---|---|---|---|
62880692 | Jul 2019 | US |