This disclosure relates generally to a cloud-based system and method for dynamic incident management, and more particularly to a cloud-based system and method for dynamic incident management using machine learning operations.
Many companies provide equipment that is used by various locations of a business during day-to-day operations. When equipment at a particular business location fails and requires service, the operation of that location can be impacted, with the severity of the impact increasing with the length of the outage. A company providing such equipment may also provide a service agreement to the business which commits to resolving work orders (incidents for help desk and service requests for field service) within a pre-defined time window based on a priority of the work order. A critical onboarding task for each work order is to define the priority of that work order based on business impact, location profile, equipment type, etc. Once a work order is created, it is typically assigned a static priority. A drawback is that a static priority does not factor in, for example, seasonality and the time-variant nature of revenue at a particular business location. For example, a coffee shop located close to an office building complex may tend to generate peak revenue around lunchtime on workdays, while another coffee shop located close to a residential area may have its peak revenue in the morning, late afternoon, or evening on every day of the week.
Accordingly, there is a need for an improved incident management system which overcomes the drawbacks identified above.
The following detailed description, given by way of example and not intended to limit the present disclosure solely thereto, will best be understood in conjunction with the accompanying drawings in which:
In the present disclosure, like reference numbers refer to like elements throughout the drawings, which illustrate various exemplary embodiments of the present disclosure.
The system and method of the present disclosure adjusts incident priorities based on an impact to the location of a business in order to maximize customer revenue protection given fixed help desk and field capacity for service calls. The impact is calculated by a machine learning model which continually forecasts revenue based on past transaction data for that location. The system and method of the present disclosure optimizes the service call process by leveraging a fixed service capacity in a way which maximizes customer revenue protection. In particular, the system and method provides an adaptive approach in which work order priorities are adjusted on-the-fly based on the time-variant revenue of the business locations competing for resources from the same service region. This system and method optimizes the service process by focusing on the business locations with the highest business impact by day and by hour, given that there is a fixed amount of resources for performing service calls. By doing this, the amount of revenue lost by customers due to service-impacting incidents is greatly reduced.
Referring now to
The system and method presented herein can be implemented in whole or in part in one, all, or some combination of the components shown with the system 100. The method is programmed as executable instructions in memory and/or non-transitory computer-readable storage media and processed on one or more processors associated with the various components.
As shown in
A work order-based incident management supplier processes customer requests for service and arranges for (or provides) service visits which are provided in an order based on prioritized and ranked work orders from the cloud server 110. The supplier interface 120 is the transfer portal used by the maintenance service supplier in order to exchange work orders for arranging prioritized/ranked service calls pursuant to updated work orders provided by the system and method of the present disclosure.
The business customer link 130 accumulates and transfers to cloud server 110 business customer sales information from each business location in a particular service area, e.g., from business location 1 132, business location 2 134, business location 3 136, and up to business location N 136 (when there are N business locations within that particular service area). Each of the business locations has equipment from the supplier that is used as part of the business location revenue generation (e.g., POS terminals). The business customer link 130 may be a web application programming interface (API) such as a RESTful API. A RESTful API (also referred to as a RESTful web service) is a web service API implemented using Hypertext Transfer Protocol (HTTP) and Representational State Transfer (REST) technology.
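By way of illustration only, the sales information exchanged over such a RESTful API may be serialized as a JSON body. The record fields and endpoint shape below are assumptions for the sketch, not part of the disclosure:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical shape of one sales record pushed through the business
# customer link (RESTful API). Field names are illustrative only.
@dataclass
class SalesRecord:
    location_id: str   # e.g. an identifier for business location 1
    channel: str       # sales channel, e.g. "in-store" or "drive-thru"
    hour: str          # ISO-8601 hour bucket, e.g. "2024-05-01T12:00"
    revenue: float     # revenue collected in that hour

def to_payload(records):
    """Serialize records into the JSON body of a hypothetical HTTP POST."""
    return json.dumps({"sales": [asdict(r) for r in records]})

payload = to_payload(
    [SalesRecord("loc-001", "in-store", "2024-05-01T12:00", 412.50)]
)
```

In practice the same payload could be transferred with any HTTP client; only the revenue-by-location-by-channel-by-hour structure matters to the downstream processing.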
The business analyst computer 140 is for accessing the business dashboard 500 shown in
The cloud server 110 may be implemented as a Microsoft® Azure technology stack and has five primary functional blocks. The location revenue data processing block (module) 111 acquires the business customer sales information from the business customer link 130 (RESTful API) via network 150 and then formats and stores the acquired business customer sales information for further processing. The business customer sales data may include, for example, revenue by location by channel by hour in order to optimize the prioritization, ranking, and resolution of service-impacting incidents. The location revenue data processing block 111 is implemented as executable instructions programmed and residing within dynamic memory 190 and/or a non-transitory computer-readable (processor-readable) storage medium (hard disk 180) and executed by one or more processors 160.
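The formatting step performed by the location revenue data processing block 111 can be sketched as a roll-up of raw transactions into revenue by location, by channel, and by hour. The transaction field names below are assumptions for illustration:

```python
from collections import defaultdict

def aggregate_revenue(transactions):
    """Roll raw transactions up into revenue keyed by
    (location_id, channel, hour) for downstream forecasting.
    Each transaction is assumed to be a dict with 'location_id',
    'channel', an ISO-8601 'timestamp', and an 'amount'."""
    buckets = defaultdict(float)
    for t in transactions:
        hour = t["timestamp"][:13]  # truncate "YYYY-MM-DDTHH:MM" to the hour
        buckets[(t["location_id"], t["channel"], hour)] += t["amount"]
    return dict(buckets)

revenue = aggregate_revenue([
    {"location_id": "loc-001", "channel": "in-store",
     "timestamp": "2024-05-01T12:05", "amount": 10.0},
    {"location_id": "loc-001", "channel": "in-store",
     "timestamp": "2024-05-01T12:40", "amount": 5.0},
])
```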
The location revenue forecasting block (module) 112 processes the acquired business customer sales information to generate a location revenue forecast model (machine learning model). In an embodiment, the stored revenue forecast model may be a time-series forecast model developed in the Databricks platform using the open-source Prophet library which forecasts location revenue for a fixed future period of time (e.g., the next seven days) based on received location revenue data. Databricks is an open and unified platform for data engineering, data science, and data analytics. Because the location revenue forecast model is updated regularly based on the receipt of additional customer location data acquired via the business customer link, the implementation of system 100 is a full machine learning operations (MLOps) pipeline, ensuring that the machine learning model can accurately predict sales information for each business location by day and time of day, at least. The location revenue forecasting block 112 is implemented as executable instructions programmed and residing within dynamic memory 190 and/or a non-transitory computer-readable (processor-readable) storage medium (hard disk 180) and executed by one or more processors 160.
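The disclosed embodiment uses the Prophet library for this forecast. As a dependency-free stand-in that conveys only the idea, a toy seasonal baseline can predict a location's revenue for a given (weekday, hour) slot as the historical average for that slot; the real model additionally captures trend, holidays, and uncertainty:

```python
from collections import defaultdict

def fit_seasonal_baseline(history):
    """Toy stand-in for the Prophet time-series model.
    history: list of (weekday, hour, revenue) observations for one
    location. Returns mean revenue per (weekday, hour) slot."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for weekday, hour, revenue in history:
        totals[(weekday, hour)] += revenue
        counts[(weekday, hour)] += 1
    return {k: totals[k] / counts[k] for k in totals}

def predict(model, weekday, hour):
    """Forecast revenue for a future (weekday, hour) slot."""
    return model.get((weekday, hour), 0.0)

# Two Mondays of lunchtime data for a hypothetical coffee shop.
model = fit_seasonal_baseline([(0, 12, 400.0), (0, 12, 500.0)])
```

Retraining this model whenever new sales data arrives via the business customer link is what makes the pipeline an MLOps pipeline rather than a one-time fit.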
The incident scoring pipeline block (module) 113 includes heuristics for calculating the final priority and ranking for each work order. The heuristics for prioritization are triggered once an incident is created (e.g., when a work order is first generated). An initial priority is assigned based on equipment type and level of service required (e.g., how many POS terminals are not working, whether the equipment failure completely impacts revenue collection, etc.). Then a scoring script is run in which all pending work orders are compared. The business locations have a static ranking based on the income predicted for each hour of the day by the machine learning model. The scoring script applies weighted factors including overall business revenue, revenue per business day, peak revenue hours, and location. For example, the scoring script may apply a weight of 40% to revenue, 25% to business day, 25% to peak hours, and 10% to event venue (location) in order to re-prioritize each work order. Other factors, including available resources, may be used as part of the process. For incidents having the same priority score, rankings may also be assigned to each ticket based on expected location revenue. The incident scoring pipeline block 113 is implemented as executable instructions programmed and residing within dynamic memory 190 and/or a non-transitory computer-readable (processor-readable) storage medium (hard disk 180) and executed by one or more processors 160.
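The weighting step of the scoring script can be sketched as follows, using the example weights above (40% revenue, 25% business day, 25% peak hours, 10% event venue). The assumption that each factor is pre-normalized to [0, 1] is an illustration detail, not part of the disclosure:

```python
# Example weights from the disclosure; each factor is assumed
# normalized to [0, 1] before weighting.
WEIGHTS = {"revenue": 0.40, "business_day": 0.25,
           "peak_hours": 0.25, "venue": 0.10}

def priority_score(factors):
    """Weighted sum used to re-prioritize a pending work order."""
    return sum(WEIGHTS[name] * factors[name] for name in WEIGHTS)

# All else equal, a work order at a location inside its forecast peak
# revenue window scores higher than one outside it.
in_peak = priority_score({"revenue": 0.9, "business_day": 1.0,
                          "peak_hours": 1.0, "venue": 0.5})
off_peak = priority_score({"revenue": 0.9, "business_day": 1.0,
                           "peak_hours": 0.2, "venue": 0.5})
```

Ties in the resulting score would then be broken by ranking tickets on expected location revenue, as described above.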
The scoring script operation is shown in the flowchart 400 of
The work order management interface block (module) 114 allows information about pending work orders to be exchanged between cloud server 110 and the supplier interface 120. This information may include the original work order details supplied to cloud server 110 and updated work orders having new priority/ranking details which are supplied to supplier interface 120 for routing the service calls to the business locations. The updated priority/ranking information may be supplied together with justification for any changes. The work order management interface block 114 is implemented as executable instructions programmed and residing within dynamic memory 190 and/or a non-transitory computer-readable (processor-readable) storage medium (hard disk 180) and executed by one or more processors 160.
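An updated work order passed back through the work order management interface could be serialized as a small JSON message carrying the new priority, the ranking, and the justification for the change. The message shape below is a hypothetical sketch:

```python
import json

def build_update(work_order_id, new_priority, rank, justification):
    """Hypothetical update message sent from cloud server 110 to
    supplier interface 120 for a re-prioritized work order."""
    return json.dumps({
        "work_order_id": work_order_id,
        "priority": new_priority,
        "rank": rank,
        "justification": justification,
    })

msg = build_update("WO-1234", 1, 2,
                   "Location enters forecast peak revenue window at 12:00")
```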
The business metric store and dashboard block (module) 115 stores all of the calculated business metrics based on updated priorities and rankings, and provides a web-based dashboard 500 shown in
In operation, there are two parallel processes, both of which are implemented as executable instructions programmed and residing within dynamic memory 190 and/or a non-transitory computer-readable (processor-readable) storage medium (hard disk 180) and executed by one or more processors 160. The first is shown in the flowchart 200 in
The second on-going process is shown in flowchart 300 of
Although the present disclosure has been particularly shown and described with reference to the preferred embodiments and various aspects thereof, it will be appreciated by those of ordinary skill in the art that various changes and modifications may be made without departing from the spirit and scope of the disclosure. It is intended that the appended claims be interpreted as including the embodiments described herein, the alternatives mentioned above, and all equivalents thereto.