The present disclosure relates to predicting service delivery workforce requirements under business changes, and more particularly to predicting service delivery effort time and labor cost.
In a service delivery environment, service customers desire to understand the impact of business changes on service delivery labor cost. Examples of such changes include an increased number of users, architecture changes, new business applications, and new infrastructure/servers. In addition, from the service providers' perspective, it is also desirable to have a quantitative understanding of the impact of customer change requests on service agent workload.
According to an exemplary embodiment of the present disclosure, a method for predicting service delivery costs for a changed business requirement includes detecting, by a processor, an infrastructure change corresponding to said changed business requirement, deriving, by said processor, a service delivery workload change from said infrastructure change, and determining, by said processor, a service delivery cost based on said service delivery workload change.
According to an exemplary embodiment of the present disclosure, a method for predicting service delivery workloads includes generating a discrete event simulation model, and outputting a cost prediction based on the discrete event simulation model, wherein the cost prediction corresponds to a change in a service delivery process.
According to an exemplary embodiment of the present disclosure, methods are implemented in a computer program product for predicting service delivery workloads, the computer program product including a computer readable storage medium having computer readable program code embodied therewith, the computer readable program code being configured to perform the method steps described herein.
Preferred embodiments of the present disclosure will be described below in more detail, with reference to the accompanying drawings:
Described herein are exemplary model based approaches for service delivery workforce prediction under business changes. Some embodiments of the present disclosure use detailed business, IT (Information Technology), and service delivery mapping and modeling for predicting a cost impact.
Service delivery workforce prediction can be implemented in cases where, for example, a client wants to understand the impact of business changes on service delivery. These changes include a changing (e.g., increasing) number of users, architecture changes, new business applications, new infrastructure/servers, etc. Some embodiments of the present disclosure relate to quantitative what-if analytics for client decision-making and service delivery change management.
Embodiments of the present disclosure relate to methods for a service delivery workforce prediction solution. In some embodiments the prediction is based on tickets, where tickets are issued as part of a tracking system that manages and maintains one or more lists of issues, as needed by an organization delivering the service.
Referring to
At block 101, a queuing model based approach is applied at an IT-level (e.g., number of servers, number of requests, server utilization, request response time). The queuing model based approach models infrastructure as a system including a server receiving requests corresponding to tickets. The server provides some service to the requests. The requests arrive at the system to be served. If the server is idle, a request is served immediately. Otherwise, an arriving request joins a waiting line or queue. When the server has completed serving a request, the request departs. If there is at least one request waiting in the queue, a next request is dispatched to the server. The server in this model can represent anything that performs some function or service for a collection of requests.
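The single-server queuing behavior described above can be illustrated with a minimal sketch. The arrival and service rates below are hypothetical examples, not values from the disclosure; the sketch simply shows how requests wait when the server is busy and how response time grows with load.

```python
# Illustrative sketch of the single-server queuing model: requests arrive,
# wait in a FIFO queue if the server is busy, and are served one at a time.
import random

def simulate_queue(arrival_rate, service_rate, num_requests, seed=0):
    """Return the mean response time (wait + service) over num_requests."""
    rng = random.Random(seed)
    clock = 0.0            # arrival clock
    server_free_at = 0.0   # time at which the server next becomes idle
    total_response = 0.0
    for _ in range(num_requests):
        clock += rng.expovariate(arrival_rate)   # next request arrives
        start = max(clock, server_free_at)       # wait if the server is busy
        service = rng.expovariate(service_rate)
        server_free_at = start + service         # request departs
        total_response += server_free_at - clock
    return total_response / num_requests

# A lightly loaded server responds quickly; a heavily loaded one queues.
light = simulate_queue(arrival_rate=1.0, service_rate=5.0, num_requests=10000)
heavy = simulate_queue(arrival_rate=4.0, service_rate=5.0, num_requests=10000)
```

As utilization approaches one, the queue lengthens and mean response time rises sharply, which is the system-level effect the queuing model at block 101 captures.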
At block 102, a workload volume prediction module and a workload effort prediction module are applied.
According to an exemplary embodiment of the present disclosure, the workload volume prediction module predicts event/ticket volumes using a model of IT system configuration, load, and performance data. For example, the workload volume prediction correlates data including: (1) historical system loads, such as the amount, rate, and distribution of requests for a given resource (e.g., software or service); (2) historical system performance measurements (such as utilization and response time) associated with the system loads; (3) application/infrastructure configurations such as software/hardware configurations (e.g., CPU type, memory); and (4) historical system event data (e.g., alerts) and/or ticket data (e.g., incidents) associated with the operation of the IT infrastructure elements associated with the data above.
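The correlation underlying the workload volume prediction can be sketched with a simple least-squares fit of historical load against historical ticket volume, then extrapolated for a changed load. The utilization and ticket figures below are hypothetical, and a deployed module would correlate many more factors than this one-variable example.

```python
# Minimal sketch: correlate historical system load with ticket volume,
# then extrapolate ticket volume for a changed (increased) load.
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical historical (server utilization %, monthly ticket volume) pairs.
loads = [20, 40, 60, 80]
tickets = [110, 190, 310, 420]

a, b = fit_line(loads, tickets)
predicted = a + b * 90   # predicted monthly tickets if utilization rises to 90%
```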
According to an exemplary embodiment of the present disclosure, the workload effort prediction module further comprises a reconciliation method (see
In addition, at block 103, a discrete event simulation based approach is applied at the service delivery level (e.g., number of Service Agreements (SAs), number of tickets, effort time, SLA attainment), which further comprises a simplified and self-calibrated method for cost prediction (see
The architecture 100 of
Referring to
At block 203, the method includes global ticket classification. Referring to block 101 of
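The ticket classification at block 203 can be illustrated with a minimal rule-based sketch that maps free-text ticket descriptions to workload classes. The keyword-to-class rules and ticket texts below are hypothetical; a production classifier could equally use a trained model rather than keywords.

```python
# Illustrative sketch of ticket classification: free-text descriptions are
# mapped to workload classes, producing the per-class volumes consumed by
# the workload volume prediction. Rules below are hypothetical examples.
RULES = [
    ("disk", "storage"),
    ("password", "access"),
    ("cpu", "capacity"),
]

def classify(description, default="other"):
    """Return the class of the first matching keyword, else a default."""
    text = description.lower()
    for keyword, label in RULES:
        if keyword in text:
            return label
    return default

def per_class_volume(ticket_descriptions):
    """Count tickets per class."""
    counts = {}
    for description in ticket_descriptions:
        label = classify(description)
        counts[label] = counts.get(label, 0) + 1
    return counts

volumes = per_class_volume([
    "Disk nearly full on server A",
    "Password reset request",
    "High CPU utilization alarm",
    "Printer offline",
])
```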
Referring to
The client ticket classification (see block 205) is based on the client ticketing data 201 and outputs a client per-class ticket volume at block 207. The client per-class ticket volume is used to determine a client overall ticket effort reconciliation at 209, given a client overall ticket effort time 210 determined from the client claim data 202.
Referring to block 102 of
According to an exemplary embodiment of the present disclosure, the client overall ticket effort reconciliation at 209 can be used by the client 211 to determine the predicted or agreed-to effort time at block 212.
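One way to read the reconciliation at block 209 is as a consistency adjustment: initial per-class effort estimates are scaled so that the total effort they imply matches the overall ticket effort time derived from client claim data. The sketch below illustrates that idea; the class names, volumes, and hours are hypothetical, and the proportional scaling is one possible reconciliation rule among others.

```python
# Hedged sketch of effort reconciliation: scale per-class effort estimates
# so the implied total matches the overall effort from client claim data.
def reconcile(per_class_volume, per_class_effort, overall_effort):
    """Return per-class effort (hours/ticket) scaled to the claimed total."""
    implied = sum(per_class_volume[c] * per_class_effort[c]
                  for c in per_class_volume)
    scale = overall_effort / implied
    return {c: e * scale for c, e in per_class_effort.items()}

volumes = {"incident": 400, "change": 100}   # tickets per month (hypothetical)
efforts = {"incident": 0.5, "change": 2.0}   # initial hours per ticket
claimed_total = 500.0                        # hours from claim data

reconciled = reconcile(volumes, efforts, claimed_total)
```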
Referring to block 103 of
Referring now to
In some exemplary embodiments, the model of input parameters 601 includes ticket workload based on the workload volume changes 602 and the workload arrival patterns 603, effort time based on the client per-class effort time 604 and the complexity aggregation 605, and a shift schedule based on the pre-defined shift schedule patterns 606 and client input. The model of input parameters 601 can also include Service Level Agreements based on client input. The model of input parameters 601 can also include a non-ticket workload. The non-ticket workload can be calibrated by a model calibration (see block 607). The model calibration 607 can be determined based on current conditions 608 (e.g., a level of staffing) and an output of the model of input parameters 601, including a discrete event simulation model 609. Further, in some exemplary embodiments the discrete event simulation model 609 outputs a cost prediction 610.
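The self-calibration at block 607 can be sketched as solving for the non-ticket workload that makes the modeled staffing requirement reproduce the currently observed staffing level (block 608). All quantities below are hypothetical, and the linear staffing relation stands in for the full discrete event simulation model 609.

```python
# Simplified sketch of self-calibration: treat non-ticket workload as a free
# parameter and solve it so the model matches current staffing conditions.
def required_staff(ticket_hours, non_ticket_hours, hours_per_agent=140.0):
    """Agents needed to cover ticket and non-ticket work in one month."""
    return (ticket_hours + non_ticket_hours) / hours_per_agent

def calibrate_non_ticket(ticket_hours, observed_staff, hours_per_agent=140.0):
    """Solve for the non-ticket workload implied by current staffing."""
    return observed_staff * hours_per_agent - ticket_hours

ticket_hours = 500.0     # e.g., from the reconciled per-class effort model
observed_staff = 5.0     # current staffing level (hypothetical)
non_ticket = calibrate_non_ticket(ticket_hours, observed_staff)
# Once calibrated, the model reproduces current staffing and can then be
# re-run under changed ticket volumes to output a cost prediction.
```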
By way of recapitulation, according to an exemplary embodiment of the present disclosure, a method for predicting service delivery costs for a changed business requirement includes detecting, by a processor (see for example,
The methodologies of embodiments of the disclosure may be particularly well-suited for use in an electronic device or alternative system. Accordingly, embodiments of the present disclosure may take the form of an entirely hardware embodiment or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “processor,” “circuit,” “module,” or “system.” Furthermore, embodiments of the present disclosure may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code stored thereon.
Furthermore, it should be noted that any of the methods described herein can include an additional step of providing a system for reconciliation methodology for effort prediction (see for example,
Any combination of one or more computer usable or computer readable medium(s) may be utilized. The computer-usable or computer-readable medium may be a computer readable storage medium. A computer readable storage medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus or device.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Embodiments of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions.
These computer program instructions may be stored in a computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
For example,
In different applications, some of the components shown in
The processor 701 may be configured to perform one or more methodologies described in the present disclosure, illustrative embodiments of which are shown in the above figures and described herein. Embodiments of the present disclosure can be implemented as a routine that is stored in memory 702 and executed by the processor 701 to process the signal from the media 707. As such, the computer system is a general-purpose computer system that becomes a specific purpose computer system when executing the routine of the present disclosure.
Although the computer system described in
It is to be appreciated that the term “processor” as used herein is intended to include any processing device, such as, for example, one that includes a central processing unit (CPU) and/or other processing circuitry (e.g., digital signal processor (DSP), microprocessor, etc.). Additionally, it is to be understood that the term “processor” may refer to a multi-core processor that contains multiple processing cores in a processor or more than one processing device, and that various elements associated with a processing device may be shared by other processing devices.
The term “memory” as used herein is intended to include memory and other computer-readable media associated with a processor or CPU, such as, for example, random access memory (RAM), read only memory (ROM), fixed storage media (e.g., a hard drive), removable storage media (e.g., a diskette), flash memory, etc. Furthermore, the term “I/O circuitry” as used herein is intended to include, for example, one or more input devices (e.g., keyboard, mouse, etc.) for entering data to the processor, and/or one or more output devices (e.g., printer, monitor, etc.) for presenting the results associated with the processor.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Although illustrative embodiments of the present disclosure have been described herein with reference to the accompanying drawings, it is to be understood that the disclosure is not limited to those precise embodiments, and that various other changes and modifications may be made therein by one skilled in the art without departing from the scope of the appended claims.