The present disclosure generally relates to managing workflows, and more particularly to analyzing a workflow for optimization.
There is a need to establish methods that capture, frame, and manage the user design experience around intelligent workflows (IW). For example, the recruiting and human resources (HR) spaces are crowded with platforms and solutions that focus on job-skill matching, but the majority of the focus is on hiring. This is done with little understanding of the job and without any holistic business perspective of the roles that are the best fit for a candidate. In addition, role deprecation, outsourcing, and new directives from strategic leadership can influence the allocation of resources, including changes in skill sets that leave gaps in the existing workflow.
In accordance with one aspect of the present disclosure, a computer-implemented method is described for using artificial intelligence to assess a user's workflow on a task. In one embodiment, the computer-implemented method includes receiving data regarding a workflow of a user completing a task, and assessing the data to identify attributes of the workflow, which is expressed in a series of steps. The method may further include analyzing the steps of the workflow to identify areas of improvement. The method can further include generating augmentations from a plurality of technology fitments matched to the areas for improvement in the steps of the workflow, and sending the augmentations to a user device for communicating to the user. The method may further include receiving confirmation of fitment to business practices of a persona; and adjusting the augmentations responsive to the confirmation of fitment.
In another aspect, a system is described for using artificial intelligence to assess a user's workflow on a task. The system can include a hardware processor; and a memory that stores a computer program product. The computer program product, when executed by the hardware processor, causes the hardware processor to receive data regarding a workflow of a user completing a task, and assess the data to identify attributes of the workflow, which is expressed in a series of steps. The computer program product can also employ the hardware processor to analyze the steps of the workflow to identify areas of improvement. In some embodiments, the computer program product using the processor can generate augmentations from a plurality of technology fitments matched to the areas for improvement in the steps of the workflow, and send the augmentations to a user device for communicating to the user. The computer program product can also receive confirmation of fitment to business practices of a persona; and adjust the augmentations responsive to the confirmation of fitment.
In yet another aspect, a computer program product is described for using artificial intelligence to assess a user's workflow on a task. The computer program product can include a computer readable storage medium having computer readable program code embodied therewith. The program instructions are executable by a processor to cause the processor to receive data regarding a workflow of a user completing a task, and assess the data to identify attributes of the workflow, which is expressed in a series of steps. The computer program product can also employ the processor to analyze the steps of the workflow to identify areas of improvement. In some embodiments, the computer program product using the processor can generate augmentations from a plurality of technology fitments matched to the areas for improvement in the steps of the workflow, and send the augmentations to a user device for communicating to the user. The computer program product can also receive confirmation of fitment to business practices of a persona; and adjust the augmentations responsive to the confirmation of fitment.
The following description will provide details of preferred embodiments with reference to the following figures wherein:
The methods, systems, and computer program products described herein relate to frameworks for providing an intelligent workflow (IW) user experience (UX) that includes a human-centered element for providing an optimal and viable process. The methods, systems, and computer program products of the present disclosure can analyze an existing workflow, identify the existing gaps in the workflow, predict upcoming needs not met by the workflow, and support the ever-shifting nature of the workflow with an adaptive user experience.
The methods, systems and computer program products of the present disclosure can lock down the framework and associate the method with the workflow as a base. The three stages of insight and visibility that are considered can include 1) the existing work process, 2) assessing intelligent workflows (IW), and 3) assigning technologies, detecting technologies, and appropriating the best design system and user experience (UX) methods. The methods described herein can provide increased transparency into the various workflows, and constant machine learning (ML) monitoring of the new methods for optimization purposes. The methods, systems and computer program products that are described herein can provide a step that allows the human perspective to consider an optimized workflow that has been configured from preselected technologies applied to a baseline workflow using artificial intelligence, and to modify the optimized workflow taking that human perspective into account, providing not only an optimized workflow, but a workflow that is also acceptable to the specific needs of a client/user.
The methods, systems and computer program products that are described herein can establish frameworks for assessing an existing process, and based on the technology of the existing process can define a user experience that assists in establishing methods for how an intelligent workflow engages with a user, and uses that user interaction to continue to obtain higher levels of optimization from a human perspective. The methods, systems and computer program products are now described in greater detail with reference to
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer readable program instructions may be provided to a processor of a computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be accomplished as one step, executed concurrently, substantially concurrently, in a partially or wholly temporally overlapping manner, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
Through the intelligent workflow framework, the methods, systems and computer program products can facilitate a human-centered design engagement model that can increase attentiveness and responsiveness of the users. Intelligent workflows can be about the recognition of patterns, e.g., the dependencies and conditions that make up those patterns. The intelligent workflow framework can be directly correlated, and linked to an increase in optimization, productivity, and efficiency.
As will be described herein, triggers can be identified to alert the user to a new flow, or the possibility of a new workflow being developed. A method of sequencing can be transparently visualized by the system for increased understanding and accountability of automation. Situational analysis of the user experience and recommendations is provided that transparently demonstrates how the intelligent workflow augments and redistributes the existing processes with new streamlined experiences. Through the user interacting with the system, patterns are identified to inform the user of a recommended technology fitment, such as blockchain, hybrid cloud, Internet of Things (IoT) applications, applications for artificial intelligence (AI), applications for edge computing, applications for 5G computing, etc., and to communicate the effect on the user experience workflow including the recommended technology fitment, e.g., before, during, and after when the update is occurring. An experience template output is provided that can be flexible and adaptable according to the persona, area of focus, and sector, as well as other defining variables.
Referring to block 2, the method may continue with identifying the need for the intelligent workflow (IW). This step defines the reasons why the workflow is needed, for example, the objective of the workflow. In the example in which the sector is auditing, the need for the intelligent workflow (IW) can be to visualize regulation, fast access to information, fast access to systems and information flow, visual signoff, onsite applications at the point of the audit, as well as timely ease of use. In the example in which the sector is manufacturing or industry, the need for the intelligent workflow (IW) can include similar objectives, such as visualizing regulation, fast access to information and systems, visual signoff, and onsite applications. In the example in which the sector is emergency services, the need for the intelligent workflow can be to provide for AI analysis based on multiple inputs in real time, sequencing events, and insights to view situations from varying perspectives. The system 200 has access to a corpus of data on workflows 207, which may also have similar needs.
Blocks 1 and 2 are both examples of the system receiving data/information regarding a workflow of a user performing a task.
Referring to block 15 of
In one example of the embodiment depicted in
The feedback surveys 15b are another input of data for use in providing an optimized workflow. The feedback surveys 15b can be quizzes, blogs, recorded comments, or interviews based upon an existing workflow. These elements can all be referred to as data that is collected from capture feedback artificial intelligence. In some instances, in which the collected data is text, natural language processing may be employed to extract relevant data from the inputs. Recorded content, such as interviews, can be analyzed for relevant input data using voice detection artificial intelligence-based technology.
Social media may be a source of input data that can be extracted using a web crawler and natural language processing from web-based platforms. Select employer groups (SEG) and events with particular invite lists may also be used to provide inputs.
The input data at block 15 can also include activities 15d, which also include personas, processes, planning, and related timing for the processes and planning. These activities 15d may be entered as input for the different elements that can be performed as the workflow. Personas are inputs that provide information on the people in the organization that is using the workflow. Personas include information on the identity of the people, their personal demographics, their normal tasks within the workflow, etc. A persona can also include information on what motivates them and what frustrates them. Persona information can be provided by forums, direct observation, and interviews.
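A persona input of this kind can be represented as a simple structured record. The sketch below is illustrative only; the field names, class name, and sample values are assumptions and are not part of the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """Illustrative persona record; fields mirror the persona inputs described above."""
    name: str
    role: str                                           # function in the organization
    demographics: dict = field(default_factory=dict)    # personal demographics
    normal_tasks: list = field(default_factory=list)    # normal tasks within the workflow
    motivations: list = field(default_factory=list)     # what motivates the persona
    frustrations: list = field(default_factory=list)    # what frustrates the persona

# Hypothetical example persona assembled from forums, observation, or interviews.
analyst = Persona(
    name="A. Rivera",
    role="claims analyst",
    normal_tasks=["review claim", "post explanation of benefits"],
    frustrations=["manual re-entry of claim data"],
)
print(analyst.role)
```

Records of this shape can then be clustered or matched against other personas, as described for the machine learning system below.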
The input data at block 15 can also include Enterprise Design Thinking (EDT) 15e. Enterprise Design Thinking includes sessions with the users to capture the objectives of the existing workflow, the steps of the existing workflows, and whether the personas for the organization/user are following the procedures of the workflows. The EDT can be particularly helpful in identifying pain points.
In some embodiments, Enterprise Design Thinking 15e starts by bringing together a series of design techniques, such as personas, empathy maps, as-is scenarios, design ideation, to-be scenarios, wireframe sketches, hypothesis-driven design, and minimum viable product (MVP) definition, and adds three principles titled hills, playbacks, and sponsor users. Hills are rooted in user needs and desires. Each hill is expressed as an aspirational end state for users that is motivated by market understanding. Hills define the mission and scope of a release and serve to focus the design and development work on desired, measurable outcomes. Playbacks provide input on the user value in the existing workflow of a project. Sponsor users are the people operating in the existing workflow who provide input.
The input at blocks 1 and 2 of
As noted, the inputs are employed by an artificial intelligence (AI) system.
Referring back to the inputs at blocks 1, 2 and 15, empathy maps can define how one or more personas feel regarding an existing workflow, and what they do with the existing workflow. Empathy maps can identify the major pain points of the personas.
“Current state journeys” can identify existing activities and tasks per user, e.g., per persona. A “current state journey” data entry for existing activities and tasks per user can be ingested by the system, e.g., ingested by the machine learning system 203, for providing a user experience (UX) framework 200, and tagged as activities, and can help to cluster the user as a persona with other personas based upon interactions.
The “to be journeys” define the experiences the personas would like to have with key insights on removing bottlenecks, e.g., removing pain points, in workflows. From this input, with the perception provided by the empathy maps, the AI intelligent system 203 for the system for providing a user experience (UX) framework 200 can create a “problem-need-intelligent outcome” resolution pattern.
In some embodiments, the qualitative types of inputs can be “current state journeys” and “pain points”. These types of inputs can be extracted from user interviews. User interviews can be provided by the feedback surveys 15b of the feedback input 15 for
The artificial intelligence algorithm may be provided by the rules engine 204 depicted in
Pain points can also be extracted from user interviews. Pain points can include patterns where fragmentation in a workflow occurs. Pain points can show a failure in collaboration. Pain points can also show a redundancy in activities. Data on pain points can be ingested by the AI algorithm, e.g., provided by the rules engine 204, of the system for providing a user experience (UX) framework 200 so that potential areas can be identified to streamline the workflow in a manner acceptable to the personas (users).
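One minimal sketch of flagging candidate pain points in interview text is shown below. The indicator phrases and the sentence-level scoring are assumptions chosen for illustration; the disclosure contemplates natural language processing over interview data, and a production system would use a trained model rather than keyword matching.

```python
import re

# Assumed indicator phrases; a real system would learn these from labeled interviews.
PAIN_INDICATORS = ["bottleneck", "redundant", "duplicate", "waiting on", "manual", "re-enter"]

def find_pain_points(interview_text: str) -> list[str]:
    """Return the sentences that contain an assumed pain-point indicator."""
    sentences = re.split(r"(?<=[.!?])\s+", interview_text)
    return [s for s in sentences
            if any(ind in s.lower() for ind in PAIN_INDICATORS)]

# Hypothetical interview transcript.
transcript = ("I like the dashboard. We keep waiting on approvals from two teams. "
              "Then I manually re-enter the same claim data.")
for s in find_pain_points(transcript):
    print(s)
```

Sentences flagged this way could be tagged as fragmentation, collaboration failure, or redundancy patterns before being ingested by the rules engine 204.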
In some embodiments, the qualitative research data for the inputs can result from observational research, e.g., over-the-shoulder observation of progress. This can include visual tagging of activities.
Quantitative research types for inputs can include surveys, documentation, and performance reports, which also provide insights/features for the artificial intelligence algorithm of the system for providing a user experience (UX) framework 200. Documents 15a and procedures 15b are identified as inputs for the method flow described with reference to
Performance reports can also provide data inputs. For example, systems in use in the workflow may have associated performance reports indicating service issues, incidents that result in service disruptions, and reports on overall system performance. This data can be tagged to be associated with the activities in the workflow that it impacts.
The type of research data that can serve as input to the system 200 can also include planning and execution data. This data can include time sheet recordings and project management reports. The time sheet recordings can be associated with tasks and personas in the workflow. The management reports can provide data for recurring and in-development activities that can impact the workflow.
Additional research data may include collaboration-type data. Collaboration-type data can include data from a text-based messaging system or video conferencing recordings. The text-based messaging system can provide common team threads, including conversations, to detect potential bottlenecks, risks, sentiments, or pain points. Similar data can be provided from transcripts of video conferencing recordings.
Analysis of the data can begin with establishing an intelligent workflow taxonomy for training the artificial intelligence model.
For example, an activity list is developed. Each activity in a workflow may include a sequence of steps. Each step in a workflow can be associated with a person. The person can be identified by their function, e.g., a claims analyst or an accountant. The person can also be identified as a function, such as being a beneficiary. In some examples, each step in the workflow has a Boolean attribute: sequential/parallel. A Boolean attribute is an attribute that can only be true or false. Each step also has a Boolean attribute marking one path of a decision. The timeline, e.g., the duration of the activity in the activity list, is also recorded for consideration.
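The taxonomy elements above (activities composed of steps, the persona performing each step, the Boolean sequential/parallel and decision-path attributes, and the recorded duration) can be sketched as plain data structures. All class and field names below are illustrative assumptions, not terms of the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    description: str
    persona: str                 # person/function performing the step, e.g., "accountant"
    parallel: bool = False       # Boolean attribute: False = sequential, True = parallel
    decision_path: bool = False  # Boolean attribute marking one path of a decision

@dataclass
class Activity:
    name: str
    steps: list = field(default_factory=list)
    duration_minutes: float = 0.0  # timeline/duration recorded for consideration

# Hypothetical activity from a claims workflow.
activity = Activity(
    name="process claim",
    steps=[
        Step("receive claim", persona="claims analyst"),
        Step("verify beneficiary", persona="beneficiary", parallel=True),
    ],
    duration_minutes=45.0,
)
print(len(activity.steps))
```

Instances of this shape could populate the activity list that the classifiers described below are trained on.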
The taxonomy for the intelligent workflow training also includes personas. Each persona has a set of steps.
The taxonomy for training the artificial intelligence model (e.g., rules engine 204) for intelligent workflow also includes pathways. For example, each intelligent workflow has a list of pathways marked as succeeded or blocked depending upon whether the intelligent workflow is fully completed.
The intelligent workflow taxonomy for training the artificial intelligence model can include industries.
The intelligent workflow taxonomy can also include use cases. The use case is why the workflow is being performed. One intelligent workflow can fit multiple use cases.
In some embodiments, the intelligent workflow taxonomy may also include a Boolean attribute marking whether the workflow is a standalone process, or whether the workflow is a component of a larger sequence.
In some embodiments, the intelligent workflow taxonomy can also include environmental attributes, such as weather being hazardous.
In some embodiments, Named Entity Recognition (NER), which can employ neural networks (NN), can automatically extract features from text that is configured using the above-described taxonomy for the purposes of training the artificial intelligence model.
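The disclosure leaves the NER implementation open; as a stand-in for a trained neural NER model, the sketch below uses a simple dictionary lookup keyed on the taxonomy classes described above. The labels and term lists are assumptions chosen for illustration.

```python
# Stand-in for a trained neural NER model: dictionary lookup over taxonomy classes.
# Labels and terms are illustrative assumptions, not part of the disclosure.
TAXONOMY_TERMS = {
    "PERSONA": {"claims analyst", "accountant", "beneficiary"},
    "TECHNOLOGY": {"blockchain", "hybrid cloud", "iot", "5g", "edge computing"},
    "INDUSTRY": {"auditing", "manufacturing", "emergency services"},
}

def extract_entities(text: str) -> list:
    """Return sorted (label, term) pairs found in the text, matched case-insensitively."""
    lowered = text.lower()
    hits = []
    for label, terms in TAXONOMY_TERMS.items():
        for term in terms:
            if term in lowered:
                hits.append((label, term))
    return sorted(hits)

doc = "The claims analyst in auditing wants blockchain signoff on 5G devices."
print(extract_entities(doc))
```

A trained NER model would replace the dictionary lookup, but the output shape — taxonomy-labeled entities extracted from free text — is the feature set the training step consumes.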
In some embodiments, following the establishment of the taxonomy for training, the method may continue with employing the taxonomy to train models for the intelligent workflow. More particularly, a number of classifiers (provided in the rules engine 204) may be trained for the artificial intelligence model. In some embodiments, a corpus of data on historical workflows may be employed to train the artificial intelligence model using the taxonomy of terms. For example, training may include the system ingesting the existing intelligent workflow with features segmented by taxonomy for identifying classes. For example, for each intelligent workflow, the segmented features of the existing workflow are used to train individual or weighted classifiers selected from an input activity list classifier, an input persona and activity list classifier, an input business case/industry classifier, and an input environment data classifier. Training the classifier (rules engine 204) of the artificial intelligence model 203 also includes entity recognition for technology.
In some embodiments, the input list of activities can include activities such as sending or posting an explanation of benefits (EOB), receiving a payment notification, a second payment confirmation, and creating an explanation in response to an inquiry. It is noted that the aforementioned list is provided for illustrative purposes only, and is not intended to be limiting. Activities can be used to train an activity identifier classifier, in which the input can be a list of steps in a day-in-the-life or client process, and the output can be a set of activities and their associated workflow.
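A toy stand-in for the activity identifier classifier is sketched below: it maps each step in a day-in-the-life list to the activity whose training phrases share the most words with it. The training phrases and labels are hypothetical; a real implementation would be a trained machine learning classifier rather than word overlap.

```python
from collections import Counter

# Toy training data mapping step phrases to activity labels; illustrative only.
TRAINING = [
    ("post explanation of benefits", "send_EOB"),
    ("mail explanation of benefits to member", "send_EOB"),
    ("receive payment notification", "payment_notification"),
    ("confirm second payment", "payment_confirmation"),
    ("draft explanation for member inquiry", "inquiry_response"),
]

def classify_step(step: str) -> str:
    """Assign the activity whose training phrases share the most words with the step."""
    words = set(step.lower().split())
    scores = Counter()
    for phrase, label in TRAINING:
        scores[label] += len(words & set(phrase.lower().split()))
    label, score = scores.most_common(1)[0]
    return label if score > 0 else "unknown"

# Input: a list of steps in a day-in-the-life process; output: a set of activities.
day_in_the_life = ["post explanation of benefits", "receive payment notification"]
activities = {classify_step(s) for s in day_in_the_life}
print(activities)
```

The same input/output contract — step list in, activity set out — holds for the trained classifier described in the disclosure.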
In some embodiments, the input list of personas and activities can train a classifier that takes as input specific actions performed by persons and personas, and learns an association between the activities and the personas. The persona and activity classifier can take as input the raw data from the mining steps, e.g., blocks 1 and 2 of
In some embodiments, the input business case/industry classifier can be trained based on the intelligent workflow features of an intelligent workflow tagged for an industry. The classifier can be used to ingest raw bundled client data and associate business cases and industry cases. The algorithm of the artificial intelligence model can ingest input features and output associated technology, e.g., 5G mobile, Internet of Things (IoT), blockchain memory, etc., for a given process discovery. The classifier for environmental data is trained similarly to the input business case/industry classifier.
In some embodiments, named entity recognition (NER) is used to train and extract entities associated with specific technology patterns. In a following step, activities and steps are classified to the technology patterns. The classifier (e.g., provided by the rules engine 204) ingests client steps and business processes, performs named entity recognition (NER) to extract the technologies being used, and maps the extracted technologies to the activities, etc.
The intelligent workflow artificial intelligence models 203 can employ neural networks (NN) that use named entity recognition (NER) to train the classifier 204 to be able to propose, for a given input, a set of associated intelligent workflows for a given process discovery. The inputs from blocks 1 and 2 of
At this point of the present disclosure, the classifier of the artificial intelligence system has been trained. Machine learning systems 203 can be used to predict outcomes based on input data: given a set of input data, a machine learning system can predict an outcome.
In some embodiments, the machine learning system includes an artificial neural network (ANN).
Referring now to
ANNs demonstrate an ability to derive meaning from complicated or imprecise data and can be used to extract patterns and detect trends that are too complex to be detected by humans or other computer-based systems. The structure of a neural network is known generally to have input neurons 302 that provide information to one or more “hidden” neurons 304. Connections 308 between the input neurons 302 and hidden neurons 304 are weighted, and these weighted inputs are then processed by the hidden neurons 304 according to some function in the hidden neurons 304. There can be any number of layers of hidden neurons 304, as well as neurons that perform different functions. There exist different neural network structures as well, such as a convolutional neural network, a maxout network, etc., which may vary according to the structure and function of the hidden layers, as well as the pattern of weights between the layers. The individual layers may perform particular functions, and may include convolutional layers, pooling layers, fully connected layers, softmax layers, or any other appropriate type of neural network layer. Finally, a set of output neurons 306 accepts and processes weighted input from the last set of hidden neurons 304.
This represents a “feed-forward” computation, where information propagates from input neurons 302 to the output neurons 306. Upon completion of a feed-forward computation, the output is compared to a desired output available from training data. The error relative to the training data is then processed in a “backpropagation” computation, where the hidden neurons 304 and input neurons 302 receive information regarding the error propagating backward from the output neurons 306. Once the backward error propagation has been completed, weight updates are performed, with the weighted connections 308 being updated to account for the received error. It should be noted that the three modes of operation, feed forward, back propagation, and weight update, do not overlap with one another. This represents just one variety of ANN computation; any appropriate form of computation may be used instead.
To train an ANN, training data can be divided into a training set and a testing set. The training data includes pairs of an input and a known output. During training, the inputs of the training set are fed into the ANN using feed-forward propagation. After each input, the output of the ANN is compared to the respective known output. Discrepancies between the output of the ANN and the known output that is associated with that particular input are used to generate an error value, which may be backpropagated through the ANN, after which the weight values of the ANN may be updated. This process continues until the pairs in the training set are exhausted.
After the training has been completed, the ANN may be tested against the testing set, to ensure that the training has not resulted in overfitting. If the ANN can generalize to new inputs, beyond those which it was already trained on, then it is ready for use. If the ANN does not accurately reproduce the known outputs of the testing set, then additional training data may be needed, or hyperparameters of the ANN may need to be adjusted.
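The feed-forward, backpropagation, and weight-update modes described above, together with the training/testing split, can be sketched in a minimal pure-Python network. The 2-2-1 layer sizes, sigmoid activations, learning rate, epoch count, and the toy OR-style task are illustrative choices, not details from the disclosure.

```python
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Tiny 2-2-1 network: weighted connections from input to hidden and hidden to output.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # input -> hidden
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]                      # hidden -> output
b2 = 0.0

train_set = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1)]  # pairs of input and known output
test_set = [([1, 1], 1)]                              # held-out pair for testing

def forward(x):
    """Feed-forward mode: propagate from input neurons to the output neuron."""
    h = [sigmoid(sum(w * xi for w, xi in zip(W1[j], x)) + b1[j]) for j in range(2)]
    y = sigmoid(sum(w * hj for w, hj in zip(W2, h)) + b2)
    return h, y

def loss(data):
    """Mean squared error against the known outputs."""
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

before = loss(train_set)
lr = 0.5
for _ in range(2000):
    for x, t in train_set:
        h, y = forward(x)
        dy = 2 * (y - t) * y * (1 - y)        # error at the output neuron
        for j in range(2):
            dh = dy * W2[j] * h[j] * (1 - h[j])  # backpropagation to hidden neuron j
            W2[j] -= lr * dy * h[j]              # weight update, hidden -> output
            for i in range(2):
                W1[j][i] -= lr * dh * x[i]       # weight update, input -> hidden
            b1[j] -= lr * dh
        b2 -= lr * dy

print(round(before, 3), round(loss(train_set), 3), round(loss(test_set), 3))
```

The final evaluation on the held-out `test_set` mirrors the overfitting check described above: a network that only memorized the training pairs would show a large held-out error.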
ANNs may be implemented in software, hardware, or a combination of the two. For example, each weight 308 may be characterized as a weight value that is stored in a computer memory, and the activation function of each neuron may be implemented by a computer processor. The weight value may store any appropriate data value, such as a real number, a binary value, or a value selected from a fixed number of possibilities, that is multiplied against the relevant neuron outputs. Alternatively, the weights 308 may be implemented as resistive processing units (RPUs), generating a predictable current output when an input voltage is applied in accordance with a settable resistance.
The trained artificial intelligence model can provide intelligent workflow discovery. For example, when the input is empathy mapping, this information can be used for risk association and pain point identification.
For example, when the client input is a “current state journey”, the activity list classifier can help to decompose and find gaps in a current flow of an intelligent workflow being analyzed. Additionally, when the input is the current state journey, the persona classifier can match existing client personas with proposed intelligent workflow steps.
When the input is a “to be journey”, the activity list classifier can provide outputs for a future state, which is a transition from the current state to a desired intelligent workflow state, e.g., by combining a series of intelligent workflows. Persona classifiers and industry/business case/environment classifiers may also be applied to the input for the “to be journey”.
The intelligent workflow (IW) process discovery can also include analysis of performance and user feedback on the artificial intelligence algorithms and suggested improvements.
The intelligent workflow (IW) process discovery can also include analyzing the input from documentation, i.e., qualitative inputs, for the activity list classifier, persona classifier, technology classifier, as well as industry/business case/environmental classifiers.
Referring to
The system initiates attribute collection, which includes collection, association, and alignment. More specifically, in some embodiments, artificial intelligence can be used to assign confidence levels to technologies of the current process. Additionally, a user interface is provided that allows the user to override the confidence levels being set for the existing workflow.
The current state assessment 3 may employ the trained artificial intelligence model that was described above with reference to blocks 1 and 2 of
Referring to
Referring to block 18, the output of the artificial intelligence model for intelligent workflows (IW) may include proposed ranked intelligent workflows (IW) with augmented steps and activity list.
Referring back to
Referring to
Still referring to
Referring back to
In some embodiments, a user experience (UX) design is rendered and tested. The user design is based on the type of technology, the user engagement model, and persona archetypes. In some embodiments, an experience template is rendered to support the technology and modality.
The user experience (UX) is tested iteratively. For example, referring to block 24 of
Referring back to
When an intelligent workflow has been approved, the workflow may be added to the learning corpus. In this manner, with each application of the system the learning corpus expands and provides for more accurate enhancements of workflows in following applications.
Referring to block 8 of the process flow depicted in
The system 200 applies artificial intelligence to assess a user's workflow on a task. The system can include a hardware processor 212; and a memory that stores a computer program product. The computer program product, when executed by the hardware processor, causes the hardware processor to receive data through the input interface 201 regarding a workflow of a user completing a task, and assess the data to identify attributes of the workflow, which is expressed in a series of steps. The computer program product can also employ the hardware processor to analyze the steps of the workflow to identify areas of improvement. This can be done using the learning corpus 207 and the machine learning model 203. In some embodiments, the computer program product using the processor can generate augmentations from a plurality of technology fitments matched to the areas for improvement in the steps of the workflow, and send the augmentations to a user device for communicating to the user. The computer program product can also receive confirmation of fitment to business practices of a persona through the template generator 211; and adjust the augmentations responsive to the confirmation of fitment.
The processing system 400 includes at least one processor (CPU) 204 operatively coupled to other components via a system bus 102. A cache 106, a Read Only Memory (ROM) 208, a Random Access Memory (RAM) 110, an input/output (I/O) adapter 120, a sound adapter 130, a network adapter 140, a user interface adapter 150, and a display adapter 160, are operatively coupled to the system bus 102. The bus 102 interconnects a plurality of components, as will be described herein.
The processing system 400 depicted in
A speaker 132 is operatively coupled to system bus 102 by the sound adapter 130. A transceiver 142 is operatively coupled to system bus 102 by network adapter 140. A display device 162 is operatively coupled to system bus 102 by display adapter 160.
A first user input device 152, a second user input device 154, and a third user input device 156 are operatively coupled to system bus 102 by user interface adapter 150. The user input devices 152, 154, and 156 can be any of a keyboard, a mouse, a keypad, an image capture device, a motion sensing device, a microphone, a device incorporating the functionality of at least two of the preceding devices, and so forth. Of course, other types of input devices can also be used, while maintaining the spirit of the present invention. The user input devices 152, 154, and 156 can be the same type of user input device or different types of user input devices. The user input devices 152, 154, and 156 are used to input and output information to and from system 400, which can include the system 200.
Of course, the processing system 400 may also include other elements (not shown), as readily contemplated by one of skill in the art, as well as omit certain elements. For example, various other input devices and/or output devices can be included in processing system 400, depending upon the particular implementation of the same, as readily understood by one of ordinary skill in the art. For example, various types of wireless and/or wired input and/or output devices can be used. Moreover, additional processors, controllers, memories, and so forth, in various configurations can also be utilized as readily appreciated by one of ordinary skill in the art. These and other variations of the processing system 400 are readily contemplated by one of ordinary skill in the art given the teachings of the present invention provided herein.
While
The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing apparatus receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, Spark, the R language, or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
In one embodiment, the present disclosure provides a non-transitory computer readable storage medium that includes a computer readable program for using artificial intelligence to assess a user's workflow on a task. The computer program product can include a computer readable storage medium having computer readable program code embodied therewith. The program instructions are executable by a processor to cause the processor to receive data regarding a workflow of a user completing a task, and to assess the data to identify attributes of the workflow, which is expressed as a series of steps. The computer program product can also employ the hardware processor to analyze the steps of the workflow to identify areas for improvement. In some embodiments, the computer program product, using the processor, can generate augmentations from a plurality of technology fitments matched to the areas for improvement in the steps of the workflow, and send the augmentations to a user device for communicating to the user. The computer program product can also receive confirmation of fitment to business practices of the persona, and adjust the augmentations responsive to the confirmation of fitment.
It is to be understood that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein are not limited to a cloud computing environment. Rather, embodiments of the present invention are capable of being implemented in conjunction with any other type of computing environment (e.g., Internet of Things (IoT)) now known or later developed. Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models. Characteristics are as follows:
On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider.
Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).
Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter).
Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.
Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
Service Models are as follows:
Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.
Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).
Deployment Models are as follows:
Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises.
Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises.
Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.
Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).
A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure that includes a network of interconnected nodes.
Referring now to
Referring now to
computing environment (see
Hardware and software layer 60 includes hardware and software components. Examples of hardware components include: mainframes 61; RISC (Reduced Instruction Set Computer) architecture based servers 62; servers 63; blade servers 64; storage devices 65; and networks and networking components 66. In some embodiments, software components include network application server software 67 and database software 68.
Virtualization layer 70 provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers 71; virtual storage 72; virtual networks 73, including virtual private networks; virtual applications and operating systems 74; and virtual clients 75.
In one example, management layer 80 may provide the functions described below. Resource provisioning 81 provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and Pricing 82 provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may include application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal 83 provides access to the cloud computing environment for consumers and system administrators.
Service level management 84 provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment 85 provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA.
Workloads layer 89 provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation 91; software development and lifecycle management 92; virtual classroom education delivery 93; data analytics processing 94; transaction processing 95; and an intelligent workflow system 96 in hardware devices in accordance with
While embodiments of the present invention have been described herein for purposes of illustration, many modifications and changes will become apparent to those skilled in the art. Accordingly, the appended claims are intended to encompass all such modifications and changes as fall within the true spirit and scope of this invention.