Method and apparatus for automated quality management of communication records

Information

  • Patent Grant
  • Patent Number
    11,677,875
  • Date Filed
    Monday, August 16, 2021
  • Date Issued
    Tuesday, June 13, 2023
Abstract
Disclosed implementations use automated transcription, intent detection, and an AI model to evaluate interactions between an agent and a customer within a call center environment. The evaluation flow used for manual evaluations is leveraged so that evaluators can correct the AI evaluations when appropriate. Based on such corrections, the AI model can be retrained to accommodate specifics of the business and contact center, resulting in more confidence in the AI model over time.
Description
BACKGROUND

Contact centers, also referred to as “call centers”, in which agents are assigned to queues based on skills and customer requirements are well known. FIG. 1 is an example system architecture 100 of a cloud-based contact center infrastructure solution. Customers 110 interact with a contact center 150 using voice, email, text, and web interfaces to communicate with the agents 120 through a network 130 and one or more text or multimedia channels. The platform that controls the operation of the contact center 150, including the routing and handling of communications between customers 110 and agents 120 for the contact center 150, is referred to herein as the contact routing system 153. The contact routing system 153 could be, for example, a contact center as a service (CCaaS) system, an automated call distributor (ACD) system, or a case system.


The agents 120 may be remote from the contact center 150 and handle communications (also referred to as “interactions” herein) with customers 110 on behalf of an enterprise. The agents 120 may utilize devices such as, but not limited to, workstations, desktop computers, laptops, telephones, mobile smartphones, and/or tablets. Similarly, customers 110 may communicate using a plurality of devices including, but not limited to, a telephone, a mobile smartphone, a tablet, a laptop, or a desktop computer. For example, telephone communication may traverse networks such as a public switched telephone network (PSTN), Voice over Internet Protocol (VoIP) telephony (via the Internet), a Wide Area Network (WAN), or a Local Area Network (LAN). The network types are provided by way of example and are not intended to limit the types of networks used for communications.


The agents 120 may be assigned to one or more queues representing call categories and/or agent skill levels. The agents 120 assigned to a queue may handle communications that are placed in the queue by the contact routing system 153. For example, there may be queues associated with a language (e.g., English or Chinese), topic (e.g., technical support or billing), or a particular country of origin. When a communication is received by the contact routing system 153, the communication may be placed in a relevant queue, and one of the agents 120 associated with the relevant queue may handle the communication.


The agents 120 of a contact center 150 may be further organized into one or more teams. Depending on the embodiment, the agents 120 may be organized into teams based on a variety of factors including, but not limited to, skills, location, experience, assigned queues, associated or assigned customers 110, and shift. Other factors may be used to assign agents 120 to teams.


Entities that employ workers such as agents 120 typically use a Quality Management (QM) system to ensure that the agents 120 are providing customers 110 with a high-quality product or service. QM systems do this by determining when and how to evaluate, train, and coach each agent 120 based on seniority, team membership, or associated skills as well as quality of performance while handling customer 110 interactions. QM systems may further generate and provide surveys or questionnaires to customers 110 to ensure that they are satisfied with the service being provided by the contact center 150.


Historically, QM forms are built by adding multiple choice questions where different choices are worth different point values. The forms are then filled out manually by evaluators based on real-time or recorded monitoring of agent interactions with customers. For example, a form for evaluating support interactions might start with a question where the quality of the greeting is evaluated. A good greeting, where the agent introduced themselves and inquired about the problem, might be worth 10 points and a poor greeting might be worth 0, with mediocre greetings falling somewhere in between on the 0-10 scale. There might be three more questions about problem solving, displaying empathy, and closing. Forms can also be associated with one or more queues (also sometimes known as “ring groups”). As noted above, a queue can represent a type of work that the support center does and/or agent skills. For example, a call center might have a tier 1 voice support queue, a tier 2 voice support queue, an inbound sales queue, an outbound sales queue, and a webchat support queue. With traditional quality management based on multiple choice forms filled out by evaluators, it is time prohibitive to evaluate every interaction for quality and compliance. Instead, techniques like sampling are used, where a small percentage of each agent's interactions are monitored by an evaluator each month. This results in a less than optimal quality management process because samples are, of course, not always fully representative of the entire data set.


SUMMARY

Disclosed implementations leverage known methods of speech recognition and intent analysis to make corrections to inputs to be fed into an Artificial Intelligence (AI) model used for quality management scoring of communications. An AI model can be used to detect the intent of utterances that are passed to it. The AI model can be trained based on “example utterances” and can then compare the passed utterances, from agent/customer interactions, to the training data to determine intent with a specified level of confidence (e.g., expressed as a score). Intent determinations with a low confidence score can be directed to a human for further review. A first aspect of the invention is a method for assessing communications between a user and an agent in a call center, the method comprising: extracting text from a plurality of communications between a call center user and a call center agent to thereby create a communication record; and, for each of the plurality of communications, assessing the corresponding text of a communication record by applying an AI assessment model to obtain an intent assessment of one or more aspects of the communication, wherein the AI assessment model is developed by processing a set of initial training data and supplemental training data, and wherein the supplemental training data is based on reviewing manual corrections to previous assessments by the assessment model.





BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing summary, as well as the following detailed description of the invention, will be better understood when read in conjunction with the appended drawings. For the purpose of illustrating the invention, there are shown in the drawings various illustrative embodiments. It should be understood, however, that the invention is not limited to the precise arrangements and instrumentalities shown. In the drawings:



FIG. 1 is a schematic representation of a call center architecture.



FIG. 2 is a schematic representation of a computer system for quality management in accordance with disclosed implementations.



FIG. 3 is an example of a QM form creation user interface in accordance with disclosed implementations.



FIG. 4 is an example of a user interface showing choices detected in interactions based on the questions in an evaluation form in accordance with disclosed implementations.



FIG. 5 is an example of an evaluations page user interface in accordance with disclosed implementations.



FIG. 6 is an example of an agreements review page user interface in accordance with disclosed implementations.



FIG. 7 is a flowchart of a method for quality management of agent interactions in accordance with disclosed implementations.





DETAILED DESCRIPTION

Certain terminology is used in the following description for convenience only and is not limiting. Unless specifically set forth herein, the terms “a,” “an” and “the” are not limited to one element but instead should be read as meaning “at least one.” The terminology includes the words noted above, derivatives thereof and words of similar import.


Disclosed implementations overcome the above-identified disadvantages of the prior art by adapting contact center QM analysis to artificial intelligence systems. Disclosed implementations can leverage known methods of speech recognition and intent analysis to make corrections to inputs to be fed into an Artificial Intelligence (AI) model to be used for quality management scoring of communications. Matches with a low confidence score can be directed to a human for further review. Evaluation forms that are similar to forms used in conventional manual systems can be used. Retraining of the AI model is accomplished through individual corrections in an ongoing manner, as described below, as opposed to providing a new set of training data.


Disclosed implementations use automated transcription, intent detection, and an AI model to evaluate every interaction (i.e., communication), or alternatively a large percentage of interactions, between an agent and a customer. Disclosed implementations can leverage the evaluation flow used for manual evaluations so that the evaluators can correct the AI evaluations when appropriate. Based on such corrections, the AI model can be retrained to accommodate specifics of the business and contact center, resulting in more confidence in the AI model over time.



FIG. 2 illustrates a computer system for quality management in accordance with disclosed implementations. System 200 includes parsing module 220 (including recording module 222 and transcription module 224), which parses words and phrases from communications/interactions for processing in the manner described in detail below. Assessment module 230 includes Artificial Intelligence (AI) model 232, which includes intent module 234 that determines the intent of, and scores, interactions in the manner described below. Intent module 234 can leverage any one of many known intent engines to analyze transcriptions produced by transcription module 224. Form builder module 240 includes user interfaces and processing elements for building AI-enabled evaluation forms as described below. Results module 250 includes user interfaces and processing elements for presenting scoring results of interactions individually and in aggregate form. The interaction of these modules will become apparent based on the description below. The modules can be implemented through computer-executable code stored on non-transient media and executed by hardware processors to accomplish the disclosed functions, which are described in detail below.
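By way of a purely illustrative, non-authoritative sketch (the patent discloses no source code), the module decomposition of system 200 might be expressed in Python as follows; every class and method name here is a hypothetical stand-in for the corresponding module.

```python
# Hypothetical sketch of the modules of system 200; all names are invented.

class RecordingModule:                 # stands in for recording module 222
    def record(self, interaction_id: str) -> bytes:
        """Return raw audio for a completed interaction (stubbed)."""
        return b""

class TranscriptionModule:             # stands in for transcription module 224
    def transcribe(self, audio: bytes) -> list[str]:
        """Return the interaction as a list of utterance strings (stubbed)."""
        return []

class IntentModule:                    # stands in for intent module 234
    def detect(self, utterance: str) -> tuple[str, float]:
        """Return (intent_name, confidence) for one utterance (stubbed)."""
        return ("fallback", 0.0)

class AssessmentModule:                # stands in for assessment module 230
    def __init__(self, intent_module: IntentModule):
        self.intent_module = intent_module

    def assess(self, utterances: list[str]) -> list[tuple[str, str, float]]:
        # Enrich each utterance with a detected intent and its confidence.
        return [(u, *self.intent_module.detect(u)) for u in utterances]
```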


As noted above, conventional QM forms are built by adding multiple choice questions where different choices are worth different point values. For example, a form for evaluating support interactions might start with a question where the quality of the greeting is evaluated. A good greeting where the agent introduced themselves and inquired about the problem might be worth 10 points and a poor greeting might be worth 0. There might be additional questions in the form relating to problem solving, displaying empathy, and closing. As noted above, forms can also be associated with one or more queues.



FIG. 3 illustrates a user interface 300 of a computer-implemented form generation tool, such as form builder module 240 (FIG. 2), in accordance with disclosed implementations. User interface 300 can be used to enable forms for AI evaluation. A user can navigate the UI to select a question at drop-down menu 302, for example, specify answer choices at 304 and 306, and specify one or more examples of utterances, with corresponding scores and/or weightings, for each answer choice, in text entry box 308, for example. As an example, assuming the question “Did the agent greet the caller properly?” is selected at 302, and the answers provided at 304 and 306 are “Yes” and “No” respectively, the words/phrases “hello my name is”, “good morning”, and “thank you for calling our helpline” can be entered into text box 308 as indications of “Yes” (i.e., a proper greeting). The word/phrase “fallback” can then be added to answer choice “No” in box 308, meaning that “No” will be selected in the absence of a positive match for the “Yes” keywords/phrases (i.e., a “Yes” intent was not detected or the confidence is below an acceptable threshold).
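As a minimal sketch of how such an AI-enabled form might be represented in memory, assuming simple hypothetical data types (the field names below are not taken from the patent):

```python
from dataclasses import dataclass, field

@dataclass
class AnswerChoice:
    label: str                         # e.g., "Yes" or "No"
    points: int                        # point value of the choice
    example_utterances: list[str] = field(default_factory=list)
    is_fallback: bool = False          # chosen when nothing else matches

@dataclass
class Question:
    text: str
    choices: list[AnswerChoice]

# The greeting example from above: "Yes" carries example utterances,
# while "No" carries the "fallback" designation.
greeting = Question(
    text="Did the agent greet the caller properly?",
    choices=[
        AnswerChoice("Yes", 10, ["hello my name is",
                                 "good morning",
                                 "thank you for calling our helpline"]),
        AnswerChoice("No", 0, is_fallback=True),
    ],
)
```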


Form templates can be provided with the recommended best practice for sections, questions, and example utterances for each answer choice in order to maximize matching and increase confidence level. Customer users (admins) can edit the templates in accordance with their business needs. Additionally, users can specify a default answer choice which will be selected if none of the example utterances were detected with high confidence. In the example above, “no greeting given” might be a default answer choice, worth 0 points, if a greeting is not detected. When an AI evaluation form created through UI 300 is saved, the example utterances are used to train AI model 232 (FIG. 2) with an intent for every question choice. In the example above, AI model 232 might have eight intents: good greeting, poor greeting, good problem solving, poor problem solving, good empathy, poor empathy, good closing, and poor closing, for example.
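A hedged sketch of this save step, reusing the hypothetical Question/AnswerChoice types from the previous sketch: each non-fallback (question, choice) pair becomes one intent whose training examples are its example utterances.

```python
# Hypothetical: derive per-intent training utterances from a saved form.
def build_training_set(questions: list[Question]) -> dict[str, list[str]]:
    training: dict[str, list[str]] = {}
    for q in questions:
        for c in q.choices:
            if c.is_fallback or not c.example_utterances:
                continue           # fallback/default choices train no intent
            training[f"{q.text} :: {c.label}"] = list(c.example_utterances)
    return training

# For the four-question example form, this would yield the eight intents
# (good/poor greeting, problem solving, empathy, closing) noted above.
```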


When a voice interaction is completed, an audio recording of the interaction, created by recording module 222 (FIG. 2), can be sent to a speech transcription engine of transcription module 224 (FIG. 2) and the resulting transcription is stored in a digital file store. When the transcription is available, a message can be sent and the transcription can be processed by an intent detection engine of intent module 234 (FIG. 2). Utterances in the transcription can be enriched via intent detection by intent module 234. An annotation, such as one or more tags, can be associated with the interaction as shown in FIG. 4, which illustrates user interface 400 and the positive or negative choices detected in the interaction being processed based on the questions in the evaluation form created with user interface 300 of FIG. 3. As shown at 402, annotations can be associated with portions of the interaction to indicate detected intent during that portion of the interaction. For example, the annotations can be green happy faces (for positive intent), red sad faces (for negative intent), and grey speech bubbles (where there wasn't a high confidence based on the automated analysis). The corresponding positive or negative choices for the interaction, as evaluated by AI model 232, and the corresponding questions, are indicated at 404. The tags can indicate intent, the question and choice associated with that intent, and whether that choice was positive, negative, or low confidence.
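A minimal sketch of this enrichment step, assuming a hypothetical detect() callback and an assumed confidence threshold (the patent specifies neither):

```python
HIGH_CONFIDENCE = 0.8  # assumed threshold; not specified by the patent

def annotate_interaction(utterances, detect):
    """detect(utterance) -> (question, choice, is_positive, confidence)."""
    tags = []
    for u in utterances:
        question, choice, is_positive, confidence = detect(u)
        if confidence < HIGH_CONFIDENCE:
            band = "low_confidence"    # grey speech bubble in UI 400
        elif is_positive:
            band = "positive"          # green happy face
        else:
            band = "negative"          # red sad face
        tags.append({"utterance": u, "question": question,
                     "choice": choice, "band": band,
                     "confidence": confidence})
    return tags
```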


Based on the positive or negative choices, a new evaluation of the corresponding interaction will be generated for the agent, by assessment module 230 of FIG. 2, with a score. For example, the score can be based on the percentage of the points achieved from the detected choices with respect to the total possible score. If both positive and negative problem solving examples are detected, then the question can be assigned the negative option (i.e., the one worth fewer points), for example, as it might be desirable for the system to err on the side of caution and detection of potential issues. As an alternative, disclosed implementations might look for a question option that has a medium number of points and use that as the point score for the utterances. Based on these positive and negative annotations detected automatically by assessment module 230, the corresponding rating will be calculated on the evaluation form itself for that particular section. If, for some questions, no intent is found with a high confidence, the default answer choice can be selected. If, for some questions, intents are found but with a low confidence level, those low confidence matches will be annotated and the form can be presented to users as pending for manual review.
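The scoring arithmetic can be sketched as follows, assuming the cautious resolution rule described above (when both a positive and a negative choice are detected for a question, the lower-point option wins); the function names are illustrative only.

```python
def resolve_points(detected_points: list[int], default_points: int) -> int:
    if not detected_points:
        return default_points          # default answer choice selected
    return min(detected_points)        # err on the side of caution

def evaluation_score(questions) -> float:
    """questions: list of (detected_points, default_points, max_points)."""
    earned = sum(resolve_points(d, dflt) for d, dflt, _ in questions)
    possible = sum(mx for _, _, mx in questions)
    return 100.0 * earned / possible if possible else 0.0

# Greeting detected as both good (10) and poor (0) resolves to 0 points;
# a second question detected only as good (10) scores 10 -> 50% overall.
print(evaluation_score([([10, 0], 0, 10), ([10], 0, 10)]))  # 50.0
```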


Evaluations accomplished automatically by assessment module 230 are presented to the user on an evaluations page UI 500 of results module 250, as shown in FIG. 5. Each evaluation can be tagged as “AI Scored”, “AI Pending”, “Draft” or “Completed”, in column 502, to differentiate them from forms that were manually “Completed” by an evaluator employee. In this example, Draft means the evaluation was partially filled in by a person, AI Pending means the evaluation was partially filled in by the AI but there were some answers with low confidence, AI Scored means the evaluation was completely filled in by the AI, and Completed means the evaluation was completely filled in by a person or reviewed and updated by a person after it was AI Pending or AI Scored.
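A small sketch of how these four states might be encoded and assigned; the enum and the classification predicate below are hypothetical, not from the patent.

```python
from enum import Enum

class EvaluationState(Enum):
    DRAFT = "Draft"            # partially filled in by a person
    AI_PENDING = "AI Pending"  # AI-filled, with some low-confidence answers
    AI_SCORED = "AI Scored"    # completely filled in by the AI
    COMPLETED = "Completed"    # completed, or reviewed/updated, by a person

def classify(filled_by_ai: bool, any_low_confidence: bool,
             human_reviewed: bool) -> EvaluationState:
    if human_reviewed:
        return EvaluationState.COMPLETED
    if filled_by_ai:
        return (EvaluationState.AI_PENDING if any_low_confidence
                else EvaluationState.AI_SCORED)
    return EvaluationState.DRAFT
```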


Of course, other relevant data, such as the score (column 504), the date of the interaction (column 506), the queue associated with the interaction (column 508), and the like can be presented on evaluations page UI 500. Additionally, the average score, top skill, and bottom skill widgets (all results of calculations by assessment module 230 or results module 250) at the top of UI 500 could be based on taking the AI evaluations into account at a relatively low weighting (only 10%, for example) as compared to forms completed manually by an evaluator employee. This weight may be configurable by the user.
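The reduced-weight averaging can be illustrated with a short sketch; the 10% default mirrors the example above and is assumed, like the weight itself, to be user-configurable.

```python
def weighted_average_score(evaluations, ai_weight: float = 0.10) -> float:
    """evaluations: list of (score, is_ai_scored) tuples."""
    total = weight_sum = 0.0
    for score, is_ai_scored in evaluations:
        w = ai_weight if is_ai_scored else 1.0   # manual forms: full weight
        total += w * score
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

# One manual evaluation of 80 and one AI-scored evaluation of 40:
# (1.0*80 + 0.1*40) / 1.1 = 76.36..., so the AI score barely moves the mean.
print(round(weighted_average_score([(80, False), (40, True)]), 2))  # 76.36
```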


When an AI form cannot be evaluated automatically and scored completely by the system (e.g., the intent/answer cannot be determined for one or more particular questions), these evaluations will show in an AI Pending state in column 502 of FIG. 5 and can be designated to require manual intervention/review/correction to move to a Completed status. Users can review these AI Pending evaluations and update the question responses selected on them. Doing this converts the evaluation to the “Completed” state, where it is given full weight (the same as evaluations completed manually from the start). Users can also choose to review and update an AI Scored evaluation, but this is an optional step which would only occur if, for example, a correction was needed. Updates that the employee evaluator made can be sent to a corrections API of AI model 232. The corrections can be viewed on a user interface, e.g., a UI similar to UI 300 of FIG. 3, and a non-AI expert, such as a contact center agent or administrator, can view the models and corrections and can choose to add the example utterance to the intent that should have been selected, or to ignore the correction. If multiple trainers all agree to add an utterance, the new training set will be tested against past responses in an Agreements Review page of UI 600, shown in FIG. 6, and, if the AI model identifies all of them correctly, an updated model will be published and used for further analysis. As a result of this process, the training set grows and the AI model improves over time.
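The gating described above (trainer agreement, then a regression check against past responses before publishing) might be sketched as follows; both functions and their signatures are hypothetical.

```python
def accept_correction(trainer_votes: list[bool], min_trainers: int = 2) -> bool:
    # Add the utterance to the training set only if multiple trainers
    # all agree that it belongs to the suggested intent.
    return len(trainer_votes) >= min_trainers and all(trainer_votes)

def publish_if_regression_free(train, classify, past_responses) -> bool:
    """train() -> model; classify(model, text) -> intent name.
    past_responses: list of (text, expected_intent) pairs, i.e. the
    Agreements Review check of FIG. 6."""
    model = train()
    if all(classify(model, text) == expected
           for text, expected in past_responses):
        return True    # publish the updated model for further analysis
    return False       # keep the previously published model
```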


The UI can provide a single view into corrections from multiple systems that use intent detection enrichment. For example, incorrect classifications from a virtual agent or knowledge base search could also be reviewed on the UI. Real-time alerts can be provided based on real-time transcription and intent detection to notify a user immediately if an important question is being evaluated poorly by AI model 232. Emotion/crosstalk/silence checks can be added to the question choices on the forms in addition to example utterances. For example, for the AI model to detect Yes, it might have to both match the Yes intent via the example utterances and have a positive emotion based on word choice and tone.
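For instance, the combined intent-plus-emotion condition in the last example might look like the following sketch, where the sentiment scale and thresholds are assumed rather than specified by the patent.

```python
def detect_yes(intent: str, intent_confidence: float, sentiment: float,
               confidence_threshold: float = 0.8) -> bool:
    # "Yes" requires both an utterance match on the Yes intent and a
    # positive emotion signal (sentiment assumed to lie in [-1, 1]).
    return (intent == "yes"
            and intent_confidence >= confidence_threshold
            and sentiment > 0.0)
```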



FIG. 7 illustrates a method in accordance with disclosed implementations. At 702, a call center communication, such as a phone call, is recorded (by recording module 222 of FIG. 2, for example). At 704, the recording is transcribed into digital format using known transcription techniques (by transcription module 224 of FIG. 2, for example). At 706, each utterance is analyzed by an AI model (such as AI model 232 of FIG. 2), based on the appropriate form, to determine intent and a corresponding confidence level of the determined intent for a question on the form. At 708, if intent is detected with a high confidence (based on a threshold intent score, for example), then the intent is annotated in a record associated with the communication at 710. If the intent is found with a low confidence, the intent determination is marked for human review at 712 and the results of the human review are sent back to the AI model as training data at 714. As noted above, the human review can include review by multiple persons and aggregating the responses of the multiple persons. Steps 706, 708 and 710 (and 712 and 714 when appropriate) are repeated for each question in the form based on the determination made at 716.
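A compact sketch of the FIG. 7 flow, with every callback (record, transcribe, detect_intent, annotate, queue_for_review) a hypothetical stand-in for the modules named above:

```python
def process_interaction(record, transcribe, detect_intent, questions,
                        annotate, queue_for_review, threshold: float = 0.8):
    audio = record()                                  # step 702
    utterances = transcribe(audio)                    # step 704
    training_feedback = []
    for question in questions:                        # loop closed at 716
        intent, confidence = detect_intent(utterances, question)  # step 706
        if confidence >= threshold:                   # step 708
            annotate(question, intent)                # step 710
        else:
            corrected = queue_for_review(question, intent)        # step 712
            training_feedback.append(corrected)       # step 714
    return training_feedback  # sent back to the AI model as training data
```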


The reviewing user/trainer can be an agent. Corrections from multiple systems/models can be presented in the same UI view, which can be used for each model. Other elements of system architecture 100 (FIG. 1) can be used to make suggestions to the AI model that get fed into the trainer (by being flagged as suggestions), for transcription and/or intent, during workflow in the course of normal operations. Clusters of labels collected from call transcripts can be selected and included in training. For example, “g'day mate” could be a greeting that is not included in the model originally but is added based on its use in the normal workflow of the call center.


The elements of the disclosed implementations can include computing devices including hardware processors and memories storing executable instructions to cause the processor to carry out the disclosed functionality. Numerous other general purpose or special purpose computing system environments or configurations may be used. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use include, but are not limited to, personal computers, servers, handheld or laptop devices, multiprocessor systems, microprocessor-based systems, network personal computers (PCs), minicomputers, mainframe computers, embedded systems, distributed computing environments that include any of the above systems or devices, and the like. Computer-executable instructions, such as program modules, being executed by a computer may be used. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Distributed computing environments may be used where tasks are performed by remote processing devices that are linked through a communications network or other data transmission medium. In a distributed computing environment, program modules and other data may be located in both local and remote computer storage media including memory storage devices.


The computing devices can include a variety of tangible computer readable media. Computer readable media can be any available tangible media that can be accessed by device and includes both volatile and non-volatile media, removable and non-removable media. Tangible, non-transient computer storage media include volatile and non-volatile, and removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.


The various data and code can be stored in electronic storage devices which may comprise non-transitory storage media that electronically stores information. The electronic storage media of the electronic storage may include one or both of system storage that is provided integrally (i.e., substantially non-removable) with the computing devices and/or removable storage that is removably connectable to the computing devices via, for example, a port (e.g., a USB port, a firewire port, etc.) or a drive (e.g., a disk drive, etc.). The electronic storage may include one or more of optically readable storage media (e.g., optical disks, etc.), magnetically readable storage media (e.g., magnetic tape, magnetic hard drive, floppy drive, etc.), electrical charge-based storage media (e.g., EEPROM, RAM, etc.), solid-state storage media (e.g., flash drive, etc.), and/or other electronically readable storage media.


Processor(s) of the computing devices may be configured to provide information processing capabilities and may include one or more of a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information. As used herein, the term “module” may refer to any component or set of components that perform the functionality attributed to the module. This may include one or more physical processors during execution of processor readable instructions, the processor readable instructions, circuitry, hardware, storage media, or any other components.


The contact center 150 of FIG. 1 can be in a single location or may be cloud-based and distributed over a plurality of locations, i.e., a distributed computing system. The contact center 150 may include servers, databases, and other components. In particular, the contact center 150 may include, but is not limited to, a routing server, a SIP server, an outbound server, a reporting/dashboard server, an automated call distribution (ACD) server, a computer telephony integration (CTI) server, an email server, an IM server, a social server, an SMS server, and one or more databases for routing, historical information, and campaigns.


It will be appreciated by those skilled in the art that changes could be made to the embodiments described above without departing from the broad inventive concept thereof. It is understood, therefore, that this invention is not limited to the particular embodiments disclosed, but it is intended to cover modifications within the spirit and scope of the present invention as defined by the appended claims.

Claims
  • 1. A method for assessing communications between a user and an agent in a call center, the method comprising: extracting text from a plurality of communications between a call center user and a call center agent to thereby create a communication record; for each of the plurality of communications: assessing the corresponding text of a communication record by applying an AI assessment model to obtain an intent assessment of one or more aspects of the communication, wherein the AI assessment model is developed by processing a set of initial training data and supplemental training data to detect intents; and wherein the intent assessment includes a confidence score of the communication and further comprising flagging the communication record for manual quality management analysis and for making annotations if a confidence score of the intent assessment is below a threshold value.
  • 2. The method of claim 1, wherein the intent assessment comprises multiple fields, each field having a value selected from a corresponding set of values and wherein the confidence level is based on a confidence sub-level determined for each value of each field.
  • 3. The method of claim 2, wherein the fields and corresponding sets of values correspond to a human-readable form used for the manual annotation.
  • 4. The method of claim 1, wherein the AI assessment model considers acceptable key words or phrases in each of a plurality of categories and the annotations include key words or phrases that are to be added to a category as acceptable.
  • 5. The method of claim 1, wherein the supplemental training data is added to the model based on reviewing manual corrections to previous assessments by the assessment model.
  • 6. The method of claim 5, wherein the supplemental data is based on manual quality analysis by a plurality of people and determining consensus between the people.
  • 7. The method of claim 1, wherein the manual quality management analysis and annotation is accomplished by an agent in the call center.
  • 8. The method of claim 1, wherein the manual quality management analysis and annotation includes a user interface displaying suggestions that have been marked for training from multiple models.
  • 9. The method of claim 8 wherein the suggestions have been marked for training based on lack of confidence or explicit suggestion.
  • 10. The method of claim 8, wherein the suggestions come from a review of an unsupervised clustering model.
  • 11. A computer system for assessing communications between a user and an agent in a call center, the system comprising: at least one computer hardware processor; and at least one memory device operatively coupled to the at least one computer hardware processor and having instructions stored thereon which, when executed by the at least one computer hardware processor, cause the at least one computer hardware processor to carry out the method of: extracting text from a plurality of communications between a call center user and a call center agent to thereby create a communication record; for each of the plurality of communications: assessing the intent of corresponding text by applying an AI assessment model to obtain an intent assessment of the communication, wherein the AI assessment model is developed by processing a set of initial training data to detect intents; and wherein the intent assessment includes a confidence score of the communication and further comprising flagging the communication record for manual quality management analysis and for making annotations if a confidence level of the assessment is below a threshold score.
  • 12. The system of claim 11 wherein each intent assessment comprises multiple fields, each field having a value selected from a corresponding set of values and wherein the confidence level is based on a confidence sub-level determined for each value of each field.
  • 13. The system of claim 12, wherein the fields and corresponding sets of values correspond to a human-readable form used for the manual annotation.
  • 14. The system of claim 11, wherein the AI assessment model considers acceptable key words or phrases in each of a plurality of categories and the annotations include key words or phrases that are to be added to a category as acceptable.
  • 15. The system of claim 11, wherein supplemental training data is added to the model based on reviewing manual corrections to previous assessments by the assessment model.
  • 16. The system of claim 15, wherein the supplemental training data is based on manual quality analysis by a plurality of people and determining consensus between the people.
  • 17. The system of claim 11, wherein the manual quality management analysis and annotation is accomplished by an agent in the call center.
  • 18. The system of claim 11, wherein the manual quality management analysis and annotation includes a user interface displaying suggestions that have been marked for training from multiple models.
  • 19. The system of claim 18 wherein the suggestions have been marked for training based on lack of confidence or explicit suggestion.
  • 20. The system of claim 18, wherein the suggestions come from a review of an unsupervised clustering model.
  • 21. A method for assessing communications in a contact center interaction, the method comprising: receiving communication records relating to an interaction in a contact center, wherein each communication record includes text strings extracted from the corresponding communication and wherein each call record has been designated by an AI assessment model trained to accomplish an assessment of one or more aspects of the communication records, wherein the AI assessment model is developed by processing a set of initial training data; for each communication record: displaying at least one of the text strings on a user interface in correspondence with at least one AI intent assessment, wherein the AI intent assessment includes a confidence score of the communication and further comprising flagging the communication record for manual quality management analysis and for making annotations if a confidence level of the assessment is below a threshold score; receiving, from a user, an assessment of the at least one text strings relating to the AI assessment; updating the communication record based on the assessment to create an updated communication record; and applying the updated communication record to the AI assessment model as supplemental training data.
  • 22. The method of claim 21, wherein the supplemental training data is based on manual quality analysis by a plurality of people and determining consensus between the people.
Related Publications (1)
Number Date Country
20230007124 A1 Jan 2023 US
Continuation in Parts (1)
Number Date Country
Parent 17366883 Jul 2021 US
Child 17403120 US