The present invention generally relates to automatic transcript generation via speech recognition, and particularly relates to mining and use of speech data based on speaker interactions to improve speech recognition and provide feedback in quality management processes.
The task of generating transcripts via automatic speech recognition faces many challenging issues. These issues are compounded, for example, in a call center environment, where one of the speakers may be relatively unknown and on a relatively poor audio channel due to the sub-eight-kilohertz signal quality limitations of today's telephone line connections. Thus, call centers have generally relied on recordings of conversations between customers and call center personnel, which have a length, or size, indicating how long the call lasted. Also, transcriptions have sometimes been obtained by sending the recording to an outsourced transcription service at great expense. Further, emotion detection has been employed to monitor voice stress characteristics of customers and operators and to record implied emotional states in association with calls. Still further, one or more topics of conversation have been recorded in association with calls based on call center personnel's selection of topic-related electronic forms during a call, and/or customers' explicit selection of topics via a keypad entry in response to a voice prompt at the beginning of a call. Yet further still, telephonic and other types of surveys have been employed to obtain feedback from customers relating to their experiences with consumptibles, such as products and/or services, and/or call center performance.
In general, the aforementioned efforts have been made in an attempt to obtain information useful as feedback to a call center quality management process and/or product/service quality management process, such as a product development process. For example, statistics relating to problems encountered by customers in regard to a company's consumptibles often correspond to occurrences of topics of calls at a call center. Also, information entered into an electronic form by call center personnel often identifies particular types of consumptibles, and/or details relating to problems encountered by customers. Further, lengths of calls and detected emotions serve as feedback to call center performance evaluations. Still further, electronic transcripts provide much of this information and more in a searchable format, but are expensive and time-consuming to obtain and later process to extract information.
What is needed is a way to automatically generate a transcript by reliably recognizing speech of multiple speakers at a call center or in other domains where one or more speakers may not be known, or where adverse conditions affect speech of one or more speakers. What is also needed is a way to extract information from an automatically generated transcript that fills the need for rich, rapid feedback to a call center quality management process and/or product/service quality management process. The present invention fulfills these needs.
In accordance with the present invention, a speech data mining system for use in generating a rich transcription having utility in call center management includes a speech differentiation module differentiating between speech of interacting speakers, and a speech recognition module improving automatic recognition of speech of one speaker based on interaction with another speaker employed as a reference speaker. A transcript generation module generates a rich transcript based on recognized speech of the speakers.
Further areas of applicability of the present invention will become apparent from the detailed description provided hereinafter. It should be understood that the detailed description and specific examples, while indicating the preferred embodiment of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention.
The present invention will become more fully understood from the detailed description and the accompanying drawings, wherein:
The following description of the preferred embodiment(s) is merely exemplary in nature and is in no way intended to limit the invention, its application, or uses.
By way of overview, the present invention differentiates between multiple, interacting speakers. The preferred embodiment employs a technique for differentiating between multiple, interacting speakers that includes use of separate channels for each speaker, and identification of speech on a particular channel with speech of a particular speaker. The present invention also mines speech data of speakers during the speech recognition process. Examples of speech data mined in accordance with the preferred embodiment include customer frustration phrases, operator politeness phrases, and contexts such as topics, complaints, solutions, and/or resolutions. These phrases and contexts are identified based on predetermined keywords and keyword combinations extracted during speech recognition. Additional examples of speech data mined in accordance with the preferred embodiment include detected interruptions of one speaker by another speaker, and a number of interaction turns included in a call.
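By way of illustration only, and not as a description of the claimed implementation, the following sketch shows one hypothetical way such mining might be performed over per-channel speaker turns; the phrase lists, channel labels, and function names are illustrative assumptions.

```python
# Illustrative sketch (not the claimed implementation): mining predetermined
# key phrases and interaction statistics from per-channel speaker turns.
FRUSTRATION_PHRASES = ["this is ridiculous", "still not working", "third time i have called"]
POLITENESS_PHRASES = ["my apologies", "happy to help", "thank you for your patience"]

def mine_turn(text, phrase_list):
    """Return the predetermined key phrases found in one speaker turn."""
    lowered = text.lower()
    return [phrase for phrase in phrase_list if phrase in lowered]

def mine_call(turns):
    """turns: (channel, text) pairs in temporal order, one channel per speaker."""
    mined = {"frustration": [], "politeness": [], "turn_count": len(turns)}
    for channel, text in turns:
        if channel == "customer":
            mined["frustration"] += mine_turn(text, FRUSTRATION_PHRASES)
        else:
            mined["politeness"] += mine_turn(text, POLITENESS_PHRASES)
    return mined

turns = [("customer", "It's still not working."),
         ("operator", "My apologies, happy to help.")]
print(mine_call(turns))
# {'frustration': ['still not working'], 'politeness': ['my apologies', 'happy to help'], 'turn_count': 2}
```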
The mined speech data according to the preferred embodiment has multiple uses. On one hand, some or all of the mined speech data is useful for evaluating call center and/or consumptible performance. On the other hand, some or all of the mined speech data is useful for serving as interactive context in an interactive speech recognition procedure. Accordingly, the present invention uses some or all of the speech data mined from speech of one of the interacting speakers as context for recognizing speech of another of the interacting speakers.
In the preferred embodiment, a call center operator employing an adapted speech model and inputting speech on a relatively high quality channel is employed as a reference speaker for recognizing speech of a customer employing a generic speech model on a relatively poor quality channel. For example, if reliably detected speech of one speaker corresponds to “You're welcome,” it is reasonable to assume that the other, interacting speaker immediately previously stated a key phrase expressing appreciation, such as “Thank you.”
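By way of illustration only, the “You're welcome” example above might be sketched as follows, where a reliably recognized operator reply boosts appreciation-type hypotheses for the customer's preceding turn; the cue table, scores, and boost factor are hypothetical.

```python
# Illustrative sketch: boosting hypotheses for a poorly heard customer turn
# using the operator's reliably recognized reply as interaction context.
APPRECIATION_CUES = {
    "you're welcome": ["thank you", "thanks a lot", "thanks so much"],
}

def rescore_with_reference(candidates, operator_reply, boost=1.5):
    """candidates: hypothesis -> acoustic score for the customer's prior turn."""
    expected = APPRECIATION_CUES.get(operator_reply.lower(), [])
    return {hyp: score * boost if hyp in expected else score
            for hyp, score in candidates.items()}

hypotheses = {"thank you": 0.40, "sank ewe": 0.45}   # noisy-channel confusion
rescored = rescore_with_reference(hypotheses, "You're welcome")
print(max(rescored, key=rescored.get))               # -> thank you
```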
Thus, the preferred embodiment generates a transcript based on the recognized speech of the multiple, interacting speakers, and records summarized and supplemented mined speech data in association with the transcript. The result is a rapid and reliable generation of a rich transcript useful in providing rich, rapid feedback to a call center quality management process and/or product/service quality management process.
Referring now to
According to the preferred embodiment, the rich transcripts are obtained by recognition and transcription module 36 during interaction between call center personnel 38 and customers 12. Accordingly, a dialogue module (not shown) of recognition and transcription module 36 prompts customers 12 to select an initial topic via a corresponding keypad entry at the beginning of the call. During a call, an operator of call center personnel 38 may select one or more electronic forms 40 for recording details of the call and thereby further communicate a topic 42 to recognition and transcription module 36. In turn, recognition and transcription module 36 may select one or more of focused language models 44, which are developed specifically for one or more of the predefined and indicated topics. As the call proceeds, recognition and transcription module 36 monitors both the customer and operator channels, and uses the focused language models 44 to recognize speech of both speakers and generate transcript 46, which is communicated to the operator involved in the call. In turn, the operator may communicate edits 48 for incorrectly recognized words and/or phrases to recognition and transcription module 36 during the call.
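By way of illustration only, the selection of a focused language model from a customer's keypad entry or an operator's in-call form choice might resemble the following sketch; the topic codes and model names are hypothetical.

```python
# Illustrative sketch: mapping a customer's initial keypad entry, or an
# operator's in-call form selection, to one of the focused language models 44.
FOCUSED_MODELS = {"1": "lm_billing", "2": "lm_tech_support", "3": "lm_returns"}
GENERIC_MODEL = "lm_generic"

def select_language_model(keypad_entry=None, form_topic=None):
    """Prefer the operator's in-call form topic over the initial keypad entry."""
    if form_topic is not None:
        return form_topic
    return FOCUSED_MODELS.get(keypad_entry, GENERIC_MODEL)

print(select_language_model(keypad_entry="2"))                           # lm_tech_support
print(select_language_model(keypad_entry="2", form_topic="lm_billing"))  # form choice overrides
```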
Recognized words of low confidence in the transcript 46 are highlighted on the operator's active display to indicate the potential need for an edit or confirmation. To edit a non-highlighted word or phrase, the operator may highlight the word or phrase with a mouse click and drag. Double left-clicking on a highlighted word or phrase causes a drop-down menu of alternative word recognition candidates to appear for quick selection. A text box also allows the operator to type and enter the correct word or phrase if it does not appear in the list of candidates. A single right click on a highlighted word or phrase quickly and actively confirms the word or phrase and consequently increases the confidence with which the word or phrase is recognized. Also, lack of an edit after a predetermined amount of time may be interpreted as a confirmation and employed to increase the confidence of the recognition of that word or phrase in the transcript, albeit to a lesser degree than an active confirmation.
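By way of illustration only, the confidence bookkeeping described above might be sketched as follows; the boost amounts are hypothetical.

```python
# Illustrative sketch of the confirmation logic: an explicit right-click
# confirmation raises recognition confidence more than a passive timeout does.
ACTIVE_BOOST = 0.25   # single right click on a highlighted word or phrase
PASSIVE_BOOST = 0.05  # no edit within the predetermined amount of time

def update_confidence(confidence, active_confirm=False, timed_out=False):
    if active_confirm:
        confidence += ACTIVE_BOOST
    elif timed_out:
        confidence += PASSIVE_BOOST
    return min(confidence, 1.0)

print(round(update_confidence(0.6, active_confirm=True), 2))  # 0.85
print(round(update_confidence(0.6, timed_out=True), 2))       # 0.65
```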
Referring now to
In the preferred embodiment, at least some of focused language models 44 are interactive in that the yes/no questions do not merely relate to context of speech of the speaker, but additionally or alternatively relate to context of preceding and/or subsequent speech of another, interacting speaker. Thus, the yes/no questions may relate to keywords, contexts such as additional topics, complaints, solutions, and/or resolutions, detected interruptions, whether the context is preceding or subsequent, and/or additional types of context determinable from reliably recognized speech of the reference speaker. As a result, previously and subsequently recognized words 66 and 68 of the speaker may be employed in addition to context of previous and subsequent interactions 70 and 72 with a reference speaker. For example, an initial model traversal and related recognition attempt is based on the previous words 66 and previous interactions 70. Later, when the subsequent words 68 and subsequent interactions 72 are available, model traversal module 64 selects recognized words of low confidence and performs a subsequent model traversal and related recognition attempt based on previously and subsequently recognized words 66 and 68, and based on previous and subsequent interactions 70 and 72. This procedure may be performed recursively at intervals using contextually correlated speech data mined from several interaction turns. The language models may thus take into account the number of turns associated with the interactive context previous or subsequent to the turn with respect to which the recognition attempt is being performed. In any event, each traversal obtains a probability distribution 74.
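By way of illustration only, a second recognition pass of the kind described above, folding both the speaker's own surrounding words and the reference speaker's interactions into a probability distribution over candidates, might be sketched as follows; the bag-of-words scoring is a deliberately simple stand-in for an actual language model traversal.

```python
# Illustrative sketch: a second pass over a low-confidence word that combines
# previous/subsequent words of the speaker with previous/subsequent
# reference-speaker interactions, yielding a probability distribution 74.
def context_score(candidate, context_words):
    overlap = set(candidate.lower().split()) & context_words
    return 1.0 + len(overlap)          # 1.0 floor keeps every candidate alive

def second_pass(candidates, previous_context, subsequent_context):
    context = previous_context | subsequent_context
    scores = {c: context_score(c, context) for c in candidates}
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}   # sums to 1.0

distribution = second_pass(
    ["router reset", "rooster rest"],
    previous_context={"modem", "router", "blinking"},   # words 66, interactions 70
    subsequent_context={"reset", "button"},             # words 68, interactions 72
)
print(distribution)   # "router reset" receives the greater probability mass
```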
The method according to the present invention includes improving recognition of one speaker at step 140 based on reliably recognized speech of another, interacting speaker recognized at step 142, preferably using an adapted speech model at step 144. Preferably, focused language models are employed at step 146 based on one or more topics specified by the speakers or determined from the interaction of the speakers at step 148. According to the preferred embodiment, step 140 includes utilizing recognized keywords, phrases and/or interaction characteristics of a reference speaker at step 150, such as data mined in step 138 from speech of the reference speaker. Step 150 includes employing the mined speech data as context in an interactive, focused language model at step 152, supplementing a constraint list at step 154 with keywords reliably extracted from speech of the reference speaker, and/or rescoring recognition candidates at step 156 based on keywords reliably extracted from speech of the reference speaker. The method further includes generating a rich transcription at step 158 of text with metadata, such as speech data mined in step 138, which preferably indicates operator performance and/or customer satisfaction. This metadata can then be used as feedback at step 160 to improve customer relationship management and/or products and services.
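By way of illustration only, steps 154 and 156 might be sketched as follows; the reliability threshold and score bonus are hypothetical.

```python
# Illustrative sketch of steps 154 and 156: supplementing a constraint list
# with reliably extracted reference-speaker keywords, then rescoring
# recognition candidates that contain those keywords.
RELIABILITY_THRESHOLD = 0.9

def supplement_constraints(constraints, reference_words):
    """reference_words: word -> recognition confidence from the reference speaker."""
    reliable = {w for w, conf in reference_words.items() if conf >= RELIABILITY_THRESHOLD}
    return constraints | reliable

def rescore_candidates(candidates, constraints, bonus=0.1):
    """Add a bonus per constraint-list keyword appearing in each candidate."""
    return {cand: score + bonus * sum(word in constraints for word in cand.split())
            for cand, score in candidates.items()}

constraints = supplement_constraints({"warranty"}, {"refund": 0.95, "maybe": 0.40})
print(rescore_candidates({"a refund please": 0.50, "a re-fund police": 0.55}, constraints))
# {'a refund please': 0.6, 'a re-fund police': 0.55}
```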
The description of the invention is merely exemplary in nature and, thus, variations that do not depart from the gist of the invention are intended to be within the scope of the invention. For example, the two techniques of differentiating between multiple, interacting speakers may be used in combination, especially in domains other than call centers. For example, an environment may have multiple microphones on separate channels disposed at different locations, with various speakers moving about the environment. Thus, the differentiation between speakers may be based in part on the likelihood of a particular speaker moving from one channel to another, and further in part on use of a speech biometric useful for differentiating between the speakers. Also, the present invention may be used in courtroom transcription. In such a domain, a judge may be employed as a reference speaker based on the existence of a well-adapted speech model, and separate channels may additionally or alternatively be employed. Further, where channels are of substantially equal quality, and/or where speakers are substantially equally known or unknown, it remains possible to treat both speakers as reference speakers to one another and to weight mined speech data based on confidence levels associated with the speech from which the data was mined. Further still, even where one speaker's speech is considered much more reliable than another's for various reasons, it remains possible to employ the speaker producing the less reliable speech as a reference speaker for the more reliable speaker. In such a case, reliability of speech may be employed as a weighting factor in the recognition improvement process. Such variations are not to be regarded as a departure from the spirit and scope of the invention.
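By way of illustration only, the confidence weighting mentioned above, in which mined speech data is weighted by the reliability of the speech from which it was mined, might be sketched as follows; the threshold is hypothetical.

```python
# Illustrative sketch: when both speakers act as reference speakers to one
# another, keep mined keywords weighted by the confidence of the source speech.
def weighted_context(mined, minimum_weight=0.5):
    """mined: (keyword, recognition confidence) pairs from the other speaker."""
    return {keyword: conf for keyword, conf in mined if conf >= minimum_weight}

print(weighted_context([("invoice", 0.92), ("gazebo", 0.31)]))  # keeps only "invoice"
```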