A computer program listing is submitted herewith as an Appendix and is incorporated herein by reference.
The present invention relates generally to natural language processing, and particularly to systems, methods and software for analyzing the content of conversations.
Huge amounts of information are exchanged among participants in teleconferences (meaning conversations, i.e., oral exchanges, between two or more participants over a communication network, including both telephone and packet networks). In many organizations, teleconferences are recorded and available for subsequent review. Even when the teleconferences are transcribed to textual form, however, reviewing the records is so time-consuming that the vast majority of the information cannot be exploited.
A number of tools have been developed to automate the extraction of information from teleconferences. For example, U.S. Patent Application Publication 2014/0278377 describes arrangements relating to automatically taking notes in a virtual meeting. The virtual meeting has meeting content that includes a plurality of meeting content streams. One or more of the meeting content streams is in a non-text format. The one or more meeting content streams in a non-text format can be converted into text. As a result, the plurality of meeting content streams is in text format. The text of the plurality of meeting content streams can be analyzed to identify a key element within the text. Consolidated system notes that include the key element can be generated.
As another example, U.S. Patent Application Publication 2004/0021765 describes an automated meeting facilitator, which manages and archives a telemeeting. The automated meeting facilitator includes a multimedia indexing section, which generates rich transcriptions of the telemeeting and stores documents related to the telemeeting. Through the rich transcription, the automated meeting facilitator is able to provide a number of real-time search and assistance functions to the meeting participants.
Some automated tools relate to topics of discussions. For example, U.S. Patent Application Publication 2014/0229471 describes a method, computer program product, and system for ranking meeting topics. A plurality of participants in an electronic meeting is identified. One or more interests associated with one or more individuals included in the plurality of participants are identified. One or more topics associated with the meeting are received. A ranking of the one or more topics is determined based upon, at least in part, the one or more identified interests.
Automated analysis of teleconferences can be particularly useful in the context of enterprise contact centers. For example, in this regard U.S. Pat. No. 8,611,523 describes a method and system for analyzing an electronic communication, more particularly, to analyzing a telephone communication between a customer and a contact center to determine communication objects, forming segments of like communication objects, determining strength of negotiations between the contact center and the customer from the segments, and automating setup time calculation.
As another example, U.S. Patent Application Publication 2010/0104086 describes a system and method for automatic call segmentation including steps and means for automatically detecting boundaries between utterances in the call transcripts; automatically classifying utterances into target call sections; automatically partitioning the call transcript into call segments; and outputting a segmented call transcript. A training method and apparatus for training the system to perform automatic call segmentation includes steps and means for providing at least one training transcript with annotated call sections; normalizing the at least one training transcript; and performing statistical analysis on the at least one training transcript.
Embodiments of the present invention that are described hereinbelow provide improved methods, apparatus and software for automated analysis of conversations.
There is therefore provided, in accordance with an embodiment of the invention, a method for information processing, which includes receiving in a computer a corpus of recorded conversations, with two or more speakers participating in each conversation. The computer computes respective frequencies of occurrence of multiple words in each of a plurality of chunks in each of the recorded conversations. Based on the frequencies of occurrence of the words over the conversations in the corpus, the computer derives autonomously an optimal set of topics to which the chunks can be assigned such that the optimal set maximizes a likelihood that the chunks will be generated by the topics in the set. A recorded conversation from the corpus is segmented using the derived topics into a plurality of segments, such that each segment is classified as belonging to a particular topic in the optimal set, and a distribution of the segments and respective classifications of the segments into the topics over a duration of the recorded conversation is outputted.
In a disclosed embodiment, deriving the optimal set of the topics includes extracting the topics from the conversations by the computer without using a pre-classified training set. Additionally or alternatively, receiving the corpus includes converting the conversations to a textual form, analyzing a syntax of the conversations in the textual form, and discarding from the corpus the conversations in which the analyzed syntax does not match syntactical rules of a target language.
In a disclosed embodiment, deriving the optimal set of topics includes defining a target number of topics, and applying Latent Dirichlet Allocation to the corpus in order to derive the target number of the topics.
In some embodiments, the method includes automatically assigning, by the computer, respective titles to the topics. In one embodiment, automatically assigning the respective titles includes, for each topic, extracting from the segments of the conversations in the corpus that are classified as belonging to the topic one or more n-grams that statistically differentiate the segments classified as belonging to the topic from the segments that belong to the remaining topics in the set, and selecting one of the extracted n-grams as a title for the topic.
Additionally or alternatively, deriving the optimal set of the topics includes computing, based on the frequencies of occurrence of the words in the chunks, respective probabilities of association between the words and the topics, and segmenting the recorded conversation includes classifying each segment according to the respective probabilities of association of the words occurring in the segment. In one embodiment, computing the respective probabilities of association includes computing respective word scores for each word with respect to each of the topics based on the probabilities of association, and classifying each segment includes, for each chunk of the recorded conversation, deriving respective topic scores for the topics in the set by combining the word scores of the words occurring in the chunk with respect to each of the topics, classifying the chunks into topics based on the respective topic scores, and defining the segments by grouping together adjacent chunks that are classified as belonging to a common topic.
In some embodiments, outputting the distribution includes displaying the distribution of the segments and respective classifications of the segments into the topics on a computer interface. In a disclosed embodiment, displaying the distribution includes presenting a timeline that graphically illustrates the respective classifications and durations of the segments during the recorded conversation. Typically, presenting the timeline includes showing which of the speakers was speaking at each time during the recorded conversation.
In one embodiment, deriving the optimal set of topics includes receiving seed words for one or more of the topics from a user of the computer.
In some embodiments, the method includes automatically applying, by the computer, the distribution of the segments in predicting whether a given conversation is likely to result in a specified outcome and/or in assessing whether a given conversation follows a specified pattern.
There is also provided, in accordance with an embodiment of the invention, an information processing system, including a memory, which is configured to store a corpus of recorded conversations, with two or more speakers participating in each conversation. A processor is configured to compute respective frequencies of occurrence of multiple words in each of a plurality of chunks in each of the recorded conversations, and to derive autonomously, based on the frequencies of occurrence of the words over the conversations in the corpus, an optimal set of topics to which the chunks can be assigned such that the optimal set maximizes a likelihood that any given chunk will be assigned to a single topic in the set, and to segment a recorded conversation from the corpus, using the derived topics, into a plurality of segments, such that each segment is classified as belonging to a particular topic in the optimal set, and to output a distribution of the segments and respective classifications of the segments into the topics over a duration of the recorded conversation.
There is additionally provided, in accordance with an embodiment of the invention, a computer software product, including a non-transitory computer-readable medium in which program instructions are stored, which instructions, when read by a computer, cause the computer to store a corpus of recorded conversations, with two or more speakers participating in each conversation, to compute respective frequencies of occurrence of multiple words in each of a plurality of chunks in each of the recorded conversations, and to derive autonomously, based on the frequencies of occurrence of the words over the conversations in the corpus, an optimal set of topics to which the chunks can be assigned such that the optimal set maximizes a likelihood that any given chunk will be assigned to a single topic in the set, and to segment a recorded conversation from the corpus, using the derived topics, into a plurality of segments, such that each segment is classified as belonging to a particular topic in the optimal set, and to output a distribution of the segments and respective classifications of the segments into the topics over a duration of the recorded conversation.
The present invention will be more fully understood from the following detailed description of the embodiments thereof, taken together with the drawings in which:
Existing tools for segmentation and classification of recorded conversations generally rely on a training set, i.e., a set of conversations that has been segmented and classified by a human expert. A computer uses this training set in deriving classification rules, which can then be applied automatically in segmenting and classifying further conversations. Although this sort of supervised computer learning is a well-established approach, it requires a large investment of time and expertise to prepare a training set that will give good results. Each new installation of the tool will generally require its own training set to be developed in this manner, since topics of conversation and vocabulary generally differ among different organizations and user groups. Furthermore, supervised learning approaches of this sort are limited by the knowledge of the expert who prepares the training set, and can be biased by the preconceptions of that expert. Subsequent changes in the nature of the conversations require re-classification by experts.
Embodiments of the present invention that are described herein provide methods, systems and software that are capable of autonomously analyzing a corpus of conversations and outputting a digest of the topics discussed in each conversation. In contrast to tools that are known in the art, the present embodiments are capable of unsupervised learning, based on the corpus of conversations itself without any predefined training set, and thus obviate the need for expert involvement. The present embodiments are particularly useful in analyzing recorded teleconferences, but the principles of the present invention may similarly be applied to substantially any large corpus of recorded conversations.
In the disclosed embodiments, a computerized conversation processing system analyzes a corpus of recorded conversations, such as telephone calls, and identifies common patterns across the calls. The system automatically detects topics that repeat across the calls. So, for example, if several calls talk about marketing, e-mails and “open rates,” the system can infer that this is a common topic (e-mail marketing) and can thus find it in other calls and then segment the calls based on this information. Because the process works “bottom up,” it can detect topics that a human might not conceive of in advance, as long as the topics are common across multiple calls.
The system extracts the topics from the conversations in an unsupervised way, without a pre-classified training set or other human guidance. Thus, embodiments of the present invention offer the ability not merely to handle a single conversation according to predefined instructions, but rather to leverage the fact that in a given organization or domain there is a similarity in the conversations and thus to derive the topics that people actually talk about. Because the system is autonomous and does not require hand-crafted topic definitions or keywords, there is no need for a labor-intensive implementation process, and a new user with a bank of recorded conversations can start using the system immediately.
In the disclosed embodiments, a conversation processing system records or receives a group of recordings of conversations made by people in a given field, for example, sales agents working for a given company. The conversations are converted to text using methods and tools that are known in the art. Optionally, the conversations are filtered by language, for example, by automatically recognizing that a given conversation is not in a target language, such as English, and in that case discarding the conversation.
Following conversion to text and filtering, the system breaks the conversations into chunks, each typically comprising a series of several hundred words, for example. The system then processes the contents of these chunks autonomously, using a suitable machine learning algorithm, such as LDA (Latent Dirichlet Allocation), in order to derive an optimal set of topics such that the conversations are most likely to be composed of those topics. In other words, the optimal set is chosen so as to maximize the likelihood, across all conversations, that the chunks can be generated by a mixture of those topics. The number of topics in the set can be defined in advance to be any suitable target number, for example, a chosen number between ten and forty.
Topic derivation proceeds autonomously in this manner, without human supervision, to find an optimal set of topics, as well as to extract titles (labels) for the topics from among the words that commonly occur in the segments of conversation belonging to each topic. For example, for each topic, the computer may extract from the segments of the conversations that are classified as belonging to the topic one or more n-grams (i.e., recurring sequences of n words, wherein n is at least two) that statistically differentiate the segments classified as belonging to the topic from the segments that belong to other topics in the set. One of the extracted n-grams is then selected as the title for the topic.
Once the topics have been detected, the system is able to apply the topics and corresponding filtering criteria in sorting chunks of both existing and newly-recorded conversations by topic. The system thus segments recorded conversations such that each segment is classified as belonging to a particular topic in the optimal set (or to no topic when there was no topic that matched a given chunk or sequence of chunks). It can then output the distribution of the segments and respective classifications of the segments into topics over the duration of the conversation. In one embodiment, which is shown in
Based on the output of conversation segments and topics, a user of the system is able to understand how much of the conversation was devoted to each topic, as well as the sequence of discussion of different topics. The user can, if desired, move a time cursor to any desired location in the conversation timeline and receive a playback of the conversation at that point. Computer-aided statistical analyses can be applied, as well, in order to understand the relationship between topics covered in a group of conversations and results, such as success in closing sales.
In the pictured embodiment, server 22 collects and analyzes conversations between sales agents 30, using computers 26, and customers 32, using audio devices 28. These conversations may be carried out over substantially any sort of network, including both telephone and packet networks. Although the conversations shown in
Server 22 comprises a processor 36, such as a general-purpose computer processor, which is connected to network 24 by a network interface 34. Server 22 receives and stores the corpus of recorded conversations in memory 38, for processing by processor 36. Processor 36 autonomously derives an optimal set of topics and uses these topics in segmenting the recorded conversations using the methods described herein. At the conclusion of this process, processor 36 is able to present the distribution of the segments of the conversations and the respective classifications of the segments into the topics over the duration of the recorded conversations on a display 40.
Processor 36 typically carries out the functions that are described herein under the control of program instructions in software. This software may be downloaded to server 22 in electronic form, for example over a network. Additionally or alternatively, the software may be provided and/or stored on tangible, non-transitory computer-readable media, such as optical, magnetic, or electronic memory media.
To initiate the method of
Processor 36 filters the recorded conversations by language, at a conversation filtering step 54. This step can be important in the unsupervised learning process of
To begin the actual topic extraction process, processor 36 breaks the conversations into chunks, at a chunk division step 58. A “chunk” is a continuous series of words of a selected length, or within a selected length range. For example, the inventors have found chunks of approximately 300 words each to give good results, while setting chunk boundaries so as to keep monologues separate and not mix different speakers in a single chunk.
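By way of illustration, the following sketch divides a diarized transcript into such chunks. The (speaker, text) turn format and the helper name are assumptions made for the example; this is not the appendix listing.

```python
def split_into_chunks(turns, target_len=300):
    """Split a transcript into chunks of roughly target_len words
    without mixing different speakers in a single chunk.

    turns: list of (speaker, text) tuples in conversation order (assumed format).
    """
    chunks = []
    current_words, current_speaker = [], None
    for speaker, text in turns:
        words = text.split()
        # Flush the running chunk when the speaker changes or the chunk is full.
        if current_words and (speaker != current_speaker or
                              len(current_words) + len(words) > target_len):
            chunks.append((current_speaker, " ".join(current_words)))
            current_words = []
        current_speaker = speaker
        current_words.extend(words)
        # A long monologue is itself cut into chunks of about target_len words.
        while len(current_words) > target_len:
            chunks.append((speaker, " ".join(current_words[:target_len])))
            current_words = current_words[target_len:]
    if current_words:
        chunks.append((current_speaker, " ".join(current_words)))
    return chunks
```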
As another preliminary step, it is also useful for processor 36 to filter out of the conversation transcripts certain types of words, such as stop words and rare words, at a word filtering step 60. “Stop words” is a term used in natural language processing to denote words that have little or no semantic meaning. The inventors have found it useful in this regard to filter out roughly one hundred of the most common English words, including “a”, “able”, “about”, “across”, “after”, “all”, “almost”, etc. Because such stop words have a roughly equal chance of appearing in any topic, removing them from the chunks can be helpful in speeding up subsequent topic extraction.
Processor 36 counts the number of occurrences of the remaining words in each of the chunks and in the corpus as a whole. Absent human supervision, words that appear only once or a few times (for example, less than four times) in the corpus cannot reliably be associated with a topic. Therefore, processor 36 eliminates these rare words, as well, at step 60 in order to speed up the topic extraction.
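The stop-word and rare-word filtering of step 60 can be sketched as follows; the stop-word list shown is a small assumed sample rather than the full list used by the system.

```python
from collections import Counter

STOP_WORDS = {"a", "able", "about", "across", "after", "all", "almost",
              "the", "and", "to", "of", "in", "it", "is"}  # assumed sample

def filter_words(token_chunks, min_corpus_count=4):
    """token_chunks: list of lists of lower-case word tokens, one list per chunk."""
    # Count occurrences of every word over the corpus as a whole.
    corpus_counts = Counter(w for chunk in token_chunks for w in chunk)
    # Drop stop words and words that occur fewer than min_corpus_count times.
    return [[w for w in chunk
             if w not in STOP_WORDS and corpus_counts[w] >= min_corpus_count]
            for chunk in token_chunks]
```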
Based on the frequencies of occurrence of the words over the chunks of the conversations in the corpus, processor 36 autonomously derives an optimal set of topics to which the chunks can be assigned, at a topic derivation step 62. The set of topics is “optimal” in the sense that it maximizes (for the given number of topics in the set) the likelihood that the chunks can be generated by that set of topics. Various algorithms that are known in the art can be used for this purpose, but the inventors have found, in particular, that Latent Dirichlet Allocation (LDA) gives good results while executing quickly even over large corpuses of conversations. LDA is parametric, in the sense that it accepts the target number of topics, n, as an input. In the inventors' experience, 15-30 topics is a useful target for analysis of corpuses containing hundreds to thousands of conversations in a particular domain.
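As one possible realization of step 62, the following sketch uses the open-source gensim library as a stand-in for the MALLET implementation referenced below; the function name and parameters are illustrative only.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

def derive_topics(token_chunks, num_topics=20, passes=10):
    """token_chunks: filtered token lists, one per chunk (see previous sketches)."""
    dictionary = Dictionary(token_chunks)
    bow_corpus = [dictionary.doc2bow(chunk) for chunk in token_chunks]
    lda = LdaModel(corpus=bow_corpus, id2word=dictionary,
                   num_topics=num_topics, passes=passes, random_state=0)
    return lda, dictionary

# Example use: derive 20 topics and list the most probable words of each.
# lda, dictionary = derive_topics(filtered_chunks, num_topics=20)
# for topic_id, words in lda.show_topics(num_topics=20, num_words=10,
#                                        formatted=False):
#     print(topic_id, [w for w, _ in words])
```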
The use of LDA in topic extraction is described in detail in the above-mentioned U.S. Provisional Patent Application 62/460,899. To summarize briefly, LDA attempts to find a model M, made up of a combination of n topics, that maximizes the likelihood L that the conversations d in the corpus were created by this combination. Formally speaking, LDA seeks to maximize:
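L(M) = Π_{d ∈ D} Π_{w_d ∈ d} p(w_d | M)

where D denotes the corpus of conversations and the inner product runs over the words w_d of each conversation d. (This is the standard form of the LDA corpus likelihood, reconstructed here from the definitions in the next paragraph.)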
Here p(w_d | M) is the probability that each word w_d in conversation d was created by the model. To compute this probability, each document is assumed to be a mixture of topics, with probabilities that sum up to 1. The probability that a document was created by that mixture is the probability that the sequence of words was generated by the mixture of topics. The overall probability is then the joint probability that the entire corpus was created by the set of topics. Computationally, it is generally easier to work with the negative log likelihood, commonly referred to as the “perplexity”:
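perplexity(M) = −Σ_{d ∈ D} Σ_{w_d ∈ d} log p(w_d | M)

(This, too, is a standard form reconstructed from the surrounding description; conventional definitions of perplexity additionally normalize by the total number of words and exponentiate, which does not change the minimizer.)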
Minimizing the perplexity necessarily maximizes the corresponding likelihood.
Step 62 can be carried out using LDA tools that are known in the art. A suitable program for this purpose is available in the Machine Learning for Language Toolkit (MALLET) offered by the University of Massachusetts at Amherst (available at mallet.cs.umass.edu). MALLET source code implementing LDA, which can be used in implementing the present method, is presented in a computer program listing submitted as an appendix hereto.
Because step 62 is carried out autonomously, using a predefined number of topics, in many cases some of the topics discovered will be more informative than others to people accessing the results. Users can optionally influence the topics that are detected by inputting, for any of the topics, one or more seed words. For example, a user may enter the words “cost” and “dollar” as seed words for one topic. These words are then used as a prior for that topic.
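One way to realize such seeding with a generic LDA implementation is to raise the prior weight of each seed word in its designated topic. The sketch below does so through gensim's per-topic word prior (eta); it is an illustration under that assumption, not the mechanism of the appendix code.

```python
import numpy as np
from gensim.models import LdaModel

def derive_topics_with_seeds(bow_corpus, dictionary, num_topics=20,
                             seeds=None, boost=50.0):
    """seeds: dict mapping topic index -> seed words, e.g. {3: ["cost", "dollar"]}."""
    # Start from a flat prior over the vocabulary for every topic.
    eta = np.full((num_topics, len(dictionary)), 1.0 / len(dictionary))
    for topic_id, words in (seeds or {}).items():
        for word in words:
            if word in dictionary.token2id:
                # Boost the prior of the seed word in its designated topic.
                eta[topic_id, dictionary.token2id[word]] *= boost
    return LdaModel(corpus=bow_corpus, id2word=dictionary, num_topics=num_topics,
                    eta=eta, passes=10, random_state=0)
```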
Once step 62 is completed, a set of topics has been derived. For each such topic, a probability distribution over the words is available.
After finding the optimal set of topics, processor 36 assigns titles to the topics, at a labeling step 64. The titles are useful subsequently in presenting segmentation results to a user of server 22. For this purpose, processor 36 identifies n-grams that are typical or characteristic of each topic by extracting n-grams from the chunks of the conversations in the corpus that statistically differentiate the chunks belonging to the topic from chunks belonging to the remaining topics in the set. One of the extracted n-grams is then selected as a title for the topic.
For example, at step 64, processor 36 can identify the top 20 n-grams (2≤n≤5) that differentiate each topic from all the rest of the text in the corpus. The differentiation can be carried out using a standard statistical G-test. For this purpose, each n-gram is scored based on a 2×2 matrix M consisting of the following four values:
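In a standard G-test contingency layout, which is assumed here for purposes of illustration, these would be the counts of the n-gram within the topic's segments, the n-gram outside them, all other n-grams within the topic's segments, and all other n-grams outside them. A minimal scoring sketch under that assumption:

```python
import math

def g_score(k11, k12, k21, k22):
    """G-test statistic for an assumed 2x2 contingency table of observed counts:
    k11: n-gram in the topic's segments      k12: n-gram elsewhere
    k21: other n-grams in the topic's segs   k22: other n-grams elsewhere
    """
    total = k11 + k12 + k21 + k22
    g = 0.0
    for obs, row, col in ((k11, k11 + k12, k11 + k21),
                          (k12, k11 + k12, k12 + k22),
                          (k21, k21 + k22, k11 + k21),
                          (k22, k21 + k22, k12 + k22)):
        expected = row * col / total
        if obs > 0:
            g += obs * math.log(obs / expected)
    return 2.0 * g

def score_ngram(count_in_topic, count_elsewhere,
                total_in_topic, total_elsewhere):
    """Score one n-gram for one topic from its counts inside and outside the topic."""
    return g_score(count_in_topic, count_elsewhere,
                   total_in_topic - count_in_topic,
                   total_elsewhere - count_elsewhere)
```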
Processor 36 filters the top-ranked n-grams for each topic (for example, the twenty n-grams with the highest G-test scores) in order to select the title of the topic. This filtering eliminates item overlap, by removing n-grams that are contained as a substring in another n-gram in the list. The highest ranking n-gram is determined to be the topic title; users of the system can later change this title if another one is preferred.
At the conclusion of the topic extraction process, processor 36 saves the topics in memory 38, along with the dictionary of words that were used in deriving the topics, at a topic storage step 66. For each topic, processor 36 saves the distributions of the words in the topic for future use in scoring and classifying conversation segments. Details of this scoring process are described below with reference to
Processor 36 breaks the conversation into chunks, at chunk division step 72. The size of the chunks divided in step 72 is not necessarily the same as that of the chunks divided in step 58 above. Specifically, the inventors have found that it is helpful to use a smaller chunk size, on the order of fifty words, at step 72, as these small chunks are more likely to contain only a single topic. Other considerations may also be applied in choosing chunk boundaries, such as pauses in the conversation.
Processor 36 sorts the chunks by topics, at a segmentation step 74. For this purpose, processor 36 computes, with respect to each chunk, a respective topic score for each topic. The score is based on the words appearing in the chunk as compared to the frequency of occurrence of each of the words in each of the topics. Details of this scoring process are described below with reference to
On the other hand, when none of the topic scores for a given chunk is found to exceed the threshold at step 74, processor 36 classifies the chunk topic as “unknown.” Typically, to maintain high precision of segment classification and thus high reliability in the analysis results that are presented to users, the threshold is set to a high value, for example 95% statistical confidence. Alternatively, different threshold levels may be set depending on application requirements.
Classification of neighboring chunks may also be used in refining results. Thus, for example, when a chunk with uncertain classification occurs within a sequence of other chunks that are assigned to a given topic with high confidence, this chunk may be incorporated into the same segment as the neighboring chunks.
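A minimal sketch of this grouping and refinement logic follows; the function name is illustrative, and each chunk is assumed to arrive with a topic label, or None when no score passed the threshold.

```python
def build_segments(chunk_topics):
    """chunk_topics: list of topic labels per chunk, with None for 'unknown'.
    Returns a list of (topic, first_chunk_index, last_chunk_index) segments."""
    labels = list(chunk_topics)
    # Absorb an uncertain chunk that sits between two chunks confidently
    # assigned to the same topic.
    for i in range(1, len(labels) - 1):
        if labels[i] is None and labels[i - 1] is not None \
                and labels[i - 1] == labels[i + 1]:
            labels[i] = labels[i - 1]
    # Group adjacent chunks that share a topic into segments.
    segments = []
    for i, topic in enumerate(labels):
        if segments and segments[-1][0] == topic:
            segments[-1] = (topic, segments[-1][1], i)
        else:
            segments.append((topic, i, i))
    return segments
```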
Processor 36 presents the results of analysis of the conversation on display 40, at an output step 76. The display shows the segmentation of the conversation and the distribution of the topics among the segments.
Horizontal bars 82, labeled “Jabulani” and “Alex” (an account executive and a customer, for example), show which of these two parties to the conversation was speaking at each given moment during the conversation. A “Topics” bar 84 shows the topic of conversation at each corresponding moment during the conversation. The topics are color-coded, according to the legend appearing at the bottom of screen 80. Segments of the conversation that could not be classified with sufficient confidence on the basis of the existing set of topics receive no color code.
The user viewing screen 80 can browse through the conversation using a cursor 86. For example, to look into how pricing was negotiated between Jabulani and Alex, the user can move the cursor horizontally to one of the segments labeled with the title “pricing” and then listen to or read the text of the conversation in this segment. Optionally, the user can also view a screenshot 88 of Jabulani's computer screen at each point in the conversation.
In the method shown in
In addition, processor 36 computes respective probability distributions of each of the words by topic, at a topic probability computation step 92. (These distributions are actually generated as a by-product of the inference of topics by the LDA algorithm at step 62.) The word/topic probability p_{i,j} for any given word j in topic i is the likelihood that any given occurrence of the word in the corpus will be in a segment belonging to that topic. Thus, words that are distinctively associated with a given topic i will have relatively high values of p_{i,j}, while other words will have much lower values. The topic probability values are normalized so that Σ_j p_{i,j} = 1 for any topic i.
Mathematically, however, this formulation of p_{i,j} can result in computational errors, particularly when a word in the dictionary fails to appear in a conversation. In order to avoid errors of this sort, processor 36 smooths the probability values that it computes at step 92. For example, smoothed topic probabilities p̂_{i,j} can be calculated by merging the raw word/topic probabilities p_{i,j} with the background probabilities q_j found at step 90:

p̂_{i,j} = α·p_{i,j} + (1 − α)·q_j
The inventors have found that setting α=0.5 gives good results, but other values may alternatively be chosen.
Based on the probabilities found at steps 90 and 92, processor 36 computes a score s_{i,j} for each word j with respect to each topic i, at a word score computation step 94. This score is based on a comparison between the specific (smoothed) topic probability of the word and the background probability. For efficient computation, the log ratio of the topic and background probabilities may be used in this comparison:

s_{i,j} = log p̂_{i,j} − log q_j
These scores are saved in memory 38 at step 66.
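The computations of steps 90-94 can be sketched as follows, assuming the word/topic probabilities p_{i,j} have been exported from the topic model as a matrix; the function and parameter names are illustrative.

```python
import numpy as np

def compute_word_scores(p_topic_word, corpus_word_counts, alpha=0.5):
    """p_topic_word: array of shape (num_topics, vocab_size); each row sums to 1.
    corpus_word_counts: array of shape (vocab_size,) of raw counts in the corpus.
    Returns the word-score matrix s_{i,j}."""
    # Background probability q_j of each word over the whole corpus (step 90).
    q = corpus_word_counts / corpus_word_counts.sum()
    # Smoothed topic probabilities (step 92): p_hat = alpha*p + (1 - alpha)*q.
    p_hat = alpha * p_topic_word + (1.0 - alpha) * q
    # Word scores (step 94): log ratio of smoothed topic probability to background.
    return np.log(p_hat) - np.log(q)
```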
In order to sort the chunks by topic at step 74, processor 36 computes, for each chunk, a respective topic score s_i for each topic i by combining the word scores s_{i,j} of the words occurring in the chunk with respect to the topic, at a topic score computation step 96. The topic score s_i for any given topic i can be computed simply by summing the word scores of the words occurring in the chunk, i.e., given a series of words t_1, . . . , t_n in a given chunk, the topic score for topic i computed for this chunk at step 96 will be:

s_i = Σ_{k=1..n} s_{i,t_k}
Based on the topic scores, processor 36 attempts to classify the chunk into a topic, at a score checking step 98. Specifically, if s_i for a given topic i is greater than a predefined threshold, processor 36 will classify the chunk as belonging to this topic, at a classification step 100. The inventors have found that for high confidence of classification, using the normalized probabilities and logarithmic scoring function defined above, a chunk should be classified in some topic i if s_i > 5. Otherwise, the classification of the chunk is marked as uncertain or unknown, at a non-classification step 102. Alternatively, other thresholds and classification criteria may be applied, depending on application requirements.
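A corresponding sketch of steps 96-102, reusing the word-score matrix from the previous sketch (the token-to-index mapping and threshold value are illustrative):

```python
import numpy as np

def classify_chunk(chunk_tokens, word_scores, token2id, threshold=5.0):
    """word_scores: array of shape (num_topics, vocab_size) of s_{i,j} values.
    Returns the index of the winning topic, or None if no score exceeds the threshold."""
    ids = [token2id[t] for t in chunk_tokens if t in token2id]
    if not ids:
        return None
    # Topic score s_i: sum of the word scores of the chunk's words for each topic i.
    topic_scores = word_scores[:, ids].sum(axis=1)
    best = int(np.argmax(topic_scores))
    return best if topic_scores[best] > threshold else None
```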
Further details and examples of the scoring and segmentation approach shown in
Use of Segmentation Results in Analytics and Prediction
The results of the sort of segmentation of conversations that is described above can be used in analyzing certain qualities of a conversation and possibly in predicting its outcome. For example, the location and distribution of conversation segments can be used to assess whether the conversation is following a certain desired pattern. Additionally or alternatively, the location and distribution of conversation segments can be used to predict whether the conversation is likely to result in a desired business outcome.
For such purposes, processor 36 (or another computer, which receives the segmentation results) uses the segment location, distribution and related statistics, such as the duration of a given topic, the time of its first and last occurrences in a call, and the mean or median of its associated segments in calls, to predict the expected likelihood that a conversation belongs to a certain group. Examples of useful groups of this sort are calls resulting in a desired business outcome, calls managed by top-performing sales representatives, calls marked as good calls by team members, or calls following a desired pattern.
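For example, a simple realization of such a predictor could turn each call's segmentation into a fixed-length feature vector and learn from calls whose group membership is already known. The sketch below uses scikit-learn's logistic regression purely for illustration; the feature set and names are assumptions, not the production model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def call_features(segments, call_duration, topics):
    """segments: list of (topic, start_sec, end_sec); topics: fixed, ordered topic list.
    Per topic: fraction of the call spent on it and (relative) time of first mention."""
    features = []
    for topic in topics:
        spans = [(s, e) for t, s, e in segments if t == topic]
        total = sum(e - s for s, e in spans)
        first = min((s for s, _ in spans), default=call_duration)
        features += [total / call_duration, first / call_duration]
    return np.array(features)

# Training: X stacks one feature vector per analyzed call; y marks calls that
# ended in the desired business outcome (e.g., a closed sale).
# model = LogisticRegression().fit(X, y)
# p_success = model.predict_proba([call_features(segs, duration, topics)])[0, 1]
```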
Based on these predictions, processor 36 provides insights and actionable recommendations for improving the sales process, for both the entire sales organization and for specific sales people or teams.
As one example, processor 36 can identify that conversations with a relatively long duration of segments titled “Product Differentiation,” which marks segments in which a company's solution is compared to those of competitors, are statistically predicted to be more successful in closing sales. Processor 36 reports this result to the user and runs an analysis of “Product Differentiation” segments for each of fifteen sales people. On this basis, processor 36 identifies that calls by John Doe have a short total duration of segments on the topic of “Product Differentiation,” and recommends that John discuss “Product Differentiation” more. Based on the analyzed calls, processor 36 provides this recommendation to several other team members but not to the rest of the team.
As another example, processor 36 can indicate, in real-time, that a sales representative is speaking too long, too little, too late or too early on a certain topic, such as speaking about the “Pricing” topic too early in a call. The rules for such alerts can be set up manually or automatically by comparing conversations against those of the best-performing sales representative.
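Such a rule can be as simple as a timing check against the running segmentation, for example as in the following sketch, in which the topic title and the threshold fraction are illustrative assumptions:

```python
def pricing_too_early(segments, call_duration, early_fraction=0.15):
    """segments: list of (topic_title, start_sec, end_sec) produced so far.
    Returns True if the 'Pricing' topic first appears in the opening part of the call."""
    starts = [start for title, start, _ in segments if title == "Pricing"]
    return bool(starts) and min(starts) < early_fraction * call_duration
```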
It will be appreciated that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and subcombinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art.
This application claims the benefit of U.S. Provisional Patent Application 62/460,899, filed Feb. 20, 2017, which is incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
6185527 | Petkovic et al. | Feb 2001 | B1 |
6324282 | McIllwaine et al. | Nov 2001 | B1 |
6363145 | Shaffer et al. | Mar 2002 | B1 |
6434520 | Kanevsky et al. | Aug 2002 | B1 |
6542602 | Elazar | Apr 2003 | B1 |
6603854 | Judkins et al. | Aug 2003 | B1 |
6724887 | Eilbacher et al. | Apr 2004 | B1 |
6741697 | Benson et al. | May 2004 | B2 |
6775377 | McIllwaine et al. | Aug 2004 | B2 |
6914975 | Koehler et al. | Jul 2005 | B2 |
6922466 | Peterson et al. | Jul 2005 | B1 |
6959080 | Dezonno et al. | Oct 2005 | B2 |
6970821 | Shambaugh et al. | Nov 2005 | B1 |
7010106 | Gritzer et al. | Mar 2006 | B2 |
7076427 | Scarano et al. | Jul 2006 | B2 |
7151826 | Shambaugh et al. | Dec 2006 | B2 |
7203285 | Blair | Apr 2007 | B2 |
7281022 | Gruhl et al. | Oct 2007 | B2 |
7305082 | Elazar | Dec 2007 | B2 |
7373608 | Lentz | May 2008 | B2 |
7457404 | Hession et al. | Nov 2008 | B1 |
7460659 | Shambaugh et al. | Dec 2008 | B2 |
7474633 | Halbraich et al. | Jan 2009 | B2 |
RE40634 | Blair et al. | Feb 2009 | E |
7548539 | Kouretas et al. | Jun 2009 | B2 |
7570755 | Williams et al. | Aug 2009 | B2 |
7577246 | Idan et al. | Aug 2009 | B2 |
7596498 | Basu et al. | Sep 2009 | B2 |
7599475 | Eilam et al. | Oct 2009 | B2 |
7613290 | Williams et al. | Nov 2009 | B2 |
7631046 | Litvin et al. | Dec 2009 | B2 |
7660297 | Fisher et al. | Feb 2010 | B2 |
7664641 | Pettay et al. | Feb 2010 | B1 |
7702532 | Vigil | Apr 2010 | B2 |
7716048 | Pereg et al. | May 2010 | B2 |
7728870 | Rudnik et al. | Jun 2010 | B2 |
7739115 | Pettay et al. | Jun 2010 | B1 |
RE41608 | Blair et al. | Aug 2010 | E |
7769622 | Reid et al. | Aug 2010 | B2 |
7770221 | Frenkel et al. | Aug 2010 | B2 |
7783513 | Lee | Aug 2010 | B2 |
7817795 | Gupta et al. | Oct 2010 | B2 |
7852994 | Blair et al. | Dec 2010 | B1 |
7853006 | Fama et al. | Dec 2010 | B1 |
7869586 | Conway et al. | Jan 2011 | B2 |
7873035 | Kouretas et al. | Jan 2011 | B2 |
7881216 | Blair | Feb 2011 | B2 |
7881471 | Spohrer et al. | Feb 2011 | B2 |
7882212 | Nappier et al. | Feb 2011 | B1 |
7899176 | Calahan et al. | Mar 2011 | B1 |
7899178 | Williams, II et al. | Mar 2011 | B2 |
7904481 | Deka et al. | Mar 2011 | B1 |
7925889 | Blair | Mar 2011 | B2 |
7949552 | Korenblit et al. | May 2011 | B2 |
7953219 | Freedman et al. | May 2011 | B2 |
7953621 | Fama et al. | May 2011 | B2 |
7965828 | Calahan et al. | Jun 2011 | B2 |
7966187 | Pettay et al. | Jun 2011 | B1 |
7966265 | Schalk et al. | Jun 2011 | B2 |
7991613 | Blair | Aug 2011 | B2 |
7995717 | Conway et al. | Aug 2011 | B2 |
8000465 | Williams et al. | Aug 2011 | B2 |
8005675 | Wasserblat et al. | Aug 2011 | B2 |
8050921 | Mark et al. | Nov 2011 | B2 |
8055503 | Scarano et al. | Nov 2011 | B2 |
8078463 | Wasserblat et al. | Dec 2011 | B2 |
8086462 | Alonso et al. | Dec 2011 | B1 |
8094587 | Halbraich et al. | Jan 2012 | B2 |
8094803 | Danson et al. | Jan 2012 | B2 |
8107613 | Gumbula | Jan 2012 | B2 |
8108237 | Bourne et al. | Jan 2012 | B2 |
8112298 | Bourne et al. | Feb 2012 | B2 |
RE43255 | Blair et al. | Mar 2012 | E |
RE43324 | Blair et al. | Apr 2012 | E |
8150021 | Geva et al. | Apr 2012 | B2 |
8160233 | Keren et al. | Apr 2012 | B2 |
8165114 | Halbraich et al. | Apr 2012 | B2 |
8180643 | Pettay et al. | May 2012 | B1 |
8189763 | Blair | May 2012 | B2 |
8194848 | Zernik et al. | Jun 2012 | B2 |
8199886 | Calahan et al. | Jun 2012 | B2 |
8199896 | Portman et al. | Jun 2012 | B2 |
8204056 | Dong et al. | Jun 2012 | B2 |
8204884 | Freedman et al. | Jun 2012 | B2 |
8214242 | Agapi et al. | Jul 2012 | B2 |
8219401 | Pettay et al. | Jul 2012 | B1 |
8243888 | Cho | Aug 2012 | B2 |
8255542 | Henson | Aug 2012 | B2 |
8275843 | Anantharaman et al. | Sep 2012 | B2 |
8285833 | Blair | Oct 2012 | B2 |
8290804 | Gong | Oct 2012 | B2 |
8306814 | Dobry et al. | Nov 2012 | B2 |
8326631 | Watson | Dec 2012 | B1 |
8340968 | Gershman | Dec 2012 | B1 |
8345828 | Williams et al. | Jan 2013 | B2 |
8396732 | Nies et al. | Mar 2013 | B1 |
8411841 | Edwards et al. | Apr 2013 | B2 |
8442033 | Williams et al. | May 2013 | B2 |
8467518 | Blair | Jun 2013 | B2 |
8526597 | Geva et al. | Sep 2013 | B2 |
8527269 | Kapur | Sep 2013 | B1 |
8543393 | Barnish | Sep 2013 | B2 |
8611523 | Conway et al. | Dec 2013 | B2 |
8649499 | Koster et al. | Feb 2014 | B1 |
8670552 | Keren et al. | Mar 2014 | B2 |
8675824 | Barnes et al. | Mar 2014 | B1 |
8706498 | George | Apr 2014 | B2 |
8761376 | Pande et al. | Apr 2014 | B2 |
8718266 | Williams et al. | May 2014 | B1 |
8719016 | Ziv et al. | May 2014 | B1 |
8724778 | Barnes et al. | May 2014 | B1 |
8725518 | Waserblat et al. | May 2014 | B2 |
8738374 | Jaroker | May 2014 | B2 |
8787552 | Zhao et al. | Jul 2014 | B1 |
8798254 | Naparstek et al. | Aug 2014 | B2 |
8806455 | Katz | Aug 2014 | B1 |
8861708 | Kopparapu et al. | Oct 2014 | B2 |
8903078 | Blair | Dec 2014 | B2 |
8909590 | Newnham et al. | Dec 2014 | B2 |
8971517 | Keren et al. | Mar 2015 | B2 |
8990238 | Goldfarb | Mar 2015 | B2 |
9020920 | Haggerty et al. | Apr 2015 | B1 |
9025736 | Meng et al. | May 2015 | B2 |
9053750 | Gibbon et al. | Jun 2015 | B2 |
9083799 | Loftus et al. | Jul 2015 | B2 |
9092733 | Sneyders et al. | Jul 2015 | B2 |
9135630 | Goldfarb et al. | Sep 2015 | B2 |
9148511 | Ye et al. | Sep 2015 | B2 |
9160853 | Daddi et al. | Oct 2015 | B1 |
9160854 | Daddi et al. | Oct 2015 | B1 |
9167093 | Geffen et al. | Oct 2015 | B2 |
9195635 | Liu | Nov 2015 | B2 |
9197744 | Sittin et al. | Nov 2015 | B2 |
9213978 | Melamed et al. | Dec 2015 | B2 |
9214001 | Rawle | Dec 2015 | B2 |
9232063 | Romano et al. | Jan 2016 | B2 |
9232064 | Skiba et al. | Jan 2016 | B1 |
9253316 | Williams et al. | Feb 2016 | B1 |
9262175 | Lynch et al. | Feb 2016 | B2 |
9269073 | Sammon et al. | Feb 2016 | B2 |
9270826 | Conway et al. | Feb 2016 | B2 |
9300790 | Gainsboro et al. | Mar 2016 | B2 |
9311914 | Wasserbat et al. | Apr 2016 | B2 |
9368116 | Ziv et al. | Jun 2016 | B2 |
9401145 | Ziv et al. | Jul 2016 | B1 |
9401990 | Teitelman et al. | Jul 2016 | B2 |
9407768 | Conway et al. | Aug 2016 | B2 |
9412362 | Iannone et al. | Aug 2016 | B2 |
9418152 | Nissan et al. | Aug 2016 | B2 |
9420227 | Shires et al. | Aug 2016 | B1 |
9432511 | Conway et al. | Aug 2016 | B2 |
9460394 | Krueger et al. | Oct 2016 | B2 |
9460722 | Sidi et al. | Oct 2016 | B2 |
9497167 | Weintraub et al. | Nov 2016 | B2 |
9503579 | Watson et al. | Nov 2016 | B2 |
9508346 | Achituv et al. | Nov 2016 | B2 |
9589073 | Yishay | Mar 2017 | B2 |
9596349 | Hernandez | Mar 2017 | B1 |
9633650 | Achituv et al. | Apr 2017 | B2 |
9639520 | Yishay | May 2017 | B2 |
9690873 | Yishay | Jun 2017 | B2 |
9699409 | Reshef | Jul 2017 | B1 |
9785701 | Yishay | Oct 2017 | B2 |
9936066 | Mammen et al. | Apr 2018 | B1 |
9947320 | Lembersky et al. | Apr 2018 | B2 |
9953048 | Weisman et al. | Apr 2018 | B2 |
9977830 | Romano et al. | May 2018 | B2 |
10079937 | Nowak et al. | Sep 2018 | B2 |
10134400 | Ziv et al. | Nov 2018 | B2 |
10516782 | Cartwright | Dec 2019 | B2 |
10522151 | Cartwright | Dec 2019 | B2 |
20040021765 | Kubala et al. | Feb 2004 | A1 |
20070129942 | Ban et al. | Jun 2007 | A1 |
20080154579 | Kummamuru | Jun 2008 | A1 |
20080300872 | Basu et al. | Dec 2008 | A1 |
20100104086 | Park | Apr 2010 | A1 |
20100217592 | Gupta | Aug 2010 | A1 |
20100246799 | Lubowich et al. | Sep 2010 | A1 |
20110258188 | AbdAlmageed | Oct 2011 | A1 |
20130144603 | Lord | Jun 2013 | A1 |
20130185308 | Itoh | Jul 2013 | A1 |
20130300939 | Chou et al. | Nov 2013 | A1 |
20140229471 | Galvin, Jr. et al. | Aug 2014 | A1 |
20140278377 | Peters et al. | Sep 2014 | A1 |
20150066935 | Peters et al. | Mar 2015 | A1 |
20150243276 | Cooper | Aug 2015 | A1 |
20160110343 | Kumar Rangarajan Sridhar | Apr 2016 | A1 |
20170039265 | Steele, Jr. | Feb 2017 | A1 |
20180150698 | Guttmann | May 2018 | A1 |
20180253988 | Kanayama | Sep 2018 | A1 |
20180329884 | Xiong | Nov 2018 | A1 |
Number | Date | Country |
---|---|---|
2005071666 | Aug 2005 | WO |
2012151716 | Nov 2012 | WO |
Entry |
---|
Anguera, “Speaker Independent Discriminant Feature Extraction for Acoustic Pattern-Matching”, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 4 pages, Mar. 25-30, 2012. |
Church et al., “Speaker Diarization: A Perspective on Challenges and Opportunities From Theory to Practice”, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4950-4954, Mar. 5-9, 2017. |
Hieu, “Speaker Diarization in Meetings Domain”, A thesis submitted to the School of Computer Engineering of the Nanyang Technological University, 149 pages, Jan. 2015. |
Shum et al., “Unsupervised Methods for Speaker Diarization: An Integrated and Iterative Approach”, IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, No. 10, pp. 2015-2028, Oct. 2013. |
Serrano, “Speaker Diarization and Tracking in Multiple-Sensor Environments”, Dissertation presented for the degree of Doctor of Philosophy, Universitat Politècnica de Catalunya, Spain, 323 pages, Oct. 2012. |
Friedland et al., “Multi-modal speaker diarization of real-world meetings using compressed-domain video features”, International Conference on Acoustics, Speech and Signal Processing (ICASSP'09), 4 pages, Apr. 19-24, 2009. |
Anguera, “Speaker Diarization: A Review of Recent Research”, First draft submitted to IEEE TASLP, 15 pages, Aug. 19, 2010. |
Balwani et al., “Speaker Diarization: A Review and Analysis”, International Journal of Integrated Computer Applications & Research (IJICAR), vol. 1, issue 3, 5 pages, 2015. |
Evans et al., “Comparative Study of Bottom-Up and Top-Down Approaches to Speaker Diarization”, IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, No. 2, pp. 382-392, Feb. 2012. |
Sasikala et al., “A Survey on Speaker Diarization Approach for Audio and Video Content Retrieval”, International Journal of Research and Computational Technology, vol. 5, issue 4, 8 pages, Dec. 2013. |
Wang et al., “Speaker Diarization with LSTM”, Electrical Engineering and Systems Science, IEEE International Conference on Acoustics, Speech and Signal Processing, Calgary, Canada, pp. 5239-5243, Apr. 15-20, 2018. |
Moattar et al., “A review on speaker diarization systems and approaches”, Speech Communication, vol. 54, No. 10, pp. 1065-1103, year 2012. |
International Application # PCT/IB2017/058049 search report dated Apr. 12, 2018. |
Number | Date | Country | |
---|---|---|---|
20180239822 A1 | Aug 2018 | US |
Number | Date | Country | |
---|---|---|---|
62460899 | Feb 2017 | US |