Aspects of the present invention relate to speech processing, indexing, and searching. In particular, aspects of the present invention relate to searching for a phrase containing at least one Out-Of-Vocabulary (OOV) word in an Automatic Speech Recognition (ASR) system such as a Large Vocabulary Continuous Speech Recognition (LVCSR) system or a similarly suitable system.
In many contexts, users of large collections of recorded audio (audio information) value the ability to quickly perform searches for words or phrases in the audio. For example, in the context of corporate contact centers (e.g., call-in centers), recorded conversations between customers and customer service representatives (or agents) can be searched and analyzed to identify trends in customer satisfaction or customer issues, to monitor the performance of various support agents, and to locate calls relating to particular issues. As another example, searchable recordings of classroom lectures would allow students to search for and replay discussions of topics of particular interest. Searchable voicemail messages would also allow users to quickly find audio messages containing particular words. As another example, searchable recordings of complex medical procedures (e.g., surgery) can be used to locate recordings of procedures involving uses of particular devices, choices of approaches during the procedure, and various complications.
Generally, Automatic Speech Recognition (ASR) systems, and Large Vocabulary Continuous Speech Recognition (LVCSR) transcription engines in particular, include three components: a set of Language Models (LM), a set of Acoustic Models (AM), and a decoder. The LM and AM are often trained by supplying audio files and their transcriptions (e.g., known, correct transcriptions) to a learning module. Generally, the LM is a Statistical LM (SLM). The training process uses a dictionary (or “vocabulary”) which maps recognized written words into sequences of sub-words (e.g., phonemes or syllables). During recognition of speech, the decoder analyzes an audio clip (e.g., an audio file) and outputs a sequence of recognized words.
A collection of audio files (e.g., calls in a call center or set of lectures in a class) can be made searchable by processing each audio file using an LVCSR engine to generate a text transcript file in which each written word in the transcript (generally) corresponds to a spoken word in the audio file. The resulting text can then be indexed by a traditional text-based search engine such as Apache Lucene™. A user can then query the resulting index (e.g., a search index database) to search the transcripts.
Generally, the recognized words in the output of a LVCSR engine are selected from (e.g., constrained to) the words contained in the dictionary (or “vocabulary”) of the ASR system. A word that is not in the vocabulary (an “out-of-vocabulary” or “OOV” word) may be recognized (e.g., with low confidence) as a word that is in the vocabulary. For example, if the word “Amarillo” is not in the vocabulary, the LVCSR engine may transcribe the word as “ambassador” in the output. As such, when using such ASR systems, it may be impossible for an end user to search the index for any instances of words that are not in the vocabulary.
One way to overcome this problem is to add the OOV word to the dictionary (i.e., to add the word to the vocabulary) and to generate a new LM (which can be a SLM or a constrained grammar) and then reprocess the audio files. However, such an approach would increase the delay in generating the search results due to the need to reprocess the audio corpus.
In other ASR systems, the output data is sub-word level recognition data, such as a phonetic transcription of the audio, rather than an LVCSR output or a similar word-based transcript. Such ASR systems typically do not include a word vocabulary. Instead, these engines provide a way to search for any sequence of characters. In this case, the search is performed by mapping the search phrase into a sequence of phonemes and searching for the given phonetic sequences in the phonetic transcription index. These engines are generally considered to be less accurate than LVCSR-based engines because the notion of words is not inherent to the recognition process, and the use of words (e.g., the meanings of the words) is generally useful for improving the accuracy of speech recognition.
Generally, combining word and phoneme levels of automatic speech recognition will not solve the accuracy problems of phonetic-based methods, given that the accuracy limitations of purely phonetics-based methods would still persist for queries that include at least one OOV word.
Aspects of embodiments of the present invention are directed toward systems and methods of searching spoken audio content given an LVCSR output, in which the search query contains at least one OOV word.
An embodiment of the present invention is directed to a spoken document retrieval system and method for fast processing of an Out-Of-Vocabulary (OOV) query in an audio file corpus that is analyzed by an LVCSR (Large Vocabulary Continuous Speech Recognition) or similar system. The “OOV query” is a user-provided search phrase of one or more words, at least one of which is OOV, where the vocabulary (the system's dictionary) is the list of distinct words on which the system has been trained. Given a query and an index of LVCSR results, the system distinguishes between OOV and IV (In-Vocabulary) words in the query, and generates, for each word, a list of anchors (i.e., places in the audio to look for words in the search query). These anchor locations are reprocessed in a modified recognition phase to generate new search events. Because the anchors span a relatively small part of each audio file (and hence a relatively small part of the audio corpus), the search is much faster than the conventional method of reprocessing the entire audio file corpus.
In one embodiment of the present invention, the spoken document retrieval system is used in the context of a contact center (e.g., a call center). In such circumstances, customers place calls to a company's contact center, and the contact center records the calls. An LVCSR-based ASR system processes the calls to generate output transcriptions and indexes these transcriptions. Later, users such as customer support agents and supervisors can search the indexed transcriptions for particular keywords such as types of issues encountered, place names, product names, error messages, error codes, etc.
However, embodiments of the present invention are not limited to conversations between people, but may be applied to any speech corpora from any source, such as medical dictation, television programs, podcasts, academic lectures, recorded presentations, etc.
According to one embodiment of the present invention, a method includes: receiving, on a computer system, a text search query, the query including one or more query words; generating, on the computer system, for each query word in the query, one or more anchor segments within a plurality of speech recognition processed audio files, the one or more anchor segments identifying possible locations containing the query word; post-processing, on the computer system, the one or more anchor segments, the post-processing including: expanding the one or more anchor segments; sorting the one or more anchor segments; and merging overlapping ones of the one or more anchor segments; and performing, on the computer system, speech recognition on the post-processed one or more anchor segments for instances of at least one of the one or more query words using a constrained grammar.
The audio files may be processed by a speech recognizer engine, and the generating, for each query word in the query, the one or more anchor segments of the processed audio files may include: determining if the query word is in a vocabulary of a learning model of the speech recognizer engine; when the query word is in the vocabulary, identifying one or more high confidence anchor segments corresponding to the query word; and when the query word is not in the vocabulary, generating a search list of one or more sub-words of the query word and identifying one or more anchor segments containing at least one of the one or more sub-words.
The generating the one or more anchor segments may further include: collecting low confidence words in the audio files, the low confidence words having word confidences below a threshold, and the identifying the one or more anchor segments corresponding to each of the sub-words may include searching the low confidence words for only the sub-words of the query word when the query word is not in the vocabulary.
The constrained grammar may include one or more out-of-vocabulary query words of the query, wherein each of the out-of-vocabulary query words is not in the vocabulary.
The searching may include computing one or more event confidence levels, each of the event confidence levels corresponding to a confidence that an anchor segment of the one or more anchor segments contains a particular query word of the one or more query words of the query.
The method may further include outputting, from the computer system, a result of the searching, wherein the result includes the instances of the one or more query words in the audio file, sorted by event confidence level.
The method may further include: applying, on the computer system, a utility function to each of the one or more anchor segments to compute one or more corresponding anchor utility values; and sorting, on the computer system, the one or more anchor segments in accordance with the one or more anchor utility values.
The searching the one or more post-processed anchor segments may only search the one or more post-processed anchor segments having best anchor utility values of the one or more anchor utility values.
The expanding the one or more anchor segments may include: for each query word in the query: counting a first number of characters in the query before the query word and a second number of characters after the query word; multiplying the first number of characters by an average character duration to obtain a first expansion amount; and multiplying the second number of characters by the average character duration to obtain a second expansion amount; and for each anchor segment, each anchor segment being identified by an anchor word, a start time, and an end time: subtracting the first expansion amount and a first constant expansion duration from the start time; and adding the second expansion amount and a second constant expansion duration to the end time.
According to another embodiment of the present invention, a system includes a computer system including a processor, memory, and storage, the system being configured to: receive a text search query, the query including one or more query words; generate, for each query word in the query, one or more anchor segments within a plurality of speech recognition processed audio files, the one or more anchor segments identifying possible locations containing the query word; post-process the one or more anchor segments, the post-process including: expanding the one or more anchor segments; sorting the one or more anchor segments; and merging overlapping ones of the one or more anchor segments; and perform speech recognition on the one or more post-processed anchor segments for instances of at least one of the one or more query words using a constrained grammar.
The system may be further configured to process the audio files using a speech recognizer engine, and wherein the system may be further configured to generate, for each query word in the query, the one or more anchor segments of the processed audio files by: determining if the query word is in a vocabulary of a learning model of the speech recognizer engine; when the query word is in the vocabulary, identifying one or more high confidence anchor segments corresponding to the query word; and when the query word is not in the vocabulary, generating a search list of one or more sub-words of the query word and identifying one or more anchor segments corresponding to each of the one or more sub-words.
The system may be further configured to collect low confidence words in the audio files, the low confidence words having word confidences below a threshold, and wherein the identifying the one or more anchor segments corresponding to each of the sub-words may include searching the low confidence words for only the sub-words of the query word when the query word is not in the vocabulary.
The constrained grammar may include one or more out-of-vocabulary query words of the query, wherein each of the out-of-vocabulary query words is not in the vocabulary.
The system may be further configured to search the one or more post-processed anchor segments by computing one or more event confidence levels, each of the event confidence levels corresponding to a confidence that an anchor segment of the one or more anchor segments contains a particular query word of the one or more query words of the query.
The system may be further configured to output a result of the search, wherein the result includes the instances of the query words in the audio file, sorted by event confidence level.
The system may be further configured to: apply a utility function to each of the one or more anchor segments to compute one or more corresponding anchor utility values; and sort the one or more anchor segments in accordance with the one or more anchor utility values.
The system may be configured to search the one or more post-processed anchor segments by only searching the one or more anchor segments having best anchor utility values of the one or more anchor utility values.
The system may be further configured to expand the one or more anchor segments by: for each query word in the query: counting a first number of characters in the query before the query word and a second number of characters after the query word; multiplying the first number of characters by an average character duration to obtain a first expansion amount; and multiplying the second number of characters by the average character duration to obtain a second expansion amount; and for each anchor segment, each anchor segment being identified by an anchor word, a start time, and an end time: subtracting the first expansion amount and a first constant expansion duration from the start time; and adding the second expansion amount and a second constant expansion duration to the end time.
According to another embodiment of the present invention, a system includes: means for receiving a text search query, the query including one or more query words; means for generating, for each query word in the query, one or more anchor segments within a plurality of speech recognition processed audio files, the one or more anchor segments identifying possible locations containing the query word; means for post-processing the one or more anchor segments including: means for expanding the one or more anchor segments; means for sorting the one or more anchor segments; and means for merging overlapping ones of the one or more anchor segments; and means for searching the post-processed one or more anchor segments for instances of at least one of the one or more query words using a constrained grammar.
The accompanying drawings, together with the specification, illustrate exemplary embodiments of the present invention, and, together with the description, serve to explain the principles of the present invention.
In the following detailed description, only certain exemplary embodiments of the present invention are shown and described, by way of illustration. As those skilled in the art would recognize, the invention may be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. Like reference numerals designate like elements throughout the specification.
As described herein, various applications and aspects of the present invention may be implemented in software, firmware, hardware, and combinations thereof. When implemented in software, the software may operate on a general purpose computing device such as a server, a desktop computer, a tablet computer, a smartphone, or a personal digital assistant. Such a general purpose computer includes a general purpose processor and memory.
Some embodiments of the present invention will be described in the context of a contact center. However, embodiments of the present invention are not limited thereto and may also be used under other conditions involving searching recorded audio, such as in computer-based education systems, voice messaging systems, medical transcripts, or any speech corpora from any source.
According to one exemplary embodiment, the contact center 102 includes resources (e.g. personnel, computers, and telecommunication equipment) to enable delivery of services via telephone or other communication mechanisms. Such services may vary depending on the type of contact center, and may range from customer service to help desk, emergency response, telemarketing, order taking, and the like.
Customers, potential customers, or other end users (collectively referred to as customers) desiring to receive services from the contact center 102 may initiate inbound calls to the contact center 102 via their end user devices 10a-10c (collectively referenced as 10). Each of the end user devices 10 may be a communication device conventional in the art, such as, for example, a telephone, wireless phone, smart phone, personal computer, electronic tablet, and/or the like. Users operating the end user devices 10 may initiate, manage, and respond to telephone calls, emails, chats, text messaging, web-browsing sessions, and other multi-media transactions.
Inbound and outbound calls from and to the end user devices 10 may traverse a telephone, cellular, and/or data communication network 14 depending on the type of device that is being used. For example, the communications network 14 may include a private or public switched telephone network (PSTN), local area network (LAN), private wide area network (WAN), and/or public wide area network such as, for example, the Internet. The communications network 14 may also include a wireless carrier network including a code division multiple access (CDMA) network, global system for mobile communications (GSM) network, and/or any 3G or 4G network conventional in the art.
According to one exemplary embodiment, the contact center 102 includes a switch/media gateway 12 coupled to the communications network 14 for receiving and transmitting calls between end users and the contact center 102. The switch/media gateway 12 may include a telephony switch configured to function as a central switch for agent level routing within the center. In this regard, the switch 12 may include an automatic call distributor, a private branch exchange (PBX), an IP-based software switch, and/or any other switch configured to receive Internet-sourced calls and/or telephone network-sourced calls. According to one exemplary embodiment of the invention, the switch is coupled to a call server 18 which may, for example, serve as an adapter or interface between the switch and the remainder of the routing, monitoring, and other call-handling systems of the contact center 102.
The contact center 102 may also include a multimedia/social media server 24 for engaging in media interactions other than voice interactions with the end user devices 10 and/or web servers 32. The media interactions may be related, for example, to email, vmail (voice mail through email), chat, video, text-messaging, web, social media, screen-sharing, and the like. The web servers 32 may include, for example, social interaction site hosts for a variety of known social interaction sites to which an end user may subscribe, such as, for example, Facebook, Twitter, and the like. The web servers may also provide web pages for the enterprise that is being supported by the contact center 102. End users may browse the web pages and get information about the enterprise's products and services. The web pages may also provide a mechanism for contacting the contact center 102, via, for example, web chat, voice call, email, web real time communication (WebRTC), or the like.
According to one exemplary embodiment of the invention, the switch is coupled to an interactive voice response (IVR) server 34. The IVR server 34 is configured, for example, with an IVR script for querying customers on their needs. For example, a contact center for a bank may tell callers, via the IVR script, to “press 1” if they wish to get an account balance. If this is the case, through continued interaction with the IVR, customers may complete service without needing to speak with an agent.
If the call is to be routed to an agent, the call is forwarded to the call server 18 which interacts with a routing server 20 for finding an appropriate agent for processing the call. The call server 18 may be configured to process PSTN calls, VoIP calls, and the like. For example, the call server 18 may include a session initiation protocol (SIP) server for processing SIP calls.
In one example, while an agent is being located and until such agent becomes available, the call server may place the call in, for example, a call queue. The call queue may be implemented via any data structure conventional in the art, such as, for example, a linked list, array, and/or the like. The data structure may be maintained, for example, in buffer memory provided by the call server 18.
Once an appropriate agent is available to handle a call, the call is removed from the call queue and transferred to a corresponding agent device 38a-38c (collectively referenced as 38). Collected information about the caller and/or the caller's historical information may also be provided to the agent device for aiding the agent in better servicing the call. In this regard, each agent device 38 may include a telephone adapted for regular telephone calls, VoIP calls, and the like. The agent device 38 may also include a computer for communicating with one or more servers of the contact center 102 and performing data processing associated with contact center operations, and for interfacing with customers via a variety of communication mechanisms such as chat, instant messaging, voice calls, and the like.
The selection of an appropriate agent for routing an inbound call may be based, for example, on a routing strategy employed by the routing server 20, and further based on information about agent availability, skills, and other routing parameters provided, for example, by a statistics server 22.
The multimedia/social media server 24 may also be configured to provide, to an end user, a mobile application for downloading onto the end user device 10. The mobile application may provide user configurable settings that indicate, for example, whether the user is available, not available, or availability is unknown, for purposes of being contacted by a contact center agent. The multimedia/social media server 24 may monitor the status settings and send updates to the aggregation module each time the status information changes.
The contact center 102 may also include a reporting server 28 configured to generate reports from data aggregated by the statistics server 22. Such reports may include near real-time reports or historical reports concerning the state of resources, such as, for example, average waiting time, abandonment rate, agent occupancy, and the like. The reports may be generated automatically or in response to specific requests from a requestor (e.g. agent/administrator, contact center application, and/or the like).
According to one exemplary embodiment of the invention, the routing server 20 is enhanced with functionality for managing back-office/offline activities that are assigned to the agents. Such activities may include, for example, responding to emails, responding to letters, attending training seminars, or any other activity that does not entail real time communication with a customer. Once assigned to an agent, an activity may be pushed to the agent, or may appear in the agent's workbin 26a-26c (collectively referenced as 26) as a task to be completed by the agent. The agent's workbin may be implemented via any data structure conventional in the art, such as, for example, a linked list, array, and/or the like. The workbin may be maintained, for example, in buffer memory of each agent device 38.
According to one exemplary embodiment of the invention, the contact center 102 also includes one or more mass storage devices 30 for storing different databases relating to agent data (e.g. agent profiles, schedules, etc.), customer data (e.g. customer profiles), interaction data (e.g. details of each interaction with a customer, including reason for the interaction, disposition data, time on hold, handle time, etc.), and the like. According to one embodiment, some of the data (e.g. customer profile data) may be provided by a third party database such as, for example, a third party customer relations management (CRM) database. The mass storage device may take the form of a hard disk or disk array as is conventional in the art.
According to one embodiment of the present invention, the contact center 102 also includes a call recording server 40 for recording the audio of calls conducted through the contact center 102, a call recording storage server 42 for storing the recorded audio, a speech analytics server 44 configured to process and analyze audio collected from the contact center 102, and a speech index database 46 for providing an index of the analyzed audio.
The various servers of
Referring to
The user interface shown in
Referring to
Referring to
In other embodiments of the present invention, the media server 24 merely stores the recorded audio in the call recording storage server 42 without sending a second copy directly to the speech analytics server.
When the speech analytics server 44 receives the audio data, it performs speech analytics on the audio data (e.g., generating transcripts and/or an LVCSR output) and indexes the result. The speech analytics server 44 stores metadata and indexes about the call recordings in the speech index database 46, and a user can search and/or query the speech index database 46 for audio using the search user interface (see, e.g.,
Referring to
In some embodiments having premise deployment for call recording, the user interface for accessing call recording is the search user interface as shown, for example, in
Referring to
When a new audio clip is received by the speech analytics server 44, the speech analytics server performs standard LVCSR analysis of the audio data. The LVCSR analysis of the data produces an LVCSR text output, which includes both a transcript of the audio and a confidence level for each of the words in the text output. For simplicity, an LVCSR output is generally represented as a set of 4-tuples of word, start time, end time, and word confidence: LVCSR = {(wj, sj, ej, cj)}. Words that are in the vocabulary of the LVCSR system are generally recognized with high confidence, while spoken words that correspond to OOV words are mistakenly recognized as their closest match from among the words in the dictionary, usually with low word confidence.
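For illustration only (this is not part of the claimed system), the 4-tuple representation above might be modeled as follows; the type and field names are assumptions introduced here for readability, and the numbers are invented sample values.

```python
from typing import List, NamedTuple

class LvcsrWord(NamedTuple):
    """One recognized word from the LVCSR output: (w_j, s_j, e_j, c_j)."""
    word: str          # w_j: recognized word, always drawn from the vocabulary
    start_sec: float   # s_j: start time of the word within the audio, in seconds
    end_sec: float     # e_j: end time of the word within the audio, in seconds
    confidence: float  # c_j: word confidence

# A transcript is the ordered list of 4-tuples for one audio file.
LvcsrOutput = List[LvcsrWord]

# Example: the OOV spoken word "Amarillo" surfaces as a low-confidence IV word.
example_output: LvcsrOutput = [
    LvcsrWord("calling", 0.40, 0.85, 0.93),
    LvcsrWord("from", 0.85, 1.02, 0.95),
    LvcsrWord("ambassador", 1.02, 1.70, 0.31),  # likely a misrecognized OOV word
]
```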
The vocabulary of the LVCSR engine is the set of distinct words that appeared in the transcription files that were used to train its associated language model. This vocabulary is the largest theoretical set of words that can be recognized by an LVCSR engine using its associated language model. The vocabulary may be denoted herein as VLM. In practice, not all of the words in VLM will appear in the LVCSR output because, among other reasons, many of them have low prior probability, the true spoken vocabulary is not as large as the LM's, or the recognition quality is not high.
In one embodiment, the LVCSR output vocabulary VLVCSR is used, and words that are not contained in it are treated as OOV. In that case, VLVCSR ⊂ VLM.
The LVCSR output is stored in the speech index database 46 and an index of words in the speech index database 46 is also updated with the LVCSR output. The index of words includes references (e.g., URIs) to audio files that contain the identified word along with timestamps indicating the times within the audio files at which the words occur (e.g., the index may be a mapping from word wj to one or more audio files {(audio_URIk, timestampk)}).
Searching for a word w in a collection of audio files indexed using an LVCSR engine generally means finding all the 4-tuples having the word w as its first element. However, OOV words will not be correctly recognized by the LVCSR engine and will not be found in a search because these words will not exist in the index.
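As a purely illustrative sketch (the actual index structure and search engine are not specified beyond the mapping described above), the word-level index and an IV word lookup might look like the following; the function names, URI, and sample values are assumptions.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

# word w_j -> list of (audio_URI_k, timestamp_k) postings, as described above.
WordIndex = Dict[str, List[Tuple[str, float]]]

def index_transcript(index: WordIndex, audio_uri: str,
                     transcript: List[Tuple[str, float, float, float]]) -> None:
    """Add every (word, start, end, confidence) 4-tuple to the word-level index."""
    for word, start_sec, _end_sec, _confidence in transcript:
        index[word].append((audio_uri, start_sec))

def search_word(index: WordIndex, word: str) -> List[Tuple[str, float]]:
    """Searching for a word w: return every indexed (audio file, time) location.
    An OOV word has no postings, which is exactly the problem addressed here."""
    return index.get(word, [])

index: WordIndex = defaultdict(list)
index_transcript(index, "calls/call-0001.wav",
                 [("calling", 0.40, 0.85, 0.93), ("ambassador", 1.02, 1.70, 0.31)])
print(search_word(index, "ambassador"))  # [('calls/call-0001.wav', 1.02)]
print(search_word(index, "amarillo"))    # []  (OOV: never appears in the index)
```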
According to aspects of embodiments of the present invention, the LVCSR text output, which is composed of a set of words with associated start times, end times, and word confidences, is used to find the likely locations of OOV words within the audio; those locations are then reprocessed to determine whether they contain the searched-for OOV words. In other words, embodiments of the present invention generate a set of anchor segments to search within.
In act 220, a set of anchor segments (A) is generated for the words in the query Q, where each anchor segment identifies a location within the collection of audio files corresponding to a word in the query. A method of generating the anchor segments according to one embodiment of the present invention is described in more detail below in reference to
Referring to
If wi is an OOV word, in act 230, a list of sub-word units of the word wi is generated. The sub-word units may be, for example, morphemes, syllables, phonemes, or a sequence of phonemes. The LVCSR output text is searched in act 234 for each sub-word of wi to generate a set of out-of-vocabulary anchors AOOV. In some embodiments, in act 232, the search of the LVCSR output text is limited to words having low confidence (e.g., word confidences below a given threshold or between two given thresholds).
In one embodiment, searching the LVCSR text output is performed on a preprocessed index, e.g., a free-text index. IV words can be searched on a word-level index and OOV words can be searched on a sub-word level index. Without loss of generality, in one embodiment, the sub-word index is an index of the phoneme transcription of the LVCSR text output. In another embodiment, the OOV words can be searched in the same word-level free text index if the sub-words are word characters (e.g., instead of phonemes).
For example, if the OOV word to be searched for is “Honda” and the sub-word index is an index of the phoneme transcription of the LVCSR text output, then the phoneme sub-words of “Honda” (/h/Q/n/, /Q/n/d/, /n/d/@/) will be searched for in the phoneme transcription.
On the other hand, if the sub-word index is the word-level free text index, then the strings “hon”, “ond”, and “nda” can be searched for in the free text index.
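A minimal sketch of the sub-word generation illustrated by the “Honda” example follows; the trigram length and helper names are assumptions, and a grapheme-to-phoneme step is assumed rather than implemented (the phoneme sequence is supplied directly in the example).

```python
from typing import List, Sequence

def character_subwords(word: str, n: int = 3) -> List[str]:
    """Character n-grams of an OOV query word, for the word-level free-text index.
    character_subwords("honda") -> ['hon', 'ond', 'nda']."""
    word = word.lower()
    return [word[i:i + n] for i in range(len(word) - n + 1)]

def phoneme_subwords(phonemes: Sequence[str], n: int = 3) -> List[str]:
    """Phoneme n-grams of an OOV query word, for the phoneme-level index.
    The phoneme sequence itself would come from a grapheme-to-phoneme step,
    which is assumed here rather than implemented."""
    return ["/".join(phonemes[i:i + n]) for i in range(len(phonemes) - n + 1)]

print(character_subwords("Honda"))                  # ['hon', 'ond', 'nda']
print(phoneme_subwords(["h", "Q", "n", "d", "@"]))  # ['h/Q/n', 'Q/n/d', 'n/d/@']
```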
In act 236, all the found locations (AOOV or AIV) are added to the list of anchors A (A←A ∪ AOOV ∪ AIV).
The query Q is then checked in act 238 to determine if there are more query words wi to be processed. If there are, then the process returns to act 224 to repeat the process with the next word wi. If all of the words have been processed, then the accumulated set of anchors A is returned in act 239.
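The anchor-generation loop of acts 224 through 239 might be sketched as follows; this is a simplification and not the claimed implementation. It assumes the transcript is a list of (word, start, end, confidence) tuples, reuses the character_subwords helper sketched above, and uses illustrative confidence thresholds.

```python
from typing import List, Set, Tuple

Word = Tuple[str, float, float, float]  # (word, start_sec, end_sec, confidence)
Anchor = Tuple[str, float, float]       # (anchor word, start_sec, end_sec)

LOW_CONF = 0.5   # illustrative threshold for "low confidence" words (act 232)
HIGH_CONF = 0.8  # illustrative threshold for high-confidence IV matches

def generate_anchors(query_words: List[str], vocabulary: Set[str],
                     transcript: List[Word]) -> List[Anchor]:
    """Acts 224-239: accumulate the anchor set A for every query word."""
    anchors: List[Anchor] = []
    for q in query_words:
        if q in vocabulary:
            # IV word: anchor on high-confidence exact matches (A_IV).
            anchors += [(w, s, e) for w, s, e, c in transcript
                        if w == q and c >= HIGH_CONF]
        else:
            # OOV word: anchor on low-confidence words that contain any sub-word (A_OOV).
            subwords = character_subwords(q)
            anchors += [(w, s, e) for w, s, e, c in transcript
                        if c < LOW_CONF and any(sw in w for sw in subwords)]
    return anchors
```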
Post Processing of Anchor Segments
Referring back to
As such, the left and right (start and end) edges of each of the anchor segments aj=(wj, sj, ej) are expanded in order to increase the likelihood that the anchor segment will contain an entire searched-for phrase. To calculate the left (start time) expansion, the number of characters Li in the query before the anchor's word is multiplied by the average spoken character duration of the language μ (e.g., the average character duration of words in the dictionary). In certain embodiments, the average character duration of the caller is computed, or another best known value may be calculated or looked up from storage. A constant constl is then added to the dynamically computed expansion value.
Similarly, the right expansion is computed by multiplying the number of characters Ri to the right of the anchor by μ and adding a constant constr. In some embodiments, constl=constr.
In short, for each of the anchor segments aj=(wj, sj, ej), the sj and ej values are expanded such that the expanded segment is (wj, sj−(Li×μ)−cl, ej+(Ri×μ)+cr), where cl and cr are the left and right constants (constl and constr), respectively.
Referring to
In act 252, for each anchor aj of the anchors A (where aj=(wj, sj, ej)), the start time sj is shifted (decreased) by the left expansion expl,j corresponding to wj in act 254 and the end time ej is shifted (increased) by the right expansion expr,j corresponding to wj in act 256 so that the expanded anchor aj has the form (wj, sj−expl,j, ej+expr,j). In act 258, the set of anchors A is checked to determine if there are more anchors aj to be post-processed. If there are, then the process of acts 254 and 256 is repeated for the remaining anchors. If not, then the expanded anchors are returned in act 259.
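A minimal sketch of the expansion computation of acts 252 through 259 follows; the value of μ and the two constants are illustrative placeholders, and spaces are not counted toward Li and Ri in this simplification.

```python
from typing import List, Tuple

Anchor = Tuple[str, float, float]  # (anchor word w_j, start s_j, end e_j)

AVG_CHAR_SEC = 0.08  # illustrative value of the average character duration mu
CONST_L = 0.25       # illustrative const_l, in seconds
CONST_R = 0.25       # illustrative const_r, in seconds

def expansions_for_query(query_words: List[str]) -> List[Tuple[float, float]]:
    """For each query word w_i, the (left, right) expansions in seconds:
    exp_l = L_i * mu + const_l and exp_r = R_i * mu + const_r."""
    expansions = []
    for i in range(len(query_words)):
        chars_before = len("".join(query_words[:i]))      # L_i
        chars_after = len("".join(query_words[i + 1:]))   # R_i
        expansions.append((chars_before * AVG_CHAR_SEC + CONST_L,
                           chars_after * AVG_CHAR_SEC + CONST_R))
    return expansions

def expand_anchor(anchor: Anchor, exp_left: float, exp_right: float) -> Anchor:
    """Acts 254 and 256: shift s_j left and e_j right by the per-word expansions."""
    word, start, end = anchor
    return (word, max(0.0, start - exp_left), end + exp_right)

# Anchors generated for query word index 2 ("amarillo") use that word's expansions.
query = ["calling", "from", "amarillo", "texas"]
exp_l, exp_r = expansions_for_query(query)[2]
print(expand_anchor(("ambassador", 1.02, 1.70), exp_l, exp_r))
```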
Referring again to
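The post-processing summarized earlier also sorts the expanded anchors and merges overlapping ones so that the same stretch of audio is not reprocessed more than once. A minimal sketch, under the assumption that all anchors belong to a single audio file, follows.

```python
from typing import List, Tuple

Anchor = Tuple[str, float, float]  # (anchor word, start_sec, end_sec)

def merge_overlapping(anchors: List[Anchor]) -> List[Tuple[float, float]]:
    """Sort the expanded anchors by start time and merge any that overlap,
    so that each stretch of audio is reprocessed at most once."""
    spans = sorted((s, e) for _, s, e in anchors)
    merged: List[Tuple[float, float]] = []
    for start, end in spans:
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

print(merge_overlapping([("ambassador", 0.0, 2.35), ("armadillo", 1.90, 3.10),
                         ("texas", 5.00, 5.60)]))
# [(0.0, 3.1), (5.0, 5.6)]
```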
Reprocessing of Audio
In act 280, for each anchor segment aj from the above set of expanded anchors A, audio recognition is run on the anchor segment to produce search events. The recognition process can utilize the recognition technology described, for example, in U.S. Pat. No. 7,487,094 “System and method of call classification with context modeling based on composite words.” Alternatively, the process can be performed with other suitable phrase recognition technologies that can determine whether the query word or words were spoken at the anchor location in the audio. The above method can be extended to perform recognition on multiple terms by searching the audio for each term separately or concurrently (e.g., simultaneously).
The recognition process takes a word or phrase (e.g., search terms in the query Q) and an audio segment (e.g., an anchor segment) and returns an event confidence level representing the confidence that the supplied audio segment contains the supplied word or phrase. As such, each of the anchor segments in the expanded anchors A is searched to determine if these segments contain the words or phrases in the query Q. Search times can be shortened because, for example, a reduced set of words (referred to as a “constrained grammar”) that includes the query words is searched for in a reduced portion of the audio collection (e.g., only the previously identified anchor segments are searched). See U.S. Pat. No. 7,487,094 “System and method of call classification with context modeling based on composite words” for additional details on constrained grammars.
An event confidence is then computed for each event (e.g., each potential match) and events having an event confidence above a particular threshold are considered as hits (i.e., places in the audio that contain the searched-for query terms) and these search results are returned in act 290. When displayed, the search results can then be sorted according to their event confidence, with the highest likelihood matches shown first.
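A sketch of the event-confidence thresholding and sorting of acts 280 and 290 follows; the recognize_phrase callable is a stand-in for the constrained-grammar recognizer (not implemented here), and the threshold value is illustrative.

```python
from typing import Callable, List, Tuple

Anchor = Tuple[str, float, float]        # (anchor word, start_sec, end_sec)
Event = Tuple[str, float, float, float]  # (audio URI, start, end, event confidence)

EVENT_THRESHOLD = 0.6  # illustrative threshold on the event confidence

def search_anchors(audio_uri: str, anchors: List[Anchor], phrase: str,
                   recognize_phrase: Callable[[str, float, float, str], float]
                   ) -> List[Event]:
    """Acts 280 and 290: run phrase recognition over each expanded anchor,
    keep events above the threshold, and sort the hits by event confidence."""
    events: List[Event] = []
    for _word, start, end in anchors:
        confidence = recognize_phrase(audio_uri, start, end, phrase)  # stand-in
        if confidence >= EVENT_THRESHOLD:
            events.append((audio_uri, start, end, confidence))
    return sorted(events, key=lambda event: event[3], reverse=True)
```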
The procedure described above in
In one embodiment of the invention, it is possible for the system to receive as a search query a phrase composed of only In-Vocabulary (IV) words. In this case, the final recognition accuracy of the search may be improved over a classic LVCSR index-based search.
In another embodiment of the invention, the resulting anchor set is sorted according to a utility function so that the more promising anchors are searched first. Such sorting can also be used to bound the search time by searching only the top k anchors from the sorted list.
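A sketch of the utility-based top-k selection follows; the utility function itself is not specified by this description, so the "shortest anchor first" ranking shown here is purely an illustrative placeholder.

```python
from typing import Callable, List, Tuple

Anchor = Tuple[str, float, float]  # (anchor word, start_sec, end_sec)

def top_k_anchors(anchors: List[Anchor],
                  utility: Callable[[Anchor], float], k: int) -> List[Anchor]:
    """Rank anchors by a utility function and keep only the k most promising,
    which bounds the time spent in the reprocessing step."""
    return sorted(anchors, key=utility, reverse=True)[:k]

# Illustrative utility only: prefer shorter anchors (less audio to reprocess).
def shortest_first(anchor: Anchor) -> float:
    return -(anchor[2] - anchor[1])

candidates = [("ambassador", 0.0, 2.35), ("armadillo", 10.2, 15.8)]
print(top_k_anchors(candidates, shortest_first, k=1))  # keeps the shorter anchor
```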
Embodiments of the invention can be practiced as methods or systems. Computer devices or systems including, for example, a microprocessor, memory, a network communications device, and a mass storage device can be used to execute the processes described above in an automated or semi-automated fashion. In other words, the above processes can be coded as computer executable code and processed by the computer device or system.
It should also be appreciated from the above that various structures and functions described herein may be incorporated into a variety of apparatus. In some embodiments, hardware components such as processors, controllers, and/or logic may be used to implement the described components or circuits. In some embodiments, code such as software or firmware executing on one or more processing devices may be used to implement one or more of the described operations or components.
While the present invention has been described in connection with certain exemplary embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims, and equivalents thereof.
This application is a continuation of U.S. patent application Ser. No. 13/886,205, filed on May 2, 2013, now U.S. Pat. No. 9,542,936, which claims the benefit of U.S. Provisional Patent Application No. 61/747,242, filed in the United States Patent and Trademark Office on Dec. 29, 2012, the contents of which are incorporated herein by reference. The U.S. patent application Ser. No. 13/886,205 also claims the benefit of U.S. Provisional Patent Application No. 61/791,581, filed in the United States Patent and Trademark Office on Mar. 15, 2013, the content of which is incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
4852180 | Levinson | Jul 1989 | A |
5625748 | McDonough et al. | Apr 1997 | A |
6076059 | Glickman et al. | Jun 2000 | A |
6212178 | Beck et al. | Apr 2001 | B1 |
6308154 | Williams et al. | Oct 2001 | B1 |
6363346 | Walters | Mar 2002 | B1 |
6404857 | Blair et al. | Jun 2002 | B1 |
6542602 | Elazar | Apr 2003 | B1 |
6594629 | Basu et al. | Jul 2003 | B1 |
6678658 | Hogden et al. | Jan 2004 | B1 |
6687671 | Gudorf et al. | Feb 2004 | B2 |
6721416 | Farrell | Apr 2004 | B1 |
6724887 | Eilbacher et al. | Apr 2004 | B1 |
6895083 | Bers et al. | May 2005 | B1 |
6910072 | Macleod Beck et al. | Jun 2005 | B2 |
6959080 | Dezonno et al. | Oct 2005 | B2 |
7065493 | Homsi | Jun 2006 | B1 |
7092888 | McCarthy et al. | Aug 2006 | B1 |
7302392 | Thenthiruperai | Nov 2007 | B1 |
7440895 | Miller | Oct 2008 | B1 |
7487094 | Konig et al. | Feb 2009 | B1 |
7787609 | Flockhart et al. | Aug 2010 | B1 |
7912699 | Saraclar | Mar 2011 | B1 |
7925508 | Michaelis | Apr 2011 | B1 |
8275110 | Vendrow | Sep 2012 | B2 |
8463606 | Scott et al. | Jun 2013 | B2 |
8600756 | Pickering et al. | Dec 2013 | B2 |
8612272 | Aykin | Dec 2013 | B1 |
8654963 | Anisimov et al. | Feb 2014 | B2 |
8767947 | Ristock et al. | Jul 2014 | B1 |
9262213 | Gralhoz et al. | Feb 2016 | B1 |
20020029161 | Brodersen et al. | Mar 2002 | A1 |
20020112055 | Capers et al. | Aug 2002 | A1 |
20020138265 | Stevens | Sep 2002 | A1 |
20020138468 | Kermani | Sep 2002 | A1 |
20020147592 | Wilmot et al. | Oct 2002 | A1 |
20030088403 | Chan et al. | May 2003 | A1 |
20030145071 | Straut et al. | Jul 2003 | A1 |
20030161298 | Bergman | Aug 2003 | A1 |
20030187649 | Logan | Oct 2003 | A1 |
20030220792 | Kobayashi | Nov 2003 | A1 |
20040024598 | Srivastava et al. | Feb 2004 | A1 |
20040062364 | Dezonno et al. | Apr 2004 | A1 |
20040083195 | McCord et al. | Apr 2004 | A1 |
20040210443 | Kuhn | Oct 2004 | A1 |
20040215458 | Kobayashi | Oct 2004 | A1 |
20050131684 | Clelland | Jun 2005 | A1 |
20050203738 | Hwang | Sep 2005 | A1 |
20050240594 | McCormack et al. | Oct 2005 | A1 |
20060074671 | Farmaner | Apr 2006 | A1 |
20060075347 | Rehm | Apr 2006 | A1 |
20060206324 | Skilling | Sep 2006 | A1 |
20070033003 | Morris | Feb 2007 | A1 |
20070038499 | Margulies et al. | Feb 2007 | A1 |
20070198322 | Bourne et al. | Aug 2007 | A1 |
20070198330 | Korenblit et al. | Aug 2007 | A1 |
20080120164 | Hassler | May 2008 | A1 |
20080154604 | Sathish | Jun 2008 | A1 |
20080221893 | Kaiser | Sep 2008 | A1 |
20090018890 | Werth et al. | Jan 2009 | A1 |
20090030680 | Mamou | Jan 2009 | A1 |
20090037176 | Arrowood | Feb 2009 | A1 |
20090043581 | Abbott | Feb 2009 | A1 |
20090043634 | Tisdale | Feb 2009 | A1 |
20090048868 | Portnoy et al. | Feb 2009 | A1 |
20090132243 | Suzuki | May 2009 | A1 |
20090150425 | Bedingfield, Sr. | Jun 2009 | A1 |
20090204470 | Weyl et al. | Aug 2009 | A1 |
20090225971 | Miller et al. | Sep 2009 | A1 |
20090326947 | Arnold et al. | Dec 2009 | A1 |
20100057460 | Cohen | Mar 2010 | A1 |
20100107165 | Koskimies et al. | Apr 2010 | A1 |
20100131642 | Chalikouras et al. | May 2010 | A1 |
20100172485 | Bourke et al. | Jul 2010 | A1 |
20100217596 | Morris | Aug 2010 | A1 |
20100246784 | Frazier et al. | Sep 2010 | A1 |
20100278453 | King | Nov 2010 | A1 |
20100296417 | Steiner | Nov 2010 | A1 |
20100324900 | Faifkov | Dec 2010 | A1 |
20110010173 | Scott et al. | Jan 2011 | A1 |
20110010624 | Vanslette et al. | Jan 2011 | A1 |
20110035219 | Kadirkamanathan | Feb 2011 | A1 |
20110047002 | Flockhart et al. | Feb 2011 | A1 |
20110071833 | Shi | Mar 2011 | A1 |
20110082575 | Muesch | Apr 2011 | A1 |
20110125498 | Pickering et al. | May 2011 | A1 |
20110131198 | Johnson et al. | Jun 2011 | A1 |
20110153378 | Costello et al. | Jun 2011 | A1 |
20110172994 | Lindahl et al. | Jul 2011 | A1 |
20110178803 | Petrushin | Jul 2011 | A1 |
20110191106 | Khor et al. | Aug 2011 | A1 |
20110255682 | Flockhart et al. | Oct 2011 | A1 |
20110255683 | Flockhart et al. | Oct 2011 | A1 |
20110257972 | Agevik | Oct 2011 | A1 |
20120203776 | Nissan | Aug 2012 | A1 |
20120209609 | Zhao et al. | Aug 2012 | A1 |
20120226696 | Thambiratnam | Sep 2012 | A1 |
20120227044 | Arumugham et al. | Sep 2012 | A1 |
20120232904 | Zhu et al. | Sep 2012 | A1 |
20120310649 | Cannistraro | Dec 2012 | A1 |
20120323897 | Daher | Dec 2012 | A1 |
20130080384 | Briggs | Mar 2013 | A1 |
20130083916 | Flockhart et al. | Apr 2013 | A1 |
20130090921 | Liu | Apr 2013 | A1 |
20130094702 | Rodriguez | Apr 2013 | A1 |
20130132583 | McCord | May 2013 | A1 |
20130179208 | Chung et al. | Jul 2013 | A1 |
20130246053 | Scott et al. | Sep 2013 | A1 |
20130254139 | Lei | Sep 2013 | A1 |
20130273976 | Rao et al. | Oct 2013 | A1 |
20130275135 | Morales et al. | Oct 2013 | A1 |
20130289996 | Fry | Oct 2013 | A1 |
20130346077 | Mengibar et al. | Dec 2013 | A1 |
20140025379 | Ganapathiraju | Jan 2014 | A1 |
20140079210 | Kohler et al. | Mar 2014 | A1 |
20140119535 | Anisimov et al. | May 2014 | A1 |
20140142945 | Fry | May 2014 | A1 |
20140146961 | Ristock et al. | May 2014 | A1 |
20140163960 | Dimitriadis et al. | Jun 2014 | A1 |
20140172419 | John et al. | Jun 2014 | A1 |
20140188475 | Lev-Tov et al. | Jul 2014 | A1 |
20140218461 | DeLand | Aug 2014 | A1 |
20140278640 | Galloway et al. | Sep 2014 | A1 |
20140289658 | Gelernter et al. | Sep 2014 | A1 |
20140324426 | Lu et al. | Oct 2014 | A1 |
20140337072 | Tamblyn et al. | Nov 2014 | A1 |
20150089466 | Rodgers et al. | Mar 2015 | A1 |