1. Field of the Invention
The present invention relates generally to multimedia environments and, more particularly, to systems and methods for improving recognition results of a multimedia recognition system via user-augmentation of a linguistic database.
2. Description of Related Art
Current multimedia recognition systems obtain multimedia documents from a fixed set of sources. These documents include audio documents (e.g., radio broadcasts), video documents (e.g., television broadcasts), and text documents (e.g., word processing documents). A typical recognition system processes the documents and stores them in a database. In the case of audio or video documents, the recognition system might transcribe the documents to identify information, such as the words spoken, the identity of one or more speakers, one or more topics relating to the documents, and, in the case of video, the identity of one or more entities (persons, places, objects, etc.) appearing in the video.
When a user later desires to access the documents, the user usually queries or searches the database. For example, the user might use a standard database interface to submit a query relating to documents of interest. The database would then process the query to retrieve documents that are relevant to the query and present the documents (or a list of the documents) to the user. The documents provided to the user are usually only as good, however, as the recognition system that created them.
It has been found that the recognition results of a multimedia recognition system typically degrade over time, as new words are introduced into the system. Oftentimes, the recognition system cannot accurately recognize the new words.
Accordingly, it is desirable to improve recognition results of a multimedia recognition system.
Systems and methods consistent with the present invention permit users to augment a database of a multimedia recognition system by annotating, attaching, inserting, correcting, and/or enhancing documents. The systems and methods use this user-augmentation to improve the recognition results of the recognition system.
In one aspect consistent with the principles of the invention, a system improves recognition results. The system receives multimedia data and recognizes the multimedia data based on training data to generate documents. The system receives user augmentation relating to one of the documents. The system supplements the training data with the user augmentation and retrains based on the supplemented training data.
In another aspect consistent with the principles of the invention, a multimedia recognition system receives different types of multimedia data and recognizes the multimedia data based on training data to generate recognition results. The system obtains new documents from one or more users and adds the new documents to the training data to obtain new training data. The system retrains based on the new training data.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate the invention and, together with the description, explain the invention.
The following detailed description of the invention refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. Also, the following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims and equivalents.
Systems and methods consistent with the present invention permit users to augment a database of a multimedia recognition system by, for example, annotating, attaching, inserting, correcting, and/or enhancing documents. The systems and methods may use this user-augmentation to improve the recognition results of the recognition system. For example, the user-augmentation may be used to improve the documents stored in the database. The user-augmentation may also be used for system retraining.
Multimedia sources 110 may include one or more audio sources 112, one or more video sources 114, and one or more text sources 116. Audio source 112 may include mechanisms for capturing any source of audio data, such as radio, telephone, and conversations, in any language, and providing the audio data, possibly as an audio stream or file, to indexers 120. Video source 114 may include mechanisms for capturing any source of video data, with possibly integrated audio data in any language, such as television, satellite, and a camcorder, and providing the video data, possibly as a video stream or file, to indexers 120. Text source 116 may include mechanisms for capturing any source of text, such as e-mail, web pages, newspapers, and word processing documents, in any language, and providing the text, possibly as a text stream or file, to indexers 120.
Indexers 120 may include one or more audio indexers 122, one or more video indexers 124, and one or more text indexers 126. Each of indexers 122, 124, and 126 may include mechanisms that receive data from multimedia sources 110, process the data, perform feature extraction, and output analyzed, marked-up, and enhanced language metadata. In one implementation consistent with the principles of the invention, indexers 122-126 include mechanisms, such as the ones described in John Makhoul et al., “Speech and Language Technologies for Audio Indexing and Retrieval,” Proceedings of the IEEE, Vol. 88, No. 8, August 2000, pp. 1338-1353, which is incorporated herein by reference.
Audio indexer 122 may receive input audio data from audio sources 112 and generate metadata therefrom. For example, indexer 122 may segment the input data by speaker, cluster audio segments from the same speaker, identify speakers by name or gender, and transcribe the spoken words. Indexer 122 may also segment the input data based on topic and locate the names of people, places, and organizations. Indexer 122 may further analyze the input data to identify when each word was spoken (possibly based on a time value). Indexer 122 may include any or all of this information in the metadata relating to the input audio data.
Video indexer 124 may receive input video data from video sources 114 and generate metadata therefrom. For example, indexer 124 may segment the input data by speaker, cluster video segments from the same speaker, identify speakers by name or gender, identify participants using face recognition, and transcribe the spoken words. Indexer 124 may also segment the input data based on topic and locate the names of people, places, and organizations. Indexer 124 may further analyze the input data to identify when each word was spoken (possibly based on a time value). Indexer 124 may include any or all of this information in the metadata relating to the input video data.
Text indexer 126 may receive input text data from text sources 116 and generate metadata therefrom. For example, indexer 126 may segment the input data based on topic and locate the names of people, places, and organizations. Indexer 126 may further analyze the input data to identify when each word occurs (possibly based on a character offset within the text). Indexer 126 may also identify the author and/or publisher of the text. Indexer 126 may include any or all of this information in the metadata relating to the input text data.
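By way of illustration, an indexer's output metadata might be organized along the lines of the following sketch; the record layout and field names (doc_id, segments, topics, and so on) are hypothetical and are not drawn from the specification.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Segment:
    """One speaker- or topic-homogeneous stretch of a source document."""
    start: float                          # seconds (audio/video) or character offset (text)
    end: float
    speaker_label: Optional[str] = None   # e.g., "spkr_3" or "Jane Doe"
    speaker_gender: Optional[str] = None  # "male"/"female" when the name is unknown
    transcript: str = ""                  # recognized or original words

@dataclass
class DocumentMetadata:
    """Analyzed, marked-up language metadata produced by an indexer."""
    doc_id: str
    media_type: str                       # "audio", "video", or "text"
    segments: List[Segment] = field(default_factory=list)
    topics: List[str] = field(default_factory=list)
    named_entities: List[str] = field(default_factory=list)  # people, places, organizations

# Example: a two-segment audio document.
doc = DocumentMetadata(
    doc_id="audio-0001",
    media_type="audio",
    segments=[
        Segment(0.0, 12.4, speaker_gender="female", transcript="good evening and welcome"),
        Segment(12.4, 30.1, speaker_label="spkr_2", transcript="thank you for having me"),
    ],
    topics=["news"],
)
print(len(doc.segments), "segments,", doc.topics)
```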
As shown in FIG. 2, audio indexer 122 may include a training system 210, a statistical model 220, and a recognition system 230. Training system 210 may use training data to estimate the parameters of statistical model 220.
Statistical model 220 may include acoustic models and language models. The acoustic models may describe the time-varying evolution of feature vectors for each sound or phoneme. The acoustic models may employ continuous hidden Markov models (HMMs) to model each of the phonemes in the various phonetic contexts.
The language models may include n-gram language models, where the probability of each word is a function of the previous word (for a bi-gram language model) and the previous two words (for a tri-gram language model). Typically, the higher the order of the language model, the higher the recognition accuracy at the cost of slower recognition speeds.
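As a worked illustration of the n-gram idea (not the training procedure of the indexers themselves), the following sketch estimates bigram probabilities from word counts; the toy corpus and the add-one smoothing are assumptions made only for the example.

```python
from collections import Counter

corpus = ["the cat sat", "the cat ran", "the dog sat"]
tokens = [["<s>"] + line.split() + ["</s>"] for line in corpus]

unigrams = Counter(w for sent in tokens for w in sent)
bigrams = Counter((sent[i], sent[i + 1]) for sent in tokens for i in range(len(sent) - 1))
vocab_size = len(unigrams)

def bigram_prob(prev: str, word: str) -> float:
    """P(word | prev) with add-one smoothing so unseen pairs get nonzero mass."""
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

# Higher-order models condition on more history (e.g., two previous words for a trigram),
# which typically raises accuracy but increases the cost of search during recognition.
print(round(bigram_prob("the", "cat"), 3))   # seen pair -> relatively high probability
print(round(bigram_prob("the", "ran"), 3))   # unseen pair -> small but nonzero
```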
Recognition system 230 may use statistical model 220 to process input audio data.
Speech recognition logic 320 may perform continuous speech recognition to recognize the words spoken in the segments that it receives from audio classification logic 310. Speech recognition logic 320 may generate a transcription of the speech using statistical model 220. Speaker clustering logic 330 may identify all of the segments from the same speaker in a single document (i.e., a body of media that is contiguous in time (from beginning to end or from time A to time B)) and group them into speaker clusters. Speaker clustering logic 330 may then assign each of the speaker clusters a unique label. Speaker identification logic 340 may identify the speaker in each speaker cluster by name or gender.
Name spotting logic 350 may locate the names of people, places, and organizations in the transcription. Name spotting logic 350 may extract the names and store them in a database. Topic classification logic 360 may assign topics to the transcription. Each of the words in the transcription may contribute differently to each of the topics assigned to the transcription. Topic classification logic 360 may generate a rank-ordered list of all possible topics and corresponding scores for the transcription.
Story segmentation logic 370 may change the continuous stream of words in the transcription into document-like units with coherent sets of topic labels and other document features generated or identified by the components of recognition system 230. This information may constitute metadata corresponding to the input audio data. Story segmentation logic 370 may output the metadata in the form of documents to memory system 130, where a document corresponds to a body of media that is contiguous in time (from beginning to end or from time A to time B).
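The flow through these components could be arranged roughly as in the following schematic sketch, in which each stage is stubbed out; the function bodies are placeholders and do not implement the recognition techniques described above.

```python
def classify_audio(audio):
    """Stub for audio classification logic 310: split the input into speech segments."""
    return [{"start": 0.0, "end": 5.0, "audio": audio}]

def recognize_speech(segment):
    """Stub for speech recognition logic 320: transcribe one segment."""
    return {**segment, "transcript": "example transcript"}

def cluster_and_identify_speakers(segments):
    """Stubs for speaker clustering logic 330 and speaker identification logic 340."""
    for i, seg in enumerate(segments):
        seg["speaker"] = f"speaker_{i}"   # cluster label, then name/gender when known
    return segments

def spot_names(segments):
    """Stub for name spotting logic 350: collect person/place/organization names."""
    return ["Example Name"]

def classify_topics(segments):
    """Stub for topic classification logic 360: rank-ordered topic labels."""
    return ["example topic"]

def segment_stories(segments, names, topics):
    """Stub for story segmentation logic 370: package the metadata into documents."""
    return [{"segments": segments, "names": names, "topics": topics}]

def index_audio(audio):
    segments = [recognize_speech(s) for s in classify_audio(audio)]
    segments = cluster_and_identify_speakers(segments)
    return segment_stories(segments, spot_names(segments), classify_topics(segments))

print(index_audio(b"raw-audio-bytes")[0]["topics"])
```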
Returning to
Database 430 may include a conventional database, such as a relational database, that stores documents from indexers 120. Database 430 may also store documents received from clients 150 via server 140. Interface 440 may include logic that interacts with server 140 to store documents in database 430, query or search database 430, and retrieve documents from database 430.
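A minimal sketch of such a store/query/retrieve interface appears below; the use of SQLite and the particular column names are illustrative assumptions, not the schema of database 430.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE documents (doc_id TEXT PRIMARY KEY, media_type TEXT, transcript TEXT)"
)

def store_document(doc_id: str, media_type: str, transcript: str) -> None:
    conn.execute(
        "INSERT OR REPLACE INTO documents VALUES (?, ?, ?)", (doc_id, media_type, transcript)
    )
    conn.commit()

def query_documents(keyword: str):
    """Very simple relevance test: substring match against the stored transcript."""
    cur = conn.execute(
        "SELECT doc_id FROM documents WHERE transcript LIKE ?", (f"%{keyword}%",)
    )
    return [row[0] for row in cur.fetchall()]

store_document("audio-0001", "audio", "the council approved the new budget")
print(query_documents("budget"))   # -> ['audio-0001']
```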
Returning to
Systems and methods consistent with the present invention permit users to augment memory system 130 to improve recognition results of system 100. For example, the user-augmentation may be used to improve the value of documents stored in memory system 130 and may also be used to retrain indexers 120. The user-augmentation may include: (1) correction and/or enhancement of the documents; (2) annotation of the documents with bookmarks, highlights, and notes; (3) attachment of rich documents to documents from memory system 130; and (4) insertion of rich documents into system 100. Each of these will be described in detail below.
Document Correction and/or Enhancement
Server 140 may present the relevant documents to the user (act 510). For example, the user may be presented with a list of relevant documents. The documents may include any combination of audio documents, video documents, and text documents. The user may select one or more documents on the list to view. In the case of an audio or video document, the user may be presented with a transcription of the audio data or video data corresponding to the document.
GUI 600 may include a speaker section 610, a transcription section 620, and a topics section 630. Speaker section 610 may identify boundaries between speakers, the gender of a speaker, and the name of a speaker (when known). In this way, speaker segments are clustered together over the entire document so that segments from the same speaker are grouped under the same label.
Transcription section 620 may include a transcription of the document, in which the names of people, places, and organizations located by indexers 120 may be visually distinguished.
GUI 600 may also include a modify button 640. The user may select modify button 640 when the user desires to correct and/or enhance the document. Sometimes, the document is incomplete or incorrect in some manner. For example, the document may identify an unknown speaker only by gender, or it may fail to visually distinguish one of the words in the transcription as the name of a person, place, or organization. In that case, the user may provide the name of the unknown speaker, or identify the word as the name of a person, place, or organization, by selecting modify button 640 and providing the correct information. Alternatively, the document may contain an incorrect topic or a misspelling. The user may correct these items in the same manner, by selecting modify button 640 and providing the correct information.
GUI 600 may receive the information provided by the user and modify the document onscreen. This way, the user may determine whether the information was correctly provided. GUI 600 may also send the modified (i.e., corrected/enhanced) document to server 140.
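For illustration, a correction submitted through such a GUI might be applied to the stored document roughly as follows; the document layout and the apply_correction helper are hypothetical.

```python
document = {
    "doc_id": "audio-0001",
    "segments": [
        {"speaker": "unknown female", "transcript": "the meating will begin shortly"},
    ],
    "topics": ["weather"],
}

def apply_correction(doc, segment_index, *, speaker_name=None, fixed_transcript=None, topics=None):
    """Apply a user's correction/enhancement and return the modified document."""
    seg = doc["segments"][segment_index]
    if speaker_name is not None:
        seg["speaker"] = speaker_name            # e.g., replace a gender-only label with a name
    if fixed_transcript is not None:
        seg["transcript"] = fixed_transcript     # e.g., fix a misspelling
    if topics is not None:
        doc["topics"] = topics                   # e.g., replace an incorrect topic
    return doc

modified = apply_correction(
    document, 0,
    speaker_name="Jane Doe",
    fixed_transcript="the meeting will begin shortly",
    topics=["city council"],
)
print(modified["segments"][0]["speaker"], "-", modified["topics"])
```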
Returning to FIG. 5, server 140 may receive the modified document from the user and send it to memory system 130. Memory system 130 may store the modified document in database 430. The modified document may, thereafter, be available to other users.
Memory system 130 may also send the modified document to one or more of indexers 120 for retraining (act 540). Memory system 130 may send the modified document in the form of training data. For example, memory system 130 may put the modified document in a special form for use by indexers 120 to retrain. Alternatively, memory system 130 may send the modified document to indexers 120, along with an instruction to retrain.
Training system 210 (FIG. 2) may use the modified document to retrain indexer 122. For example, training system 210 may supplement its training data with the modified document and retrain based on the supplemented training data.
Suppose, for example, that the user provided the name of one of the speakers who was identified simply by gender in the document. Speaker identification logic 340 (FIG. 3) may use this information to identify that speaker, by name, in subsequently processed documents. By retraining based on corrected and/or enhanced documents, indexers 120 improve their recognition results.
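One way a user-supplied speaker name could feed back into retraining is sketched below, using a deliberately simplified, word-overlap stand-in for speaker identification; the SpeakerIdentifier class and its methods are hypothetical.

```python
from collections import defaultdict

class SpeakerIdentifier:
    """Toy speaker-identification model: remembers example transcripts per named speaker."""

    def __init__(self):
        self.examples = defaultdict(list)

    def add_training_example(self, speaker_name: str, transcript: str) -> None:
        self.examples[speaker_name].append(transcript)

    def identify(self, transcript: str) -> str:
        # Pick the enrolled speaker whose examples share the most words with the input.
        words = set(transcript.split())
        best, best_overlap = "unknown", 0
        for name, transcripts in self.examples.items():
            overlap = max(len(words & set(t.split())) for t in transcripts)
            if overlap > best_overlap:
                best, best_overlap = name, overlap
        return best

identifier = SpeakerIdentifier()

# The user replaced the label "unknown female" with "Jane Doe"; supplement the training data.
identifier.add_training_example("Jane Doe", "the meeting will begin shortly")

# After retraining, a later segment with similar content can be attributed to the named speaker.
print(identifier.identify("the meeting begins at noon shortly"))   # -> Jane Doe
```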
Document Annotation
Server 140 may present the relevant documents to the user (act 710). For example, the user may be presented with a list of relevant documents. The documents may include any combination of audio documents, video documents, and text documents. The user may select one or more documents on the list to view the document(s). In the case of an audio or video document, the user may be presented with a transcription of the audio data or video data corresponding to the document.
If the user desires, the user may annotate a document. For example, the user may bookmark the document, highlight the document, and/or add a note to the document.
GUI 800 may also include annotate button 810, a highlighted block of text 820, and a note 830. If the user desires to annotate the document, the user may select annotate button 810. The user may then be presented with a list of annotation options, such as adding a bookmark, highlight, or note. If the user desires to bookmark the document, the user may select the bookmark option. In this case, GUI 800 may add a flag to the document so that the user may later be able to easily retrieve the document from memory system 130. In some instances, the user may be able to share bookmarks with other users.
If the user desires to highlight a portion of the document, the user may select the highlight option. In this case, the user may visually highlight one or more portions of the document, such as highlighted block 820. The highlight, or color of highlight, may provide meaning to highlighted block 820. For example, the highlight might correspond to the user doing the highlighting, signify that highlighted block 820 is important or unimportant, or have some other significance. When other users later retrieve this document, the users may see the highlighting added by the user.
If the user desires to add a note to the document, the user may select the note option. In this case, the user may add a note 830 to the document or a portion of the document. Note 830 may include comments from the user, a multimedia file (audio, video, or text), or a reference (e.g., a link) to another document in memory system 130. When other users later retrieve this document, the users may be able to see note 830 added by the user.
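A bookmark, highlight, or note might be represented as a small annotation record attached to a document, as in the following hypothetical sketch.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Annotation:
    kind: str                      # "bookmark", "highlight", or "note"
    user: str
    start: Optional[int] = None    # character offsets for highlights/notes on a portion
    end: Optional[int] = None
    color: Optional[str] = None    # a highlight's color may carry meaning for the user
    text: Optional[str] = None     # a note's comments, or a link to another document

@dataclass
class AnnotatedDocument:
    doc_id: str
    transcript: str
    annotations: List[Annotation] = field(default_factory=list)

doc = AnnotatedDocument("audio-0001", "the council approved the new budget")
doc.annotations.append(Annotation(kind="bookmark", user="alice"))
doc.annotations.append(Annotation(kind="highlight", user="alice", start=12, end=20, color="yellow"))
doc.annotations.append(Annotation(kind="note", user="alice", start=0, end=11,
                                  text="See also last week's budget hearing."))
print([a.kind for a in doc.annotations])
```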
GUI 800 may receive the information (bookmark, highlight, note) provided by the user and annotate the document accordingly onscreen. This way, the user may determine whether the information was correctly provided. GUI 800 may also send the annotated document to server 140.
Returning to FIG. 7, server 140 may receive the annotated document from the user and send it to memory system 130. Memory system 130 may store the annotated document in database 430. The annotated document may, thereafter, be available to other users.
Memory system 130 may also send the annotated document to one or more of indexers 120 for retraining (act 740). Memory system 130 may send the annotated document in the form of training data. For example, memory system 130 may put the annotated document in a special form for use by indexers 120 to retrain. Alternatively, memory system 130 may send the annotated document to indexers 120, along with an instruction to retrain.
Training system 210 (FIG. 2) may use the annotated document to retrain indexer 122. For example, training system 210 may supplement its training data with the annotated document and retrain based on the supplemented training data.
Suppose, for example, that the user provided comments within a note attached to a portion of the document. The comments may include discipline-specific words that indexers 120 cannot recognize or may include names of people, places, or companies that indexers 120 have not seen before. Indexers 120 may use the comments in recognizing future occurrences of the discipline-specific words or the names. By retraining based on annotated documents, indexers 120 improve their recognition results.
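For example, new or discipline-specific words in a note's comments might be detected by comparing them against the recognizer's current vocabulary, roughly as sketched below; the vocabulary, tokenization, and function name are simplified assumptions.

```python
import re

# Words the recognizer already knows (in practice, the recognition lexicon).
known_vocabulary = {"the", "patient", "was", "given", "medication", "for", "pain"}

note_comments = "The patient was given acetaminophen for dysmenorrhea."

def new_words_from_note(text: str, vocabulary: set) -> list:
    """Return lower-cased words in the note that the recognizer has not seen before."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    return sorted({w for w in words if w not in vocabulary})

candidates = new_words_from_note(note_comments, known_vocabulary)
print(candidates)   # -> ['acetaminophen', 'dysmenorrhea']

# These candidates could then be added to the lexicon and language-model training data
# so that future occurrences of the words are recognized.
known_vocabulary.update(candidates)
```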
Document Attachment
Server 140 may present the relevant documents to the user (act 910). For example, the user may be presented with a list of relevant documents. The documents may include any combination of audio documents, video documents, and text documents. The user may select one or more documents on the list to view the document(s). In the case of an audio or video document, the user may be presented with a transcription of the audio data or video data corresponding to the document.
If the user desires, the user may attach a rich document to a portion of the document (“original document”). The rich document may include an audio, video, or text document relevant to that particular portion of the original document or the entire original document. For example, the rich document may be relevant to a topic contained within the original document and may describe the topic in a way that the topic is not described in the original document.
GUI 1000 may also include attach document button 1010. If the user desires to attach a rich document, the user may select attach document button 1010. The user may then be presented with a list of attachment options. For example, the user may cut-and-paste text of the rich document into a window of GUI 1000. Alternatively, the user may attach a file containing the rich document or provide a link to the rich document. This may be particularly useful if the rich document is an audio or video document. GUI 1000 may receive the attached document (i.e., rich document) from the user and provide the attached document to server 140.
Returning to FIG. 9, server 140 may receive the attached document from the user and send it to memory system 130. Memory system 130 may store the attached document in database 430 and associate it with the original document. The attached document may, thereafter, be available to other users.
Memory system 130 may also send the attached document to one or more of indexers 120 for retraining (act 940). Memory system 130 may send the attached document in the form of training data. For example, memory system 130 may put the attached document in a special form for use by indexers 120 to retrain. Alternatively, memory system 130 may send the attached document to indexers 120, along with an instruction to retrain.
Training system 210 (FIG. 2) may use the attached document to retrain indexer 122. For example, training system 210 may supplement its training data with the attached document and retrain based on the supplemented training data.
Training system 210 may also extract certain information from the attached document. For example, training system 210 may generate likely pronunciations for unfamiliar words or determine that certain words are names of people, places, or organizations based on their context within the document. By retraining based on attached documents, indexers 120 improve their recognition results.
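The following sketch illustrates the general idea with a deliberately naive letter-to-sound table and a single contextual cue for person names; neither stands in for the actual pronunciation or name-spotting techniques used by indexers 120.

```python
# A deliberately naive letter-to-sound rule set stands in for a real
# grapheme-to-phoneme component; the mapping below is illustrative only.
LETTER_TO_SOUND = {
    "a": "AH", "b": "B", "c": "K", "d": "D", "e": "EH", "f": "F", "g": "G",
    "h": "HH", "i": "IH", "j": "JH", "k": "K", "l": "L", "m": "M", "n": "N",
    "o": "OW", "p": "P", "q": "K", "r": "R", "s": "S", "t": "T", "u": "UH",
    "v": "V", "w": "W", "x": "K S", "y": "Y", "z": "Z",
}

def guess_pronunciation(word: str) -> str:
    """Produce a rough phoneme string for an unfamiliar word."""
    return " ".join(LETTER_TO_SOUND.get(ch, "") for ch in word.lower()).strip()

def looks_like_person_name(prev_word: str, word: str) -> bool:
    """Very rough contextual cue: a capitalized word after a title is likely a name."""
    titles = {"mr.", "mrs.", "ms.", "dr.", "senator", "president"}
    return prev_word.lower() in titles and word[:1].isupper()

print(guess_pronunciation("Kowalski"))                 # -> K OW W AH L S K IH
print(looks_like_person_name("Senator", "Kowalski"))   # -> True
```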
Optionally, memory system 130 may also send the attached document for recognition by an appropriate one of the indexers 120 (act 950). For example, if the attached document is an audio document, memory system 130 may provide the attached document to the input of audio indexer 122 for recognition. As described above, audio indexer 122 may segment the audio document by speaker, cluster audio segments from the same speaker, identify speakers by name or gender, and transcribe the spoken words. Audio indexer 122 may also segment the audio document based on topic, locate the names of people, places, and organizations, and identify when each word was spoken (possibly based on a time value). Audio indexer 122 may then store this metadata in memory system 130.
Document Insertion
The user may provide the new document in several ways. For example, the user may cut-and-paste text of the document. Alternatively, the user may provide a file containing the document or provide a link to the document. This may be particularly useful if the document is an audio or video document.
Server 140 may receive or obtain the document (act 1110). For example, if the user provided a link to the document, then server 140 may use the link to retrieve the document using conventional techniques. Server 140 may then process the document (act 1120). For example, if the document is a web page, server 140 may parse the document and discard advertisements and other extraneous information. Server 140 may then send the document to memory system 130. Memory system 130 may store the document in database 430 (act 1130). The document may, thereafter, be available to other users.
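Processing an inserted web page might look roughly like the following, which keeps the visible text and drops script and style content; the fetch step is omitted, and the decision about what counts as extraneous is an assumption made only for the example.

```python
from html.parser import HTMLParser

class VisibleTextExtractor(HTMLParser):
    """Collect text content while skipping <script> and <style> blocks."""

    SKIP_TAGS = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP_TAGS:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP_TAGS and self._skip_depth > 0:
            self._skip_depth -= 1

    def handle_data(self, data):
        if self._skip_depth == 0 and data.strip():
            self.parts.append(data.strip())

html = """<html><head><style>p {color: red}</style></head>
<body><p>City council approves budget.</p><script>trackAd();</script></body></html>"""

extractor = VisibleTextExtractor()
extractor.feed(html)
cleaned_text = " ".join(extractor.parts)
print(cleaned_text)   # -> City council approves budget.
```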
Memory system 130 may also send the document to one or more of indexers 120 for retraining (act 1140). Memory system 130 may send the document in the form of training data. For example, memory system 130 may put the document in a special form for use by indexers 120 to retrain. Alternatively, memory system 130 may send the document to indexers 120, along with an instruction to retrain.
Training system 210 (FIG. 2) may use the new document to retrain indexers 120. For example, training system 210 may add the new document to its training data and retrain based on the new training data.
Training system 210 may also extract certain information from the document. For example, training system 210 may generate likely pronunciations for unfamiliar words or determine that certain words are names of people, places, or organizations based on their context within the document. By retraining based on new documents, indexers 120 improve their recognition results.
Optionally, memory system 130 may also send the document for recognition by an appropriate one of the indexers 120 (act 1150). For example, if the document is an audio document, memory system 130 may provide the document to the input of audio indexer 122 for recognition. As described above, audio indexer 122 may segment the audio document by speaker, cluster audio segments from the same speaker, identify speakers by name or gender, and transcribe the spoken words. Audio indexer 122 may also segment the audio document based on topic, locate the names of people, places, and organizations, and identify when each word was spoken (possibly based on a time value). Audio indexer 122 may then store this metadata in memory system 130.
Systems and methods consistent with the present invention permit users to augment a database of a multimedia recognition system by, for example, annotating, attaching, inserting, correcting, and/or enhancing documents. The systems and methods may use this user-augmentation to improve the recognition results of the recognition system. For example, the user-augmentation may be used to improve the documents stored in the database. The user-augmentation may also be used for system retraining.
The foregoing description of preferred embodiments of the present invention provides illustration and description, but is not intended to be exhaustive or to limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention.
For example, server 140 may elicit information from the user. Server 140 may ask the user to verify that a certain word corresponds to a person, place, or organization. Alternatively, server 140 may request that the user supply a document that relates to the word.
Also, exemplary graphical user interfaces have been described with regard to FIGS. 6, 8, and 10. In other implementations consistent with the principles of the invention, the graphical user interfaces may include more, fewer, or different elements.
While series of acts have been described with regard to FIGS. 5, 7, 9, and 11, the order of the acts may differ in other implementations consistent with the principles of the invention.
Further, certain portions of the invention have been described as “logic” that performs one or more functions. This logic may include hardware, such as an application specific integrated circuit or a field programmable gate array, software, or a combination of hardware and software.
No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article “a” is intended to include one or more items. Where only one item is intended, the term “one” or similar language is used. The scope of the invention is defined by the claims and their equivalents.
This application claims priority under 35 U.S.C. §119 based on U.S. Provisional Application Nos. 60/394,064 and 60/394,082, filed Jul. 3, 2002, and Provisional Application No. 60/419,214, filed Oct. 17, 2002, the disclosures of which are incorporated herein by reference.