System and method for providing an acoustic grammar to dynamically sharpen speech interpretation

Information

  • Patent Grant
  • Patent Number
    8,150,694
  • Date Filed
    Wednesday, June 1, 2011
  • Date Issued
    Tuesday, April 3, 2012
Abstract
The system and method described herein may provide an acoustic grammar to dynamically sharpen speech interpretation. In particular, the acoustic grammar may be used to map one or more phonemes identified in a user verbalization to one or more syllables or words, wherein the acoustic grammar may have one or more linking elements to reduce a search space associated with mapping the phonemes to the syllables or words. As such, the acoustic grammar may be used to generate one or more preliminary interpretations associated with the verbalization, wherein one or more post-processing techniques may then be used to sharpen accuracy associated with the preliminary interpretations. For example, a heuristic model may assign weights to the preliminary interpretations based on context, user profiles, or other knowledge, and a probable interpretation may be identified based on confidence scores associated with one or more candidate interpretations generated with the heuristic model.
Description
FIELD OF THE INVENTION

The invention is related generally to automated speech interpretation, and in particular, to enhancing the accuracy and performance of speech interpretation engines.


BACKGROUND OF THE INVENTION

The field of automated speech interpretation is in increasingly high demand. One use of automated speech interpretation is to provide voice requests to electronic devices. This may enable a user to simply speak to an electronic device rather than manually inputting requests or other information by pressing buttons, uploading information, or using other input methods. Controlling various electronic devices through speech may enable the user to use the electronic devices more efficiently.


However, existing technology in the field of automated speech interpretation, such as standard speech engines, automatic speech recognition (ASR), and other systems for interpreting speech, is unable to process a speech signal efficiently, often constructing large grammars that include a large number of items, nodes, and transitions, which is a particular concern for large-list recognition in embedded applications. If the grammar for an embedded application grows too large, it may not fit within the constrained space of the embedded application. With limited CPU power, response time and performance suffer because of the significant time needed to compile and load the grammar. Response time is further degraded because the speech engine must parse through a large number of transition states to arrive at a recognition result. Even when the speech engine is able to recognize a word, the results are often unreliable because large grammars introduce a greater risk of confusion between items as the size of the grammar increases. Existing techniques focus on reducing the size of a grammar tree by removing command variants or criteria items, but this approach strips functionality from the application.


In addition to the performance problems associated with speech recognition engines that employ large word grammars, existing speech processing engines are unable to interpret natural human speech with accuracy sufficient to control some electronic devices. In particular, speech interpretation engines still have substantial problems with accuracy and with interpreting words that are not defined in a predetermined vocabulary or grammar context. Poor quality microphones, extraneous noises, unclear or grammatically incorrect speech by the user, or an accent of the user may also cause shortcomings in accuracy, such as when a particular sound cannot be mapped to a word in the grammar.


In light of these and other problems, there is a need for enhanced automated speech interpretation that may interpret natural human speech with improved accuracy.


SUMMARY OF THE INVENTION

According to one aspect of the invention, a system for enhancing automated speech interpretation is provided. The system may include a set of techniques for use in a speech-to-text engine to enhance accuracy and performance, for example, by reducing the search space of the speech engine. The problems with large-list recognition for embedded applications may also be mitigated by using phonetic dictation, which may recognize a phoneme string by disregarding the notion of words. The system may also use one or more post-processing techniques to sharpen an output of a preliminary speech interpretation made by a speech engine. The system may be modeled at least partially after one or more speech pattern recognition techniques used by humans, such as interpreting speech using words, word sequences, word combinations, word positions, context, phonetic similarities between two or more words, parts of speech, or other techniques.


In one implementation of the invention, the system may receive a verbalization made by a user, where a speech engine may receive the verbalization. The speech engine may output information relating to a plurality of preliminary interpretations of the verbalization, where the plurality of preliminary interpretations represent a set of best guesses at the user verbalization. According to one aspect of the invention, the performance of the speech engine may be improved by using phoneme recognition. Phoneme recognition may disregard the notion of words, instead interpreting a verbalization as a series of phonemes, which may provide out-of-vocabulary (OOV) capabilities, such as when a user misspeaks or an electronic capture device drops part of a speech signal, or for large-list applications, such as city and street names or song titles, for example. Phoneme recognition may be based on any suitable acoustic grammar that maps a speech signal into a phonemic representation. For example, the English language may be broken down into a detailed grammar of its phonotactic rules. Portions of a word may be represented by a syllable, which may be further broken down into core components of an onset, a nucleus, and a coda, which may be further broken down into sub-categories. Various different acoustic grammars may be formed as trees with various branches representing many different syllables forming a speech signal.
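For illustration only, the following Python sketch models the syllable decomposition described above (onset, nucleus, and coda); the class layout and the ARPAbet-style phoneme symbols are assumptions chosen for readability rather than the data format actually used by the invention.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Syllable:
    """Core components of a syllable in a phonemic acoustic grammar."""
    onset: List[str]    # consonant phonemes before the vowel (may be empty)
    nucleus: List[str]  # vowel phonemes at the syllable core
    coda: List[str]     # consonant phonemes after the vowel (may be empty)

    def phonemes(self) -> List[str]:
        # Flatten the syllable back into the phoneme sequence it represents.
        return self.onset + self.nucleus + self.coda

# Hypothetical decomposition of the word "street" (ARPAbet-style symbols).
street = Syllable(onset=["S", "T", "R"], nucleus=["IY"], coda=["T"])
print(street.phonemes())  # ['S', 'T', 'R', 'IY', 'T']
```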


According to another aspect of the invention, the performance of the speech engine and the phonemic recognition may be improved by pruning the search space used by the speech engine using a common phonetic marker. In one implementation, the acoustic grammar may be represented entirely by a loop of phonemes. In another implementation, the speech engine may reduce the search space by reducing the number of transitions in a grammar tree, thereby speeding up the process of compiling, loading, and executing the speech engine. For example, the phoneme loop may include a linking element between transitions. This may reduce the number of grammar transitions, such that grammar paths merge after a first transition and diverge after the linking element. In one implementation of the invention, a common acoustic element that is part of a speech signal may be used as the linking element. In one implementation of the invention, the acoustic element may be one that is very likely to be triggered even if it is unpronounced. For example, a schwa in the English language may be used as the linking element because schwa represents an unstressed, central vowel that is likely to be spoken even if unintended. Those skilled in the art will appreciate that acoustic models for different languages may use other frequently elided phonemes as linking elements to reduce the search space used by the speech engine.
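As a minimal sketch of how a linking element merges grammar paths, the snippet below routes every transition between two phoneme positions through a single schwa node; the phoneme symbols and the edge-list representation are illustrative assumptions, not the acoustic grammar format used by any particular speech engine.

```python
# Hypothetical candidate phonemes for two consecutive positions in the grammar.
first = ["K", "T", "P"]
second = ["AA", "IY", "UW"]
SCHWA = "@"  # the unstressed central vowel used as the linking element

# Every first-position phoneme transitions into the single schwa node, and the
# schwa node transitions out to every second-position phoneme, so grammar paths
# merge after the first transition and diverge again after the linking element.
edges = [(p, SCHWA) for p in first] + [(SCHWA, q) for q in second]
for src, dst in edges:
    print(f"{src} -> {dst}")
```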


The speech engine may generate a plurality of preliminary interpretations representing a set of best guesses at the user verbalization. The preliminary interpretations may be stored in a matrix, array, or another form, and may be provided to an interpretation sharpening module to determine a probable interpretation of a verbalization made by a user by applying heuristic policies against the preliminary interpretations to identify dominant words and/or phrases. According to various aspects of the invention, the interpretation sharpening module may include a policy module that may manage and/or provide one or more policies that enable the sharpening module to generate a plurality of probable interpretations of the verbalization made by the user. For example, according to one aspect of the invention, the plurality of preliminary interpretations may be applied against one or more policies to generate a set of hypotheses as to a candidate recognition. Each hypothesis may be reanalyzed to generate an interpretation score that may relate to a likelihood of the probable interpretation being a correct interpretation of the verbalization, and the preliminary interpretation corresponding to the highest (or lowest) interpretation score may then be designated as a probable interpretation of the verbalization. The designated probable interpretation may be stored and used for augmenting the policies to improve accuracy.
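The selection step described above can be illustrated with a short sketch that scores a set of hypotheses and designates the highest-scoring one as the probable interpretation; the scoring function shown here is a toy stand-in for the heuristic policies, and the vocabulary is hypothetical.

```python
def select_probable_interpretation(hypotheses, score_fn):
    """Score each hypothesis and return the highest-scoring one along with
    its interpretation score. `score_fn` stands in for whatever heuristic
    model assigns scores to candidate recognitions."""
    scored = [(score_fn(h), h) for h in hypotheses]
    best_score, best_hypothesis = max(scored)
    return best_hypothesis, best_score

# Hypothetical scoring function: reward hypotheses containing in-domain words.
DOMAIN_VOCAB = {"play", "song", "jazz"}

def toy_score(hypothesis):
    words = hypothesis.lower().split()
    return sum(1 for w in words if w in DOMAIN_VOCAB) / len(words)

candidates = ["play some jazz", "pay some jass", "lay sum jams"]
print(select_probable_interpretation(candidates, toy_score))
# -> ('play some jazz', 0.666...)
```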


According to one aspect of the invention, the policy module may include one or more agents that represent domains of knowledge. The agents may compete using a weighted model to revise a preliminary interpretation by determining context and intent. Relevant substitution of suspect words and phrases may be based on phonetic similarities or domain appropriateness. A domain agent may include one or more domain parameters for determining a probable interpretation from a preliminary interpretation. For example, domain parameters may include a policy vocabulary, a word position in the verbalization, a word combination, a sentence structure, or other parameters. A domain agent may include a parameter weighting scheme that may weight individual parameters according to one or more weighting factors, such as a frequency of use, a difficulty to understand, or other factors.
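A parameter weighting scheme of the kind described above might, under assumed parameter names and factor values, look like the following sketch; the specific numbers and the way the factors are combined are illustrative only.

```python
# A minimal sketch of a domain agent's parameter weighting scheme. The patent
# only specifies that individual parameters may be weighted by factors such as
# frequency of use or difficulty to understand; the values here are invented.
domain_parameters = {
    "policy_vocabulary":  {"frequency_of_use": 0.9, "difficulty": 0.2},
    "word_position":      {"frequency_of_use": 0.6, "difficulty": 0.4},
    "word_combination":   {"frequency_of_use": 0.7, "difficulty": 0.5},
    "sentence_structure": {"frequency_of_use": 0.4, "difficulty": 0.8},
}

FACTOR_WEIGHTS = {"frequency_of_use": 0.7, "difficulty": 0.3}

def parameter_weight(factors):
    # Combine the weighting factors into a single per-parameter weight.
    return sum(FACTOR_WEIGHTS[name] * value for name, value in factors.items())

for name, factors in domain_parameters.items():
    print(f"{name}: {parameter_weight(factors):.2f}")
```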


According to one aspect of the invention, the domain agents may revise a preliminary interpretation into a probable interpretation using phonetic fuzzy matching (PFM). In one implementation of the invention, the speech engine may output a phoneme stream that is applied against a model of phoneme feature similarities, drawn from domain agents, to identify a closest phonetic match using a multi-pass method. Domain agents may be loaded and prioritized into an M-Tree, which accounts for the possibility of the speech engine dropping or adding phonemes. An M-Tree may be an index structure that resolves similarity queries between phonemes using a closest-distance metric based on relative weightings of phoneme misrecognition, phoneme addition, and phoneme deletion. The M-Tree may be updated using an adaptive misrecognition model. For example, information about a verbalization and its components, as well as a probability that the probable interpretation was correct, may be stored and used for adapting the policy module for the user.
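The closest-distance metric described for the M-Tree can be approximated by a weighted edit distance over phoneme sequences, with separate costs for phoneme misrecognition (substitution), addition (insertion), and deletion. The sketch below uses a standard dynamic-programming formulation with assumed cost values; it stands in for the M-Tree's metric rather than reproducing the actual index structure.

```python
def weighted_phoneme_distance(observed, reference,
                              sub_cost=1.0, add_cost=0.8, del_cost=0.8):
    """Weighted edit distance between two phoneme sequences.

    sub_cost penalizes a misrecognized phoneme, add_cost an extra phoneme
    inserted by the speech engine, and del_cost a dropped phoneme. The cost
    values here are arbitrary illustrative weights."""
    n, m = len(observed), len(reference)
    dist = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dist[i][0] = i * add_cost
    for j in range(1, m + 1):
        dist[0][j] = j * del_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = 0.0 if observed[i - 1] == reference[j - 1] else sub_cost
            dist[i][j] = min(dist[i - 1][j - 1] + match,  # misrecognition / match
                             dist[i - 1][j] + add_cost,   # engine added a phoneme
                             dist[i][j - 1] + del_cost)   # engine dropped a phoneme
    return dist[n][m]

# Closest phonetic match among hypothetical domain entries.
observed = ["B", "EH", "L", "V", "UW"]
entries = {"Bellevue": ["B", "EH", "L", "V", "Y", "UW"],
           "Bellingham": ["B", "EH", "L", "IH", "NG", "HH", "AE", "M"]}
print(min(entries, key=lambda k: weighted_phoneme_distance(observed, entries[k])))
# Bellevue
```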


In one implementation of the invention, the domain agents in the policy module may include one or more profile agents that may manage and/or provide one or more profile policies for revising a preliminary interpretation of a phoneme stream. For example, a profile agent may correspond to a user and may include one or more profile parameters tailored to the user. The profile agent may be used as a base policy to interpret any verbalizations made by the user. In other implementations, a profile agent may correspond to a particular language, a regional accent, or other profiles for interpreting a user verbalization. The profile agents may be augmented to enable the system to provide more accurate interpretations of verbalizations made by the user. The augmentation may include a user augmentation, such as providing additional vocabulary (e.g., names in an address book), one or more personalized pronunciations or other pronunciation information, or other user provided augmentations. The augmentation may also include a non-user provided augmentation, such as updates generated by a third party (e.g., a commercial administration and/or maintenance entity), or other non-user provided augmentations. The augmentation may be automated, such as adjusting a profile parameter-weighting scheme through an adaptive misrecognition model, as discussed above.


In another implementation of the invention, the domain agents in the policy module may include one or more context agents that may manage and/or provide one or more context policies for revising a preliminary interpretation of a phoneme stream. For example, a context agent may correspond to a context, such as song titles, city and street names, movie titles, finance, or other contexts. A context agent may include one or more context parameters that may be tailored to a verbalization context. The context policy may enhance an ability of the system related to interpreting verbalizations made by the user in the verbalization context corresponding to the context agent. The context agents may be augmented to enable the system to provide more accurate interpretations of verbalizations made in a verbalization context corresponding to the context agent. The augmentation may include a user provided augmentation, a non-user provided augmentation, an automated augmentation, or other augmentations. The augmentation may be automated, such as adjusting a context parameter-weighting scheme through an adaptive misrecognition model, as discussed above.


According to various implementations of the invention, the policy module may determine which profile agents and/or which context agents to use through a set of heuristics provided in a context-tracking module. In one implementation, the context-tracking module may use phonetic fuzzy matching to track a series of verbalizations by the user to identify a verbalization context. The context-tracking module may utilize one or more M-Trees to track the series of verbalizations and determine a closest phonetic match. The context-tracking module may track one or more past verbalization contexts for the series of verbalizations, one or more current verbalization contexts for the series of verbalizations, and/or make predictions regarding one or more future verbalization contexts for the series of verbalizations. The policy module may utilize information about the verbalization context of the series of verbalizations generated by the context tracking module to manage and/or provide one or more profile and/or context agents.


According to one aspect of the invention, the system may include an interpretation history analysis module that may enable the system to augment one or more domain agents based on an analysis of past interpretations related to previously interpreted verbalizations. The augmentations enabled by the interpretation history analysis module may include a user augmentation, a third-party augmentation, an automated augmentation, or other augmentations. The interpretation history analysis module may include an information storage module that may store interpretation information related to past verbalizations, such as one or more preliminary interpretations associated with a past verbalization, one or more interpretation scores associated with a past verbalization, one or more probable interpretations associated with a past verbalization, whether or not a past verbalization was interpreted correctly, or other information. A frequency module may be included in the interpretation history analysis module, and the frequency module may use some or all of the information stored in the information storage module to generate one or more frequencies related to one or more past verbalizations. For example, the frequency module may calculate a frequency of word usage, word combinations, phonetic homonyms, interpretation errors for a particular verbalization, or other frequencies.


The information generated and/or stored by the interpretation history analysis module may be used to augment the profile and/or context agents in order to enhance the accuracy of subsequent interpretations. In some implementations, an adaptive misrecognition model may use one or more generated frequencies to augment one or more agents. For example, one or more parameters and/or weighting schemes of an agent or phonetic model may be augmented based on a frequency generated by the interpretation history analysis module. Other augmentations using information stored and/or generated by the interpretation history analysis module may be made, and the system may include a policy agent handler that may augment, update, remove, and/or provide one or more domain agents to the system. A domain agent may comprise a profile or context agent, and the policy agent handler may be controlled, directly or indirectly, by a third party (e.g., a commercial entity). The policy agent handler may augment, update, remove, and/or provide domain agents to the system as part of a commercial agreement, such as a licensing agreement, a subscription agreement, a maintenance agreement, or other agreements.


Other objects and advantages of the invention will be apparent to those skilled in the art based on the following detailed description and accompanying drawings.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates an exemplary system for enhancing automated speech interpretation according to one implementation of the invention.



FIG. 2 illustrates an exemplary grammar tree for enhancing the performance of a speech engine according to one implementation of the invention.



FIG. 3 illustrates an exemplary flow chart of a method for enhancing automated speech interpretation according to one implementation of the invention.





DETAILED DESCRIPTION


FIG. 1 illustrates an exemplary system for enhancing automated speech interpretation according to one implementation of the invention. A speech-to-text processing engine 112 may receive a user verbalization, and speech engine 112 may generate one or more preliminary interpretations of the user verbalization. The preliminary interpretations may represent a set of best guesses as to the user verbalization arranged in any predetermined form or data structure, such as an array, a matrix, or other forms. In one implementation of the invention, speech engine 112 may generate the preliminary interpretations by performing phonetic dictation to recognize a stream of phonemes, instead of a stream of words. Phonemic recognition provides several benefits, particularly in the embedded space, such as offering out-of-vocabulary (OOV) capabilities, improving processing performance by reducing the size of a grammar, and eliminating the need to train Statistical Language Models (SLMs). Those skilled in the art will recognize other advantages of phonemic recognition.


Speech engine 112 may apply the phoneme stream against one or more acoustic grammars that reliably map a speech signal to a phonemic representation in order to generate the plurality of preliminary interpretations. Characteristics of a speech signal may be mapped to a phonemic representation to construct a suitable acoustic grammar, and various acoustic grammars may be included in speech engine 112 to generate one or more preliminary interpretations according to the various acoustic grammars. For example, the English language may be mapped into a detailed acoustic grammar representing the phonotactic rules of English, where words may be divided into syllables, which may further be divided into core components of an onset, a nucleus, and a coda, which may be further broken down into one or more sub-categories.


Once the phonotactic rules of a speech signal have been identified, a detailed acoustic grammar tree may be constructed that accounts for the nuances of the speech signal. The acoustic grammar may include a loop of phonemes, and the phoneme loop may include a linking element to reduce the size of the search space associated with the grammar. Using the English language as an example, the grammar tree may include various branches representing English language syllables. The speech engine may traverse one or more grammar trees to generate one or more preliminary interpretations of a phoneme stream as a series of syllables that map to a word or phrase. By using phonemic recognition rather than word recognition, the size of the grammar can be reduced, which reduces the amount of time required to compile, load, and execute speech interpretation. Moreover, because the grammar maintains a high level of phonotactic constraints and therefore a large number of syllables, speech engine 112 may be very precise in generating phonemic representations of human verbalizations.


An acoustic grammar used by speech engine 112 may be further optimized to reduce compile time, load time, and execution time by reducing the size of a search space associated with the acoustic grammar. Referring now to FIG. 2, a traditional grammar tree 210 is compared to an exemplary grammar tree according to one aspect of the invention to demonstrate the performance enhancements of speech engine 112. In traditional speech processing engines, nodes in a grammar tree 210 tend to represent words, and large-list applications may be supported through a grammar tree 210 in which the nodes represent items in the large list. This requires the speech engine to parse through a large number of transition states to come up with a recognition result, which degrades response time. An example of this is seen in the following grammar structure:


“<street name> <city name>”→e.g., “NE 24th Street Bellevue”


In the above example, a large list of street names is followed by a large list of city names. Assuming three elements in the list of street names, and three elements in the list of city names, this results in twenty-one transitions, which may be represented by traditional grammar tree 210. Every end-node of the first list is followed by all entries in the second list, potentially leading to very large grammars because most real-world large-list applications are likely to include much more than three list items. For example, a city may have hundreds or thousands of street names, and there may be hundreds or thousands of city names. Moreover, every element in the second segment of traditional grammar tree 210 is repeated, once for each first segment, which introduces redundancy.


According to an aspect of the invention, the problems with traditional grammar trees may be resolved by using phonemic acoustic grammars instead of large-lists. The grammar may further be improved by including linking elements to reduce the number of transition states in the grammar. Thus, a grammar tree with a linking element 220 will merge after a first segment and then spread out again at a second segment, where the segments may represent a phoneme in an acoustic grammar, as discussed above. For example, for a two-syllable word in an acoustic grammar consisting of three phonemes, the linking element reduces the number of transitions from twenty-one in a traditional grammar tree 210 to twelve in a grammar tree with a linking element 220. Two syllables and three phonemes are chosen to show the reduction in search space in a grammar tree with a linking element 220 as opposed to a corresponding traditional grammar tree 210, although a real-world acoustic grammar modeled after a language is likely to have a maximum of roughly fifty phonemes. Moreover, the search space may be further reduced by restricting available transitions based on phonotactic constraints for an acoustic model.
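The arithmetic behind the twenty-one versus twelve comparison can be checked with a short sketch. Assuming the layout suggested by the description of FIG. 2, the traditional tree needs three entry transitions, nine cross transitions into duplicated second-segment nodes, and nine exit transitions, while the linked tree needs three transitions at each of its four stages; the list contents below are hypothetical and the tree layout is inferred rather than taken verbatim from the figure.

```python
def count_transitions(first_segment, second_segment, use_linking_element=False):
    """Count grammar transitions for a two-segment grammar.

    In the traditional tree, every second-segment element is duplicated once
    per first-segment element, so both the cross transitions and the exit
    transitions multiply. With a linking element, paths merge at a single
    node and the second segment appears only once."""
    n, m = len(first_segment), len(second_segment)
    if not use_linking_element:
        # start->first (n) + first->duplicated second (n*m) + second copies->end (n*m)
        return n + n * m + n * m
    # start->first (n) + first->link (n) + link->second (m) + second->end (m)
    return n + n + m + m

streets = ["24th St", "Main St", "Pine St"]   # hypothetical 3-item list
cities = ["Bellevue", "Seattle", "Redmond"]   # hypothetical 3-item list
print(count_transitions(streets, cities))         # 21
print(count_transitions(streets, cities, True))   # 12
```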


Using the approach described in FIG. 2, adding a linking element to an acoustic grammar may reduce both grammar size and response time. Part of a speech signal may be mapped to the linking element in order to maintain the phonotactic rules of the acoustic grammar. The linking element may be an acoustic element that is likely to be triggered even if unpronounced. For example, a schwa represents an unstressed, central vowel in the English language (e.g., the first and last sounds in the word “arena” are schwas). The phoneme schwa is an ideal linking element because of how it is represented in a frequency spectrum. That is, schwa is a brief sound and when a person opens their mouth to speak, there is a strong likelihood of passing through the frequencies of schwa even if unintended. Those skilled in the art will recognize that this approach may be extended to acoustic models of speech signals for other languages by using frequently elided phonemes as linking elements to reduce the search space of an acoustic grammar.


Referring again to FIG. 1, speech engine 112 may receive a user verbalization and process the verbalization into a plurality of preliminary interpretations using the techniques described above. That is, the verbalization may be interpreted as a series of phonemes, and the series of phonemes may be mapped to one or more preliminary interpretations by traversing one or more acoustic grammars that are modeled after grammar 220 of FIG. 2. The plurality of preliminary interpretations may take the form of words, parts of words, phrases, utterances, or a combination thereof, and the plurality of preliminary interpretations may be arranged as a matrix, an array, or in another form. The plurality of preliminary interpretations are then passed to a speech sharpening engine 110 for deducing a most probable interpretation of the verbalization.


According to various aspects of the invention, speech sharpening engine 110 may include an interpretation sharpening module 116, a policy module 114, an interpretation history analysis module 118, and a policy agent handler 120. The plurality of preliminary interpretations may be received by interpretation sharpening module 116, which forwards the preliminary interpretations to policy module 114 for further processing. Policy module 114 may include one or more context agents 126, one or more profile agents 128, and a context tracking module 130 that collectively revise the plurality of preliminary interpretations into a set of hypotheses that represent candidate recognitions of the verbalization. Policy module 114 may assign each hypothesis an interpretation score, and interpretation sharpening module 116 may designate the hypothesis with the highest (or lowest) interpretation score as a probable interpretation.


According to one aspect of the invention, policy module 114 may include one or more context agents 126. Context agents 126 may represent domains of knowledge corresponding to a given context, such as song titles, city and street names, finance, movies, or other contexts. Context agents 126 may use context objects and associated dynamic languages to represent a corresponding context. Policy module 114 may also include one or more profile agents 128. Profile agents 128 may represent domains of knowledge corresponding to a given profile, such as a specific user, language, accent, or other profiles. Profile agents 128 may use profile objects and dynamic languages to represent a corresponding profile. Dynamic languages for context agents 126 or profile agents 128 may specify vocabularies, word combinations, phrases, sentence structures, criteria, and priority weightings for any given context or profile, respectively. The priority weightings may weight individual parameters according to one or more weighting factors, such as assigning a weight according to a frequency of use, a difficulty to understand, or other factors. Policy module 114 may also include a context tracking module 130. Context tracking module 130 may track a verbalization context of a consecutive series of verbalizations. Context tracking module 130 may utilize one or more conversation trees to track the series of verbalizations. Context tracking module 130 may track one or more past or current verbalization contexts of the series of verbalizations, and/or may make predictions regarding one or more future verbalization contexts of the series of verbalizations. Policy module 114 may utilize information about the verbalization context, generated by context tracking module 130, to generate one or more sharpened interpretations and corresponding interpretation scores.
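As a loose illustration of the past/current/predicted split performed by context tracking module 130, the sketch below records recent verbalization contexts and predicts the next one by simple frequency counting; the real module uses conversation trees and phonetic fuzzy matching, so this naive heuristic is only a stand-in.

```python
from collections import Counter, deque

class ContextTracker:
    """Minimal stand-in for the context tracking module: it records the
    contexts of recent verbalizations and predicts the most likely next
    context from simple frequency counts."""

    def __init__(self, history_size=10):
        self.history = deque(maxlen=history_size)

    def observe(self, context):
        self.history.append(context)

    def current(self):
        # The most recent verbalization context, if any.
        return self.history[-1] if self.history else None

    def predict_next(self):
        # Predict the next verbalization context as the most frequent
        # recent context (an intentionally naive heuristic).
        counts = Counter(self.history)
        return counts.most_common(1)[0][0] if counts else None

tracker = ContextTracker()
for ctx in ["music", "music", "navigation", "music"]:
    tracker.observe(ctx)
print(tracker.current(), tracker.predict_next())  # music music
```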


In some implementations, policy module 114 may use context tracking module 130 to apply objects from one or more context agents 126 and/or profile agents 128 to the preliminary interpretations provided by speech engine 112. The various agents may compete with each other using a set of heuristics in a phonetic fuzzy matcher, where an intent or context of the user may be identified based on the set of heuristics about how a request may be phrased in a given domain. A closest phonetic match may be identified for suspect words and/or phrases among the plurality of preliminary interpretations.


The phonetic fuzzy matcher may include an M-Tree that is populated with context objects, profile objects, and/or dynamic language data from one or more of context agents 126 and/or profile agents 128. M-Trees are known to those skilled in the art. The M-Tree may assign relative priority weights to the context objects, profile objects, and/or dynamic language data in order to account for the possibility of misrecognized phonemes, extraneous phonemes, or erroneously deleted phonemes. A closest distance metric associated with the M-Tree may be used given the relative weightings of phoneme misrecognition, phoneme addition, and phoneme deletion for various contexts and/or profiles.


According to one aspect of the invention, one or more passes may be taken over the plurality of preliminary interpretations to identify dominant words and/or phrases among the plurality of preliminary interpretations. Using the M-Tree weighted model, one or more candidate interpretations may be made based on relevant substitution of suspect words and/or phrases based on phonetic similarities and/or domain appropriateness. For example, if a set of dominant words appears to be a movie name, relevant words and/or phrases may be substituted to generate a candidate interpretation about movies. After a set of candidate interpretations has been generated, the candidate interpretations are analyzed using the M-Tree weighted model. With the relevant domains constrained by the candidate interpretations, a confidence or interpretation score may be assigned to each candidate interpretation, with the interpretation score representing a likelihood that a particular candidate interpretation is a correct interpretation of the verbalization. The candidate interpretations may then be returned to interpretation sharpening module 116, and interpretation sharpening module 116 may select a candidate interpretation with a highest (or lowest) interpretation score as a probable interpretation of the verbalization.
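A rough sketch of the substitution-and-scoring pass described above follows; it uses character-level similarity as a stand-in for phonetic similarity and a hypothetical set of movie titles as the domain data, so it illustrates the flow rather than the actual phonetic fuzzy matcher.

```python
from difflib import SequenceMatcher

# Hypothetical domain agent data: a small set of movie titles.
MOVIE_TITLES = {"jurassic park", "the matrix", "casablanca"}

def similarity(a, b):
    # Character-level similarity standing in for phonetic similarity.
    return SequenceMatcher(None, a, b).ratio()

def candidate_interpretations(preliminary, domain_entries, threshold=0.6):
    """For each preliminary interpretation, substitute the closest domain
    entry when it is similar enough, and score the candidate by that
    similarity; return candidates sorted from most to least confident."""
    candidates = []
    for phrase in preliminary:
        best = max(domain_entries, key=lambda entry: similarity(phrase, entry))
        score = similarity(phrase, best)
        candidates.append((best if score >= threshold else phrase, score))
    return sorted(candidates, key=lambda c: c[1], reverse=True)

preliminary = ["jurassic bark", "the may tricks", "drastic park"]
for candidate, score in candidate_interpretations(preliminary, MOVIE_TITLES):
    print(f"{candidate!r}: {score:.2f}")
```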


According to various implementations of the invention, speech sharpening engine 110 may include an interpretation history analysis module 118. Interpretation history analysis module 118 may include an information storage module 122 and a frequency module 124. Information storage module 122 may store information related to verbalizations, including components of verbalizations, preliminary interpretations, dominant words and/or phrases, candidate interpretations, probable interpretations, and/or interpretation scores associated with verbalizations, as well as whether or not a verbalization was interpreted correctly, or other information. Frequency module 124 may use some or all of the information stored in information storage module 122 to generate one or more frequencies related to one or more past verbalizations. For example, frequency module 124 may calculate a word usage frequency, a word combination frequency, a frequency related to a set of verbalizations that are phonetically similar but have distinct meanings, an interpretation error frequency for a particular verbalization, or other frequencies.
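The frequency calculations attributed to frequency module 124 can be illustrated with a few Counter-based aggregations over hypothetical stored records; the record fields and example verbalizations are assumptions made for the sketch.

```python
from collections import Counter

# Hypothetical records standing in for the contents of information storage
# module 122; the field names are illustrative assumptions.
records = [
    {"probable": "play some jazz", "correct": True},
    {"probable": "play some jass", "correct": False},
    {"probable": "play some jazz", "correct": True},
    {"probable": "call home", "correct": True},
]

# Word usage frequency across all stored probable interpretations.
word_usage = Counter(word for r in records for word in r["probable"].split())

# Word combination frequency (adjacent word pairs).
word_pairs = Counter(
    pair
    for r in records
    for pair in zip(r["probable"].split(), r["probable"].split()[1:])
)

# Interpretation error frequency for particular verbalizations.
error_freq = Counter(r["probable"] for r in records if not r["correct"])

print(word_usage.most_common(3))
print(word_pairs.most_common(2))
print(error_freq)
```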


Information stored and/or generated by interpretation history analysis module 118 may be used to augment speech sharpening engine 110. In some implementations, the information may be used to adjust various weights used in phonetic models, such as context agents 126 or profile agents 128, as well as to adapt the relative weights in the M-Tree in context tracking module 130 to enhance accuracy for subsequent verbalizations. In another implementation, the stored information may be sent to a third party or commercial entity for analyzing the data and developing new domain agents or further improving the accuracy of speech sharpening engine 110. For example, one or more parameters and/or weighting schemes of an agent may be augmented based on a frequency generated by interpretation history analysis module 118. Other augmentations related to information stored in and/or generated by interpretation history analysis module 118 may be made. Speech sharpening engine 110 may also include a policy agent handler 120 that may augment, update, remove, and/or provide one or more domain agents to policy module 114. A domain agent may include one or more new, modified, or updated context agents 126 and/or profile agents 128. Policy agent handler 120 may also augment or update the M-Tree in context tracking module 130 to reflect adjustments in priority weighting schemes or phonetic models. Policy agent handler 120 may be controlled, directly or indirectly, by a third party, such as a commercial entity, and domain agents may be augmented, updated, removed, and/or provided by policy agent handler 120 as part of a commercial agreement, licensing agreement, subscription agreement, maintenance agreement, or other agreement.


Referring to FIG. 3, a flow chart demonstrating an exemplary method for enhancing the performance and accuracy of speech interpretation is provided. The method may begin by receiving a user verbalization at an operation 312. The received user verbalization may be electronically captured at operation 312, such as by a microphone or other electronic audio capture device. The electronically captured verbalization may be provided to a speech interpretation engine, such as speech engine 112 in FIG. 1.


The speech interpretation engine may then generate one or more preliminary interpretations of the received verbalization at an operation 314. According to one implementation of the invention, the plurality of preliminary interpretations may be generated using phonetic dictation, grammar trees with linking elements, or any combination thereof to improve performance and enhance accuracy. Phonetic dictation and the use of linking elements to reduce the search space of a grammar tree are discussed in greater detail above. The preliminary interpretations may be arranged in any predetermined form, such as an array, a matrix, or other forms.


In an operation 320, the preliminary interpretations may be provided to a speech sharpening engine. The speech sharpening engine may take one or more passes over the plurality of preliminary interpretations to identify dominant words and/or phrases in operation 320. This information may then be used to generate one or more candidate interpretations. The candidate interpretations may be based on various domain agents, such as context agents and/or profile agents, which may be organized as a weighted domain model, such as an M-Tree. For example, if a set of dominant words sounds like a movie name, apply policies operation 320 may generate a candidate interpretation that substitutes relevant words and/or phrases based on a domain agent populated with movie titles. Additional passes may be made over the candidate interpretations, which may be constrained by domain information associated with the candidate interpretations, to thereby generate a confidence score or interpretation score for each candidate interpretation. The interpretation score may represent a likelihood that a particular candidate interpretation is a correct interpretation of the verbalization received in operation 312. Apply policies operation 320 is described in greater detail above with reference to FIG. 1.


The candidate interpretations and corresponding interpretation scores may then be analyzed to determine a probable interpretation in an operation 322. In one implementation of the invention, a candidate interpretation with a highest (or lowest) score may be designated as a probable interpretation. The probable interpretation may then be output in an operation 324, such as for use in a voice-activated vehicular navigation system, a voice-controlled server or desktop computer, or other electronic device that can be controlled using voice commands.


Information relating to the verbalization and the interpretations of the verbalization may be provided in a store interpretation operation 325. Store interpretation operation 325 may store interpretation information related to verbalizations, such as components of verbalizations, preliminary interpretations, dominant words and/or phrases, candidate interpretations, probable interpretations, and/or interpretation scores associated with verbalizations, as well as whether or not a verbalization was interpreted correctly, or other information. In some implementations of the invention, some or all of the interpretation information stored at store interpretation operation 325 may be used to determine one or more frequencies at a determine frequencies operation 326. The frequencies calculated at determine frequencies operation 326 may include one or more frequencies related to past verbalizations, such as a word usage frequency, a word combination frequency, a frequency related to a set of verbalizations that are phonetically similar but have distinct meanings, an interpretation error frequency for a particular verbalization, or other frequencies. Determine frequencies operation 326 may be performed by interpretation history analysis module 118.


In various implementations, a decision may be made whether to augment a speech sharpening engine in an augmentation decision operation 328. The decision concerning system augmentation may be based at least in part on information generated at determine frequencies operation 326, such as one or more frequencies, or other information. If it is decided that no augmentation is needed, no further action is taken until another verbalization is captured, and the method ends. In some instances, decision operation 328 may determine that an augmentation should be made, and control passes to an augment system operation 330. Augment system operation 330 may include making an augmentation to a speech sharpening engine. For example, one or more domain agents may be augmented to reflect probabilities of an interpretation being a correct interpretation of a verbalization, to update a user profile, or other augmentation. Dynamic languages associated with context agents and/or profile agents may be augmented, or parameter weights may be augmented to enhance accuracy when interpreting subsequent verbalizations. For example, an adaptive misrecognition technique may adjust the various weights in a phonetic model or update similarity weights for regional accents, or other augmentations may be made. In parallel to augment system operation 330, new agent policies may be received in an operation 332. For example, a third party or commercial entity may redesign or modify various domain agents, new domain agents may be developed and installed as plug-ins, domain agents that are unreliable may be removed, or other augmentations or modifications may be made. Thus, the method continually refines the domain agents and the weighting of various parameters in order to refine the accuracy of the speech sharpening engine for subsequent verbalizations.
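As one possible reading of the weight-augmentation step, the sketch below lowers the weights of entries that are frequently involved in misrecognitions; the update rule, learning rate, and entries are illustrative assumptions rather than the adaptive misrecognition model itself.

```python
def augment_weights(weights, error_frequencies, learning_rate=0.1, floor=0.05):
    """Nudge per-entry weights down in proportion to how often the entry was
    involved in a misrecognition, so unreliable entries contribute less to
    subsequent interpretations. A simple stand-in for the adaptive
    misrecognition model described above."""
    updated = {}
    for entry, weight in weights.items():
        penalty = learning_rate * error_frequencies.get(entry, 0)
        updated[entry] = max(floor, weight - penalty)
    return updated

agent_weights = {"jazz": 0.8, "jass": 0.8, "home": 0.9}  # hypothetical entries
errors = {"jass": 3}                                     # misrecognized 3 times
print({k: round(v, 2) for k, v in augment_weights(agent_weights, errors).items()})
# {'jazz': 0.8, 'jass': 0.5, 'home': 0.9}
```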


The above disclosure has been described in terms of specific exemplary aspects, implementations, and embodiments of the invention. However, those skilled in the art will recognize various changes and modifications that may be made without departing from the scope and spirit of the invention. For example, references throughout the specification to “one implementation,” “one aspect,” “an implementation,” or “an aspect” may indicate that a particular feature, structure, or characteristic is included in at least one implementation. However, the particular features, structures, or characteristics may be combined in any suitable manner in one or more implementations. Therefore, the specification and drawings are to be regarded as exemplary only, and the scope of the invention is to be determined solely by the appended claims.

Claims
  • 1. A system for providing an acoustic grammar to dynamically sharpen speech interpretation, wherein the system comprises an electronic device configured to: represent one or more syllables with one or more series that include acoustic elements associated with an acoustic speech model; andconstruct an acoustic grammar that contains transitions between the acoustic elements in the one or more series associated with the one or more represented syllables, wherein an unstressed central vowel links sequential phonemic elements in the acoustic grammar to reduce the transitions between the acoustic elements.
  • 2. The system of claim 1, wherein the electronic device is further configured to constrain the transitions between the acoustic elements using one or more phonotactic rules associated with the acoustic speech model.
  • 3. The system of claim 1, wherein the acoustic elements in the one or more series divide the one or more represented syllables into one or more core components.
  • 4. The system of claim 3, wherein the one or more core components associated with the one or more represented syllables include an onset, a nucleus, and a coda.
  • 5. The system of claim 3, wherein the acoustic elements in the one or more series further divide the one or more core components associated with the one or more represented syllables into one or more sub-categories.
  • 6. The system of claim 1, wherein the unstressed central vowel comprises a schwa.
  • 7. A method for providing an acoustic grammar to dynamically sharpen speech interpretation, comprising: representing one or more syllables with one or more series that include acoustic elements associated with an acoustic speech model; andconstructing, via an electronic device, an acoustic grammar that contains transitions between the acoustic elements in the one or more series associated with the one or more represented syllables, wherein an unstressed central vowel links sequential phonemic elements in the acoustic grammar to reduce the transitions between the acoustic elements.
  • 8. The method of claim 7, wherein the electronic device is further configured to constrain the transitions between the acoustic elements using one or more phonotactic rules associated with the acoustic speech model.
  • 9. The method of claim 7, wherein the acoustic elements in the one or more series divide the one or more represented syllables into one or more core components.
  • 10. The method of claim 9, wherein the one or more core components associated with the one or more represented syllables include an onset, a nucleus, and a coda.
  • 11. The method of claim 9, wherein the acoustic elements in the one or more series further divide the one or more core components associated with the one or more represented syllables into one or more sub-categories.
  • 12. The method of claim 7, wherein the unstressed central vowel comprises a schwa.
  • 13. A non-transitory computer-readable storage medium that stores an acoustic grammar data structure, wherein the acoustic grammar data structure stored on the computer-readable storage medium comprises: one or more syllable data objects, wherein the one or more syllable data objects arrange acoustic elements associated with an acoustic speech model in one or more series;one or more transition data objects that represent transitions between the acoustic elements associated with the one or more syllable data objects; andan unstressed central vowel data object that links sequential phonemic elements associated with the one or more syllable data objects to reduce the transitions that the one or more transition data objects represent between the acoustic elements associated with the one or more syllable data objects.
  • 14. The computer-readable storage medium of claim 13, wherein the acoustic grammar data structure applies one or more phonotactic rules associated with the acoustic speech model to further constrain the transitions that the one or more transition data objects represent between the acoustic elements associated with the one or more syllable data objects.
  • 15. The computer-readable storage medium of claim 13, wherein the acoustic elements associated with the one or more syllable data objects divide the one or more syllable data objects into one or more core components.
  • 16. The computer-readable storage medium of claim 15, wherein the one or more core components associated with the one or more syllable data objects include an onset, a nucleus, and a coda.
  • 17. The computer-readable storage medium of claim 15, wherein the acoustic elements associated with the one or more syllable data objects further divide the one or more core components associated with the one or more syllable data objects into one or more sub-categories.
  • 18. The computer-readable storage medium of claim 13, wherein the unstressed central vowel data object represents a schwa.
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 12/608,544, entitled “Dynamic Speech Sharpening,” filed Oct. 29, 2009, which issued as U.S. Pat. No. 7,983,917 on Jul. 19, 2011, and which is a divisional of U.S. patent application Ser. No. 11/513,269, entitled “Dynamic Speech Sharpening,” filed Aug. 31, 2006, which issued as U.S. Pat. No. 7,634,409 on Dec. 15, 2009, and which claims the benefit of U.S. Provisional Patent Application Ser. No. 60/712,412, entitled “Dynamic Speech Sharpening,” filed Aug. 31, 2005, the contents of which are hereby incorporated by reference in their entirety. In addition, this application is related to U.S. patent application Ser. No. 12/608,572, entitled “Dynamic Speech Sharpening,” filed Oct. 29, 2009, which issued as U.S. Pat. No. 8,069,046 on Nov. 29, 2011, and which is a continuation of above-referenced U.S. patent application Ser. No. 11/513,269, and this application is further related to U.S. patent application Ser. No. 10/452,147, entitled “Systems and Methods for Responding to Natural Language Speech Utterance,” filed Jun. 3, 2003, which issued as U.S. Pat. No. 7,398,209 on Jul. 8, 2008, U.S. patent application Ser. No. 10/618,633, entitled “Mobile Systems and Methods for Responding to Natural Language Speech Utterance,” filed Jul. 15, 2003, which issued as U.S. Pat. No. 7,693,720 on Apr. 6, 2010, U.S. patent application Ser. No. 11/197,504, entitled “Systems and Methods for Responding to Natural Language Speech Utterance,” filed Aug. 5, 2005, which issued as U.S. Pat. No. 7,640,160 on Dec. 29, 2009, U.S. patent application Ser. No. 11/200,164, entitled “System and Method of Supporting Adaptive Misrecognition in Conversational Speech,” filed Aug. 10, 2005, which issued as U.S. Pat. No. 7,620,549 on Nov. 17, 2009, and U.S. patent application Ser. No. 11/212,693, entitled “Mobile Systems and Methods of Supporting Natural Language Human-Machine Interactions,” filed Aug. 29, 2005, which issued as U.S. Pat. No. 7,949,529 on May 24, 2011, the contents of which are hereby incorporated by reference in their entirety.

US Referenced Citations (408)
Number Name Date Kind
4430669 Cheung Feb 1984 A
5027406 Roberts et al. Jun 1991 A
5155743 Jacobs Oct 1992 A
5208748 Flores et al. May 1993 A
5274560 LaRue Dec 1993 A
5357596 Takebayashi et al. Oct 1994 A
5377350 Skinner Dec 1994 A
5386556 Hedin et al. Jan 1995 A
5424947 Nagao et al. Jun 1995 A
5471318 Ahuja et al. Nov 1995 A
5475733 Eisdorfer et al. Dec 1995 A
5488652 Bielby et al. Jan 1996 A
5499289 Bruno et al. Mar 1996 A
5500920 Kupiec Mar 1996 A
5517560 Greenspan May 1996 A
5533108 Harris et al. Jul 1996 A
5537436 Bottoms et al. Jul 1996 A
5539744 Chu et al. Jul 1996 A
5557667 Bruno et al. Sep 1996 A
5563937 Bruno et al. Oct 1996 A
5577165 Takebayashi et al. Nov 1996 A
5590039 Ikeda et al. Dec 1996 A
5617407 Bareis Apr 1997 A
5633922 August et al. May 1997 A
5675629 Raffel et al. Oct 1997 A
5696965 Dedrick Dec 1997 A
5708422 Blonder et al. Jan 1998 A
5721938 Stuckey Feb 1998 A
5722084 Chakrin et al. Feb 1998 A
5740256 Castello Da Costa et al. Apr 1998 A
5742763 Jones Apr 1998 A
5748841 Morin et al. May 1998 A
5748974 Johnson May 1998 A
5752052 Richardson et al. May 1998 A
5754784 Garland et al. May 1998 A
5761631 Nasukawa Jun 1998 A
5774859 Houser et al. Jun 1998 A
5794050 Dahlgren et al. Aug 1998 A
5794196 Yegnanarayanan et al. Aug 1998 A
5797112 Komatsu et al. Aug 1998 A
5799276 Komissarchik et al. Aug 1998 A
5802510 Jones Sep 1998 A
5832221 Jones Nov 1998 A
5839107 Gupta et al. Nov 1998 A
5867817 Catallo et al. Feb 1999 A
5878385 Bralich et al. Mar 1999 A
5878386 Coughlin Mar 1999 A
5892813 Morin et al. Apr 1999 A
5895464 Bhandari et al. Apr 1999 A
5895466 Goldberg et al. Apr 1999 A
5897613 Chan Apr 1999 A
5902347 Backman et al. May 1999 A
5911120 Jarett et al. Jun 1999 A
5918222 Fukui et al. Jun 1999 A
5926784 Richardson et al. Jul 1999 A
5933822 Braden-Harder et al. Aug 1999 A
5953393 Culbreth et al. Sep 1999 A
5960397 Rahim Sep 1999 A
5960399 Barclay et al. Sep 1999 A
5960447 Holt et al. Sep 1999 A
5963894 Richardson et al. Oct 1999 A
5963940 Liddy et al. Oct 1999 A
5987404 Della Pietra et al. Nov 1999 A
5991721 Asano et al. Nov 1999 A
5995119 Cosatto et al. Nov 1999 A
5995928 Nguyen et al. Nov 1999 A
6009382 Martino et al. Dec 1999 A
6014559 Amin Jan 2000 A
6018708 Dahan et al. Jan 2000 A
6021384 Gorin et al. Feb 2000 A
6035267 Watanabe et al. Mar 2000 A
6044347 Abella et al. Mar 2000 A
6049602 Foladare et al. Apr 2000 A
6049607 Marash et al. Apr 2000 A
6058187 Chen May 2000 A
6067513 Ishimitsu May 2000 A
6078886 Dragosh et al. Jun 2000 A
6081774 De Hita et al. Jun 2000 A
6085186 Christianson et al. Jul 2000 A
6101241 Boyce et al. Aug 2000 A
6108631 Ruhl Aug 2000 A
6119087 Kuhn et al. Sep 2000 A
6134235 Goldman et al. Oct 2000 A
6144667 Doshi et al. Nov 2000 A
6144938 Surace et al. Nov 2000 A
6154526 Dahlke et al. Nov 2000 A
6160883 Jackson et al. Dec 2000 A
6167377 Gillick et al. Dec 2000 A
6173266 Marx et al. Jan 2001 B1
6173279 Levin et al. Jan 2001 B1
6175858 Bulfer et al. Jan 2001 B1
6185535 Hedin et al. Feb 2001 B1
6188982 Chiang Feb 2001 B1
6192110 Abella et al. Feb 2001 B1
6192338 Haszto et al. Feb 2001 B1
6195634 Dudemaine et al. Feb 2001 B1
6195651 Handel et al. Feb 2001 B1
6208964 Sabourin Mar 2001 B1
6208972 Grant et al. Mar 2001 B1
6219346 Maxemchuk Apr 2001 B1
6219643 Cohen et al. Apr 2001 B1
6226612 Srenger et al. May 2001 B1
6233556 Teunen et al. May 2001 B1
6233559 Balakrishnan May 2001 B1
6233561 Junqua et al. May 2001 B1
6246981 Papineni et al. Jun 2001 B1
6266636 Kosaka et al. Jul 2001 B1
6269336 Ladd et al. Jul 2001 B1
6272455 Hoshen et al. Aug 2001 B1
6275231 Obradovich Aug 2001 B1
6278968 Franz et al. Aug 2001 B1
6288319 Catona Sep 2001 B1
6292767 Jackson et al. Sep 2001 B1
6308151 Smith Oct 2001 B1
6314402 Monaco et al. Nov 2001 B1
6362748 Huang Mar 2002 B1
6366882 Bijl et al. Apr 2002 B1
6366886 Dragosh et al. Apr 2002 B1
6374214 Friedland et al. Apr 2002 B1
6377913 Coffman et al. Apr 2002 B1
6381535 Durocher et al. Apr 2002 B1
6385596 Wiser et al. May 2002 B1
6385646 Brown et al. May 2002 B1
6393428 Miller et al. May 2002 B1
6397181 Li et al. May 2002 B1
6404878 Jackson et al. Jun 2002 B1
6405170 Phillips et al. Jun 2002 B1
6408272 White et al. Jun 2002 B1
6411810 Maxemchuk Jun 2002 B1
6415257 Junqua et al. Jul 2002 B1
6418210 Sayko Jul 2002 B1
6420975 DeLine et al. Jul 2002 B1
6429813 Feigen Aug 2002 B2
6430285 Bauer et al. Aug 2002 B1
6430531 Polish Aug 2002 B1
6434523 Monaco Aug 2002 B1
6434524 Weber Aug 2002 B1
6442522 Carberry et al. Aug 2002 B1
6446114 Bulfer et al. Sep 2002 B1
6453153 Bowker et al. Sep 2002 B1
6453292 Ramaswamy et al. Sep 2002 B2
6456711 Cheung et al. Sep 2002 B1
6466654 Cooper et al. Oct 2002 B1
6466899 Yano et al. Oct 2002 B1
6470315 Netsch et al. Oct 2002 B1
6498797 Anerousis et al. Dec 2002 B1
6499013 Weber Dec 2002 B1
6501833 Phillips et al. Dec 2002 B2
6501834 Milewski et al. Dec 2002 B1
6510417 Woods et al. Jan 2003 B1
6513006 Howard et al. Jan 2003 B2
6522746 Marchok et al. Feb 2003 B1
6523061 Halverson et al. Feb 2003 B1
6532444 Weber Mar 2003 B1
6539348 Bond et al. Mar 2003 B1
6549629 Finn et al. Apr 2003 B2
6553372 Brassell et al. Apr 2003 B1
6556970 Sasaki et al. Apr 2003 B1
6556973 Lewin Apr 2003 B1
6560576 Cohen et al. May 2003 B1
6567778 Chao Chang et al. May 2003 B1
6567797 Schuetze et al. May 2003 B1
6570555 Prevost et al. May 2003 B1
6570964 Murveit et al. May 2003 B1
6574597 Mohri et al. Jun 2003 B1
6574624 Johnson et al. Jun 2003 B1
6581103 Dengler Jun 2003 B1
6587858 Strazza Jul 2003 B1
6591239 McCall et al. Jul 2003 B1
6594257 Doshi et al. Jul 2003 B1
6594367 Marash et al. Jul 2003 B1
6598018 Junqua Jul 2003 B1
6604075 Brown et al. Aug 2003 B1
6604077 Dragosh et al. Aug 2003 B2
6606598 Holthouse et al. Aug 2003 B1
6611692 Raffel et al. Aug 2003 B2
6614773 Maxemchuk Sep 2003 B1
6615172 Bennett et al. Sep 2003 B1
6622119 Ramaswamy et al. Sep 2003 B1
6629066 Jackson et al. Sep 2003 B1
6631346 Karaorman et al. Oct 2003 B1
6633846 Bennett et al. Oct 2003 B1
6643620 Contolini et al. Nov 2003 B1
6650747 Bala et al. Nov 2003 B1
6658388 Kleindienst et al. Dec 2003 B1
6678680 Woo Jan 2004 B1
6681206 Gorin et al. Jan 2004 B1
6691151 Cheyer et al. Feb 2004 B1
6701294 Ball et al. Mar 2004 B1
6704708 Pickering Mar 2004 B1
6708150 Hirayama et al. Mar 2004 B1
6721001 Berstis Apr 2004 B1
6721706 Strubbe et al. Apr 2004 B1
6735592 Neumann et al. May 2004 B1
6741931 Kohut et al. May 2004 B1
6742021 Halverson et al. May 2004 B1
6751591 Gorin et al. Jun 2004 B1
6751612 Schuetze et al. Jun 2004 B1
6754485 Obradovich et al. Jun 2004 B1
6757718 Halverson et al. Jun 2004 B1
6795808 Strubbe et al. Sep 2004 B1
6801604 Maes et al. Oct 2004 B2
6801893 Backfried et al. Oct 2004 B1
6829603 Chai et al. Dec 2004 B1
6832230 Zilliacus et al. Dec 2004 B1
6833848 Wolff et al. Dec 2004 B1
6856990 Barile et al. Feb 2005 B2
6865481 Kawazoe et al. Mar 2005 B2
6868380 Kroeker Mar 2005 B2
6877134 Fuller et al. Apr 2005 B1
6901366 Kuhn et al. May 2005 B1
6910003 Arnold et al. Jun 2005 B1
6912498 Stevens et al. Jun 2005 B2
6934756 Maes Aug 2005 B2
6937977 Gerson Aug 2005 B2
6944594 Busayapongchai et al. Sep 2005 B2
6950821 Faybishenko et al. Sep 2005 B2
6954755 Reisman Oct 2005 B2
6959276 Droppo et al. Oct 2005 B2
6968311 Knockeart et al. Nov 2005 B2
6973387 Masclet et al. Dec 2005 B2
6975993 Keiller Dec 2005 B1
6980092 Turnbull et al. Dec 2005 B2
6983055 Luo Jan 2006 B2
6990513 Belfiore et al. Jan 2006 B2
6996531 Korall et al. Feb 2006 B2
7003463 Maes et al. Feb 2006 B1
7016849 Arnold et al. Mar 2006 B2
7020609 Thrift et al. Mar 2006 B2
7024364 Guerra et al. Apr 2006 B2
7027975 Pazandak et al. Apr 2006 B1
7035415 Belt et al. Apr 2006 B2
7043425 Pao May 2006 B2
7054817 Shao May 2006 B2
7058890 George et al. Jun 2006 B2
7062488 Reisman Jun 2006 B1
7069220 Coffman et al. Jun 2006 B2
7072834 Zhou Jul 2006 B2
7082469 Gold et al. Jul 2006 B2
7092928 Elad et al. Aug 2006 B1
7107210 Deng et al. Sep 2006 B2
7110951 Lemelson et al. Sep 2006 B1
7127400 Koch Oct 2006 B2
7136875 Anderson et al. Nov 2006 B2
7137126 Coffman et al. Nov 2006 B1
7143037 Chestnut Nov 2006 B1
7146319 Hunt Dec 2006 B2
7165028 Gong Jan 2007 B2
7197069 Agazzi et al. Mar 2007 B2
7203644 Anderson et al. Apr 2007 B2
7206418 Yang et al. Apr 2007 B2
7228276 Omote et al. Jun 2007 B2
7231343 Treadgold et al. Jun 2007 B1
7236923 Gupta Jun 2007 B1
7277854 Bennett et al. Oct 2007 B2
7289606 Sibal et al. Oct 2007 B2
7301093 Sater et al. Nov 2007 B2
7305381 Poppink et al. Dec 2007 B1
7337116 Charlesworth et al. Feb 2008 B2
7340040 Saylor et al. Mar 2008 B1
7366669 Nishitani et al. Apr 2008 B2
7376645 Bernard May 2008 B2
7386443 Parthasarathy et al. Jun 2008 B1
7398209 Kennewick et al. Jul 2008 B2
7406421 Odinak et al. Jul 2008 B2
7415414 Azara et al. Aug 2008 B2
7424431 Greene et al. Sep 2008 B2
7447635 Konopka et al. Nov 2008 B1
7461059 Richardson et al. Dec 2008 B2
7472020 Brulle-Drews Dec 2008 B2
7472060 Gorin et al. Dec 2008 B1
7478036 Shen et al. Jan 2009 B2
7487088 Gorin et al. Feb 2009 B1
7493259 Jones et al. Feb 2009 B2
7493559 Wolff et al. Feb 2009 B1
7502738 Kennewick et al. Mar 2009 B2
7516076 Walker et al. Apr 2009 B2
7536297 Byrd et al. May 2009 B2
7536374 Au May 2009 B2
7558730 Davis et al. Jul 2009 B2
7574362 Walker et al. Aug 2009 B2
7606708 Hwang Oct 2009 B2
7620549 Di Cristo et al. Nov 2009 B2
7634409 Kennewick et al. Dec 2009 B2
7640160 Di Cristo et al. Dec 2009 B2
7676365 Hwang et al. Mar 2010 B2
7676369 Fujimoto et al. Mar 2010 B2
7693720 Kennewick et al. Apr 2010 B2
7729918 Walker et al. Jun 2010 B2
7788084 Brun et al. Aug 2010 B2
7809570 Kennewick et al. Oct 2010 B2
7818176 Freeman et al. Oct 2010 B2
7831433 Belvin et al. Nov 2010 B1
7873523 Potter et al. Jan 2011 B2
7902969 Obradovich Mar 2011 B2
7917367 Di Cristo et al. Mar 2011 B2
7949529 Weider et al. May 2011 B2
7949537 Walker et al. May 2011 B2
7983917 Kennewick et al. Jul 2011 B2
8015006 Kennewick et al. Sep 2011 B2
8069046 Kennewick et al. Nov 2011 B2
8073681 Baldwin et al. Dec 2011 B2
8086463 Ativanichayaphong et al. Dec 2011 B2
20010041980 Howard et al. Nov 2001 A1
20010049601 Kroeker et al. Dec 2001 A1
20020015500 Belt et al. Feb 2002 A1
20020022927 Lemelson et al. Feb 2002 A1
20020035501 Handel et al. Mar 2002 A1
20020049805 Yamada et al. Apr 2002 A1
20020065568 Silfvast et al. May 2002 A1
20020069059 Smith Jun 2002 A1
20020082911 Dunn et al. Jun 2002 A1
20020087525 Abbott et al. Jul 2002 A1
20020120609 Lang et al. Aug 2002 A1
20020124050 Middeljans Sep 2002 A1
20020138248 Corston-Oliver et al. Sep 2002 A1
20020143535 Kist et al. Oct 2002 A1
20020188602 Stubler et al. Dec 2002 A1
20020198714 Zhou Dec 2002 A1
20030014261 Kageyama Jan 2003 A1
20030016835 Elko et al. Jan 2003 A1
20030046346 Mumick et al. Mar 2003 A1
20030064709 Gailey et al. Apr 2003 A1
20030088421 Maes et al. May 2003 A1
20030097249 Walker et al. May 2003 A1
20030110037 Walker et al. Jun 2003 A1
20030112267 Belrose Jun 2003 A1
20030115062 Walker et al. Jun 2003 A1
20030120493 Gupta Jun 2003 A1
20030135488 Amir et al. Jul 2003 A1
20030144846 Denenberg et al. Jul 2003 A1
20030158731 Falcon et al. Aug 2003 A1
20030182132 Niemoeller Sep 2003 A1
20030204492 Wolf et al. Oct 2003 A1
20030206640 Malvar et al. Nov 2003 A1
20030212550 Ubale Nov 2003 A1
20030236664 Sharma Dec 2003 A1
20040006475 Ehlen et al. Jan 2004 A1
20040025115 Sienel et al. Feb 2004 A1
20040044516 Kennewick et al. Mar 2004 A1
20040098245 Walker et al. May 2004 A1
20040166832 Portman et al. Aug 2004 A1
20040167771 Duan et al. Aug 2004 A1
20040193408 Hunt Sep 2004 A1
20040193420 Kennewick et al. Sep 2004 A1
20040199375 Ehsani et al. Oct 2004 A1
20040205671 Sukehiro et al. Oct 2004 A1
20040243417 Pitts, III et al. Dec 2004 A9
20050015256 Kargman Jan 2005 A1
20050021334 Iwahashi Jan 2005 A1
20050021826 Kumar Jan 2005 A1
20050033574 Kim et al. Feb 2005 A1
20050043940 Elder Feb 2005 A1
20050114116 Fiedler May 2005 A1
20050137850 Odell Jun 2005 A1
20050137877 Oesterling et al. Jun 2005 A1
20050143994 Mori et al. Jun 2005 A1
20050246174 DeGolia Nov 2005 A1
20060206310 Ravikumar et al. Sep 2006 A1
20070033005 Cristo et al. Feb 2007 A1
20070033020 (Kelleher) Francois et al. Feb 2007 A1
20070038436 Cristo et al. Feb 2007 A1
20070043574 Coffman et al. Feb 2007 A1
20070050191 Weider et al. Mar 2007 A1
20070055525 Kennewick et al. Mar 2007 A1
20070073544 Millett et al. Mar 2007 A1
20070118357 Kasravi et al. May 2007 A1
20070179778 Gong et al. Aug 2007 A1
20070186165 Maislos et al. Aug 2007 A1
20070214182 Rosenberg Sep 2007 A1
20070250901 McIntire et al. Oct 2007 A1
20070265850 Kennewick et al. Nov 2007 A1
20070299824 Pan et al. Dec 2007 A1
20080065386 Cross et al. Mar 2008 A1
20080091406 Baldwin et al. Apr 2008 A1
20080103761 Printz et al. May 2008 A1
20080115163 Gilboa et al. May 2008 A1
20080133215 Sarukkai Jun 2008 A1
20080140385 Mahajan et al. Jun 2008 A1
20080177530 Cross et al. Jul 2008 A1
20080189110 Freeman et al. Aug 2008 A1
20080235023 Kennewick et al. Sep 2008 A1
20080235027 Cross Sep 2008 A1
20080319751 Kennewick et al. Dec 2008 A1
20090117885 Roth May 2009 A1
20090144271 Richardson et al. Jun 2009 A1
20090150156 Kennewick et al. Jun 2009 A1
20090171664 Kennewick et al. Jul 2009 A1
20090216540 Tessel et al. Aug 2009 A1
20090271194 Davis et al. Oct 2009 A1
20090299745 Kennewick et al. Dec 2009 A1
20100023320 Di Cristo et al. Jan 2010 A1
20100049501 Kennewick et al. Feb 2010 A1
20100049514 Kennewick et al. Feb 2010 A1
20100057443 Di Cristo et al. Mar 2010 A1
20100063880 Atsmon et al. Mar 2010 A1
20100145700 Kennewick et al. Jun 2010 A1
20100204986 Kennewick et al. Aug 2010 A1
20100204994 Kennewick et al. Aug 2010 A1
20100217604 Baldwin et al. Aug 2010 A1
20100286985 Kennewick et al. Nov 2010 A1
20100299142 Freeman et al. Nov 2010 A1
20110112827 Kennewick et al. May 2011 A1
20110112921 Kennewick et al. May 2011 A1
20110131036 Di Cristo et al. Jun 2011 A1
20110131045 Cristo et al. Jun 2011 A1
20110231182 Weider et al. Sep 2011 A1
20120022857 Baldwin et al. Jan 2012 A1
Foreign Referenced Citations (16)
Number Date Country
1 320 043 Jun 2003 EP
1 646 037 Apr 2006 EP
WO 9946763 Sep 1999 WO
WO 0021232 Apr 2000 WO
WO 0046792 Aug 2000 WO
WO 0178065 Oct 2001 WO
WO 2004072954 Aug 2004 WO
WO 2007019318 Feb 2007 WO
WO 2007021587 Feb 2007 WO
WO 2007027546 Mar 2007 WO
WO 2007027989 Mar 2007 WO
WO 2008098039 Aug 2008 WO
WO 2008118195 Oct 2008 WO
WO 2009075912 Jun 2009 WO
WO 2009145796 Dec 2009 WO
WO 2010096752 Aug 2010 WO
Related Publications (1)
Number Date Country
20110231188 A1 Sep 2011 US
Provisional Applications (1)
Number Date Country
60712412 Aug 2005 US
Divisions (1)
Number Date Country
Parent 11513269 Aug 2006 US
Child 12608544 US
Continuations (1)
Number Date Country
Parent 12608544 Oct 2009 US
Child 13150977 US