1. Field of the Invention
The present invention relates to spoken dialog systems and more specifically to a system and method of using semantic and syntactic graphs for utterance classification.
2. Introduction
Goal-oriented spoken dialog systems aim to identify the intent of a human caller, expressed in natural language, and take actions accordingly to satisfy the caller's requests. The intent of each speaker is identified using a natural language understanding component. This step can be seen as a multi-label, multi-class call classification problem for customer care applications. An example customer care application may relate to a bank having a call-in dialog service that enables a bank customer to perform transactions over the phone audibly. As an example, consider the utterance, “I would like to know my account balance,” from a financial domain customer care application. Assuming that the utterance is recognized correctly by the automatic speech recognizer (ASR), the corresponding intent (call-type) would be “Request (Balance)” and the action would be telling the balance to the user after prompting for the account number or routing this call to the billing department.
Typically, these application-specific call-types are pre-designed, and large amounts of utterances manually labeled with call-types are used to train call classification systems. For classification, word n-grams are generally used as features: in the “How May I Help You?” (HMIHY) call routing system, selected word n-grams, namely “salient phrases” that are salient to certain call-types, play an important role. For instance, for the above example, the salient phrase “account balance” is strongly associated with the call-type “Request (Balance).” Instead of using salient phrases, one can leave the decision of determining useful features (word n-grams) to a classification algorithm. An alternative would be using a vector space model for classification where call-types and utterances are represented as vectors including word n-grams.
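By way of illustration and not limitation, the following Python sketch shows how word n-gram features of the kind described above might be extracted from an utterance; the function name and the example utterance are illustrative only and are not part of any deployed system.

```python
def word_ngrams(utterance, max_n=2):
    """Extract word n-grams (n = 1 .. max_n) from a whitespace-tokenized utterance."""
    tokens = utterance.lower().split()
    features = []
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            features.append(" ".join(tokens[i:i + n]))
    return features

# The salient bigram "account balance" appears among the extracted features and
# is strongly associated with the call-type "Request (Balance)".
print(word_ngrams("I would like to know my account balance"))
```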
Call classification is similar to text categorization, except that the utterances are much shorter than typical documents used for text categorization (such as broadcast news or newspaper articles); that, because it deals with spontaneous speech, the utterances frequently include disfluencies or are ungrammatical; and that ASR output is very noisy, with typically one out of every four words misrecognized.
Even though the shortness of the utterances may seem to suggest that the call classification task is easy, unfortunately this is not the case. Call classification error rates typically range from 15% to 30%, depending on the application. This is mainly due to the data sparseness problem arising from the nature of the input. Even for simple call-types like “Request (Balance),” there are many ways of uttering the same intent. Some examples include: “I would like to know my account balance,” “How much do I owe you,” “How much is my bill,” “What is my current bill,” “I'd like the balance on my account,” “account balance,” “You can help me by telling me what my phone bill is.” Current classification approaches continue to perform intent classification using only the words within the utterance.
Given this data sparseness, current classification approaches require an extensive amount of labeled data in order to train a classification system with a reasonable performance. What is needed in the art is an improved system and method for spoken language understanding and classification.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The features and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth herein.
The present invention addresses the deficiencies in the prior art. In a typical spoken language understanding system, lexical (word-based) information is obtained from an utterance to form a feature set for a classifier. Data sparseness results from the fact that most calls or communications are short and from the relatively small amount of training data. In a call situation, a user gives only a few words, so a brief user utterance provides only a very small piece of information to a spoken language understanding system, even though the same intent may be expressed verbally in numerous different ways. This contrast makes classification of such a user utterance by statistical methods especially difficult, since those methods must rely on a relatively sparse data set.
As a general definition, a classifier is a statistical algorithm that takes a feature representation of an object or concept and maps it to a classification label. A classification algorithm is designed to learn (to approximate the behavior of) a function which maps a vector of features into one of several classes by looking at several input-output examples of the function. Combining semantic and syntactic information with lexical information in a language understanding system improves the accuracy of the system in correctly classifying an utterance. The present invention improves upon spoken language understanding by incorporating lexical, semantic, and syntactic information into a graph corresponding to each utterance received by the system. Another aspect of the system relates to the extraction of n-gram features from these semantic and syntactic graphs. N-grams extracted from the semantic and syntactic graphs are then utilized for classification of the utterance or the call. Other ways of classifying the utterance may be utilized as well; for example, the utterance may be classified using at least one of the extracted n-grams, the syntactic and semantic graphs, or manually written rules.
The invention provides for a system, method, and computer readable medium storing instructions related to semantic and syntactic information in a language understanding system. The method embodiment of the invention is a method for classifying utterances during a natural language dialog between a human and a computing device. The method comprises receiving a user utterance; generating a semantic and syntactic graph associated with the received utterance; extracting all n-grams as features from the generated semantic and syntactic graph; and classifying the utterance using the extracted n-grams or by some other means.
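A minimal sketch of the method steps follows; the helper names build_ssg, extract_ngrams, and classify are hypothetical placeholders for the graph generation, feature extraction, and classification stages described herein, not an actual implementation.

```python
def classify_utterance(utterance, build_ssg, extract_ngrams, classify):
    """Hypothetical pipeline: utterance -> SSG -> n-gram features -> call-type."""
    ssg = build_ssg(utterance)        # generate the semantic and syntactic graph
    features = extract_ngrams(ssg)    # extract all n-grams from the graph
    return classify(features)         # e.g. "Request (Balance)"
```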
The invention includes aspects related to the use of semantic and syntactic information in a language understanding system, enabling more natural interaction for the human in a human-computing device dialog.
In order to describe the manner in which the above-recited and other advantages and features of the invention can be obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
Various embodiments of the invention are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without parting from the spirit and scope of the invention.
Spoken dialog systems aim to identify the intent of a human speaker, expressed in natural language, and take actions accordingly to satisfy the speaker's requests.
ASR module 102 may analyze speech input and may provide a transcription of the speech input as output. SLU module 104 may receive the transcribed input and may use a natural language understanding model to analyze the group of words that are included in the transcribed input to derive a meaning from the input. The role of DM module 106 is to receive the derived meaning from the SLU 104 module and generate a natural language response to help the user to achieve the task that the system is designed to support. DM module 106 may receive the meaning of the speech input from SLU module 104 and may determine an action, such as, for example, providing a response, based on the input. SLG module 108 may generate a transcription of one or more words in response to the action provided by DM 106. TTS module 110 may receive the transcription as input and may provide generated audible speech as output based on the transcribed speech. There are variations that may be employed. For example, the audible speech may be generated by other means than a specific TTS module as shown.
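By way of example and not limitation, the module flow described above may be pictured as in the following Python sketch; the class name and the callables are illustrative placeholders rather than the actual implementation of the modules of system 100.

```python
class SpokenDialogSystem:
    """Illustrative wiring of the ASR -> SLU -> DM -> SLG -> TTS chain."""

    def __init__(self, asr, slu, dm, slg, tts):
        self.asr, self.slu, self.dm, self.slg, self.tts = asr, slu, dm, slg, tts

    def turn(self, audio_in):
        text = self.asr(audio_in)          # transcription of the speech input
        meaning = self.slu(text)           # derived meaning (e.g. call-type)
        action = self.dm(meaning)          # dialog manager chooses a response
        response_text = self.slg(action)   # natural language generation
        return self.tts(response_text)     # audible speech output
```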
Thus, the modules of system 100 may recognize speech input, such as speech utterances, may transcribe the speech input, may identify (or understand) the meaning of the transcribed speech, may determine an appropriate response to the speech input, may generate text of the appropriate response and, from that text, may generate audible “speech” from system 100, which the user then hears. In this manner, the user can carry on a natural language dialog with system 100. Those of ordinary skill in the art will understand the programming languages and means for generating and training ASR module 102 or any of the other modules in the spoken dialog system.
The present invention, which relates to the use of semantic and syntactic graphs, is especially relevant to SLU module 104, but may be implemented in a system which does not contain all of the elements illustrated in system 100. Further, the modules of system 100 may operate independent of a full dialog system. For example, a computing device such as a smartphone (or any processing device having a phone or communication capability) may have an ASR module wherein a user may say “call mom” and the smartphone may act on the instruction without a “spoken dialog.”
With reference to
To enable user interaction with the computing device 200, an input device 270 represents any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth. The device output 280 can also be one or more of a number of output means. In some instances, multimodal systems enable a user to provide multiple types of input to communicate with the computing device 200. The communications interface 290 generally governs and manages the user input and system output.
Embodiments within the scope of the present invention may also include computer-readable media for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code means in the form of computer-executable instructions or data structures. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or combination thereof) to a computer, the computer properly views the connection as a computer-readable medium. Thus, any such connection is properly termed a computer-readable medium. Combinations of the above should also be included within the scope of the computer-readable media.
Computer-executable instructions include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Computer-executable instructions also include program modules that are executed by computers in stand-alone or network environments. Generally, program modules include routines, programs, objects, components, and data structures, etc. that perform particular tasks or implement particular abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps.
Those of skill in the art will appreciate that other embodiments of the invention may be practiced in network computing environments with many types of computer system configurations, including personal computers, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. Embodiments may also be practiced in distributed computing environments where tasks are performed by local and remote processing devices that are linked (either by hardwired links, wireless links, or by a combination thereof) through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
Each word is prefixed by the token “WORD:” to indicate that each is a word. The “:” is usually a notation indicating a separation of input and output symbols in finite state transducers. Here, however, its use may also relate to separating the type of token and its value. FSM 300 is incapable of identifying relationships between words beyond the order in which they appear in the example utterance.
The other type of information that is encoded in these graphs is the syntactic parse of each utterance, namely the syntactic phrases with their head words. For example, in the sentence, “I paid six dollars,” “six dollars” is a noun phrase with the head word, “dollars.” In
Generic named entity tags, such as person, location, and organization names, and task-dependent named entity tags, such as drug names in a medical domain, are also incorporated into the graph, where applicable. For instance, for the example sentence, “six dollars” is a monetary amount, so the arc “NE:m” 416E is inserted parallel to that sequence.
As another type of semantic information, semantic role labels (SRL) of the utterance components are incorporated into the SSGs. The semantic role labels represent the predicate/argument structure of each sentence: given a predicate, the goal is to identify all of its arguments and their semantic roles. For example, in the example sentence the predicate is “pay”, the agent of the predicate is “I”, and the amount is “six dollars”. In the graph, the labels of the transitions for semantic roles are prefixed by the token “SRL:” 408D, 412C, 416D and the corresponding predicate. For example, the sequence “six dollars” is the amount of the predicate “pay”, and this is shown by the transition with label “SRL:pay.A1” 416D following the PropBank notation. Here, “A1” or “Arg1” indicates the object of the predicate, in this case the amount.
While the semantic and syntactic information utilized in this example may comprise part of speech tags, syntactic parses, named entity tags, and semantic role labels—it is anticipated that insertion of further information such as supertags or word stems can also be beneficial for further processing using semantic and syntactic graphs.
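To make the arc structure concrete, the SSG for the example utterance “I paid six dollars” may be sketched as a list of labeled edges between numbered states, as shown below. The state numbering and some of the tag values (for instance the part-of-speech tags and the SRL:pay.A0 arc over “I”) are assumptions for illustration; only the arcs explicitly discussed above (the noun phrase, the NE:m named entity, and the SRL:pay.A1 role over “six dollars”) are taken from the example.

```python
# Each arc is (from_state, to_state, label); states 0..4 sit between the words
# of "I paid six dollars".  Parallel arcs encode the words themselves, their
# part-of-speech tags, phrases with head words, named entities, and semantic
# role labels.
ssg_edges = [
    (0, 1, "WORD:I"),            (0, 1, "POS:PRP"),  (0, 1, "SRL:pay.A0"),
    (1, 2, "WORD:paid"),         (1, 2, "POS:VBD"),
    (2, 3, "WORD:six"),          (2, 3, "POS:CD"),
    (3, 4, "WORD:dollars"),      (3, 4, "POS:NNS"),
    (2, 4, "PHRASE:NP_dollars"), # "six dollars" is an NP headed by "dollars"
    (2, 4, "NE:m"),              # monetary amount named entity
    (2, 4, "SRL:pay.A1"),        # amount argument of the predicate "pay"
]
```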
N-grams extracted from the SSGs are used for call classification. The n-grams in an utterance SSG can be extracted by converting it to a finite state transducer (FST), Fi. Each transition of Fi has the labels of the arcs on the SSG as input and output. Composing this FST with another FST, FN, representing all the possible n-grams, forms the FST, FX, which includes all n-grams in the SSG.
FX=Fi∘FN
Then, extracting the n-grams in the SSG is equivalent to enumerating all paths of FX. For n=3, FN 500 is shown in
The use of SSGs for call or utterance classification is helpful because the additional information is expected to provide some generalization by allowing new n-grams to be encoded in the utterance graph, since SSGs provide syntactic and semantic groupings. For example, the words ‘a’ and ‘the’ both carry the part-of-speech tag DT (determiner), and all numbers are mapped to the cardinal number tag (CD), like the ‘six’ in ‘six dollars’; as a result, the more general n-gram POS:CD WORD:dollars appears in the SSG alongside WORD:six WORD:dollars. Similarly, the sentences ‘I paid six dollars’ and ‘I paid seventy five dollars and sixty five cents’ will both have the trigram WORD:I WORD:paid NE:m in their SSGs.
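The composition FX=Fi∘FN effectively enumerates every label sequence of length up to n along the paths of the SSG. The pure-Python sketch below reproduces that enumeration by walking an edge-list representation of the graph; it is a simplification for illustration, not the FST-based implementation, and the small edge list is a fragment of the “I paid six dollars” example.

```python
from collections import defaultdict

def ssg_ngrams(edges, max_n=3):
    """Enumerate all label n-grams (n = 1 .. max_n) along paths of an SSG."""
    out = defaultdict(list)                  # from_state -> [(to_state, label), ...]
    for src, dst, label in edges:
        out[src].append((dst, label))

    ngrams = set()

    def walk(state, path):
        if path:
            ngrams.add(" ".join(path))       # every path prefix is an n-gram
        if len(path) == max_n:
            return
        for dst, label in out[state]:
            walk(dst, path + [label])

    for src in list(out):                    # an n-gram may start at any state
        walk(src, [])
    return ngrams

# Fragment of the "I paid six dollars" SSG; the NE:m arc spans "six dollars".
edges = [(0, 1, "WORD:I"), (1, 2, "WORD:paid"), (2, 3, "WORD:six"),
         (3, 4, "WORD:dollars"), (2, 4, "NE:m")]
print("WORD:I WORD:paid NE:m" in ssg_ngrams(edges))   # True
```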
The head words of the syntactic phrases and the predicates of the arguments are included in the SSGs. This enables the classifier to handle long-distance dependencies better than other, simpler methods, such as extracting all gappy n-grams. For example, consider the following two utterances: ‘I need a copy of my bill’ and ‘I need a copy of a past due bill.’ As shown in
Between state 2 610 and state 3 614,
Between state 4 618 and state 5 622 are shown transitions WORD:copy 620A and POS:NN 620B. Between state 5 622 and state 6 626 are transitions WORD:of 624A and POS:IN 624B. Between state 5 622 and state 8 634 is a transition PHRASE:PP_of 624C. Between state 6 626 and state 7 630 are transitions WORD:my 628A and POS:PRP$ 628B. Between state 6 626 and state 8 634 is a transition PHRASE:NP_bill 628C. Between state 7 630 and state 8 634 are transitions WORD:bill 632A and POS:NN 632B. The tag <eos> represents the end of the sentence.
Between transition 6 626 and 7 704 in
Another motivation to use SSGs is that, when using simply the n-grams in an utterance, the classifier is only given lexical information. Now the classifier is provided with more, and different, information using these extra syntactic and semantic features. For example, a named entity (NE) of type monetary amount may be strongly associated with some call-type. Furthermore, there is a close relationship between the call-types and semantic roles. For example, if the predicate is ‘order’, this is most probably the call-type Order(Item) in a retail domain application. The simple n-gram approach would consider all the appearances of the unigram ‘order’ as equal. However, consider the utterance ‘I'd like to check an order’ of a different call-type, where ‘order’ is not a predicate but an object. Word n-gram features will fail to capture this distinction.
Once the SSG of an utterance is formed, all the n-grams are extracted as features, and the decision of which one to select/use is left to the classifier.
Next, the inventors discuss exemplary tools for computing the information in SSGs and the performance of those tools on manually transcribed spoken dialog utterances. All of these components may be improved independently for the specific application domain.
Part of speech tagging has been very well studied in the literature for many languages, and the approaches vary from rule-based to HMM-based and classifier-based tagging. The present invention employs a simple HMM-based tagger, where the most probable tag sequence, T̂, given the words, W, is output: T̂ = argmax_T P(T|W).
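A minimal Viterbi decoder for the argmax above is sketched below. The probability tables are toy values invented for illustration; in practice the transition and emission probabilities would be estimated from a tagged corpus, and this sketch is not the tagger actually used.

```python
import math

def viterbi(words, tags, p_start, p_trans, p_emit):
    """Most probable tag sequence for words under a bigram HMM (log space)."""
    def lg(p):
        return math.log(p) if p > 0 else math.log(1e-12)   # floor zero probabilities

    V = [{t: lg(p_start.get(t, 0)) + lg(p_emit.get((t, words[0]), 0)) for t in tags}]
    back = []
    for w in words[1:]:
        scores, ptr = {}, {}
        for t in tags:
            prev = max(tags, key=lambda q: V[-1][q] + lg(p_trans.get((q, t), 0)))
            scores[t] = V[-1][prev] + lg(p_trans.get((prev, t), 0)) + lg(p_emit.get((t, w), 0))
            ptr[t] = prev
        V.append(scores)
        back.append(ptr)

    best = max(tags, key=lambda t: V[-1][t])     # trace back the best path
    path = [best]
    for ptr in reversed(back):
        path.insert(0, ptr[path[0]])
    return path

print(viterbi("I paid six dollars".split(), ["PRP", "VBD", "CD", "NNS"],
              p_start={"PRP": 0.9},
              p_trans={("PRP", "VBD"): 0.8, ("VBD", "CD"): 0.5, ("CD", "NNS"): 0.9},
              p_emit={("PRP", "I"): 1.0, ("VBD", "paid"): 1.0,
                      ("CD", "six"): 1.0, ("NNS", "dollars"): 1.0}))
# ['PRP', 'VBD', 'CD', 'NNS']
```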
Due to the lack of manually tagged part-of-speech data for the application of the present invention, the Penn Treebank was used as the training set during development. See, Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz, “Building a large annotated corpus of English: the Penn Treebank”, Computational Linguistics, 19(2):313-330, 1993. The Penn Treebank includes data from the Wall Street Journal, Brown, ATIS, and Switchboard corpora. The ATIS and Switchboard sets are particularly useful for the development of the present invention because they consist of spoken language and include disfluencies. On a test set of 2,000 manually labeled words from user utterances of a spoken dialog system application, an accuracy of 94.95% was achieved on manually transcribed utterances.
For syntactic parsing, the Collins parser was used, which is reported to give over 88% labeled recall and precision on the Wall Street Journal portion of the Penn Treebank. See, Michael Collins, “Head-Driven Statistical Models for Natural Language Parsing”, Ph.D. thesis, University of Pennsylvania, Computer and Information Science, Philadelphia, Pa., 1999. Buchholz's “chunklink” script was used to extract information from the parse trees. See (http://ilk.kup.nl/˜sabine/chunkling/chunklink_2-2-2000_for_conll.pl). Due to the lack of domain data, no performance figure is reported for this task. There is no specific requirement that the Collins parser be used; any suitable parser may be employed for the step of syntactic parsing according to the invention.
For named entity extraction, a simple HMM-based approach, a simplified version of BBN's name finder, and a classifier-based tagger using Boostexter were used. See, e.g., Daniel M. Bikel, Richard Schwartz, and Ralph M. Weischedel, “An algorithm that learns what's in a name”, Machine Learning Journal Special Issue on Natural Language Learning, 34(1-3):211-231, 1999; and Robert E. Schapire and Yoram Singer, “Boostexter: A boosting-based system for text categorization,” Machine Learning, 39(2-3):135-168, 2000. In the simple HMM-based approach, which is the same as the part of speech tagging, the goal is to find the tag sequence, T̂, which maximizes P(T|W) for the word sequence, W. The tags in this case are named entity categories (such as “P” and “p” for person names, “O” and “o” for organization names, etc., where upper case indicates the first word of the named entity) or “NA” if the word is not part of a named entity. In the simplified version of BBN's name finder, the states of the model are word/tag combinations, where the tag ti for word wi is the named entity category of each word. Transition probabilities consist of trigram probabilities P(wi/ti|wi-1/ti-1, wi-2/ti-2) over these combined tokens. In the final version, this model was extended with an unknown words model. In the classifier-based approach, simple features were used, such as the current word and the surrounding four words, binary tags indicating whether the word contains any digits or is formed entirely of digits, and features checking capitalization. There is no specific requirement that the named entity extractor be the HMM-based approach, BBN's name finder, or the Boostexter application; any suitable named entity extraction software or module may be utilized.
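The feature set of the classifier-based tagger might be assembled roughly as in the sketch below; the exact feature template and the example sentence are assumptions for illustration, not the features used in the reported experiments.

```python
def ne_features(tokens, i, window=2):
    """Features for deciding the named entity tag of tokens[i]."""
    feats = {"word": tokens[i].lower()}
    for offset in range(-window, window + 1):          # surrounding four words
        if offset != 0 and 0 <= i + offset < len(tokens):
            feats[f"word[{offset:+d}]"] = tokens[i + offset].lower()
    feats["has_digit"] = any(c.isdigit() for c in tokens[i])
    feats["all_digits"] = tokens[i].isdigit()
    feats["capitalized"] = tokens[i][:1].isupper()
    return feats

print(ne_features("refill prescription number 12345 please".split(), 3))
```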
To test these approaches, data was used from a spoken dialog system application in a pharmaceutical domain, where some of the named entity categories were person, organization, drug name, prescription number, and date. The training and test sets contained around 11,000 and 5,000 utterances, respectively. Table 1 summarizes the overall F-measure results as well as the F-measure for the most frequent named entity categories. Overall, the classifier-based approach resulted in the best performance, so it is also used for the call classification experiments.
The goal of semantic role labeling is to extract all the constituents which fill a semantic role of a target verb. Typical semantic arguments include Agent, Patient, Instrument, etc., as well as adjuncts such as Locative, Temporal, Manner, Cause, etc. An exemplary corpus for this process is the semantic role annotation of the Penn Treebank corpus (from the PropBank or Proposition Bank project at the University of Pennsylvania), wherein the arguments are given mnemonic names, such as Arg0, Arg1, Arg-LOC, etc. See, Paul Kingsbury, Mitch Marcus, and Martha Palmer, “Adding semantic annotation to the Penn Treebank”, Proceedings of the Human Language Technology Conference (HLT), 2002. For example, for the sentence “I have bought myself a blue jacket from your summer catalog for twenty five dollars last week”, the agent (buyer, or Arg0) is “I”, the predicate is “buy”, the thing bought (Arg1) is “a blue jacket”, the seller or source (Arg2) is “from your summer catalog”, the price paid (Arg3) is “twenty five dollars”, the benefactive (Arg4) is “myself”, and the date (ArgM-TMP) is “last week”.
Semantic role labeling can be viewed as a multi-class classification problem. Given a word (or phrase) and its features, the goal is to output the most probable semantic label. For semantic role labeling, an example feature set that may be used is the one taught by Hacioglu et al. in Kadri Hacioglu, Sameer Pradhan, Wayne Ward, James H. Martin, and Dan Jurafsky, “Semantic role labeling by tagging syntactic chunks,” Proceedings of the Conference on Computational Natural Language Learning (CoNLL), Boston, Mass., May, 2004. As mentioned above, Boostexter is an exemplary classifier that may be used. The features include token-level features (such as the current (head) word, its part of speech tag, base phrase type and position, etc.), predicate-level features (such as the predicate's lemma, frequency, part-of-speech tag, etc.) and argument-level features which capture the relationship between the token (head word/phrase) and the predicate (such as the syntactic path between the token and the predicate, their distance, token position relative to the predicate, etc.).
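A schematic rendering of such a feature set is given below for a single (head word, predicate) pair; the dictionary keys and the assumed input fields are illustrative only and do not reproduce the feature set of Hacioglu et al.

```python
def srl_features(token, predicate):
    """Schematic features for classifying the semantic role of one head word."""
    return {
        # token-level features
        "head_word": token["word"],
        "head_pos": token["pos"],
        "phrase_type": token["phrase"],
        # predicate-level features
        "pred_lemma": predicate["lemma"],
        "pred_pos": predicate["pos"],
        # argument-level features relating the token to the predicate
        "distance": abs(token["index"] - predicate["index"]),
        "before_predicate": token["index"] < predicate["index"],
    }

print(srl_features({"word": "dollars", "pos": "NNS", "phrase": "NP", "index": 3},
                   {"lemma": "pay", "pos": "VBD", "index": 1}))
```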
In order to evaluate the performance of semantic role labeling, 285 utterances from a spoken dialog system for a retail domain were manually annotated. The utterances include 645 predicates (2.3 predicates/utterance). The precision rate for identifying the predicate was evaluated at 93.04%. The recall rate was evaluated at 91.16%. More than 90% of the false alarms for predicate extraction were due to the word “please,” which is very frequent in the customer care domain and is erroneously tagged. Most of the false rejections were due to disfluencies and ungrammatical utterances. For example, in the utterance “I'd like to order place an order”, the predicate “place” is tagged erroneously as a noun, probably because of the preceding verb “order”.
In evaluating the argument labeling performance, the inventors use a strict measure: labeling is considered correct only if both the boundary and the role of all the arguments of a predicate are correct. In this domain, Arg0 is usually the word “I”; hence, mistakes in Arg0 are ignored. In the test set, the SRL tool correctly tags all arguments of 57.6% of the predicates.
Evaluation of the invention was carried out through call classification experiments using human-machine dialogs collected by a natural spoken dialog system used for customer care. All utterances considered were in response to the greeting prompt “How may I help you?” so as to avoid dealing with confirmation and clarification utterances. Tests were performed using the Boostexter tool, an implementation of the Boosting algorithm, which iteratively selects the most discriminative features for a given task. Data and results are presented in Tables 2 and 3.
Table 2 summarizes the characteristics of the application including the amount of training and test data, total number of call-types, average utterance length, and call-type perplexity. Call-type perplexity is computed using the prior distribution over all the call-types in the training data.
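Call-type perplexity computed from the prior distribution is simply two raised to the entropy of that distribution; a small sketch follows, with invented example labels.

```python
import math
from collections import Counter

def calltype_perplexity(labels):
    """Perplexity of the prior call-type distribution: 2 ** entropy."""
    counts = Counter(labels)
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return 2 ** entropy

print(calltype_perplexity(["Request(Balance)"] * 6 + ["Pay(Bill)"] * 3 + ["Other"]))
```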
For call classification, SSGs for the training and test set utterances were generated using the tools described above. As seen in Table 3, when n-grams are extracted from these SSGs, instead of the word graphs (Baseline), there is a huge increase in the number of features given to the classifier. The classifier has on average 15 times more features with which to work. Within the scope of this invention, the burden of analyzing these features is left to the classifier.
Table 4 presents the percentage of the features selected by Boostexter using SSGs for each information category. As expected, the lexical information is the most frequently used, and 54.06% of the selected features have at least one word in their n-grams. The total is more than 100%, since some features contain more than one category, as in the bigram feature example “POS:DT WORD:bill”. This shows that the classifier makes use of the other information sources as well as words.
Table 5 presents experimental results for call classification. As the evaluation metric, the inventors used the top class error rate (TCER), which is the ratio of utterances where the top scoring call-type is not one of the true call-types assigned to the utterance by the human labelers. The baseline TCER on the test set using only word n-grams is 23.80%. When features are extracted from the SSGs, a 2.14% relative decrease in the error rate, down to 23.29%, is seen. When those results are analyzed, the inventors see that (1) for “easy to classify” utterances, the classifier already assigns a high score to the true call-type using just word n-grams; (2) the syntactic and semantic features extracted from the SSGs are not 100% accurate, as presented earlier, so although many of these features have been useful, a certain amount of noise is introduced into the call classification training data; and (3) the particular classifier used (Boosting) is known to handle large feature spaces in a different manner than other classifiers such as SVMs, which is important when there are more features.
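The top class error rate can be computed as in the following sketch; the example predictions and label sets are invented for illustration.

```python
def top_class_error_rate(predicted_top, true_label_sets):
    """Fraction of utterances whose top-scoring call-type is not a true call-type."""
    errors = sum(1 for pred, truth in zip(predicted_top, true_label_sets)
                 if pred not in truth)
    return errors / len(predicted_top)

print(top_class_error_rate(
    ["Request(Balance)", "Pay(Bill)"],
    [{"Request(Balance)"}, {"Request(Balance)", "Other"}]))   # 0.5
```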
Accordingly, the inventors focus on a subset of utterances that have low confidence scores. These are cases where the score given to the top scoring call-type by the baseline model is below a certain threshold. In this subset there were 333 utterances, which is about 17% of the test set. As expected, the error rates are much higher than the overall error rate, and the inventors get a much larger improvement in performance when SSGs are used. The baseline for this set is 68.77%, and using the extra features reduces the error rate to 62.16%, which is a 9.61% relative reduction in the error rate.
The inventors' experiments suggest a cascaded approach for exploiting SSGs for call classification. That is, first the baseline word n-gram based classifier is used to classify all the utterances; then, if this model fails to commit to a call-type, the system performs extra feature extraction using SSGs and uses the classification model trained with SSGs. This cascaded approach reduces the overall error rate of all utterances from 23.80% to 22.67%, which is a 4.74% relative reduction in error rate.
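The cascade can be sketched as follows; baseline_classify, ssg_classify, build_ssg, and the confidence threshold are hypothetical placeholders for the two trained models and the commitment criterion.

```python
def cascaded_classify(utterance, baseline_classify, ssg_classify, build_ssg,
                      threshold=0.5):
    """Fall back to the SSG-based model only when the baseline model is unsure."""
    call_type, score = baseline_classify(utterance)   # word n-grams only
    if score >= threshold:                            # baseline commits
        return call_type
    ssg = build_ssg(utterance)                        # extra feature extraction
    return ssg_classify(ssg)                          # model trained with SSG features
```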
Although the above description may contain specific details, they should not be construed as limiting the claims in any way. Other configurations of the described embodiments of the invention are part of the scope of this invention. For example, the invention is not limited to telephone calls but may apply to any communication which may be classified, such as a call-type, message type, instant message type, and so forth. In other words, the communication may also be text, which could be classified in a particular way according to the principles of the invention. Furthermore, inasmuch as the present invention involves extending the feature set of a classifier that performs a step of classifying an utterance, there may be other attributes within the feature sets discussed above that may be employed to further extend the feature set and improve classification. Accordingly, the appended claims and their legal equivalents should only define the invention, rather than any specific examples given.
The present application is a continuation of U.S. patent application Ser. No. 14/252,817, filed Apr. 15, 2014, which is a continuation of U.S. patent application Ser. No. 11/212,266, filed Aug. 27, 2005, now U.S. Pat. No. 8,700,404, issued Apr. 15, 2014, the content of which are incorporated herein by reference in their entirety.
Number | Date | Country | |
---|---|---|---|
20160086601 A1 | Mar 2016 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 14252817 | Apr 2014 | US |
Child | 14963423 | US | |
Parent | 11212266 | Aug 2005 | US |
Child | 14252817 | US |