The present disclosure relates to language detection and, in particular, to systems and methods for detecting languages in short text messages.
In general, language detection or identification is a process in which a language present in a body of text is detected automatically based on the content of the text. Language detection is useful in the context of automatic language translation, where the language of a text message must generally be known before the message can be translated accurately into a different language.
While traditional language detection is usually performed on a collection of many words and sentences (i.e., at the document level), a particularly challenging domain is the chat text domain, where messages often include only a few words (e.g., four or fewer), some or all of which can be informal and/or misspelled. In the chat text domain, existing language detection approaches have proven to be inaccurate and/or slow, given the lack of information and the informalities present in such messages.
Embodiments of the systems and methods described herein are used to detect the language in a text message based on, for example, content of the message, information about the keyboard used to generate the message, and/or information about the language preferences of the user who generated the message. Compared to previous language detection techniques, the systems and methods described herein are generally faster and more accurate, particularly for short text messages (e.g., four words or fewer).
In various examples, the systems and methods use a plurality of language detection tests and classifiers to determine probabilities associated with possible languages in a text message. Each language detection test can output a set or vector of probabilities associated with the possible languages. The classifiers can combine the output from the language detection tests to determine a most likely language for the message. The particular language detection test(s) and classifier(s) chosen for the message can depend on a predicted accuracy, a confidence score, and/or a linguistic domain for the message.
Certain examples of the systems and methods described herein perform an initial classification of a language in a text message so that more focused language detection techniques can be performed to make a final determination of the language. For example, the systems and methods can perform initial language detection testing on a text message to identify a group or category (e.g., Cyrillic languages or Latin languages) for the language in the text message. Once the language category is identified, language detection techniques designed for the language category can be used to identify the specific language in the message. In preferred examples, extraneous elements (e.g., emoji or numerical digits or characters) are removed from the text message prior to language detection, thereby resulting in faster and more accurate language detection. The systems and methods described herein are generally more accurate and efficient than prior language detection approaches. The systems and methods can be configured to use any one or more of the language detection methods described herein.
In one aspect, the subject matter of this disclosure relates to a computer-implemented method of identifying a language in a message. The method includes: obtaining a text message; removing non-language characters from the text message to generate a sanitized text message; and detecting at least one of an alphabet and a script present in the sanitized text message, wherein detecting includes at least one of: (i) performing an alphabet-based language detection test to determine a first set of scores, wherein each score in the first set of scores represents a likelihood that the sanitized text message includes the alphabet for one of a plurality of different languages; and (ii) performing a script-based language detection test to determine a second set of scores, wherein each score in the second set of scores represents a likelihood that the sanitized text message includes the script for one of the plurality of different languages. The method also includes identifying the language in the sanitized text message based on at least one of the first set of scores, the second set of scores, and a combination of the first and second sets of scores.
In certain implementations, the non-language characters include an emoji and/or a numerical character. The combination can include an interpolation between the first and second sets of scores. In some examples, identifying the language in the sanitized text message includes performing a language detection test on the sanitized text message to generate a third set of scores, wherein each score in the third set of scores represents a likelihood that the sanitized text message includes one of a plurality of different languages. The language detection test can be selected from a plurality of language detection tests, based on the at least one of the first set of scores, the second set of scores, and the combination of the first and second sets of scores.
In certain instances, the language detection test includes a language detection method and one or more classifiers. The language detection method can include, for example, a dictionary-based language detection test, an n-gram language detection test, an alphabet-based language detection test, a script-based language detection test, a user language profile language detection test, or any combination thereof. The one or more classifiers can include, for example, a supervised learning model, a partially supervised learning model, an unsupervised learning model, an interpolation, or any combination thereof. In various implementations, the method includes processing the third set of scores using one or more classifiers to identify the language in the sanitized text message. The method can include outputting, from the one or more classifiers, an indication that the sanitized text message is in the identified language. The indication can include a confidence score.
In another aspect, the subject matter of this disclosure relates to a computer-implemented system for identifying a language in a message. The system includes a sanitizer module, a grouper module, and a language detector module. The sanitizer module obtains a text message and removes non-language characters from the text message to generate a sanitized text message. The grouper module detects at least one of an alphabet and a script present in the sanitized text message and is operable to perform operations including at least one of: performing an alphabet-based language detection test to determine a first set of scores, wherein each score in the first set of scores represents a likelihood that the sanitized text message includes the alphabet for one of a plurality of different languages; and performing a script-based language detection test to determine a second set of scores, wherein each score in the second set of scores represents a likelihood that the sanitized text message includes the script for one of the plurality of different languages. The language detector module identifies the language in the sanitized text message based on at least one of the first set of scores, the second set of scores, and a combination of the first and second sets of scores.
In various examples, the non-language characters include an emoji and/or a numerical character. The combination can include an interpolation between the first and second sets of scores. The grouper module can be operable to perform operations that include selecting the language detector module from a plurality of language detector modules based on the at least one of the first set of scores, the second set of scores, and the combination of the first and second sets of scores. The language detector module can include a language detection methods module. The language detection methods module can be operable to perform operations that include performing a language detection test on the sanitized text message to generate a third set of scores, wherein each score in the third set of scores represents a likelihood that the sanitized text message includes one of a plurality of different languages. The language detection test can include, for example, a dictionary-based language detection test, an n-gram language detection test, an alphabet-based language detection test, a script-based language detection test, a user language profile language detection test, or any combination thereof.
In some implementations, the language detector module includes a classifier module operable to perform operations that include processing the third set of scores using one or more classifiers to identify the language in the sanitized text message. The one or more classifiers can include, for example, a supervised learning model, a partially supervised learning model, an unsupervised learning model, an interpolation, or any combination thereof. The classifier module can be operable to perform operations that include outputting an indication that the sanitized text message is in the identified language. The indication can include a confidence score.
In another aspect, the subject matter of this disclosure relates to an article. The article includes: a non-transitory computer-readable medium having instructions stored thereon that, when executed by one or more computers, cause the computers to perform operations including: obtaining a text message; removing non-language characters from the text message to generate a sanitized text message; detecting at least one of an alphabet and a script present in the sanitized text message, wherein detecting includes at least one of: (i) performing an alphabet-based language detection test to determine a first set of scores, wherein each score in the first set of scores represents a likelihood that the sanitized text message includes the alphabet for one of a plurality of different languages; and (ii) performing a script-based language detection test to determine a second set of scores, wherein each score in the second set of scores represents a likelihood that the sanitized text message includes the script for one of the plurality of different languages. The operations further include identifying the language in the sanitized text message based on at least one of the first set of scores, the second set of scores, and a combination of the first and second sets of scores.
Elements of examples described with respect to a given aspect of this subject matter can be used in various examples of another aspect of the subject matter. For example, it is contemplated that features of dependent claims depending from one independent claim can be used in apparatus, systems, and/or methods of any of the other independent claims.
In general, the language detection systems and methods described herein can be used to identify the language in a text message when language information for the message (e.g., keyboard information from a client device) is absent, malformed, or unreliable. The systems and methods improve the accuracy of language translation methods used to translate text messages from one language to another. Language translation generally requires the source language to be identified accurately; otherwise, the resulting translation can be inaccurate.
An application, such as a web-based application, can be provided as an end-user application to allow users to provide messages to the server system 12. The end-user applications can be accessed through a network 32 by users of client devices, such as a personal computer 34, a smart phone 36, a tablet computer 38, and a laptop computer 40. Other client devices are possible. The user messages can be accompanied by information about the devices used to create the messages, such as information about the keyboard, client device, and/or operating system used to create the messages.
In some implementations, the language indication from the one or more classifiers can be selected by the manager module 20 according to a computed confidence score and/or a linguistic domain. For example, the classifiers can compute a confidence score indicating a degree of confidence associated with the language prediction. Additionally or alternatively, certain classifier output can be selected according to the linguistic domain associated with the user or the message. For example, if the message originated in a computer gaming environment, a particular classifier output can be selected as providing the most accurate language prediction. Likewise, if the message originated in the context of sports (e.g., regarding a sporting event), a different classifier output can be selected as being more appropriate for the sports linguistic domain. Other possible linguistic domains include, for example, news, parliamentary proceedings, politics, health, travel, web pages, newspaper articles, microblog messages, and the like. In general, certain language detection methods or combinations of language detection methods (e.g., from a classifier) can be more accurate for certain linguistic domains, when compared to other linguistic domains. In some implementations, the domain can be determined based on the presence of words from a domain vocabulary in a message. For example, a domain vocabulary for computer gaming could include common slang words used by gamers.
The language detection methods used by the detection module 16 can include, for example, an n-gram method (e.g., a byte n-gram method), a dictionary-based method, an alphabet-based method, a script-based method, and a user language profile method. Other language detection methods are possible. Each of these language detection methods can be used to detect a language present in a message. The output from each method can be, for example, a set or vector of probabilities associated with each possible language in the message. In some instances, two or more of the language detection methods can be performed simultaneously, using parallel computing, which can reduce computation times considerably.
In one implementation, a byte n-gram method uses byte n-grams instead of word or character n-grams to detect languages. The byte n-gram method is preferably trained over a mixture of byte n-grams (e.g., with 1<n<4), using a naive Bayes classifier having a multinomial event model. The model preferably generalizes to data from different linguistic domains, such that the model's default configuration is accurate over a diverse set of domains, including newspaper articles, online gaming, web pages, and microblog messages. Information about the language identification task can be integrated from a variety of domains.
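The disclosure does not provide an implementation of this method, but its core can be sketched with a hand-rolled multinomial naive Bayes over byte n-grams (with 1&lt;n&lt;4, i.e., n = 2 and 3). Everything below, including the toy corpus, is illustrative and uses only the standard library:

```python
from collections import Counter, defaultdict
import math

def byte_ngrams(text, n_values=(2, 3)):
    """Extract byte n-grams (1 < n < 4, per the description above) from UTF-8 bytes."""
    data = text.encode("utf-8")
    for n in n_values:
        for i in range(len(data) - n + 1):
            yield data[i:i + n]

def train(samples):
    """samples: list of (text, language). Returns per-language n-gram counts and doc counts."""
    counts = defaultdict(Counter)
    docs = Counter()
    for text, lang in samples:
        docs[lang] += 1
        counts[lang].update(byte_ngrams(text))
    return counts, docs

def predict(text, counts, docs):
    """Multinomial naive Bayes with add-one smoothing; returns the most likely language."""
    vocab = set()
    for c in counts.values():
        vocab.update(c)
    V = len(vocab)
    total_docs = sum(docs.values())
    best_lang, best_score = None, float("-inf")
    for lang, c in counts.items():
        score = math.log(docs[lang] / total_docs)        # log prior
        total = sum(c.values())
        for g in byte_ngrams(text):
            score += math.log((c[g] + 1) / (total + V))  # smoothed log likelihood
        if score > best_score:
            best_lang, best_score = lang, score
    return best_lang

corpus = [("the quick brown fox", "en"), ("hello how are you", "en"),
          ("el zorro marrón rápido", "es"), ("hola como estas", "es")]
model = train(corpus)
print(predict("how are you", *model))  # "en"
```

A production model would, as noted above, be trained on data from many linguistic domains rather than a four-sentence corpus.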
The task of attaining high accuracy can be relatively easy for language identification in a traditional text categorization setting, for which in-domain training data is available. This task can be more difficult when attempting to use learned model parameters for one linguistic domain to classify or categorize data from a separate linguistic domain. This problem can be addressed by focusing on important features that are relevant to the task of language identification. This can be based on, for example, a concept called information gain, which was originally introduced as a splitting criterion for decision trees and was later found to be useful for selecting features in text categorization. In certain implementations, a detection score can be calculated that represents the difference in information gain relative to domain and language. Features having a high detection score can provide information about language without providing information about domain. For simplicity, the candidate feature set can be pruned before information gain is calculated, by means of a feature selection based on term frequency.
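As a rough illustration of such a detection score, the sketch below computes the information gain of a binary feature (its presence in a document) with respect to language labels and with respect to domain labels, and takes the difference. The toy documents are hypothetical, not from the disclosure:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(docs, labels, feature):
    """IG of a binary feature (presence of `feature` in a doc) w.r.t. the labels."""
    with_f = [l for d, l in zip(docs, labels) if feature in d]
    without_f = [l for d, l in zip(docs, labels) if feature not in d]
    n = len(labels)
    cond = (len(with_f) / n) * entropy(with_f) + (len(without_f) / n) * entropy(without_f)
    return entropy(labels) - cond

def detection_score(docs, languages, domains, feature):
    """High when the feature informs language without informing domain."""
    return information_gain(docs, languages, feature) - information_gain(docs, domains, feature)

# "the" appears in both English domains but no French doc: pure language signal.
docs = [{"the"}, {"the", "goal"}, {"le"}, {"le", "but"}]
langs = ["en", "en", "fr", "fr"]
domains = ["news", "sport", "news", "sport"]
print(detection_score(docs, langs, domains, "the"))  # 1.0
```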
In general, the dictionary-based language detection method counts the number of tokens or words belonging to each language by looking up words in a dictionary or other word listing associated with the language. The language having the most words in the message is chosen as the best language. In the case of multiple best languages, the more frequent or commonly used of the best languages can be chosen. The language dictionaries can be stored in the dictionaries database 24.
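A minimal sketch of this counting scheme follows; the word lists and frequency table are illustrative stand-ins, not the contents of the dictionaries database 24:

```python
def dictionary_detect(message, dictionaries, language_frequency):
    """Count how many tokens of `message` appear in each language's word list;
    break ties by overall language frequency (the more common language wins)."""
    tokens = message.lower().split()
    counts = {lang: sum(t in words for t in tokens) for lang, words in dictionaries.items()}
    best = max(counts.values())
    candidates = [lang for lang, c in counts.items() if c == best]
    return max(candidates, key=lambda lang: language_frequency.get(lang, 0))

dictionaries = {
    "en": {"lol", "gtg", "hello", "the"},   # informal chat words included, per the text
    "fr": {"mdr", "bonjour", "le"},
}
print(dictionary_detect("lol gtg", dictionaries, {"en": 0.6, "fr": 0.4}))  # "en"
```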
To ensure accuracy of the dictionary-based language detection method, particularly for short sentences, it is preferable to use dictionaries that include informal words or chat words (e.g., abbreviations, acronyms, slang words, and profanity), in addition to formal words. Informal words are commonly used in short text communications and in chat rooms. The dictionaries are preferably augmented to include informal words on an ongoing basis, as new informal words are developed and used.
The alphabet-based method is generally based on character counts for each language's alphabet and relies on the observation that many languages have unique alphabets or different sets of characters. For example, Russian, English, Korean, and Japanese each use a different alphabet. Although the alphabet-based method can be unable to distinguish some languages precisely (e.g., languages that use similar alphabets, such as Latin languages), the alphabet-based method can generally detect certain languages quickly. In some instances, it is preferable to use the alphabet-based method in combination with one or more other language detection methods (e.g., using a classifier), as discussed herein. The language alphabets can be stored in the alphabets database 26.
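A minimal sketch of alphabet-based scoring, assuming per-language alphabets are supplied as character sets (abbreviated here rather than drawn from the alphabets database 26):

```python
def alphabet_scores(message, alphabets):
    """Fraction of the letters in `message` drawn from each language's alphabet."""
    letters = [c for c in message.lower() if c.isalpha()]
    if not letters:
        return {lang: 0.0 for lang in alphabets}
    return {lang: sum(c in chars for c in letters) / len(letters)
            for lang, chars in alphabets.items()}

alphabets = {
    "en": set("abcdefghijklmnopqrstuvwxyz"),
    "ru": set("абвгдеёжзийклмнопрстуфхцчшщъыьэюя"),
}
print(alphabet_scores("привет hello", alphabets))
```

As the surrounding text notes, Latin-alphabet languages would all score similarly here, which is why this test is best combined with others.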
In general, the script-based language detection method determines the character counts for each possible script (e.g., Latin script, CJK script, etc.) present in the message. The script-based method relies on the observation that different languages can use different scripts, e.g., Chinese and English. The method preferably uses a mapping from a script to a list of languages that use the script. For example, the mapping can consider the Unicode values for the characters or symbols present in the message, and these Unicode values can be mapped to a corresponding language or set of possible languages for the message. The language scripts and Unicode values or ranges can be stored in the scripts database 28.
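One possible shape for such a mapping is sketched below; the code-point ranges and script-to-language lists are illustrative and deliberately non-exhaustive, not the contents of the scripts database 28:

```python
# Illustrative Unicode code-point ranges and script-to-language mapping.
SCRIPT_RANGES = [
    (0x0041, 0x024F, "Latin"),
    (0x0400, 0x04FF, "Cyrillic"),
    (0x3040, 0x30FF, "Kana"),
    (0x4E00, 0x9FFF, "CJK"),
]
SCRIPT_LANGUAGES = {
    "Latin": ["en", "fr", "es", "de"],
    "Cyrillic": ["ru", "uk", "bg"],
    "Kana": ["ja"],
    "CJK": ["zh", "ja"],
}

def script_counts(message):
    """Count characters per script using their Unicode code points."""
    counts = {}
    for ch in message:
        for lo, hi, script in SCRIPT_RANGES:
            if lo <= ord(ch) <= hi:
                counts[script] = counts.get(script, 0) + 1
                break
    return counts

def candidate_languages(message):
    """Languages associated with the dominant script in the message."""
    counts = script_counts(message)
    if not counts:
        return []
    dominant = max(counts, key=counts.get)
    return SCRIPT_LANGUAGES[dominant]

print(candidate_languages("Привет мир"))  # ['ru', 'uk', 'bg']
```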
The user language profile based method uses the user profile information database 30, which stores historical messages sent by various users. The languages of these stored messages are detected using, for example, one or more other language detection methods described herein (e.g., the byte n-gram method), to identify the language(s) used by each user. For example, if all of a user's prior messages are in Spanish, the language profile for that user can indicate the user's preferred language is Spanish. Likewise, if a user's prior messages are in a mixture of different languages, the language profile for the user can indicate probabilities associated with the different languages (e.g., 80% English, 15% French, and 5% Spanish). In general, the user language profile based method addresses language detection issues associated with very short messages, which often do not have enough information in them to make an accurate language determination. In such an instance, the language preference of a user can be used to predict the language(s) in the user's messages, by assuming the user will continue to use the language(s) he or she has used previously.
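Building such a profile from a user's message history reduces to a normalized count over the detected languages of prior messages; a sketch:

```python
from collections import Counter

def build_language_profile(message_languages):
    """Turn a user's message history (per-message detected languages)
    into a probability profile over languages."""
    counts = Counter(message_languages)
    total = sum(counts.values())
    return {lang: count / total for lang, count in counts.items()}

# 20 prior messages, matching the example proportions in the text above.
history = ["en"] * 16 + ["fr"] * 3 + ["es"] * 1
profile = build_language_profile(history)
print(profile)  # {'en': 0.8, 'fr': 0.15, 'es': 0.05}
```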
The output from the various language detection methods in the detection module 16 can be combined using the classifier module 18.
The interpolation module 802 is used to perform a linear interpolation of the results from two or more language detection methods. For purposes of illustration, the language of a text message can be determined by interpolating between results from the byte n-gram method and the dictionary-based method. For the chat message “lol gtg,” the byte n-gram method can determine the likelihood of English is 0.3, the likelihood of French is 0.4, and the likelihood of Polish is 0.3 (e.g., the output from the byte n-gram method can be {en:0.3, fr:0.4, pl:0.3}). The dictionary-based method can determine the likelihood of English is 0.1, the likelihood of French is 0.2, and the likelihood of Polish is 0.7 (e.g., the output can be {en:0.1, fr:0.2, pl:0.7}). To interpolate between the results of these two methods, the output from the byte n-gram method is multiplied by a first weight and the output from the dictionary-based method is multiplied by a second weight, such that the first and second weights add to one. The weighted outputs from the two methods are then added together. For example, if the byte n-gram results are given a weight of 0.6, then the dictionary-based results are given a weight of 0.4, and the interpolation between the two methods is: {en:0.3, fr:0.4, pl:0.3}*0.6+{en:0.1, fr:0.2, pl:0.7}*0.4={en:0.22, fr:0.32, pl:0.46}. Other weightings are possible.
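The worked example above can be reproduced directly; the `round` call merely absorbs floating-point noise:

```python
def interpolate(dist_a, dist_b, weight_a):
    """Linear interpolation of two language probability distributions.
    The two weights sum to one (weight_b = 1 - weight_a)."""
    weight_b = 1.0 - weight_a
    langs = set(dist_a) | set(dist_b)
    return {l: round(dist_a.get(l, 0.0) * weight_a + dist_b.get(l, 0.0) * weight_b, 10)
            for l in sorted(langs)}

ngram = {"en": 0.3, "fr": 0.4, "pl": 0.3}        # byte n-gram output for "lol gtg"
dictionary = {"en": 0.1, "fr": 0.2, "pl": 0.7}   # dictionary-based output
combined = interpolate(ngram, dictionary, weight_a=0.6)
print(combined)  # {'en': 0.22, 'fr': 0.32, 'pl': 0.46}
```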
In general, the optimal weights for interpolating between two or more values can be determined numerically through trial and error. Different weights can be tried to identify the best set of weights for a given set of messages. In some instances, the weights can be a function of the number of words or characters in the message. Alternatively or additionally, the weights can depend on the linguistic domain of the message. For example, the optimal weights for a gaming environment can be different than the optimal weights for a sports environment. For a combination of the byte n-gram method and the dictionary-based method, good results can be obtained using a weight of 0.1 on the byte n-gram method and a weight of 0.9 on the dictionary-based method.
The SVM module 804 can be or include a supervised learning model that analyzes language data and recognizes language patterns. The SVM module 804 can be a multi-class SVM classifier, for example. The feature vector for the SVM classifier can be the concatenation of the two distributions above (i.e., {en:0.3, fr:0.4, pl:0.3, en:0.1, fr:0.2, pl:0.7}). The SVM classifier is preferably trained on labeled training data; the trained model acts as a predictor for an input. The features selected in the case of language detection can be, for example, sequences of bytes, words, or phrases. Input training vectors can be mapped into a multi-dimensional space, and the SVM algorithm can then use kernels to identify the optimal separating hyperplane between classes, giving the model the ability to distinguish between languages. The kernel can be, for example, a linear kernel, a polynomial kernel, or a radial basis function (RBF) kernel, although other suitable kernels are possible. A preferred kernel for the SVM classifier is the RBF kernel. After training the SVM classifier using training data, the classifier can be used to output a best language among all the possible languages.
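The concatenation step can be sketched as follows; the SVM training itself is omitted, since the disclosure does not specify a particular library or solver:

```python
def svm_feature_vector(method_outputs, languages):
    """Concatenate per-method language distributions, in a fixed language order,
    into the single feature vector an SVM classifier would consume."""
    vec = []
    for dist in method_outputs:
        vec.extend(dist.get(lang, 0.0) for lang in languages)
    return vec

ngram = {"en": 0.3, "fr": 0.4, "pl": 0.3}
dictionary = {"en": 0.1, "fr": 0.2, "pl": 0.7}
print(svm_feature_vector([ngram, dictionary], ["en", "fr", "pl"]))
# [0.3, 0.4, 0.3, 0.1, 0.2, 0.7]
```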
The training data can be or include, for example, the output vectors from different language detection methods and an indication of the correct language, for a large number of messages having, for example, different message lengths, linguistic domains, and/or languages. The training data can include a large number of messages for which the language in each message is known.
The linear SVM module 806 can be or include a large-scale linear classifier. An SVM classifier with a linear kernel can perform better than other linear classifiers, such as logistic regression. The linear SVM module 806 differs from the SVM module 804 at the kernel level. In some cases a polynomial model works better than a linear model, and vice versa. The optimal kernel can depend on the linguistic domain of the message data and/or the nature of the data.
Other possible classifiers used by the systems and methods described herein include, for example, decision tree learning, association rule learning, artificial neural networks, inductive logic programming, random forests, clustering, Bayesian networks, reinforcement learning, representation learning, similarity and metric learning, and sparse dictionary learning. One or more of these classifiers, or other classifiers, can be incorporated into and/or form part of the classifier module 18.
In the detection module 16, one or more language detection methods are used (step 904) to detect a language in the message. Each method used by the detection module 16 can output a prediction regarding the language present in the message. The prediction can be in the form of a vector that includes a probability for each possible language that can be in the message.
The output from the detection module 16 is then delivered to the classifier module 18 where the results from two or more language detection methods can be combined (step 906). Various combinations of the results from the language detection methods can be obtained. In one example, the results from the byte n-gram method and the dictionary-based method are combined in the classifier module 18 by interpolation. In another example, an SVM combination or classification is performed on the results from the byte n-gram method, the dictionary-based method, the alphabet method, and the user profile method. Alternatively or additionally, the combination can include or consider results from the script-based method. A further example includes a large linear combination of the byte n-gram method, the language profile method, and the dictionary method. In general, however, the results from any two or more of the language detection methods can be combined in the classifier module 18.
The method 900 uses the manager module 20 to select output (step 908) from a particular classifier. The output can be selected based on, for example, a confidence score computed by a classifier, an expected language detection accuracy, and/or a linguistic domain for the message. A best language is then chosen (step 910) from the selected classifier output.
In some instances, the systems and methods described herein choose the language detection method(s) according to the length of the message.
Otherwise, if the candidate language is not a language with a unique alphabet and/or script, then the length of the text message is evaluated. If the message length is less than a threshold length (e.g., 4 bytes or 4 characters, although any suitable threshold length is possible) and the text message includes or is accompanied by a keyboard language used by the client device (step 1110), then the language of the message is chosen (step 1112) to be the keyboard language.
Alternatively, if the message length is greater than the threshold length or the keyboard language is not available, then the message is processed with an n-gram method (e.g., the byte n-gram method) to identify (step 1114) a first set of possible languages for the text message. The message is also then processed with the dictionary-based method to identify (step 1116) a second set of possible languages for the text message. If a user language profile exists for the user (step 1118), then the user language profile is obtained (step 1120) and combined (e.g., using an SVM classifier or a large linear classifier) with the first set of possible languages and the second set of possible languages to obtain a first combination of possible languages (step 1122). The language of the text message is then chosen (step 1124), based on the first combination of possible languages. Otherwise, if the user language profile is not available, then the first set of possible languages and the second set of possible languages are combined (e.g., using a linear interpolator or other classifier) to obtain a second combination of possible languages (step 1126). Finally, the language of the text message is chosen (step 1128), based on the second combination of possible languages.
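The tiered flow above can be sketched as a single function. The detector callables, the stub implementations, and the averaging `combine` stand-in below are assumptions for illustration; they are not the SVM or linear classifiers described elsewhere in this disclosure:

```python
def combine(distributions):
    """Sum-and-argmax stand-in for the SVM / interpolation classifiers."""
    totals = {}
    for dist in distributions:
        for lang, p in dist.items():
            totals[lang] = totals.get(lang, 0.0) + p
    return max(totals, key=totals.get)

def choose_language(message, keyboard_lang, user_profile,
                    unique_script_detect, ngram_detect, dict_detect, threshold=4):
    """Tiered flow: unique alphabet/script -> keyboard fallback for very short
    messages -> combined n-gram / dictionary / profile detection."""
    lang = unique_script_detect(message)
    if lang is not None:
        return lang                           # unique alphabet/script decides outright
    if len(message) < threshold and keyboard_lang:
        return keyboard_lang                  # too short to detect reliably (steps 1110-1112)
    candidates = [ngram_detect(message), dict_detect(message)]
    if user_profile:
        candidates.append(user_profile)       # steps 1118-1122
    return combine(candidates)

# Stub detectors, for illustration only:
cyrillic = lambda m: "ru" if any("\u0400" <= c <= "\u04FF" for c in m) else None
ngram = lambda m: {"en": 0.3, "fr": 0.4, "pl": 0.3}
dictionary = lambda m: {"en": 0.1, "fr": 0.2, "pl": 0.7}
profile = {"en": 0.8, "fr": 0.15, "pl": 0.05}

print(choose_language("lol gtg", "en", profile, cyrillic, ngram, dictionary))  # "en"
print(choose_language("ok", "fr", None, cyrillic, ngram, dictionary))          # "fr"
```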
In some instances, language detection is performed by combining the output from multiple language detection methods in two or more steps. For example, a first step can use the alphabet-script based method to detect special languages that use their own unique alphabets or scripts, such as, for example, Chinese (zh), Japanese (ja), Korean (ko), Russian (ru), Hebrew (he), Greek (el), and Arabic (ar). The alphabet-script based method refers to, for example, using one or both of the alphabet-based method and the script-based method. If necessary, the second step can use a combination (e.g., from a classifier) of multiple detection methods (e.g., the byte n-gram method, the user language profile based method, and the dictionary-based method) to detect other languages (e.g., Latin languages) in the message.
In certain examples, the message provided or received for language detection includes certain digits, characters, or images (e.g., emoticons or emoji) that are not specific to any particular language and/or are recognizable to any user, regardless of language preference. The systems and methods described herein can ignore such characters or images when performing language detection and can ignore messages that include only such characters or images. Alternatively or additionally, the systems and methods can remove such characters or images from messages, prior to performing language detection. The process of removing extraneous characters or images from messages can be referred to herein as sanitizing the messages. The sanitizing process can result in faster detection times and/or improved language detection accuracy.
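A sanitizer along these lines can be written with Unicode character categories. Treating everything other than letters and whitespace as extraneous is an assumption for this sketch; a deployed system might keep additional character classes:

```python
import unicodedata

def sanitize(message):
    """Keep only letters and whitespace; replace digits, emoji, punctuation,
    and symbols with spaces, then collapse runs of whitespace."""
    kept = "".join(c if unicodedata.category(c).startswith("L") or c.isspace() else " "
                   for c in message)
    return " ".join(kept.split())

print(sanitize("gg 😀 42 wp!!"))  # "gg wp"
```

A message that sanitizes to an empty string (emoji and digits only) can then be skipped entirely, as described above.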
In the depicted example method 1200, the detection module 16 includes ten different language detection methods. Three of the language detection methods in the detection module 16 are Byte n-gram A 1206, Byte n-gram B 1208, and Byte n-gram C 1210, which are all byte n-gram methods and can be configured to detect a different set or number of languages. For example, Byte n-gram A 1206 can be configured to detect 97 languages, Byte n-gram B 1208 can be configured to detect 27 languages, and Byte n-gram C 1210 can be configured to detect 20 languages. Two of the language detection methods in the detection module 16 are Dictionary A 1212 and Dictionary B 1214, which are both dictionary-based methods and can be configured to detect a different set or number of languages. For example, Dictionary A 1212 can be configured to detect 9 languages, and Dictionary B 1214 can be configured to detect 10 languages. Two of the language detection methods in the detection module 16 are Language Profile A 1216 and Language Profile B 1218, which are user language profile methods and can be configured to detect a different set or number of languages. For example, Language Profile A 1216 can be configured to detect 20 languages, and Language Profile B 1218 can be configured to detect 27 languages. Two of the language detection methods in the detection module 16 are Alphabet A 1220 and Alphabet B 1222, which are alphabet-based methods and can be configured to detect a different set or number of languages. For example, Alphabet A 1220 can be configured to detect 20 languages, and Alphabet B 1222 can be configured to detect 27 languages. The detection module 16 also includes a script-based language detection method 1224.
Output from the different language detection methods in the detection module 16 is combined and processed by the classifier module 18. For example, an interpolation classifier 1226 combines output from Byte n-gram B 1208 and Dictionary B 1214. Weights for the interpolation can be, for example, 0.1 for Byte n-gram B 1208 and 0.9 for Dictionary B 1214. The classifier module 18 can also use an SVM classifier 1228 that combines output from Byte n-gram C 1210, Dictionary B 1214, Language Profile B 1218, and Alphabet B 1222. The classifier module 18 can also use a first combination 1230 of the script-based method 1224 and an SVM classifier combination of Byte n-gram C 1210, Dictionary A 1212, Language Profile A 1216, and Alphabet A 1220. Additionally, the classifier module 18 can use a second combination 1232 of the script-based method 1224 and a Linear SVM classifier combination of Byte n-gram C 1210, Dictionary A 1212, and Language Profile A 1216.
For both the first combination 1230 and the second combination 1232, the script-based method 1224 and the classifier can be used in a tiered approach. For example, the script-based method 1224 can be used to quickly identify languages having unique scripts. When such a language is identified in the message 1204, use of the SVM classifier in the first combination 1230 or the Linear SVM classifier in the second combination may not be required.
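The tiered approach can be sketched as a short-circuit: the inexpensive script-based method runs first, and the heavier classifier is invoked only when the script does not uniquely identify a language. This is an illustrative sketch; the callable interface is an assumption, not part of the disclosure:

```python
def tiered_detect(message, script_detect, classifier_detect):
    """Try the fast script-based method first; fall through to the
    heavier SVM-style classifier only when the script alone does not
    uniquely identify a language."""
    language = script_detect(message)
    if language is not None:
        return language
    return classifier_detect(message)

# Stand-ins for illustration: Hangul is unique to Korean, so the
# classifier is skipped for Korean text.
script = lambda m: "ko" if any("\uac00" <= c <= "\ud7a3" for c in m) else None
classifier = lambda m: "en"  # placeholder for an SVM classifier
```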
In general, the manager module 20 can select specific language detection methods, classifiers, and/or combinations of detection method output to identify the language in the message 1204. The manager module 20 can make the selection according to the linguistic domain or according to an anticipated language for the message. The manager module 20 can select specific classifiers according to a confidence score determined by the classifiers. For example, the manager module 20 can select the output from the classifier that is the most confident in its prediction.
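Selecting the output of the most confident classifier might be sketched as follows, where each classifier returns a per-language score map and the highest score serves as a simple confidence proxy. The confidence measure here is an assumption; the disclosure does not fix a particular one:

```python
def select_most_confident(classifier_outputs):
    """Given score maps from several classifiers, return the language
    predicted by whichever classifier reports the highest top score."""
    best_lang, best_conf = None, -1.0
    for scores in classifier_outputs:
        lang = max(scores, key=scores.get)
        if scores[lang] > best_conf:
            best_lang, best_conf = lang, scores[lang]
    return best_lang, best_conf
```

For example, given one classifier that is only mildly confident in English and another that is strongly confident in Russian, the manager would adopt the Russian prediction.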
In certain implementations, the systems and methods described herein are suitable for making language detection available as a service to a plurality of users. Such a service is made possible and/or enhanced by the speed at which the systems and methods identify languages, and by the ability of the systems and methods to handle multiple identification techniques at runtime, based on service requests from diverse clients.
Referring to
In general, the grouper module 1306 is used to perform an initial classification of the language in the text message 1302 and, based on the initial classification, select one or more subsequent language detection methods to make a final determination of the language in the text message 1302. In preferred examples, the grouper module 1306 performs the initial classification by detecting an alphabet and/or a script present in the text message 1302. The alphabet and/or the script can be detected using, for example, the alphabet-based method and/or the script-based method, described herein. In some instances, the alphabet-based method can determine a first set of scores for the text message 1302, with each score representing a probability or likelihood that the alphabet is for one of a plurality of different languages. The grouper module 1306 can detect the alphabet in the text message 1302 based on the highest score from the first set of scores. Likewise, the script-based method can determine a second set of scores for the text message 1302, with each score representing a probability or likelihood that the script is for one of a plurality of different languages. The grouper module 1306 can detect the script in the text message 1302 based on the highest score from the second set of scores. Alternatively or additionally, the grouper module 1306 can combine results or scores (e.g., using an interpolator or other classifier) from the alphabet-based method and the script-based method to detect the alphabet and/or the script in the text message 1302. Once the alphabet and/or the script have been detected, the grouper module 1306 selects a language detector module to use for making a final determination of the language in the text message 1302, as described below and herein. The grouper module 1306 can pass results or other information (e.g., one or more scores) from the alphabet-based method and/or the script-based method to the selected language detector module.
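The grouper module's routing logic can be sketched as follows: take the highest-scoring language from the alphabet/script scores and dispatch to the matching detector group. The language groupings below are abbreviated examples (the full system covers many more languages), and the group names are assumptions:

```python
# Abbreviated example groupings for illustration only.
UNIQUE_SCRIPT = {"cn", "tw", "ja", "ar", "he", "el", "ko", "th"}
CYRILLIC = {"bg", "uk", "ru"}
LATIN = {"en", "fr", "es", "de", "pt", "it"}

def group(alphabet_scores):
    """Initial classification: pick the highest-scoring language from
    the alphabet/script scores, then route to a detector group."""
    top = max(alphabet_scores, key=alphabet_scores.get)
    if top in UNIQUE_SCRIPT:
        return "alphabet-distinguishable"
    if top in CYRILLIC:
        return "cyrillic"
    if top in LATIN:
        return "latin"
    return "backoff"
```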
In the depicted example, the language detection system 1300 can include or utilize the following language detector modules: an alphabet-distinguishable language detector 1308, a Cyrillic language detector 1310, a Latin language detector 1312, and a backoff language detector 1314. However, other additional or alternative language detector modules can be included or utilized. Each of these language detector modules 1308, 1310, 1312, and 1314 can include a detection methods module and a classifier module. For example, the alphabet-distinguishable language detector 1308 can include a detection methods module 1316 and a classifier module 1318, the Cyrillic language detector 1310 can include a detection methods module 1320 and a classifier module 1322, the Latin language detector 1312 can include a detection methods module 1324 and a classifier module 1326, and the backoff language detector 1314 can include a detection methods module 1328 and a classifier module 1330.
In general, the detection methods modules 1316, 1320, 1324, and 1328 include or utilize one or more language detection methods, which can be or include, for example, the n-gram method (e.g., the byte n-gram method), the dictionary-based method, the alphabet-based method, the script-based method, and/or the user language profile method. Other language detection methods are contemplated. The detection methods modules 1316, 1320, 1324, and 1328 can use the language detection methods to produce output providing an indication of the language present in the text message 1302. The output can be or include, for example, one or more scores representing a likelihood that the text message 1302 is in one or more languages. In some instances, the language in the text message 1302 is determined directly from the output of one of the detection methods modules 1316, 1320, 1324, or 1328. Alternatively or additionally, the language in the text message 1302 can be determined from the output of one of the classifier modules 1318, 1322, 1326, or 1330. In general, each classifier module 1318, 1322, 1326, or 1330 processes output from a corresponding detection methods module 1316, 1320, 1324, or 1328 to provide a further indication of the language present in a text message. The classifier modules 1318, 1322, 1326, and 1330 preferably use or include one or more classifiers, such as, for example, a supervised learning model, a partially supervised learning model, an unsupervised learning model, and/or an interpolation.
For example, when the alphabet and/or script detected by the grouper module 1306 are associated with one or more alphabet-distinguishable languages, the grouper module 1306 selects the alphabet-distinguishable language detector 1308. In general, an alphabet-distinguishable language is a language that has a unique alphabet and/or a unique script, such that the language in the text message 1302 can be determined once the alphabet and/or the script for the language are detected. Examples of alphabet-distinguishable languages include, for example, Simplified Chinese (cn), Traditional Chinese (tw), Japanese (ja), Arabic (ar), Hebrew (he), Greek (el), Korean (ko), and Thai (th). In various instances, the grouper module 1306 passes results (e.g., one or more scores or probabilities, a detected alphabet, and/or a detected script) from the alphabet-based method and/or the script-based method to the alphabet-distinguishable language detector 1308. Alternatively or additionally, if the grouper module 1306 does not pass such results to the alphabet-distinguishable language detector 1308, the detection methods module 1316 can perform the alphabet-based method and/or the script-based method to detect the alphabet and/or the script in the text message 1302. The alphabet-distinguishable language detector 1308 can determine the language in the text message 1302 once the alphabet and/or the script are detected. In some instances, such a determination can be made using the classifier module 1318 to process any output from the detection methods module 1316.
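For an alphabet-distinguishable language, detection can reduce to a script lookup. A sketch using Python's standard `unicodedata` module follows; the character-name-prefix mapping is a simplification made for illustration (production systems would use full Unicode script tables):

```python
import unicodedata

# Hypothetical mapping from Unicode character-name prefixes to language
# codes; a simplification for scripts unique to one language.
SCRIPT_TO_LANG = {
    "HANGUL": "ko",
    "HIRAGANA": "ja",
    "KATAKANA": "ja",
    "THAI": "th",
    "HEBREW": "he",
    "ARABIC": "ar",
    "GREEK": "el",
}

def detect_by_script(text):
    """Return a language code if any character belongs to a script
    unique to one language; otherwise return None."""
    for ch in text:
        name = unicodedata.name(ch, "")
        prefix = name.split()[0] if name else ""
        if prefix in SCRIPT_TO_LANG:
            return SCRIPT_TO_LANG[prefix]
    return None
```

Latin-script text returns None here, which corresponds to the grouper routing such messages to a different detector.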
In some examples, when the alphabet and/or script detected by the grouper module 1306 are associated with one or more Cyrillic languages, the grouper module 1306 selects the Cyrillic language detector 1310. Examples of Cyrillic languages include, for example, Bulgarian (bg), Ukrainian (uk), and Russian (ru). To determine the specific Cyrillic language in the text message 1302, the detection methods module 1320 can include or utilize one or more language detection methods described herein, such as the byte n-gram method and/or the dictionary-based method. In a preferred example, the detection methods module 1320 utilizes the dictionary-based method, which can use one or more dictionaries specific to Cyrillic languages. The dictionary-based method can count the number of tokens or words in the text message 1302 that belong to one or more Cyrillic languages by looking up words in the one or more dictionaries. In some examples, the Cyrillic language having the most tokens or words in the text message 1302 is determined to be the language in the text message 1302. Alternatively or additionally, the detection methods module 1320 can provide output from one or more language detection methods (e.g., the dictionary-based method) to the classifier module 1322, which can process the output to determine the language in the text message 1302. For example, the classifier module 1322 can receive a set of scores from the detection methods module 1320 and can determine the Cyrillic language in the text message 1302 by identifying the language having the highest score.
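The dictionary-based counting described above can be sketched directly: count how many tokens of the message appear in each language's dictionary and pick the language with the most hits. The tiny dictionaries below are illustrative stand-ins; real dictionaries would be far larger:

```python
# Tiny illustrative dictionaries (real ones would be far larger).
DICTIONARIES = {
    "ru": {"привет", "как", "дела"},
    "uk": {"привіт", "як", "справи"},
    "bg": {"здравей", "как", "си"},
}

def dictionary_detect(message):
    """Count message tokens found in each language's dictionary and
    return the language with the most hits, plus the raw counts."""
    tokens = message.lower().split()
    counts = {
        lang: sum(tok in words for tok in tokens)
        for lang, words in DICTIONARIES.items()
    }
    return max(counts, key=counts.get), counts
```

Note that a token such as "как" can occur in more than one dictionary; the per-language counts resolve the ambiguity.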
In certain instances, when the alphabet and/or script detected by the grouper module 1306 are associated with one or more Latin languages, the grouper module 1306 selects the Latin language detector 1312. Examples of Latin languages include, for example, English (en), French (fr), Spanish (es), German (de), Portuguese (pt), Dutch (nl), Polish (pl), Italian (it), Turkish (tr), Catalan (ca), Czech (cs), Danish (da), Finnish (fi), Hungarian (hu), Indonesian (id), Norwegian (no), Romanian (ro), Slovak (sk), Swedish (sv), Malay (ms), and Vietnamese (vi). To determine the specific Latin language in the text message 1302, the detection methods module 1324 can include or utilize one or more language detection methods described herein. In preferred examples, the detection methods module 1324 includes or utilizes the byte n-gram method and/or the dictionary-based method. The output from one or both of these preferred methods can be processed or combined using the classifier module 1326 to determine the specific Latin language in the text message 1302. For example, the n-gram method and the dictionary-based method can each output a set of scores, with each score representing a likelihood that the text message 1302 is in one of a plurality of different Latin languages. The classifier module 1326 can process the sets of scores using, for example, one or more classifiers and/or interpolation techniques described herein, to determine the Latin language in the text message 1302.
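A byte n-gram method of the kind referenced above can be sketched as scoring a message's byte bigrams against per-language frequency profiles. The one-sentence "corpora" and the add-one smoothing scheme below are assumptions made to keep the example self-contained:

```python
from collections import Counter
import math

def byte_ngrams(text, n=2):
    """Slice the UTF-8 encoding of the text into overlapping byte n-grams."""
    data = text.encode("utf-8")
    return [data[i:i + n] for i in range(len(data) - n + 1)]

# Hypothetical tiny training corpora; real profiles are built from
# large amounts of text per language.
CORPORA = {
    "en": "the quick brown fox jumps over the lazy dog",
    "de": "der schnelle braune fuchs springt ueber den faulen hund",
}
PROFILES = {lang: Counter(byte_ngrams(txt)) for lang, txt in CORPORA.items()}

def score(text):
    """Log-likelihood of the message's byte bigrams under each language
    profile, with add-one smoothing for unseen bigrams."""
    grams = byte_ngrams(text)
    scores = {}
    for lang, profile in PROFILES.items():
        total = sum(profile.values()) + len(profile) + 1
        scores[lang] = sum(
            math.log((profile[g] + 1) / total) for g in grams
        )
    return scores
```

The resulting score sets are exactly the kind of per-language likelihood vectors that a classifier module can combine with dictionary-based output.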
In some examples, the grouper module 1306 selects the backoff language detector 1314 to detect a language in the text message 1302. The backoff language detector 1314 can be selected, for example, when the grouper module 1306 does not select the alphabet-distinguishable language detector 1308, the Cyrillic language detector 1310, or the Latin language detector 1312. Such a situation may occur, for example, when the grouper module 1306 fails to detect an alphabet and/or a script associated with an alphabet-distinguishable language, a Cyrillic language, or a Latin language. When the backoff language detector 1314 is selected, the detection methods module 1328 and/or the classifier module 1330 can be used to identify the language in the text message 1302. The language detection methods used by the detection methods module 1328 can be or include, for example, the n-gram method (e.g., the byte n-gram method), the dictionary-based method, the alphabet-based method, the script-based method, the user language profile method, and any combination thereof. The specific classifiers used by the classifier module 1330 can be or include, for example, a supervised learning model, a partially supervised learning model, an unsupervised learning model, an interpolation, and/or any combination thereof. Other language detection methods and/or classifiers can be used. In general, the backoff language detector 1314 can use any of the language detection methods and classifiers described herein. The backoff language detector 1314 is preferably flexible and can be configured to include or use new detection methods and/or new combinations of detection methods as such new methods and/or combinations are developed or become available. In some instances, by resorting to the backoff language detector 1314, the language detection system 1300 is able to provide a valid output rather than a NULL output.
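The preference for a valid output over a NULL output can be sketched as a fallback chain: run detectors in order of preference and, if none yields a language, return a configured default. The callable interface and the default value are assumptions for illustration:

```python
def detect_with_backoff(message, detectors, default="en"):
    """Run detectors in order of preference; if none yields a language,
    return a configured default so the caller never receives None."""
    for detect in detectors:
        language = detect(message)
        if language is not None:
            return language
    return default
```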
For purposes of illustration,
Embodiments of the subject matter and the operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on computer storage medium for execution by, or to control the operation of, data processing apparatus. Alternatively or in addition, the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. A computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them. Moreover, while a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially-generated propagated signal. The computer storage medium can also be, or be included in, one or more separate physical components or media (e.g., multiple CDs, disks, or other storage devices).
The operations described in this specification can be implemented as operations performed by a data processing apparatus on data stored on one or more computer-readable storage devices or received from other sources.
The term “data processing apparatus” encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or multiple ones, or combinations, of the foregoing. The apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). The apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them. The apparatus and execution environment can realize various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures.
A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment. A computer program can, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic disks, magneto-optical disks, optical disks, or solid state drives. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device (e.g., a universal serial bus (USB) flash drive), to name just a few. Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including, by way of example, semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse, a trackball, a touchpad, or a stylus, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser.
Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), an inter-network (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks).
The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some embodiments, a server transmits data (e.g., an HTML page) to a client device (e.g., for purposes of displaying data to and receiving user input from a user interacting with the client device). Data generated at the client device (e.g., a result of the user interaction) can be received from the client device at the server.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any inventions or of what can be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features can be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination can be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing can be advantageous. For example, parallel processing can be used to perform multiple language detection methods simultaneously. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing can be advantageous.
This application is a continuation of U.S. application Ser. No. 15/283,646, filed Oct. 3, 2016, which is a continuation-in-part of U.S. application Ser. No. 15/161,913, filed May 23, 2016 (now U.S. Pat. No. 9,535,896, issued Jan. 3, 2017), which is a continuation of U.S. application Ser. No. 14/517,183, filed Oct. 17, 2014 (now U.S. Pat. No. 9,372,848, issued Jun. 21, 2016), the entire contents of each of which are incorporated by reference herein.
Number | Name | Date | Kind |
---|---|---|---|
4460973 | Tanimoto et al. | Jul 1984 | A |
4502128 | Okajima et al. | Feb 1985 | A |
4706212 | Toma | Nov 1987 | A |
5289375 | Fukumochi et al. | Feb 1994 | A |
5313534 | Burel | May 1994 | A |
5526259 | Kaji | Jun 1996 | A |
5603031 | White et al. | Feb 1997 | A |
5873055 | Okunishi | Feb 1999 | A |
5884246 | Boucher et al. | Mar 1999 | A |
5991710 | Papineni et al. | Nov 1999 | A |
6125362 | Elworthy | Sep 2000 | A |
6157905 | Powell | Dec 2000 | A |
6167369 | Schulze | Dec 2000 | A |
6182029 | Friedman | Jan 2001 | B1 |
6278967 | Akers et al. | Aug 2001 | B1 |
6278969 | King et al. | Aug 2001 | B1 |
6285978 | Bernth et al. | Sep 2001 | B1 |
6304841 | Berger et al. | Oct 2001 | B1 |
6415250 | van den Akker | Jul 2002 | B1 |
6425119 | Jones et al. | Jul 2002 | B1 |
6722989 | Hayashi | Apr 2004 | B1 |
6799303 | Blumberg | Sep 2004 | B2 |
6801190 | Robinson et al. | Oct 2004 | B1 |
6848080 | Lee et al. | Jan 2005 | B1 |
6993473 | Cartus | Jan 2006 | B2 |
6996520 | Levin | Feb 2006 | B2 |
7165019 | Lee et al. | Jan 2007 | B1 |
7174289 | Sukehiro | Feb 2007 | B2 |
7451188 | Cheung et al. | Nov 2008 | B2 |
7475343 | Mielenhausen | Jan 2009 | B1 |
7478033 | Wu et al. | Jan 2009 | B2 |
7533013 | Marcu | May 2009 | B2 |
7539619 | Seligman et al. | May 2009 | B1 |
7895576 | Chang et al. | Feb 2011 | B2 |
7912852 | McElroy | Mar 2011 | B1 |
7970598 | Flanagan et al. | Jun 2011 | B1 |
8010338 | Thorn | Aug 2011 | B2 |
8010474 | Bill | Aug 2011 | B1 |
8027438 | Daigle et al. | Sep 2011 | B2 |
8112497 | Gougousis et al. | Feb 2012 | B1 |
8145472 | Shore et al. | Mar 2012 | B2 |
8170868 | Gamon | May 2012 | B2 |
8244567 | Estill | Aug 2012 | B2 |
8270606 | Caskey et al. | Sep 2012 | B2 |
8311800 | Delaney et al. | Nov 2012 | B1 |
8326601 | Ribeiro et al. | Dec 2012 | B2 |
8380488 | Liu | Feb 2013 | B1 |
8392173 | Davis et al. | Mar 2013 | B2 |
8401839 | Kim et al. | Mar 2013 | B2 |
8442813 | Popat | May 2013 | B1 |
8468149 | Lung et al. | Jun 2013 | B1 |
8473555 | Lai et al. | Jun 2013 | B2 |
8510328 | Hatton | Aug 2013 | B1 |
8543374 | Dymetman | Sep 2013 | B2 |
8566306 | Jones | Oct 2013 | B2 |
8606297 | Simkhai et al. | Dec 2013 | B1 |
8606800 | Lagad et al. | Dec 2013 | B2 |
8626486 | Och et al. | Jan 2014 | B2 |
8655644 | Kanevsky et al. | Feb 2014 | B2 |
8671019 | Barclay et al. | Mar 2014 | B1 |
8682529 | Church et al. | Mar 2014 | B1 |
8688433 | Davis et al. | Apr 2014 | B2 |
8688451 | Grost et al. | Apr 2014 | B2 |
8738355 | Gupta et al. | May 2014 | B2 |
8762128 | Brants et al. | Jun 2014 | B1 |
8788259 | Buryak | Jul 2014 | B1 |
8818791 | Xiao et al. | Aug 2014 | B2 |
8825467 | Chen et al. | Sep 2014 | B1 |
8825469 | Duddu et al. | Sep 2014 | B1 |
8832204 | Gailloux et al. | Sep 2014 | B1 |
8838437 | Buryak | Sep 2014 | B1 |
8886518 | Wang et al. | Nov 2014 | B1 |
8914395 | Jiang | Dec 2014 | B2 |
8918308 | Caskey et al. | Dec 2014 | B2 |
8928591 | Swartz | Jan 2015 | B2 |
8935147 | Stern et al. | Jan 2015 | B2 |
8990064 | Marcu et al. | Mar 2015 | B2 |
8990068 | Orsini et al. | Mar 2015 | B2 |
8996352 | Orsini et al. | Mar 2015 | B2 |
8996353 | Orsini et al. | Mar 2015 | B2 |
8996355 | Orsini et al. | Mar 2015 | B2 |
9031828 | Leydon et al. | May 2015 | B2 |
9031829 | Leydon et al. | May 2015 | B2 |
9141607 | Lee | Sep 2015 | B1 |
9231898 | Orsini et al. | Jan 2016 | B2 |
9245278 | Orsini et al. | Jan 2016 | B2 |
9298703 | Leydon et al. | Mar 2016 | B2 |
9336206 | Orsini et al. | May 2016 | B1 |
9348818 | Leydon et al. | May 2016 | B2 |
9372848 | Bojja et al. | Jun 2016 | B2 |
9448996 | Orsini et al. | Sep 2016 | B2 |
9535896 | Bojja et al. | Jan 2017 | B2 |
9600473 | Leydon et al. | Mar 2017 | B2 |
9665571 | Leydon et al. | May 2017 | B2 |
20010020225 | Zerber | Sep 2001 | A1 |
20010029455 | Chin et al. | Oct 2001 | A1 |
20020022954 | Shimohata et al. | Feb 2002 | A1 |
20020029146 | Nir | Mar 2002 | A1 |
20020037767 | Ebin | Mar 2002 | A1 |
20020099744 | Coden et al. | Jul 2002 | A1 |
20020152063 | Tokieda et al. | Oct 2002 | A1 |
20020169592 | Aityan | Nov 2002 | A1 |
20020198699 | Greene et al. | Dec 2002 | A1 |
20030009320 | Furuta | Jan 2003 | A1 |
20030033152 | Cameron | Feb 2003 | A1 |
20030033595 | Takagi et al. | Feb 2003 | A1 |
20030046350 | Chintalapati et al. | Mar 2003 | A1 |
20030101044 | Krasnov | May 2003 | A1 |
20030125927 | Seme | Jul 2003 | A1 |
20030176995 | Sukehiro | Sep 2003 | A1 |
20030191626 | Al-Onaizan et al. | Oct 2003 | A1 |
20040030750 | Moore et al. | Feb 2004 | A1 |
20040030781 | Etesse et al. | Feb 2004 | A1 |
20040044517 | Palmquist | Mar 2004 | A1 |
20040093567 | Schabes et al. | May 2004 | A1 |
20040102201 | Levin | May 2004 | A1 |
20040102956 | Levin | May 2004 | A1 |
20040102957 | Levin | May 2004 | A1 |
20040158471 | Davis et al. | Aug 2004 | A1 |
20040205671 | Sukehiro et al. | Oct 2004 | A1 |
20040210443 | Kuhn et al. | Oct 2004 | A1 |
20040215647 | Farn et al. | Oct 2004 | A1 |
20040243409 | Nakagawa | Dec 2004 | A1 |
20040267527 | Creamer et al. | Dec 2004 | A1 |
20050038643 | Koehn | Feb 2005 | A1 |
20050076240 | Appleman | Apr 2005 | A1 |
20050102130 | Quirk et al. | May 2005 | A1 |
20050160075 | Nagahara | Jul 2005 | A1 |
20050165642 | Brouze et al. | Jul 2005 | A1 |
20050171758 | Palmquist | Aug 2005 | A1 |
20050197829 | Okumura | Sep 2005 | A1 |
20050209844 | Wu et al. | Sep 2005 | A1 |
20050234702 | Komiya | Oct 2005 | A1 |
20050251384 | Yang | Nov 2005 | A1 |
20050283540 | Fux et al. | Dec 2005 | A1 |
20050288920 | Green et al. | Dec 2005 | A1 |
20060053203 | Mijatovic | Mar 2006 | A1 |
20060101021 | Davis et al. | May 2006 | A1 |
20060133585 | Daigle et al. | Jun 2006 | A1 |
20060136223 | Brun et al. | Jun 2006 | A1 |
20060167992 | Cheung et al. | Jul 2006 | A1 |
20060173839 | Knepper et al. | Aug 2006 | A1 |
20060206309 | Curry et al. | Sep 2006 | A1 |
20060217955 | Nagao et al. | Sep 2006 | A1 |
20060242232 | Murillo et al. | Oct 2006 | A1 |
20060247917 | Fux | Nov 2006 | A1 |
20060271352 | Nikitin et al. | Nov 2006 | A1 |
20060287848 | Li et al. | Dec 2006 | A1 |
20070011132 | Zhou et al. | Jan 2007 | A1 |
20070011235 | Mutikainen et al. | Jan 2007 | A1 |
20070016399 | Gao et al. | Jan 2007 | A1 |
20070038758 | Mu et al. | Feb 2007 | A1 |
20070050182 | Sneddon et al. | Mar 2007 | A1 |
20070077975 | Warda | Apr 2007 | A1 |
20070088793 | Landsman | Apr 2007 | A1 |
20070124133 | Wang et al. | May 2007 | A1 |
20070124202 | Simons | May 2007 | A1 |
20070129935 | Uchimoto et al. | Jun 2007 | A1 |
20070130258 | Almberg | Jun 2007 | A1 |
20070143410 | Kraft et al. | Jun 2007 | A1 |
20070168450 | Prajapat et al. | Jul 2007 | A1 |
20070218997 | Cho | Sep 2007 | A1 |
20070219774 | Quirk et al. | Sep 2007 | A1 |
20070219776 | Gamon | Sep 2007 | A1 |
20070219777 | Chu et al. | Sep 2007 | A1 |
20070294076 | Shore et al. | Dec 2007 | A1 |
20080005319 | Anderholm et al. | Jan 2008 | A1 |
20080005325 | Wynn et al. | Jan 2008 | A1 |
20080052289 | Kolo et al. | Feb 2008 | A1 |
20080065369 | Fux | Mar 2008 | A1 |
20080097745 | Bagnato et al. | Apr 2008 | A1 |
20080097746 | Tagata | Apr 2008 | A1 |
20080120374 | Kawa et al. | May 2008 | A1 |
20080126077 | Thorn | May 2008 | A1 |
20080147380 | Barliga et al. | Jun 2008 | A1 |
20080147408 | Da Palma et al. | Jun 2008 | A1 |
20080176655 | James et al. | Jul 2008 | A1 |
20080177528 | Drewes | Jul 2008 | A1 |
20080183459 | Simonsen et al. | Jul 2008 | A1 |
20080208596 | Heinze | Aug 2008 | A1 |
20080243834 | Rieman et al. | Oct 2008 | A1 |
20080249760 | Marcu et al. | Oct 2008 | A1 |
20080270553 | Mu | Oct 2008 | A1 |
20080274694 | Castell et al. | Nov 2008 | A1 |
20080281577 | Suzuki | Nov 2008 | A1 |
20080313534 | Cheung et al. | Dec 2008 | A1 |
20080320086 | Callanan et al. | Dec 2008 | A1 |
20090011829 | Yang | Jan 2009 | A1 |
20090049513 | Root et al. | Feb 2009 | A1 |
20090055175 | Terrell, II et al. | Feb 2009 | A1 |
20090068984 | Burnett | Mar 2009 | A1 |
20090100141 | Kirkland et al. | Apr 2009 | A1 |
20090106695 | Perry et al. | Apr 2009 | A1 |
20090125477 | Lu et al. | May 2009 | A1 |
20090204400 | Shields et al. | Aug 2009 | A1 |
20090204596 | Brun et al. | Aug 2009 | A1 |
20090221372 | Casey et al. | Sep 2009 | A1 |
20090234635 | Bhatt et al. | Sep 2009 | A1 |
20090271212 | Savjani et al. | Oct 2009 | A1 |
20090276500 | Karmarkar | Nov 2009 | A1 |
20090324005 | Georgiev et al. | Dec 2009 | A1 |
20100015581 | DeLaurentis | Jan 2010 | A1 |
20100036661 | Boucher et al. | Feb 2010 | A1 |
20100099444 | Coulter et al. | Apr 2010 | A1 |
20100114559 | Kim et al. | May 2010 | A1 |
20100138210 | Seo et al. | Jun 2010 | A1 |
20100145900 | Zheng et al. | Jun 2010 | A1 |
20100180199 | Wu et al. | Jul 2010 | A1 |
20100204981 | Ribeiro et al. | Aug 2010 | A1 |
20100235751 | Stewart | Sep 2010 | A1 |
20100241482 | Knyphausen et al. | Sep 2010 | A1 |
20100261534 | Lee et al. | Oct 2010 | A1 |
20100268730 | Kazeoka | Oct 2010 | A1 |
20100293230 | Lai et al. | Nov 2010 | A1 |
20100312545 | Sites | Dec 2010 | A1 |
20100324894 | Potkonjak | Dec 2010 | A1 |
20110022381 | Gao et al. | Jan 2011 | A1 |
20110035210 | Rosenfeld et al. | Feb 2011 | A1 |
20110055233 | Weber et al. | Mar 2011 | A1 |
20110066421 | Lee et al. | Mar 2011 | A1 |
20110071817 | Siivola | Mar 2011 | A1 |
20110077933 | Miyamoto et al. | Mar 2011 | A1 |
20110077934 | Kanevsky et al. | Mar 2011 | A1 |
20110082683 | Soricut et al. | Apr 2011 | A1 |
20110082684 | Soricut et al. | Apr 2011 | A1 |
20110098117 | Tanaka | Apr 2011 | A1 |
20110184736 | Slotznick | Jul 2011 | A1 |
20110191096 | Sarikaya et al. | Aug 2011 | A1 |
20110202334 | Abir | Aug 2011 | A1 |
20110202344 | Meyer et al. | Aug 2011 | A1 |
20110213607 | Onishi | Sep 2011 | A1 |
20110219084 | Borra et al. | Sep 2011 | A1 |
20110238406 | Chen et al. | Sep 2011 | A1 |
20110238411 | Suzuki | Sep 2011 | A1 |
20110239278 | Downey et al. | Sep 2011 | A1 |
20110246881 | Kushman et al. | Oct 2011 | A1 |
20110307241 | Waibel et al. | Dec 2011 | A1 |
20110307356 | Wiesinger et al. | Dec 2011 | A1 |
20110307495 | Shoshan | Dec 2011 | A1 |
20110313779 | Herzog et al. | Dec 2011 | A1 |
20110320019 | Lanciani et al. | Dec 2011 | A1 |
20120072204 | Nasri | Mar 2012 | A1 |
20120095748 | Li | Apr 2012 | A1 |
20120109631 | Gopal et al. | May 2012 | A1 |
20120156668 | Zelin | Jun 2012 | A1 |
20120173502 | Kumar et al. | Jul 2012 | A1 |
20120179449 | Raskino et al. | Jul 2012 | A1 |
20120179451 | Miyamoto et al. | Jul 2012 | A1 |
20120191445 | Markman et al. | Jul 2012 | A1 |
20120209852 | Dasgupta et al. | Aug 2012 | A1 |
20120226491 | Yamazaki | Sep 2012 | A1 |
20120233191 | Ramanujam | Sep 2012 | A1 |
20120240039 | Walker et al. | Sep 2012 | A1 |
20120246564 | Kolo | Sep 2012 | A1 |
20120253785 | Hamid | Oct 2012 | A1 |
20120262296 | Bezar | Oct 2012 | A1 |
20120265518 | Lauder | Oct 2012 | A1 |
20120277003 | Eliovits et al. | Nov 2012 | A1 |
20120290288 | Ait-Mokhtar | Nov 2012 | A1 |
20120303355 | Liu et al. | Nov 2012 | A1 |
20130006954 | Nikoulina et al. | Jan 2013 | A1 |
20130084976 | Kumaran et al. | Apr 2013 | A1 |
20130085747 | Li et al. | Apr 2013 | A1 |
20130091429 | Weng et al. | Apr 2013 | A1 |
20130096911 | Beaufort et al. | Apr 2013 | A1 |
20130103493 | Gao et al. | Apr 2013 | A1 |
20130124185 | Sarr et al. | May 2013 | A1 |
20130124186 | Donabedian et al. | May 2013 | A1 |
20130130792 | Crocker et al. | May 2013 | A1 |
20130138428 | Chandramouli et al. | May 2013 | A1 |
20130144599 | Davis et al. | Jun 2013 | A1 |
20130151237 | Hyde | Jun 2013 | A1 |
20130173247 | Hodson | Jul 2013 | A1 |
20130197896 | Chalabi et al. | Aug 2013 | A1 |
20130211821 | Tseng et al. | Aug 2013 | A1 |
20130226553 | Ji | Aug 2013 | A1 |
20130253834 | Slusar | Sep 2013 | A1 |
20130297316 | Cragun et al. | Nov 2013 | A1 |
20140006003 | Soricut et al. | Jan 2014 | A1 |
20140058807 | Altberg et al. | Feb 2014 | A1 |
20140142917 | D'Penha | May 2014 | A1 |
20140163951 | Nikoulina et al. | Jun 2014 | A1 |
20140188453 | Marcu et al. | Jul 2014 | A1 |
20140199975 | Lou et al. | Jul 2014 | A1 |
20140200878 | Mylonakis et al. | Jul 2014 | A1 |
20140208367 | DeWeese et al. | Jul 2014 | A1 |
20140330760 | Meier et al. | Nov 2014 | A1 |
20140379329 | Dong et al. | Dec 2014 | A1 |
20150006148 | Goldszmit | Jan 2015 | A1 |
20150127322 | Clark | May 2015 | A1 |
20150161104 | Buryak | Jun 2015 | A1 |
20150161114 | Buryak | Jun 2015 | A1 |
20150161227 | Buryak | Jun 2015 | A1 |
20150186355 | Baldwin | Jul 2015 | A1 |
20150199333 | Nekhay | Jul 2015 | A1 |
20150363394 | Marciano et al. | Dec 2015 | A1 |
20160036740 | Barber | Feb 2016 | A1 |
20160267070 | Bojja et al. | Sep 2016 | A1 |
20170300453 | Shen et al. | Oct 2017 | A1 |
Number | Date | Country |
---|---|---|
1819018 | Aug 2006 | CN |
101414294 | Apr 2009 | CN |
101563683 | Oct 2009 | CN |
101645269 | Feb 2010 | CN |
1691299 | Aug 2006 | EP |
2000-194696 | Jul 2000 | JP |
2002041432 | Feb 2002 | JP |
2003054841 | Feb 2003 | JP |
2003529845 | Oct 2003 | JP |
2004252881 | Sep 2004 | JP |
2006221658 | Aug 2006 | JP |
2006277103 | Oct 2006 | JP |
2006302091 | Nov 2006 | JP |
2006350628 | Dec 2006 | JP |
2009134344 | Jun 2009 | JP |
2009140073 | Jun 2009 | JP |
2010129057 | Jun 2010 | JP |
2010152785 | Jul 2010 | JP |
2012103554 | May 2012 | JP |
2014519104 | Aug 2014 | JP |
WO-2008075161 | Jun 2008 | WO |
WO-2009129315 | Oct 2009 | WO |
WO-2013133966 | Sep 2013 | WO |
WO-2014124397 | Aug 2014 | WO |
Entry |
---|
“Arabic script in Unicode,” downloaded Dec. 22, 2014, from <http://en.wikipedia.org/wiki/Arabic_script_in_Unicode>, 18 pages. |
“Bleu,” accessed on the internet at: https://en.wikipedia.org/wiki/BLEU; downloaded Dec. 1, 2018; 5 pgs. |
“Chromium-compact-language-detector,” downloaded Dec. 22, 2014, from <https://code.google.com/p/chromium-compact-language-detector/>, 1 page. |
“CJK Unified Ideographs,” downloaded Dec. 22, 2014, from <http://en.wikipedia.org/wiki/CJK_Unified_Ideographs>, 11 pages. |
“cld2,” downloaded Dec. 22, 2014, from <https://code.google.com/p/cld2/>, 2 pages. |
“Cloud Translation API documentation,” accessed on the internet at: <https://cloud.google.com/translate/docs/>; downloaded Dec. 1, 2018; 2 pgs. |
“Cyrillic script in Unicode,” downloaded Dec. 22, 2014, from <http://en.wikipedia.org/wiki/Cyrillic_script_in_Unicode>, 22 pages. |
“Dakuten and handakuten,” accessed on the internet at: https://en.wikipedia.org/wiki/Dakuten_and_handakuten; downloaded Dec. 1, 2018; 4 pgs. |
“Detect Method,” downloaded Dec. 22, 2014, from <http://msdn.microsoft.com/en-us/library/ff512411.aspx>, 5 pages. |
“GitHub,” downloaded Dec. 22, 2014, from <https://github.com/feedbackmine/language_detector>, 1 page. |
“Google Translate API,” downloaded Dec. 22, 2014, from <https://cloud.google.com/translate/v2/using_rest>, 12 pages. |
“ldig (Language Detection with Infinity Gram),” downloaded Dec. 22, 2014, from <https://github.com/shuyo/ldig>, 3 pages. |
“Language identification,” downloaded Dec. 22, 2014, from <http://en.wikipedia.org/wiki/Language_identification>, 5 pages. |
“Languages and Scripts, CLDR Charts,” downloaded Dec. 22, 2014, from <http://www.unicode.org/cldr/charts/latest/supplemental/languages_and_scripts.html>, 23 pages. |
“Latin script in Unicode,” downloaded Dec. 22, 2014, from <http://en.wikipedia.org/wiki/Latin_script_in_Unicode>, 5 pages. |
“Microsoft Translator Text API,” accessed on the internet at: https://www.microsoft.com/en-us/translator/translatorapi.aspx; downloaded on Dec. 1, 2018. |
“Mimer SQL Unicode Collation Charts,” downloaded Dec. 22, 2014, from <http://developer.mimer.com/charts/index.tml>, 2 pages. |
“Multi Core and Parallel Processing,” accessed on the internet at stackoverflow.com/questions/1922465/multi-core-and-parallel-processing, published Dec. 17, 2009; downloaded on Jun. 30, 2015; 2 pgs. |
“Scripts and Languages,” downloaded Dec. 22, 2014, from <http://www.unicode.org/cldr/charts/latest/supplemental/scripts_and_languages.html>, 23 pages. |
“Supported Script,” downloaded Dec. 22, 2014, from <http://www.unicode.org/standard/supported.html>, 3 pages. |
“Unicode Character Ranges,” downloaded Dec. 22, 2014, from <http://jrgraphix.net/research/unicode_blocks.php>, 1 page. |
“Uscript.h File Reference,” downloaded Dec. 22, 2014, from <http://icu-project.org/apiref/icu4c/uscript_8h.html>, 34 pages. |
Ahmed, B. et al., “Language Identification from Text Using n-gram Based Cumulative Frequency Addition,” In Proceedings of Student/Faculty Research Day, CSIS, Pace University; pp. 12.1-12.8; May 2004. |
Aikawa et al., “The Impact of Crowdsourcing Post-editing with the Collaborative Translation Framework,” JapTAL Oct. 22-24, 2012; LNAI; 7614:1-10. |
Ambati et al., “Collaborative Workflow for Crowdsourcing Translation,” Proc. of the ACM 2012 conf. on Computer Supported Cooperative Work, ACM; 1191-1194; Feb. 11-15, 2012. |
Baldwin, et al., “Language identification: The long and the short of the matter,” In Proceedings of NAACL-HLT, 2010, pp. 229-237. |
Bender, O. et al., “Maximum Entropy Models for Named Entity Recognition,” CONLL '03 Proc. of the 7th Conference on Natural language Learning at HLT-NAACL; vol. 4, pp. 148-151; May 31, 2003. |
Bergsma et al., “Language identification for creating language-specific Twitter collections,” In Proceedings of the Second Workshop on Language in Social Media, 2012, pp. 65-74. |
Bontcheva, K. et al., “TwitIE: An Open-Source Information Extraction Pipeline for Microblog Text,” Proc. of the Int'l Conference on Recent Advances in Natural Language Processing, ACL; 8pgs; Sep. 5, 2013. |
Brown, Ralf D. “Adding Linguistic Knowledge to a Lexical Example-Based Translation System,” Proc. of the 8th Int'l Conference on Theoretical and Methodological Issues in Machine Translation (TMI-99); pp. 22-32; Aug. 1999. |
Callison-Burch et al., “Creating Speech and Language Data with Amazon's Mechanical Turk”, Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk; 1-12, Jun. 6, 2010. |
Callison-Burch, C. “Fast, Cheap, and Creative: Evaluating Translation Quality Using Amazon's Mechanical Turk,” Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pp. 286-295, Singapore, Aug. 6-7, 2009. |
Carter et al., “Microblog language identification: Overcoming the limitations of short, unedited and idiomatic text,” Language Resources and Evaluation, 2013, vol. 47, No. 1, pp. 195-215. |
Cavnar et al., “N-gram-based text categorization,” In Proceedings of the Third Symposium on Document Analysis and Information Retrieval, 1994, 14 pgs. |
Ceylan et al., “Language identification of search engine queries,” In Proceedings of ACL-IJCNLP, 2009, pp. 1066-1074. |
Chang et al., “LIBSVM: A library for support vector machines,” ACM Transactions on Intelligent Systems and Technology, vol. 2, No. 27, pp. 1-27:27, 2011, Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm. |
Chieu H.L. and Ng, H.T., “Named Entity Recognition with a Maximum Entropy Approach,” CONLL '03 Proc. of the 7th Conference on Natural language Learning at HLT-NAACL; vol. 4, pp. 160-163; May 31, 2003. |
Ciaramita et al., “Named-Entity Recognition in Novel Domains with External Lexical Knowledge,” Proceedings of the NIPS Workshop on Advances in Structured Learning for Text and Speech Processing; Canada; Dec. 9, 2005; abstract, Section 2. |
Cunningham, H., et al., “Gate: An Architecture for Development of Robust hlt Applications,” ACL '02 Proc. of the 40th Annual Meeting on Association for Computational Linguistics; pp. 168-175; Jul. 6, 2002. |
Curran, J.R. and Clark, S., “Language Independent NER using a Maximum Entropy Tagger,” CONLL '03 Proc. of the 7th Conference on Natural language Learning at HLT-NAACL; vol. 4, pp. 164-167; May 31, 2003. |
Dunning, “Statistical identification of language,” Computing Research Laboratory, New Mexico State University, 1994, 31 pgs. |
Examiner's Report for Canadian Application No. 2,913,984; dated Oct. 19, 2016; 5 pgs. |
Extended European Search Report of the EPO in EP2954522; dated Sep. 7, 2016; 7 pgs. |
Fan et al., “LIBLINEAR: A library for large linear classification,” Journal of Machine Learning Research, 2008, vol. 9, pp. 1871-1874. |
Finkel, J., et al., “Incorporating Non-local Information into Information Extraction Systems by Gibbs Sampling,” ACL '05 Proc. of the 43rd Annual Meeting on Association for Computational Linguistics, pp. 363-370; Jun. 25, 2005. |
Foster et al., “Hardtoparse: POS tagging and parsing the twitterverse,” In Proceedings of the AAAI Workshop on Analyzing Microtext, 2011, 7 pgs. |
Gottron et al., “A comparison of language identification approaches on short, query-style texts,” In Advances in information retrieval, 2010, pp. 611-614. |
Grothe et al., “A comparative study on language identification methods,” In Proceedings of LREC, 2008, pp. 980-985. |
Hakkinen et al., “N-Gram and Decision Tree Based Language Identification for Written Words,” IEEE Workshop, Dec. 9, 2001, pp. 335-338. |
Hughes et al., “Reconsidering language identification for written language resources,” In Proceedings of LREC, 2006, 5 pgs. |
Hulin et al., “Applications of Item Response Theory to Analysis of Attitude Scale Translations,” American Psychological Association; vol. 67(6); Dec. 1982; 51 pgs. |
Int'l Search Report and Written Opinion of the ISA/EP in PCT/US2014/040676; dated May 6, 2015; 16 pgs. |
Int'l Search Report and Written Opinion of the ISA/EP in PCT/US2014/061141; dated Jun. 16, 2015; 13 pgs. |
Int'l Search Report and Written Opinion of the ISA/EP in PCT/US2017/012102; dated Apr. 18, 2017; 14 pgs. |
Int'l Search Report and Written Opinion of the ISA/EP in PCT/US2017/054722; dated Jan. 10, 2018; 13 pgs. |
Int'l Search Report and Written Opinion of the ISA/EP in PCT/US2018/051646; dated Jan. 4, 2019; 13 pgs. |
Int'l Search Report of the ISA/US in PCT/US2014/015632; dated Jul. 8, 2014; 8 pgs. |
Lafferty, J., et al., “Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data,” ICML '01 Proc. of the Eighteenth International Conference on Machine Learning; pp. 282-289; Jun. 28, 2001. |
Little, G., “Turkit: Tools for Iterative Tasks on Mechanical Turk,” IEEE Symposium on Visual Languages and Human-Centric Computing; pp. 252-253; Sep. 20, 2009. |
Liu et al., “A broad-coverage normalization system for social media language,” In Proceedings of ACL, 2012, pp. 1035-1044. |
Liu et al., “Recognizing named entities in tweets,” In Proceedings of ACL-HLT, 2011, pp. 359-367. |
Lui et al., “Accurate Language Identification of Twitter Messages,” Proceedings of the 5th Workshop on Language Analysis for Social Media (LASM) @ EACL 2014, pp. 17-25, Gothenburg, Sweden, Apr. 26-30, 2014. |
Lui et al., “Automatic Detection and Language Identification of Multilingual Documents,” Transactions of the Association for Computational Linguistics, pp. 27-40, published Feb. 2014. |
Lui et al., “Cross-domain Feature Selection for Language Identification,” Proceedings of the 5th International Joint Conference on Natural Language Processing, pp. 553-561, Chiang Mai, Thailand, Nov. 8-13, 2011. |
Lui et al., “langid.py: An Off-the-shelf Language Identification Tool,” Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pp. 25-30, Jeju, Republic of Korea, Jul. 8-14, 2012. |
Minkov, E., et al., “Extracting Personal Names from Email: Applying Named Entity Recognition to Informal Text,” HLT '05 Proc. of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing; pp. 443-450; Oct. 6, 2005. |
Mizuta et al., “Language Identification Using Statistical Hypothesis Testing for Similar Languages,” IPSJ SIG Technical Reports, JP, Information Processing Society of Japan, Nov. 19, 2008, vol. 2008, No. 113, pp. 91-98. |
Monteith, Kristine, et al., “Turning Bayesian Model Averaging Into Bayesian Model Combination,” Proceedings of the International Joint Conference on Neural Networks IJCNN'11. pp. 2657-2663, 2011. |
Och, F.J. and Ney, H., “A Systematic Comparison of Various Statistical Alignment Models,” Computational Linguistics; 29(1):19-51; Mar. 1, 2003. |
Office Action (Translated) in Japanese Patent Application No. 2017-520499; dated Sep. 11, 2018; 9 pgs. |
Office Action (Translated) in Korean Patent Application No. 10-2016-7000062; dated Oct. 14, 2016; 6 pgs. |
Okazaki, N., CRFsuite: A Fast Implementation of Conditional Random Fields (CRFs); accessed on the internet at http://www.chokkan.org/software/crfsuite/; downloaded Jan. 8, 2016; Published Jul. 22, 2015; 4 pgs. |
Papineni, K., et al. “BLEU: A Method for Automatic Evaluation of Machine Translation,” Proc. 40th Annual Meeting on Assoc. for Computational Linguistics (ACL); Jul. 2002; pp. 311-318. |
Partial Int'l Search Report of the ISA/EP in PCT/US2014/040676; dated Feb. 17, 2015; 5 pgs. |
Popovic et al., “Syntax-oriented Evaluation Measures for Machine Translation Output,” Proc. of the Fourth Workshop on Statistical Machine Translation, pp. 29-32, Mar. 30-31, 2009. |
Qureshi et al., Collusion Detection and Prevention with FIRE+ Trust and Reputation Model, 2010, IEEE, Computer and Information Technology (CIT), 2010 IEEE 10th International Conference, pp. 2548-2555; Jun. 2010. |
Ritter et al., “Named entity recognition in tweets: An experimental study,” In Proceedings of EMNLP, 2011, pp. 1524-1534. |
Rouse, M., “Parallel Processing,” Search Data Center.com; Mar. 27, 2007; 2 pgs. |
Sang, E., et al., “Introduction to the CoNLL-2003 Shared Task: Language-independent Named Entity Recognition,” CONLL '03 Proc. of the 7th Conference on Natural language Learning at HLT-NAACL; vol. 4, pp. 142-147; May 31, 2003. |
Shieber, S.M., and Nelken R., “Abbreviated Text Input Using Language Modeling.” Natural Language Eng; 13(2):165-183; Jun. 2007. |
Tromp et al., “Graph-based n-gram language identification on short texts,” In Proceedings of the 20th Machine Learning conference of Belgium and The Netherlands, 2011, 8 pgs. |
Vatanen et al., “Language identification of short text segments with n-gram models,” In Proceedings of LREC, 2010, pp. 3423-3430. |
Vogel et al., “Robust language identification in short, noisy texts: Improvements to LIGA,” In Proceedings of the Third International Workshop on Mining Ubiquitous and Social Environment, 2012, pp. 43-50. |
Written Opinion of the Austrian Patent Office in Singapore App. No. 11201509840Y dated Mar. 1, 2016; 12 pgs. |
Xia, F. and Lewis, W.D., “Applying NLP Technologies to the Collection and Enrichment of Language Data on the Web to Aid Linguistic Research,” Proc. of the EACL 2009 Workshop on Language Tech. and Resources for Cultural Heritage, Social Sciences, Humanities, and Education-LaTech—SHELT&R 2009; pp. 51-59; Mar. 2009. |
Zaidan et al., “Crowdsourcing Translation: Professional Quality from Non-Professionals,” Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pp. 1220-1229, Portland, Oregon, Jun. 19-24, 2011. |
Number | Date | Country | |
---|---|---|---|
20190108214 A1 | Apr 2019 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 15283646 | Oct 2016 | US |
Child | 16210405 | US | |
Parent | 14517183 | Oct 2014 | US |
Child | 15161913 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 15161913 | May 2016 | US |
Child | 15283646 | US |