The Internet has made it possible for people to connect and share information globally in ways previously undreamt of. Social media platforms, for example, have enabled people on opposite sides of the globe to collaborate on ideas, discuss current events, or just share what they had for lunch. In the past, this spectacular resource has been somewhat limited to communication between users having a common natural language (“language”). In addition, users have only been able to consume content that is in their language, or for which a content provider is able to determine an appropriate translation based on a system setting or a network location (e.g., an Internet Protocol (“IP”) address).
While communication across the many different languages used around the world remains a particular challenge, machine translation services have attempted to address this concern. These services provide mechanisms for a user to provide a text using a web form, select one or more languages, and receive a translation of the text in a selected language. While these services have significantly increased people's ability to communicate across language barriers, they can require users to open a separate website, indicate the language they want the translation in, and even identify the language of the source document. The resulting translation is then shown in that separate website, which removes the content from the context provided by the original source. In some cases, the translation service may not be able to locate portions of the source page to translate or may provide an unreadable version of the source website due to formatting changes resulting from the translation. In many cases, users find this process too cumbersome and may lose patience and navigate to a different website, or may simply skip over text they do not understand, missing an opportunity to receive content. In addition, content providers may not be able to provide comprehensible media items to users if language classification identifiers, e.g., IP address and browser settings, are not true indications of a user's preferred language.
A language classification technology for generating and implementing user language models and media language classifiers is disclosed. User language models may provide an indication, associated with a user, that the user is facile with one or more languages. Media language classifiers may provide, for a particular media item, an indication of the language the media item is in. A media item, as used herein, may be any content that utilizes a language, including text, audio, video, etc. A language, as used herein, is a natural language, which is a human written, spoken, or signed language, e.g., English, French, Chinese, or American Sign Language. A language need not be a national language, e.g., English, but may be a dialect of or variation on a particular natural language or may be a separate representation of a language, e.g., Pinyin. A user's facility with any particular language may relate to that user's ability to speak/understand, read, and/or write the language. A stored indication that a user is facile with a language may comprise one or more identifiers for any combination of speaking, reading, or writing the language.
In some embodiments, user language models and media language classifiers may be incorporated into Internet systems to provide improved translation automation. For example, a social media platform may employ user language models to associate a user profile with one or more languages the associated user is facile with. This may enable the social media platform to provide automatic translations of media items into a language known by a viewing user. This may also help indicate the language of a media item a current user creates or interacts with. As another example, a social media platform may use media language classifiers to assign language identifiers to media items, e.g., media items created by the user. This may enable designation of a language the media item will be translated from. This may also facilitate attributing language abilities to users who create and consume the media items. Additionally, this enables a server to modify or select other items within a web page to match the language of a classified media item. Furthermore, identified media items may be used as training data to build additional classifiers.
User language models may store data objects indicating language facility as probabilities, Boolean values for individual languages, or probability distributions across multiple languages. User language models may contain a single field indicating that a user is facile with a language or languages, or may have multiple fields indicating one or more of the user's facility to read, write, speak and/or understand a language. User language models may be based on a combination of statistical observations for characteristics associated with a user and adjustment factors based on actions taken by the user. A user may be initially assigned a baseline likelihood that the user is facile with a particular language based on characteristics known about the user. The baseline likelihood may then be updated as actions taken by the user are observed or new characteristics of the user are learned. For example, a user may have various known language-implicating characteristics: a locale associated with their IP address; membership with a particular social media platform; association as friends with particular other user profiles; Internet browser's locale; country of residence; etc. As used herein, a friend or friend account is another user account that has been identified and associated with a user's account, as is common in social media and other similar contexts. In this example, an initial baseline likelihood may be created by combining the observations that 70% of the users in the user's IP locale speak Spanish, that users of the social media platform are 60% likely to speak English, and that 75% of the user accounts associated with the user's account as friends have language models indicating those friends are Spanish speakers. Each of these identified characteristics may have an associated weight used in the combination.
In this example, the weighted combination may provide a 73% baseline likelihood that the user is facile with Spanish and a 40% baseline likelihood the user is facile with English. In some embodiments, the baseline likelihood may be split into sub-abilities, e.g., a 78% likelihood the user reads/writes Spanish and a 72% likelihood the user can speak Spanish. As actions performed by the user are observed, the baseline likelihood may be updated with language expectation values specified for the observed actions.
Continuing the example, the system may identify actions indicating that half the media items the user creates are in German and the other half are in Spanish; that more than half the time the user selects a media item, it is a media item classified as German; and that on numerous occasions the user has used a translation service to translate English content into German. Observed actions may have a weight specified for use in a computation for updating either a baseline or current prediction. The weights may be dependent on an observed intensity or frequency of the action. In this example, the baseline likelihoods may be updated such that the probability the user is facile with Spanish is increased to 88% because it is likely a user is able to use Spanish if they are creating Spanish media items. The baseline likelihoods may be further updated to change a 0%, or other default likelihood, that the user is facile with German to 95% based on the user creating German media items, interacting with German media items, and translating media items into German. Finally, the baseline likelihoods may be changed to indicate that it is only 5% likely that the user is facile with English, based on the user translating media items from English. In some embodiments, translating to or from a language may have a particularly heavy weight. Building user language models is discussed in more detail below in relation to
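The weighted combination described above can be sketched as follows. The particular weights and the simple weighted-average rule are illustrative assumptions, not the actual computation used by any implementation.

```python
def baseline_likelihood(observations):
    """Combine per-characteristic likelihoods into a weighted baseline.

    observations: list of (likelihood, weight) pairs for one language,
    e.g., the IP-locale and friend-account signals described above.
    """
    total_weight = sum(weight for _, weight in observations)
    weighted_sum = sum(likelihood * weight for likelihood, weight in observations)
    return weighted_sum / total_weight

# Signals suggesting Spanish: 70% from the IP locale (weighted heavily here)
# and 75% from friend accounts; the weights are made-up values.
spanish_signals = [(0.70, 2.0), (0.75, 1.0)]
baseline = baseline_likelihood(spanish_signals)  # roughly 0.72
```

A real system would combine many more signals, and the weights themselves might be learned rather than fixed.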
The language classification technology may also be used to classify media items as being in a language. Media item classification may use any combination of classifiers including context classifiers, dictionary classifiers, and trained n-gram analysis classifiers. Context classifiers may use context characteristics associated with a media item to provide a probability that the media item is in a particular language. The context characteristics may comprise information about the media item's source, information about the media item's use, or characteristics of users who interact with the media item. A context characteristic may correspond to a computed likelihood that a media item with this context characteristic is in a particular language.
Dictionary classifiers may review particular words of a media item to decide what language the use of that word indicates. Particular words in a media item may correspond to a specified probability that the media item is in a particular language. For example, a post to a social media platform may include the words “fire” and “banana.” There may be a 65% probability that a media item with the word “fire” is in English and a 40% probability a media item with the word “banana” is in English. The system may use a specified algorithm, for example an algorithm that takes the average of the attributed probabilities, to compute a 52.5% probability that the media item is in English based on the dictionary classification.
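A minimal sketch of this dictionary classification, assuming a made-up word-probability table and a plain averaging rule:

```python
# Hypothetical table: probability that a media item containing the word
# is in English, using the example figures above.
WORD_ENGLISH_PROBABILITY = {"fire": 0.65, "banana": 0.40}

def dictionary_prediction(words, table):
    """Average the per-word probabilities for the words found in the table;
    words not in the table contribute nothing to the prediction."""
    hits = [table[word] for word in words if word in table]
    if not hits:
        return None
    return sum(hits) / len(hits)

prediction = dictionary_prediction(["fire", "banana"], WORD_ENGLISH_PROBABILITY)
# prediction is 0.525 for this two-word example
```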
Trained classifiers may use n-gram analysis to compare groupings of characters, or n-grams, from a media item to a probability distribution showing, given the use of the n-gram, whether a corresponding media item is in a particular language. The probability distributions for a trained classifier may be generated using a training process that analyzes a body of multiple training media items, where each training media item has a language classification. In the training process, one or more n-grams within the media items of a particular length, e.g., four or five characters, may be analyzed to determine a frequency with which that n-gram appears across the various languages of the training media items. In some embodiments, trained classifiers may be trained for use with particular types or categories of media items. Category based training may provide more accurate classifications because the way language is used in the training data, e.g., language tagged pages from Wikipedia, may provide probability distributions inconsistent with the way that same language is used in other contexts, e.g., on a social media platform. For example, an n-gram, e.g., “she said,” may show up regularly during person-to-person communication on a social media platform but may be very infrequent in an informational source, e.g., Wikipedia. Therefore, a trained classifier trained on a Wikipedia data source, when presented with the “she said” n-gram, may assign English a probability that is too low.
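The n-gram comparison can be sketched as follows. The four-character n-gram length matches the example above, while the probability table and scoring rule are toy assumptions:

```python
from collections import Counter

def extract_ngrams(text, n=4):
    """All character sequences of length n from the text."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def trained_prediction(text, distributions, n=4):
    """Accumulate per-language probability mass over the text's n-grams."""
    totals = Counter()
    for gram in extract_ngrams(text, n):
        for language, probability in distributions.get(gram, {}).items():
            totals[language] += probability
    return totals

# Toy distributions: "she " carries strong conversational-English signal.
toy_distributions = {"she ": {"English": 0.8}, "said": {"English": 0.7}}
scores = trained_prediction("she said", toy_distributions)
```

A production classifier would normalize the accumulated mass into a distribution across languages rather than returning raw totals.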
Building user language models and classifying media items may be interrelated processes because the probability that a given media item is in a particular language may depend on a likelihood the user who produced the media item speaks that language, or conversely observing a user interacting with a media item in a particular language may increase the likelihood the user understands that language. For each context characteristic used to improve a language classification of a media item, subsequent observations of users interacting with that media item are also improved. Likewise, for each action observed for a user to improve that user language model, subsequent media items created by that user are more likely to have a correct language classification based on that language model. Accordingly, this feedback loop between the media item classification process and the language modeling process enables each to be enhanced by characteristics and actions observed in the other process.
Several embodiments of the language classification technology are discussed below in more detail in reference to the Figures. Turning now to the Figures,
CPU 110 may be a single processing unit or multiple processing units in a device or distributed across multiple devices. As used herein, a processor, e.g., CPU 110, may be a single processing unit or multiple processing units in a device or distributed across multiple devices. CPU 110 may be coupled to other hardware devices, for example, with the use of a bus, e.g., a PCI or SCSI bus. The CPU 110 may communicate with a hardware controller for devices, e.g., for a display 130. Display 130 may be used to display text and graphics. One example of a display 130 is a touchscreen that provides graphical and textual visual feedback to a user. In some implementations, the display includes the input device as part of the display, e.g., when the input device is a touchscreen. In some implementations, the display is separate from the input device. Examples of standalone display devices are: an LCD display screen, an LED display screen, a projected display (such as a heads-up display device), and so on. Other I/O devices 140 may also be coupled to the processor, e.g., a video or audio card, USB or other external devices, printer, speakers, CD-ROM drive, DVD drive, disk drives, or Blu-Ray devices. In some implementations, other I/O devices 140 also include a communication device capable of communicating with a network node, either wirelessly or over a wired connection. The communication device may communicate with another device or a server through a network using, for example, TCP/IP protocols. For example, device 100 may utilize the communication device to distribute operations across multiple network devices.
The CPU 110 has access to a memory 150. A memory includes one or more of various hardware devices for volatile and non-volatile storage, and may include both read-only and writable memory. For example, a memory may comprise random access memory (RAM), read-only memory (ROM), writable non-volatile memory, e.g., flash memory, hard drives, floppy disks, CDs, DVDs, magnetic storage devices, tape drives, device buffers, and so forth. A memory is not a propagating electrical signal divorced from underlying hardware, and is thus non-transitory. The memory 150 includes program memory 160 that contains programs and software, such as an operating system 162, user language model builder 164, media item classifier 166, and any other application programs 168. The memory 150 also includes data memory 170 that includes any configuration data, settings, user options and preferences that may be needed by the program memory 160 or by any element of the device 100.
The language classification technology is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with the technology include, but are not limited to, personal computers, server computers, handheld or laptop devices, cellular telephones, tablet devices, e-readers, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
Network 220 can be a local area network (LAN) or a wide area network (WAN), but may also be other wired or wireless networks. The client computing devices 205 can be connected to network 220 through a network interface, e.g., by a wired or wireless network.
General software 320 may include various applications including a BIOS 322, an operating system 324, and local programs 326. Specialized components 340 may be subcomponents of a general software application 320, e.g., a local program 326, or may be invoked as remote processes. Specialized components 340 may include interface 342, user baseline predictor 344, action expectation predictor 346, context classifiers 348, trained classifiers 350, dictionary classifiers 352, trained classifier builder 354, and training media data 356.
In some embodiments, a system may request creation of a user language model. In response, interface 342 may receive an identification of user characteristics or user actions. Received user characteristics may be associated with corresponding likelihoods that users with that particular characteristic are facile with a particular language. Received user actions may be associated with corresponding values indicating an expectation that users who perform that particular action are facile with a particular language. Received user characteristics and associated likelihoods may be passed to user baseline predictor 344, which may combine the specified likelihoods into one or more baseline likelihoods that, based on the received characteristics, the user is facile with one or more languages. Received user actions and associated values, as well as the baseline likelihood, may be passed to action expectation predictor 346, which may use the specified expectation value to update the baseline likelihood to generate a current prediction. The current prediction may undergo further updating as the system learns additional characteristics about the user or learns of additional actions taken by the user. The baseline likelihood and current prediction may be probability values for a single language, distributions across multiple languages, or may each comprise one or more binary indicators for particular languages the system predicts the user is facile with. The baseline likelihood and current prediction may also be broken down into sub-categories of being facile with a language, where the sub-categories may include any of being able to read a language, being able to write the language, being able to speak the language, being able to understand the language when spoken, or any combinations thereof. In some embodiments, the current prediction may also include, based on the identified user characteristics and observed actions, a likely location of the user. 
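One possible shape for such a user language model, with per-language predictions broken into the sub-abilities described above. The class and field names are assumptions for illustration, not the actual data structures:

```python
from dataclasses import dataclass, field

@dataclass
class LanguageFacility:
    """Sub-ability predictions for one language."""
    read: float = 0.0
    write: float = 0.0
    speak: float = 0.0
    understand: float = 0.0

@dataclass
class UserLanguageModel:
    """Current prediction, per language, for one user."""
    user_id: str
    facility: dict = field(default_factory=dict)  # language -> LanguageFacility

# The sub-ability figures from the Spanish example above.
model = UserLanguageModel(user_id="user-123")
model.facility["Spanish"] = LanguageFacility(read=0.78, write=0.78, speak=0.72)
```

A binary-indicator variant could store booleans instead of floats, and a distribution variant could constrain the per-language values to sum to one.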
The current prediction may be used as a language model for a user associated with the identified user characteristics and observed actions. Generation of user language models is discussed in more detail below in relation to
In some embodiments a system may request language classification of a media item. In response, interface 342 may receive indications of the context of the media item and words or character groupings from the media item. Each received context may be associated with a corresponding computed likelihood that a media item with this context is in a particular language. Received media item contexts and associated likelihoods may be passed to context classifiers 348. In some embodiments, the observed contexts of the media item are passed to the context classifiers 348, and context classifiers 348 retrieve or compute associated likelihoods. In some embodiments, an indication of the media item is passed to the context classifiers 348. Context classifiers 348 may identify contexts of the media item that are relevant to language prediction, or that occur frequently enough or with sufficient intensity to meet a threshold level sufficient to be predictive of a language of the media item. Context classifiers 348 may combine the specified likelihoods into one or more context predictions that, based on the contexts, predict a particular language for the media item.
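A sketch of how context classifiers 348 might combine per-context likelihoods, including the frequency/intensity threshold mentioned above. The threshold value, weights, and combination rule are assumptions:

```python
def context_prediction(context_signals, minimum_weight=0.1):
    """context_signals: list of (likelihood, weight) pairs, where the weight
    reflects how frequently or intensely the context was observed. Signals
    below the threshold are treated as non-predictive and dropped."""
    usable = [(p, w) for p, w in context_signals if w >= minimum_weight]
    if not usable:
        return None
    total_weight = sum(w for _, w in usable)
    return sum(p * w for p, w in usable) / total_weight

# Creator's language model (strong signal), audience makeup (weaker signal),
# and a third signal too weak to meet the threshold.
prediction = context_prediction([(0.65, 1.0), (0.60, 0.5), (0.90, 0.01)])
```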
Trained classifiers 350 may receive words or character groupings from the media item. Trained classifiers 350 may select one or more n-grams, using a specified n-gram length, from the media item words or character groupings. Using defined probabilities for the selected n-grams, the trained classifiers may generate a trained prediction that the media item is in a particular language, given the selected n-grams.
Dictionary classifiers 352 may receive words from the media item and may determine that one or more of the words are indicative, to a particular degree, of a particular language. Using the particular degrees of indication corresponding to the one or more words, the dictionary classifiers 352 may compute a dictionary prediction that the media item is in a particular language. Any one or any combination of a context prediction, a trained prediction, and a dictionary prediction may be used to assign a language classification to the media item. Each of the context prediction, the trained prediction, the dictionary prediction and the assigned language classification may comprise one or more language probabilities, Boolean values for individual languages, or distributions across multiple languages. Classification of media items is discussed in more detail below in relation to
In some embodiments interface 342 may receive a request for creation of one or more trained classifiers. In response, trained classifier builder 354 may retrieve, from training media data 356, training media items. In some embodiments, trained classifier builder 354 may generate trained classifiers corresponding to a particular category of media items. One category for building trained classifiers may be for social media. When trained classifier builder 354 builds a social media trained classifier, each training media item retrieved from training media data 356 is gathered from a social media source and has an associated language identifier based on one or more of: a language model associated with a user who created the media item, a common identified language in both a language model associated with a user who created the media item and in a language model associated with a user who received the media item, and context characteristics regarding the source or use of the media item within a social media source. Trained classifier builder 354 selects multiple n-grams of a specified length and computes, for each selected n-gram, a probability distribution that a particular media item is in a particular language given that the n-gram is in the particular media item. Each probability distribution may be based on an analysis of the frequency of that n-gram in a subset of the training media items defined by each media item in the subset having the same language identifier. Trained classifier building is discussed in more detail below in relation to
Those skilled in the art will appreciate that the components illustrated in
The process then continues to block 406, where it computes a baseline prediction that the user is facile with one or more languages. The computation of the baseline prediction may be based on the characteristics identified in block 404. In some embodiments, characteristics identified in block 404 may be associated with a particular likelihood. In some embodiments, a likelihood value is not pre-assigned to some of the characteristics identified in block 404, and therefore likelihoods for these characteristics are computed as part of block 406. For example, an algorithm may determine that, based on current census data, an IP address for Mexico City provides an 88% likelihood that a user speaks Spanish. Likelihoods associated with characteristics may vary according to the characteristic value. For example, an IP address indicating a user lives in Vermont may be a strong language predictor, while an IP address indicating a user lives in southern California may be only a moderate language predictor.
In some embodiments, the baseline prediction may be a probability distribution curve. For example, block 406 may create a probability distribution indicating a confidence level that a user is facile with a language where the height and shape of the curve are based on a combination of the likelihoods associated with each characteristic identified in block 404. In other embodiments, the baseline prediction for a language may be a single value based on a combination of the likelihoods associated with each characteristic identified in block 404. In some embodiments, the identified characteristics may have associated weights indicating how reliable a predictor the characteristic is based on an intensity or frequency of the characteristic. For example, a user profile with 20% of its associated friend profiles having a German language indicator may provide a 27% likelihood the user associated with the profile is facile with German, while a user profile with 85% of its associated friend profiles having an Italian language indicator may provide a 92% likelihood the user associated with the profile is facile with Italian. In some embodiments, characteristics may have likelihoods of a set value and/or no corresponding weight. For example, whether there is a browser setting indicating a French preference is a binary question that, if true, may be indicated by a system administrator to provide a 60% likelihood that the corresponding user is facile with French.
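A simple linear intensity-to-likelihood mapping happens to reproduce both friend-profile figures above (20% of friends yielding 27%, 85% of friends yielding 92%). The linear form and its coefficients are assumptions for illustration; any monotone mapping from intensity to likelihood would serve the same purpose:

```python
def friend_signal_likelihood(friend_fraction, slope=1.0, offset=0.07):
    """Map the fraction of friend profiles carrying a language indicator
    to a likelihood that the user is facile with that language, capped at 1."""
    return min(1.0, slope * friend_fraction + offset)

german_likelihood = friend_signal_likelihood(0.20)   # about 0.27
italian_likelihood = friend_signal_likelihood(0.85)  # about 0.92
```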
The process then continues to block 408 where it receives one or more indications of actions taken by the user indicative of a language facility. For example, actions indicating a language facility may include one or more of: composing content in an identified language, accessing or interacting with content in an identified language, translating a content item to a particular language, and translating a content item from a particular language. In some embodiments, indications of actions may be actions taken by other users. For example, if 100 users whose user profiles indicate they speak only Swahili access content created by a particular user, these actions may indicate that the user created a Swahili content item, increasing the probability the user is facile with Swahili. Once actions are identified, the process continues to block 410.
At block 410, the process uses expectation values associated with the actions indicated in block 408 to update the baseline prediction. Similarly to the user characteristics, actions indicated in block 408 may have specified expectation values or the process at block 410 may calculate expectation values dynamically. Actions may have an associated weight based on intensity or frequency of the action. In some embodiments, the updating may be repeated over multiple user language model updates as additional actions and/or user characteristics are observed. Some actions may decrease a prediction value indicating that a user is facile with a particular language. For example, a baseline likelihood may establish that a user is 60% likely to be facile with Vietnamese, but then on multiple occasions the user is observed translating content out of Vietnamese. In this example, the computed probability that the user is facile with Vietnamese may be decreased to 15% based on the translation activity. The process then proceeds to block 412, where it ends.
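The Vietnamese example can be reproduced with a simple update rule that moves the current prediction toward an action's expectation value. The rule and the weight are illustrative assumptions, not the actual computation:

```python
def update_prediction(current, expectation, weight):
    """Move the prediction toward the action's expectation value; the weight
    reflects the action's observed intensity or frequency."""
    return current + (expectation - current) * weight

prediction = 0.60  # baseline likelihood the user is facile with Vietnamese
for _ in range(2):  # two observed translations out of Vietnamese
    prediction = update_prediction(prediction, expectation=0.0, weight=0.5)
# prediction has now dropped to 0.15, as in the example above
```

Actions with a high expectation value (e.g., composing content in a language) would instead pull the prediction upward under the same rule.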
Process 400 may result in a language model comprising a value, a probability distribution, or any other data organization configured to indicate a probability a user is facile with a language. In addition, process 400 may create a language model for a single language, or may create a distribution across multiple languages. As discussed above, process 400 in some embodiments may result in a language model with a separate prediction for a user's facility to read versus their facility to write a language. In some embodiments, some context characteristics or observed actions may be associated with a first likelihood that a user is able to read a language and a different likelihood that a user is able to write a language. For example, an observed action of a user translating a content item to Italian may provide a 60% chance that the user can read Italian, but only a 40% chance the user can write in Italian.
At block 506, the process identifies a set of one or more context characteristics. A context characteristic may be any data, apart from the content of the media item, which indicates the media item is in a particular language. For example, a context characteristic may include information about the media item's source, information about the media item's use, or characteristics of users who interact with the media item. The process then continues to block 508.
At block 508, the process computes a likelihood that the media item is in a particular language. Each context characteristic identified in block 506 may have an associated likelihood value or a likelihood value may be generated dynamically at block 508. For example, if the language model for a user that originated a media item indicates the user is only facile with English, this context characteristic may provide a 65% likelihood that the media item is in English. As another example, block 508 may perform an analysis of all the users that have interacted with a media item. If a threshold amount, e.g., 75%, of the interacting users are facile with multiple languages including a first particular language, the process may compute a 60% likelihood the media item is in that first language. However, if 20% of the users who interact with the media item only speak a second different language, the likelihood for this context characteristic may be calculated as only 40% likely for each of the first and second languages.
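The audience analysis at block 508 might look like the following sketch, using the 75% threshold and 60% likelihood from the example. The data shapes and fixed return value are assumptions:

```python
def audience_likelihood(interacting_user_languages, language,
                        threshold=0.75, likelihood_if_met=0.60):
    """interacting_user_languages: one set of languages per interacting user,
    drawn from their user language models. If a sufficient fraction of the
    audience is facile with `language`, return a fixed likelihood that the
    media item is in that language; otherwise return no signal."""
    if not interacting_user_languages:
        return None
    facile = sum(1 for langs in interacting_user_languages if language in langs)
    if facile / len(interacting_user_languages) >= threshold:
        return likelihood_if_met
    return None

# Three of the four interacting users are facile with Spanish (75%).
audience = [{"Spanish", "English"}, {"Spanish"}, {"Spanish", "French"}, {"English"}]
likelihood = audience_likelihood(audience, "Spanish")  # meets threshold -> 0.60
```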
At block 510, the process applies trained n-gram analysis to the media item. Trained n-gram analysis selects groupings of characters (n-grams) from a text version of the language portion of the media item, and compares the n-grams to a specified probability distribution from a trained language classifier. The trained language classifier identifies, for a given n-gram, a probability that the n-gram is from a particular language based on that n-gram's frequency in a set of training data for that language. Trained language classifiers are discussed in more detail below in relation to
At block 512, the computed context likelihood and the results of the n-gram analysis are combined. In some embodiments only one of these results may be computed, in which case the language classification for the media item may be based on only one of the context likelihood and the results of the n-gram analysis. The combination may be accomplished in various ways, e.g., averaging the classification result values or distributions or using weighting by confidence factors. Confidence factors may include the number of context characteristics found or n-grams matched, or a determination that particular context characteristics or matched n-grams are highly reliable predictors.
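One way to perform the combination at block 512 is to weight each classifier's result by a confidence factor. The confidence values below are illustrative assumptions:

```python
def combine_predictions(predictions):
    """predictions: list of (probability, confidence) pairs; classifiers
    that did not run contribute None and are skipped."""
    available = [(p, c) for p, c in predictions if p is not None]
    if not available:
        return None
    total_confidence = sum(c for _, c in available)
    return sum(p * c for p, c in available) / total_confidence

# The n-gram analysis matched many n-grams (high confidence); only one
# context characteristic was found (low confidence); the dictionary
# classifier did not run for this media item.
classification = combine_predictions([(0.80, 3.0), (0.60, 1.0), (None, 0.0)])
```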
In some embodiments, other or alternate predictions, e.g. a dictionary prediction, may also be combined in block 512. A dictionary prediction selects words from a media item and provides a language prediction of the media item based on an identified frequency of words in a language. In some embodiments that implement a dictionary prediction, only particular words that are specifically predictive are selected to make a language prediction for the media item.
The combination or selection of a prediction becomes a language classification for the media item. This classification may be in the form of a value like “Chinese” or a probability distribution across multiple languages. The language classification may be associated with a confidence factor and be part of a data structure that provides access to the context characteristics and/or matched n-grams that were used to generate the classification. The language classification is then stored. The process then continues to block 514, where it ends.
Process 600 begins at block 602 and continues to block 604. At block 604, the process obtains training data. Trained classifiers may be trained for use with particular types of media items by selecting training data comprising media items from a source consistent with that type of media item. For example, trained classifiers may be created to classify social media content items by training the classifier using social media content items that already have a language classification. In some embodiments, the training data may be selected such that each training media item in the training data is selected using a social media source and each selected training media item was classified as being in a particular language based on A) a language model of a user who created the media item indicating that the user is mono-linguistic, B) there being a common identified language in both a language model associated with a user who created the media item and in a language model associated with a user who received the media item, or C) a group or page to which the media item was added being in a single known language. The process then continues to block 606.
At block 606, the process selects one or more n-grams of a set length from the training data. The set length may vary between trained classifiers of the system. In some embodiments, the n-gram length is four or five characters. In some embodiments, all possible character sequences of the set length in the training data are selected as n-grams. The process then continues to block 608.
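Selecting all possible character sequences of a set length reduces to a sliding window over the text; a minimal sketch:

```python
def extract_ngrams(text, n=4):
    """Return every character n-gram of length `n` appearing in `text`."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]
```

With `n=4`, the string "merci" yields the n-grams "merc" and "erci".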
At block 608, a probability distribution is computed for each n-gram selected in block 606. Each probability distribution may show a rate at which that n-gram appears across the languages of the training media items. The value corresponding to each language in the probability distribution for an n-gram may be based on an analysis of the frequency that the n-gram appears in the training media items of that language. The resulting distribution for an n-gram indicates, when that n-gram appears in a media item, the likelihood for each language that the source media item is in that language. In some embodiments, the probability distribution may be a single value for a single language. For example, the distribution for the "erci" n-gram may indicate a 60% probability that a media item containing that n-gram is in French. The generated probability distribution is saved as part of the trained classifier. The process then continues to block 610, where it ends.
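Counting how often each n-gram appears in training items of each language, then normalizing per n-gram, yields these distributions. A sketch assuming the training data is available as (text, language) pairs; function names are illustrative:

```python
from collections import defaultdict

def train_ngram_classifier(training_items, n=4):
    """Build a per-n-gram probability distribution over languages.

    `training_items` is an iterable of (text, language) pairs.
    """
    counts = defaultdict(lambda: defaultdict(int))  # ngram -> language -> count
    for text, language in training_items:
        for i in range(len(text) - n + 1):
            counts[text[i:i + n]][language] += 1
    # Normalize each n-gram's counts into a distribution across languages.
    model = {}
    for ngram, by_lang in counts.items():
        total = sum(by_lang.values())
        model[ngram] = {lang: c / total for lang, c in by_lang.items()}
    return model
```

An n-gram seen only in French training items then maps to a single-value distribution, matching the "erci" example above.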
Several embodiments of the described technology are described above in reference to the Figures. The computing devices on which the described technology may be implemented may include one or more central processing units, memory, input devices (e.g., keyboard and pointing devices), output devices (e.g., display devices), storage devices (e.g., disk drives), and network devices (e.g., network interfaces). The memory and storage devices are computer-readable storage media that may store instructions that implement at least portions of the described technology. In addition, the data structures and message structures may be stored or transmitted via a data transmission medium, such as a signal on a communications link. Various communications links may be used, such as the Internet, a local area network, a wide area network, or a point-to-point dial-up connection. Thus, computer-readable media can comprise computer-readable storage media (e.g., “non-transitory” media) and computer-readable transmission media.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims that follow.
This application is a continuation of U.S. patent application Ser. No. 14/302,032, filed on Jun. 11, 2014, the disclosure of which is hereby incorporated herein in its entirety by reference.
Number | Date | Country
---|---|---
20170270102 A1 | Sep 2017 | US
 | Number | Date | Country
---|---|---|---
Parent | 14302032 | Jun 2014 | US
Child | 15445978 | | US