Users reviewing online information (e.g., on a web page or in a social media post) are often interested in translations of content items included in the online information. For example, users might be interested in translations of words, sentences, phrases, paragraphs, or even pages. To provide the translations to users on demand, as and when requested, it is desirable to provide computer-implemented machine translations of content items. Often, machine translations are generated using natural language processing (NLP) algorithms. NLP algorithms take a content item as input (any item containing language, whether text, images, audio, video, or other multimedia) and generate a machine translation, which is then presented to users. However, content items can be inaccurately translated due to, for example, variants of the same language (e.g., American English versus British English), different meanings of the same word, non-standard phrases (e.g., slang), etc. For example, the word “lift” can mean “move upward” among speakers of American English (as that word is commonly used in America), whereas it can mean “elevator” for British English speakers. A content item including the phrase, “press the button for the lift,” could be translated into either “press the button for the elevator” or “press the button to go up.” In addition, machine translations of a content item are often based on dictionary translations and do not consider context, which can make a significant difference, for example in slang or colloquial passages.
When translating a content item, a machine translation output reranking system can create multiple possible translations automatically, e.g., by using various machine translation engines. Each possible translation can be incorporated into a web page or social media post. Users viewing the web pages and/or social media posts can be asked to review (e.g., provide feedback on) the translations. Using the review results, a preferred translation can be selected to use for future viewers. Thus, according to implementations of the present disclosure, a preferred machine translation of a content item can be selected from multiple machine translations of the content item based on crowd-sourced determinations of their quality. More specifically, different algorithms can be used to generate different translations of the same content item, and feedback can be received from users regarding the quality of each machine translation. The received feedback can then be used to compute an aggregate score for each machine translation. The machine translation that receives the highest aggregate score can be identified as the preferred translation.
For example, a social media post (i.e., a content item) in German can be translated into multiple English translations using different machine translation engines. These multiple English translations can then be provided by a computer server to various groups of English language users. For example, a first group receives the first translation, a second group receives the second translation, and so on. After receiving the translations, the English language users can review the machine translations and provide their reviews regarding the quality of the machine translations, such as by clicking on a star rating. The user review (e.g., feedback) of a translation is received by the computer server and can be combined with other user reviews to compute a score for the quality of the translation. This score, for example, can be indicative of a perceived accuracy of a machine translation. The machine translation with the highest score can be declared the preferred machine translation, and then can be subsequently provided to other users. If there is a tie in the scores, or if there is otherwise no preferred machine translation, then the machine translations with the highest scores, for example those above a threshold level, can be provided to additional user groups for further feedback. The additional user groups, in some implementations, can include some of the users who provided feedback earlier. This process can be repeated until a clearly highest-scoring translation, a “preferred machine translation,” is identified. In some implementations, the process can be stopped without identifying a clearly highest-scoring translation when either a maximum number of iterations is reached or a maximum number of scoring users is reached, at which point the highest-scoring machine translation is selected as the preferred machine translation. In some implementations, the results of the scoring can be fed back into the machine translation algorithms as model training data, or to identify particular features, parameters, or engines that provide the best translation.
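For illustration only, a single scoring round of this flow can be sketched in Python. The `engines` list (callables wrapping different machine translation engines) and the `collect_ratings` feedback source (returning one user group's numeric ratings, e.g., star values, for a candidate translation) are hypothetical interfaces, not part of any described system:

```python
from statistics import mean

def rank_translations(content_item, engines, collect_ratings):
    """One scoring round: translate the content item with each engine,
    poll one user group per candidate translation, and rank candidates
    by aggregate score (here, the mean of the group's ratings)."""
    candidates = [engine(content_item) for engine in engines]
    scored = [(mean(collect_ratings(c)), c) for c in candidates]
    scored.sort(key=lambda sc: sc[0], reverse=True)
    return scored  # scored[0][1] is the current best translation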
In some implementations, groups of users are selected to review a translation based on factors such as their facility with the target language of the machine translation or an association with a classification. For example, a social media post could include soccer slang or a colloquial term that can be understood only by some audiences, e.g., users who speak with a particular English accent in Liverpool. Because context can affect the meaning of the content item, users who understand or appreciate the context are likely better suited to provide feedback on the quality of a machine translation. In some implementations, the machine translation reranking system associates users with a classification, such as a topic, a location, a theme, etc. Content items to be translated can also be associated with a classification. Users with a classification matching the content item classification can be selected to review translations.
In some implementations, a particular user can be repeatedly polled for feedback on the quality of different machine translations. Thus, the machine translation reranking system can maintain a database of past user feedback and use this database to determine users' historical ratings of computer-generated translations of content items. Such historical data can be used to assign a weight to user feedback, adjusting for users who provide consistently low or high ratings compared to an average rating.
Several implementations of the described technology are discussed below in more detail in reference to the figures.
CPU 110 can be a single processing unit or multiple processing units in a device or distributed across multiple devices. CPU 110 can be coupled to other hardware devices, for example, with the use of a bus, such as a PCI bus or SCSI bus. The CPU 110 can communicate with a hardware controller for devices, such as for a display 130. Display 130 can be used to display text and graphics. In some examples, display 130 provides graphical and textual visual feedback to a user. In some implementations, display 130 includes the input device as part of the display, such as when the input device is a touchscreen or is equipped with an eye direction monitoring system. In some implementations, the display is separate from the input device. Examples of display devices are: an LCD display screen, an LED display screen, a projected display (such as a heads-up display device or a head-mounted device), and so on. Other I/O devices 140 can also be coupled to the processor, such as a network card, video card, audio card, USB, FireWire, or other external device, camera, printer, speakers, CD-ROM drive, DVD drive, disk drive, or Blu-ray device.
In some implementations, the device 100 also includes a communication device capable of communicating wirelessly or wire-based with a network node. The communication device can communicate with another device or a server through a network using, for example, TCP/IP protocols. Device 100 can utilize the communication device to distribute operations across multiple network devices.
The CPU 110 has access to a memory 150. A memory includes one or more of various hardware devices for volatile and non-volatile storage, and can include both read-only and writable memory. For example, a memory can comprise random access memory (RAM), CPU registers, read-only memory (ROM), and writable non-volatile memory, such as flash memory, hard drives, floppy disks, CDs, DVDs, magnetic storage devices, tape drives, device buffers, and so forth. A memory is not a propagating signal divorced from underlying hardware; a memory is thus non-transitory. Memory 150 includes program memory 160 that stores programs and software, such as an operating system 162, machine translation ranker 164, and any other application programs 166. Memory 150 also includes data memory 170 that can include different machine translation algorithms, fixed and/or variable parameters used in the machine translation algorithms, reviews or aggregate scores indicating a quality of one or more machine translations, content items used as inputs to the machine translations, user feedback data, user classification data, user classification algorithms, content item classification algorithms, multiple machine translations of content items, preferred machine translations of content items, configuration data, settings, and user options or preferences which can be provided to the program memory 160 or any element of the device 100.
The disclosed technology is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with the technology include, but are not limited to, personal computers, server computers, handheld or laptop devices, cellular telephones, wearable electronics, tablet devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
In some implementations, server 210 can be an edge server which receives client requests and coordinates fulfillment of those requests through other servers, such as servers 220A-C. Server computing devices 210 and 220 can comprise computing systems, such as device 100. Though each server computing device 210 and 220 is displayed logically as a single server, server computing devices can each be a distributed computing environment encompassing multiple computing devices located at the same or at geographically disparate physical locations. In some implementations, each server 220 corresponds to a group of servers.
Client computing devices 205 and server computing devices 210 and 220 can each act as a server or client to other server/client devices. Server 210 can connect to a database 215. Servers 220A-C can each connect to a corresponding database 225A-C. As discussed above, each server 220 may correspond to a group of servers, and each of these servers can share a database or can have their own database. Databases 215 and 225 can warehouse (e.g., store) information such as different machine translation algorithms, fixed and/or variable parameters used in the machine translation algorithms, reviews or aggregate scores indicating a quality of one or more machine translations, content items used as inputs to the machine translations, user feedback data, user classification data, user classification algorithms, content item classification algorithms, multiple machine translations of content items, preferred machine translations of content items, and the like. Though databases 215 and 225 are displayed logically as single units, databases 215 and 225 can each be a distributed computing environment encompassing multiple computing devices, can be located within their corresponding server, or can be located at the same or at geographically disparate physical locations.
Network 230 can be a local area network (LAN) or a wide area network (WAN), but can also be other wired or wireless networks. Network 230 may be the Internet or some other public or private network. The client computing devices 205 can be connected to network 230 through a network interface, such as by wired or wireless communication. While the connections between server 210 and servers 220 are shown as separate connections, these connections can be any kind of local, wide area, wired, or wireless network, including network 230 or a separate public or private network.
General software 320 can include various applications, including an operating system 322, local programs 324, and a BIOS 326. Specialized components 340 can be subcomponents of a general software application 320, such as a local program 324. Specialized components 340 can include machine translation generation engine 344, content item classification engine 346, scoring engine 348, user group defining engine 350, and components which can be used for controlling and receiving data from the specialized components, such as interface 342.
Machine translation generation engine 344 can generate multiple computer-generated translations of a content item, where each of the multiple translations is created in the same target language. In some implementations, machine translation generation engine 344 can be adjusted by implementing different algorithms, parameters, and classifiers for performing machine translations. Additionally, user feedback (or the outcome of scoring engine 348) received in connection with a machine translation can also be an input to machine translation generation engine 344 as model training data, or to identify particular features, parameters, or even algorithms that provide better machine translations.
Content item classification engine 346 can be configured to classify a content item into one or more categories or sub-categories, based, for example, on a topic of interest, a location, a theme, or a source of the content item. This classification can be according to one or more classification algorithms.
User group defining engine 350 provides translations of the content item to groups of users and receives feedback from the groups of users. Thus, a “user group” comprises users who will be polled for feedback on the quality (e.g., a perceived accuracy) of a machine translation of a content item. In some implementations, the user group defining engine 350 classifies users into a user category based on their interest in a topic, a location, or a theme. For example, a first user group can include Italian language users who live in Seattle and are football fans. A second user group can include Italian language users who live in Seattle and are golf fans. A third user group can include Italian language users who live in Seattle and are baseball fans. Thus, in some implementations, the machine translation reranking system maintains a database of users. The users can, in some scenarios, be members of a social media system, employees of an organization, members of a class, or in general can be affiliated with any real or virtual entity.
In some implementations, user group defining engine 350 can also select users. This selection, for example, can be based on a mapping between how a content item has been classified and how users have been classified or defined. If a content item has been classified as belonging to a “football” category and is applicable to people living in Seattle, then the first user group (with reference to the above example) can be selected for providing feedback on the content item translation. In other words, users in a user group can be selected based on a match between an item category and a user category.
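As a minimal sketch of this selection step, assuming each user record carries a hypothetical `classifications` set attribute (e.g., `{"italian", "seattle", "football"}`):

```python
def select_reviewers(users, item_classifications, group_size=10):
    """Select users whose classifications overlap those of the content
    item, per the item-category/user-category matching described above."""
    matched = [u for u in users
               if u.classifications & item_classifications]
    return matched[:group_size]
```

Applied to the example above, an item classified as `{"football", "seattle"}` would select members of the first user group.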
Scoring engine 348 assigns an aggregate score to a machine translation based on the feedback received from a group of users, such as a group defined by user group defining engine 350, that received the translation. The scoring engine 348 also can be configured to select a machine translation as the preferred machine translation in response to determining that the selected machine translation has the highest aggregate score amongst all machine translations of the content item. For example, a first group receives the first translation, a second group receives the second translation, and so on. Based on feedback from individual users in each corresponding group, the first translation receives a first aggregate score, the second translation receives a second aggregate score, and so on. Thus the first aggregate score can be an aggregate over all scores supplied by individual users in the first group. Similarly, the second aggregate score can be an aggregate over all scores supplied by individual users in the second group, and so on. If the first machine translation gets the highest aggregate score, then the scoring engine can identify the first machine translation as the preferred machine translation. In various implementations, a machine translation is selected as the preferred machine translation if that machine translation's aggregate score is above a threshold score level, or if that machine translation's aggregate score is a threshold amount above the aggregate scores of other machine translations.
In a scenario where a preferred machine translation has not been found, e.g., when two or more machine translations are candidates for being the preferred machine translation of a content item because none has a clearly highest aggregate score, scoring engine 348 can be configured to repeat the aggregate scoring process iteratively until either a preferred machine translation is identified based on feedback from additional user groups or a terminating condition is reached. Thus, for example, in a first iteration, ten users can be polled for each machine translation. In a second iteration, forty users can be polled for each of the top five scoring machine translations. In a third iteration, sixty users can be polled for each of the top three scoring machine translations. There is no limitation on the number of users included in a user group that can be polled. In implementations of the machine translation reranking system, an aggregate score can be updated based on the feedback from additional user groups. The terminating condition, for example, can be a maximum number of iterations being reached, a maximum number of users in the additional user group(s) being selected, or a combination of the above. This iterative narrowing is sketched below.
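A sketch of the iterative narrowing, assuming a hypothetical `collect_ratings(candidate, n_users)` helper that polls a fresh group of the given size and returns its numeric ratings:

```python
from statistics import mean

def narrow_to_preferred(scored, collect_ratings, poll_sizes=(10, 40, 60),
                        margin=0.5, keep=3):
    """Iteratively re-poll near-tied top candidates with larger groups.

    scored: (aggregate_score, translation) pairs sorted best-first.
    poll_sizes: users polled per candidate in each iteration; exhausting
    the tuple acts as the maximum-iterations terminating condition."""
    for n_users in poll_sizes:
        best = scored[0][0]
        # Keep only candidates within `margin` of the leader, capped at `keep`.
        tied = [sc for sc in scored if best - sc[0] < margin][:keep]
        if len(tied) == 1:  # a clearly highest-scoring translation
            return tied[0][1]
        # Update aggregate scores with feedback from a larger group.
        scored = sorted(
            ((mean(collect_ratings(c, n_users)), c) for _, c in tied),
            key=lambda sc: sc[0], reverse=True)
    return scored[0][1]  # terminating condition: keep the current top scorer
```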
In some implementations, the aggregate score also depends on a “user-importance weight” for a user; such a weight can be indicative of how much importance is given to that user's feedback relative to the other users in a group. In some implementations, the user-importance weight for a given content item depends on a comparison of the reviews supplied by the user for particular content item translations with the average reviews supplied by all users for the same translations. This illustrates that a user's “average scoring” performance in the past can be considered in assigning weights. In some implementations, the user-importance weight can be, alternatively or in addition, based on a determination of how likely that user is to be able to correctly review the translation. For example, for a translation of a post to a car enthusiast website, a user who is determined to know about cars, such as based on a history of the user interacting with content items about cars, posting to other car-topic websites, talking to other car enthusiasts, etc., can be given a higher weight.
Those skilled in the art will appreciate that the components illustrated in the figures described above can be altered in a variety of ways.
At block 406, a maximum number of users is set and a user count is initialized to zero to iterate over all users in a group. The size of a user group depends on the maximum number of users selected for that group. In some implementations, a user group is defined by the machine translation reranking system, i.e., users in a user group are selected to be polled for feedback on the quality (e.g., a perceived accuracy) of a machine translation of a content item.
In some implementations, users can be given various classifications indicating topics or content items about which the user has knowledge or with which the user frequently interacts, and whose associated language the user is therefore likely to understand. For example, a user who interacts with motorcycle content, sends messages related to motorcycles, or has friends who know about motorcycles can be classified as knowing language about motorcycles. As another example, a user from a particular location, such as South Korea, can be classified as speaking a dialect specific to that region. In addition, translations can be assigned one or more classification labels, such as based on keywords or other analysis of the content item or its translation, characteristics of a source of the content item, or externals of the content item such as where it was posted, what IP address it was posted from, etc. Users can be selected to review the translation based on a match between the user classifications and the classification labels given to the translation. Alternatively or in addition, reviews given by users with a classification matching a classification label assigned to the translation can be given a higher weight, as in the sketch below.
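A small illustrative sketch of that weighting variant; the `boost` factor is an arbitrary assumption:

```python
def review_weight(user_classifications, translation_labels, boost=1.5):
    """Give a review a higher weight when the reviewer's classifications
    overlap the classification labels assigned to the translation."""
    return boost if user_classifications & translation_labels else 1.0
```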
At block 410, the process receives a request to translate a content item. A request can be accompanied by the content item. In various implementations, the request can indicate a target language for the translation of the content item. Next, at block 414, a machine translation of the content item can be generated using a translation algorithm. The generated machine translation can then be saved in the database at block 418. The machine translation can be presented (or displayed) to users in a user group at block 422, and feedback from a user can be received at block 426. For example, a user may be requested (via a web page, a mobile app, or an email message) to provide a star rating based on a five-star rating scale. In some implementations, feedback provided by users can be expressed as, but not limited to, star ratings, numeric quantities, letter grades on a scale from A to F, and the like. In some implementations user groups are not pre-defined; instead, as requests for a translation of a content item are received, the requesting user is considered part of the user group and is provided the translation and an input area to review the translation. In some implementations, only requesting users that match a criterion, such as sharing a classification of the content item to be translated, are provided with the input area to review the translation. In some implementations, users making translation requests can be provided translations to review, and those who respond with feedback are considered part of the user group.
After receiving feedback from a user, at block 430 the machine translation is assigned a review based on the user's feedback. For example, a user-supplied review can be a positive (integer or non-integer valued) number, such as on a scale of 1 to 5. In some implementations, users can supply comments as part of the feedback associated with the quality of a machine translation. For example, a user can give a review of 3.5 for the quality of a machine translation and also provide comments as to how he or she believes the quality of the machine translation can be improved; for instance, he or she might suggest choosing an alternate word or phrase for a translation of a content item. Such suggestions are noted by the server running process 400.
In some implementations, the assigned review depends on a combination of a user-supplied review and a user-importance weight. A user-importance weight can be indicative of how much importance is given to a user's feedback relative to the other users in a group. In some implementations, if a user's reviews are consistently more than a threshold amount above or below the average (e.g., the user always gives a 0 or 100 review), the user can be excluded from further reviewing. In an additional example, an assigned review can be a product of a user-supplied review and a user-importance weight. This can be applicable to implementations in which some users' feedback is given greater importance than that of others. For example, reviews from users who have proven themselves to be exacting reviewers, i.e., users who provide lower-than-average reviews for translations that have only a few mistakes and provide high reviews only to exceedingly accurate translations, can be given greater weight than reviews from users who have historically given high reviews even to significantly flawed translations. This weighting value can be determined by comparing a user's given review for each previous translation with the average review for the corresponding translation. For example, suppose a user has provided three previous reviews (on a percentage scale) of 75%, 50%, and 30%, where the corresponding average review for each translation was 70%, 80%, and 60%. In some implementations, the user's weighting value can be calculated as Σ(average/user review)/(# of reviews), or in this case: (70/75+80/50+60/30)/3=1.511. In some implementations, the user weighting value can be adjusted down for inconsistent users. For example, if a user sometimes scores translations higher than average and sometimes lower than average, without consistently doing either, the weighting value for that user can be lowered to account for the uncertainty in their scores. In some implementations, the user-importance weight can be computed based on the difference between each of a user's historical reviews and the average review for the corresponding sentence, summed over all the sentences. Mathematically, this can be calculated as:
W(u) = (1/n_u) · Σ_i W(u, i)

where n_u is the number of translations user u has reviewed and W(u, i) is the weight contribution from user u's review of translation i (for example, the ratio of the average review for translation i to user u's review of it, consistent with the example above).
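For illustration, this weighting can be computed as follows, taking W(u, i) to be the average-to-user review ratio used in the numeric example above:

```python
def user_importance_weight(user_reviews, average_reviews):
    """W(u) = (1/n_u) * sum_i W(u, i), with W(u, i) taken as the ratio
    of the average review for translation i to user u's review of it."""
    terms = [avg / usr for usr, avg in zip(user_reviews, average_reviews)]
    return sum(terms) / len(terms)

# Reproduces the worked example: (70/75 + 80/50 + 60/30) / 3 = 1.511
print(round(user_importance_weight([75, 50, 30], [70, 80, 60]), 3))
```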
At block 434, the assigned review is saved in a database. The process determines at block 438 whether or not all the users in a group (e.g., the maximum number of users) have provided their feedback. If all the users have not provided their feedback, the process increments (at block 442) a user count variable to point to the next user in the group and then loops back to block 422. Accordingly, process 400 iterates between block 422 and block 442 until feedback from every user in a user group is received. In some implementations, the translation reviewing process 400 can be performed without a specified size of user groups and reviews can be accumulated continuously or until a preferred translation is selected, such as is discussed below in relation to process 500. If all the users have provided their feedback, the process 400 terminates at block 446.
At block 504, a maximum number of users is set and an iteration count is initialized to zero. In some implementations, process 500 iterates until a preferred machine translation is identified, or until a maximum number of iterations is reached. In some implementations, a user group is defined by the machine translation reranking system, such as a group comprising users who will be polled for feedback on the quality (e.g., a perceived accuracy) of a machine translation of a content item. In some implementations, a user group is defined as a number of users, and users are added to the group as they provide reviews for a translation version.
Blocks 510, 514, 518, and 522 are used for iterating over multiple machine translations. In implementations of the present disclosure, multiple machine translations of a content item are generated, all translations being in the same target language. Each machine translation is provided to a user group for feedback on the quality of the translation, the machine translations are assigned aggregate scores based on the feedback, and the aggregate scores are then used to determine a preferred machine translation from the multiple machine translations of the content item.
At block 510, the reviews corresponding to a machine translation can be retrieved (typically from a memory or a database). In some implementations, these reviews can be the reviews assigned at block 430 of process 400, discussed above.
Using the retrieved reviews, an aggregate score is computed (at block 514) for the respective machine translation. In some implementations, the aggregate score can have been previously computed for the machine translation, such as discussed above in relation to block 430. For example, an aggregate score of a machine translation can be the arithmetic average of the retrieved reviews. The aggregate score, in alternate implementations, can be based on another mathematical formula, such as the median or mode, and is not necessarily limited to the arithmetic average.
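For example, using Python's statistics module (the review values are arbitrary):

```python
import statistics

group_reviews = [6, 3, 8]               # assigned reviews from one user group
avg = statistics.mean(group_reviews)    # arithmetic average: ~5.67
med = statistics.median(group_reviews)  # median: 6
mod = statistics.mode([4, 4, 5])        # mode: 4
```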
Next, at block 518, process 500 determines whether or not every machine translation has been associated with an aggregate score. If every machine translation has not been considered, process 500 proceeds to block 522 to move to a next translation, and then loops back to block 510. Otherwise, if every translation has been associated with an aggregate score, process 500 proceeds to block 526. Hence, once every machine translation has been considered, each machine translation is associated with an aggregate score.
At block 526, the aggregate scores are sorted over all machine translations. For example, the machine translation with the highest aggregate score can be given rank 1, the machine translation with the second-highest aggregate score can be given rank 2, and so on. Next, process 500 determines (at block 530) if there is a unique maximum aggregate score. In various implementations, the unique maximum aggregate score is one of: the highest-ranked aggregate score that is greater than a threshold; or an aggregate score that is a threshold amount above all other aggregate scores.
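A sketch of the determination at block 530, with `floor` and `margin` standing in for the two kinds of thresholds described above (both hypothetical parameter names):

```python
def has_unique_maximum(sorted_scores, floor=None, margin=None):
    """Check for a unique maximum aggregate score: the top score exceeds
    an absolute threshold (`floor`), or it leads every other score by at
    least `margin`. Scores are sorted best-first."""
    if len(sorted_scores) == 1:
        return True
    top, runner_up = sorted_scores[0], sorted_scores[1]
    if floor is not None and top > floor:
        return True
    return margin is not None and top - runner_up >= margin
```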
If there is no unique maximum aggregate score, e.g., the difference between the aggregate scores of two or more machine translations is less than a threshold, process 500, at block 532, determines whether a maximum number of users in a group has been reached. If the maximum number of users has not been reached, process 500 proceeds to the sub-routine described below (blocks 556-572).
If, however, the maximum number of users has been reached, process 500 moves to block 534. At block 534, process 500 determines the machine translation with the highest aggregate score. In some examples, if a machine translation with a unique maximum aggregate score cannot be determined (i.e., there is a tie among the top-scoring translations), process 500 selects one of the top-scoring translations. The respective machine translation is identified at block 538 as a preferred candidate. The preferred candidate is displayed (at block 542) as the translation of the content item in subsequent requests for machine translations of that content item. In other words, better-quality translations are provided to users who later request translations of the content item, partly due to the scoring system employed to determine a preferred machine translation from multiple machine translations. Process 500 terminates at block 546.
Process 500 determines (at block 556) if a maximum number of iterations (e.g., indicated in the form of a terminating condition for the sub-routine) has been reached. If the maximum number of iterations has not been reached, the process moves to block 560. At block 560, based on the aggregate scores (calculated at block 514 or updated at block 572), a set of top-scoring machine translations is identified. In some implementations, the top-scoring machine translations can be those translations whose aggregate scores lie within a fixed percentage (e.g., 10%) of the top aggregate score. In some examples, the top-scoring machine translations can be a fixed number (e.g., five) of the top-scoring translations. In some implementations, the top-scoring machine translations can be those machine translations whose aggregate scores do not differ from each other by more than a threshold.
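A sketch of the first two variants of this selection; the percentage and count values mirror the examples above:

```python
def top_scoring(scored, pct=0.10, max_count=5):
    """Select top-scoring candidates: those within a fixed percentage of
    the best aggregate score, capped at a fixed count.

    scored: (aggregate_score, translation) pairs sorted best-first."""
    best = scored[0][0]
    within = [sc for sc in scored if sc[0] >= best * (1 - pct)]
    return within[:max_count]
```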
Next, at block 564, each of the top-scoring machine translations is presented to users in a user group, and feedback from the users is received at block 568, as discussed in more detail in relation to process 400 above. The aggregate scores can then be updated (at block 572) based on the new feedback, and the sub-routine can loop back to block 556.
In example 600, in Group 1, the user-supplied reviews provided by User 1, User 2, and User 3 are 2, 1, and 4, respectively, as indicated in row 640. The user-importance weights for these users in Group 1 are 3, 3, and 2, respectively, as also indicated in row 640. With regard to Group 2, the user-supplied reviews provided by User 4 and User 5 are 5 and 4, respectively, as indicated in row 640. The user-importance weights for these users in Group 2 are 2 and 3, respectively, as also indicated in row 640.
The assigned review can be a product of a user-supplied review and a user-importance weight. Thus, the assigned reviews for User 1, User 2, and User 3 in Group 1 are 6, 3, and 8, respectively, as indicated in row 650. Also, the assigned reviews for User 4 and User 5 in Group 2 are 10 and 12, respectively, as indicated in row 650. Based on the assigned reviews per user, per user group, an aggregate score can be computed for each translation.
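The arithmetic of this example can be reproduced as follows; the use of a per-group mean for the aggregate score is an assumption for illustration, since the aggregation formula itself is not shown in the text above:

```python
group1 = [(2, 3), (1, 3), (4, 2)]  # (user-supplied review, weight), row 640
group2 = [(5, 2), (4, 3)]

assigned1 = [review * weight for review, weight in group1]  # [6, 3, 8]
assigned2 = [review * weight for review, weight in group2]  # [10, 12]

# One plausible aggregation: the mean assigned review per group.
agg1 = sum(assigned1) / len(assigned1)  # ~5.67 for the first translation
agg2 = sum(assigned2) / len(assigned2)  # 11.0 for the second translation
```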
Several implementations of the disclosed technology are described above in reference to the figures. The computing devices on which the described technology may be implemented may include one or more central processing units, memory, input devices (e.g., keyboard and pointing devices), output devices (e.g., display devices), storage devices (e.g., disk drives), and network devices (e.g., network interfaces). The memory and storage devices are computer-readable storage media that can store instructions that implement at least portions of the described technology. In addition, the data structures and message structures can be stored or transmitted via a data transmission medium, such as a signal on a communications link. Various communications links may be used, such as the Internet, a local area network, a wide area network, or a point-to-point dial-up connection. Thus, computer-readable media can comprise computer-readable storage media (e.g., “non-transitory” media) and computer-readable transmission media.
As used herein, being above a threshold means that a value for an item under comparison is above a specified other value, that an item under comparison is among a certain specified number of items with the largest value, or that an item under comparison has a value within a specified top percentage value. As used herein, being below a threshold means that a value for an item under comparison is below a specified other value, that an item under comparison is among a certain specified number of items with the smallest value, or that an item under comparison has a value within a specified bottom percentage value. As used herein, being within a threshold means that a value for an item under comparison is between two specified other values, that an item under comparison is among a middle specified number of items, or that an item under comparison has a value within a middle specified percentage range.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Specific embodiments and implementations have been described herein for purposes of illustration, but various modifications can be made without deviating from the scope of the embodiments and implementations. The specific features and acts described above are disclosed as example forms of implementing the claims that follow. Accordingly, the embodiments and implementations are not limited except as by the appended claims.