This application is a national stage application of an international patent application PCT/CN2012/080749, filed Aug. 30, 2012, which application is hereby incorporated by reference in its entirety.
This disclosure relates generally to an input method editor (IME), and more particularly, to an IME with multiple operating modes.
An input method editor (IME) is a computer functionality that assists a user to input text into a host application of a computing device. An IME may provide several suggested words and phrases based on received inputs from the user as candidates for insertion into the host application. For example, the user may input one or more initial characters of a word or phrase and an IME, based on the initial characters, may provide one or more suggested words or phrases from which the user may select a desired one.
As another example, an IME may also assist the user to input non-Latin characters such as Chinese. The user inputs Latin characters through a keyboard and the IME returns one or more Chinese characters as candidates for insertion into the host application based on the Latin characters. The user may then select the proper character and insert it into the host application. As many typical keyboards support inputting Latin characters, the IME is useful for the user to input non-Latin characters using a Latin-character keyboard.
The candidates selected by the user can be inserted into various host applications, such as a chatting application, a document editing application, an email application, a drawing application, a gaming application, etc.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. The term “techniques,” for instance, may refer to device(s), system(s), method(s) and/or computer-readable instructions as permitted by the context above and throughout the present disclosure.
This disclosure describes techniques to provide alternate candidates for selection and/or insertion by a user through an input method editor (IME). In various embodiments, an IME is executable by a computing device. The IME may present candidates to a user for insertion into a host application. The IME may present different types of candidates depending on the input by the user and context of the input. The IME may provide both text candidates and alternate candidates in a form that is different from the form of the input.
The Detailed Description is set forth with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items.
Overview
This disclosure describes techniques related to an input method editor (IME) that presents candidates that may be selected by the user for insertion into a host application. The candidates presented to the user may be based at least in part on a user-selectable mode of the IME. The candidates may include text candidates such as non-Latin or Chinese characters, and rich candidates such as multimedia candidates, in order to provide supplemental information to a user and to enhance the user experience. Additionally or alternatively, the candidates may be based at least in part on the context of user input, which may include, but is not limited to, the host application, previous user input, adjacent user input, a combination thereof, and other contextual information. Additionally or alternatively, the candidates may, upon user selection, replace or augment the entered text.
In various embodiments, the user inputs are one or more of textual characters or symbols input by the user into a composition window of an IME. The user input may represent one or more expressions, search queries, parts thereof, or combinations thereof. For example, a user input may be a series of initial characters, an abbreviation, a spelling, and/or a translation of one or more words or phrases. The user inputs and the expressions represented by the user inputs, such as words or phrases, may be in the same or different languages. The user may provide the user inputs through a variety of input methods, such as a keyboard input, a voice input, a touch screen input, a gesture, a movement, or a combination thereof.
In various embodiments, candidates are provided in alternate forms from a user input. For example, if the user inputs text, then alternate candidates may be provided in the form of image(s), video(s), audio file(s), web link(s), web page(s), map(s), other multimedia file(s), or a combination thereof.
In various embodiments, candidates are ranked according to their relevance to the user inputs and/or the context of the user input.
Illustrative System Model
In various embodiments, if personal information such as location information, conversation contents, or search contents is stored or transmitted, then the user may have an opportunity to decide to allow the collection, storage, and/or transmittal, and/or an opportunity to discontinue the same. In various embodiments, if personal information is stored or transmitted, adequate security measures and features are in place to secure the personal data. Additionally or alternatively, context may comprise triggering words or phrases. For example, a user may input the phrase “photo of” directly before an input, indicating that alternate candidates are sought. In this example, the phrase “photo of” may provide context that a user is interested in seeing an image or photo of the input that follows. Other illustrative triggering words or phrases may include, but are not limited to, “image”, “video”, “sound”, “hear”, “nearby”, “picture”, “let me see”, “have a look at”, “did you see”, and the like. It is understood that triggering words or phrases may be specific to a culture and/or a language.
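As a minimal, non-limiting sketch (in Python, not part of the original disclosure), a trigger-phrase check of the kind described above might look as follows; the phrase list and function name are illustrative assumptions only.

```python
# Illustrative sketch of trigger-phrase detection; the phrase list and
# function name are assumptions for demonstration, not the disclosed design.
TRIGGER_PHRASES = (
    "photo of", "image", "video", "sound", "hear",
    "nearby", "picture", "let me see", "have a look at", "did you see",
)

def has_trigger_context(preceding_text: str) -> bool:
    """Return True if the text immediately preceding the user input ends with
    a known trigger phrase, suggesting alternate (rich) candidates are wanted."""
    normalized = preceding_text.strip().lower()
    return any(normalized.endswith(phrase) for phrase in TRIGGER_PHRASES)

# Example: the phrase "photo of" directly precedes the input that follows.
print(has_trigger_context("Can you show me a photo of"))  # True
```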
In various embodiments, both the content and context of the user input are used to inform selection of the candidates. In a possible embodiment, the context and content of the user input are used to formulate a search to gather candidates. In one example, a user inputs the text “forbidden city” into a chat program, and several lines or entries prior to entering “forbidden city,” the user or a third party input the term “moat.” In this example, the content of the input may be “forbidden city” and the context of the input may be a chat program and the textual input “moat.”
In this example, the candidate manager 102 may cause a search of images, videos, audio files, graphics, others, or a combination thereof to find a “moat” at, near, belonging to, or in the “forbidden city.” This example may differ from a search that does not take into account the context of the user input.
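To illustrate how content and context might jointly shape such a search in the “forbidden city”/“moat” example, the following is a hedged sketch; the function name and structure are assumptions, not the disclosed implementation.

```python
# Hedged sketch: combine the input content with recent conversational context
# to form a search query. Names and structure are illustrative assumptions.
def build_query(content: str, recent_terms: list[str], max_terms: int = 2) -> str:
    """Append a few recently used context terms to the user's input so the
    search is biased toward contextually relevant candidates."""
    context_part = " ".join(recent_terms[-max_terms:])
    return f"{content} {context_part}".strip()

# "forbidden city" entered now, "moat" used a few lines earlier in the chat.
print(build_query("forbidden city", recent_terms=["moat"]))  # forbidden city moat
```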
In various embodiments, the search caused by the candidate manager 102 may be facilitated by a search engine, for example, a commercial search engine on a search engine computing device capable of searching various forms of media. For example, the search engine computing device may select a search engine, such as Bing®, Google®, Yahoo®, Alibaba®, or the like, to gather candidates. The candidates may include, for example, web search results, video search results, image search results, audio file search results or dictionary and encyclopedia search results.
In various embodiments, the search caused by the candidate manager 102 may be based at least in part on content of a user input and/or context of the user input.
In various embodiments, the search caused by the candidate manager 102 may return candidates with any surrounding text, metadata associated with the candidate, or a combination thereof.
In various embodiments, the search caused by the candidate manager 102 may return candidates with results information in a ranked format. For example, the results information may include the ranked format and may provide a set of top results, a set of tail or lowest results, and/or duplicate candidates.
In various embodiments, the candidate manager 102 selects one or more references from the candidates. For example, several reference images may be selected from a set of all candidate images returned by the search engine. The reference images may be selected based at least in part on the rank and number of duplicate or similar images returned. For example, the top N candidate images returned from the search, where N is an integer greater than or equal to zero, may be considered as possible reference images. Additionally or alternatively, the bottom M candidate images or tail candidate images returned from the search, where M is an integer greater than or equal to zero, may be considered as possible images to avoid as reference images. Additionally or alternatively, the number of duplicate or similar images may be considered in selecting possible reference images. For example, if a first candidate image has more duplicates or similar images than a second candidate image, then the first candidate image may be considered a better reference image than the second candidate image. Additionally or alternatively, the number of duplicate or similar images may be indicative of the popularity, trendiness, or similar aspect of the candidate image.
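A minimal sketch of reference selection based on result rank and duplicate counts, as described above, is shown below; the field names and parameter values are illustrative assumptions.

```python
# Hedged sketch of selecting reference images from ranked search results.
# Field names ('rank', 'duplicate_count') and defaults are assumptions.
def select_references(results, top_n=10):
    """results: list of dicts with 'rank' (1 = best) and 'duplicate_count'.
    Only top-ranked results are considered (tail results are avoided), and
    among them, results with more duplicate/similar images are preferred."""
    head = sorted(results, key=lambda r: r["rank"])[:top_n]
    return sorted(head, key=lambda r: r["duplicate_count"], reverse=True)
```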
Additionally or alternatively, the candidate manager 102 may include an extractor 108. In various embodiments, the extractor 108 may extract features of the references. For example, the extractor 108 may extract features from candidate images selected to be reference images. In various embodiments, the features extracted from the reference images may comprise average features.
Additionally or alternatively, the extractor 108 may extract features from the candidate images that were not selected as reference images. In various embodiments, the extracted features from the candidate images are compared to the extracted features from the reference images. For example, a cosine distance between the features extracted from the candidate image and the features extracted from the reference image may be calculated. Additionally or alternatively, information contained in the metadata or surrounding text of the candidate image may be compared to the information contained in the metadata or surrounding text of the reference image. For example, a cosine distance may be calculated between the two.
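The cosine-distance comparison mentioned above can be pictured with the short sketch below; the vector representation of the features is an assumption for demonstration only.

```python
import math

# Hedged sketch: compare a candidate's feature vector to a reference's
# feature vector via cosine distance. The vector form is an assumption.
def cosine_distance(a: list[float], b: list[float]) -> float:
    """1 - cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0.0 or norm_b == 0.0:
        return 1.0  # treat an empty/zero vector as maximally distant
    return 1.0 - dot / (norm_a * norm_b)

# Each such comparison against a reference could populate one dimension
# of the candidate, as described in the following paragraph.
dimension_value = cosine_distance([0.2, 0.7, 0.1], [0.25, 0.65, 0.10])
```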
In various embodiments, each extracted feature and comparison to a reference may comprise a dimension of the candidate. Each dimension of the candidate may be used to determine its score, rank, suitability for presentation to a user, or a combination thereof. In various embodiments, the number of dimensions assembled may be large. Additionally or alternatively, a value or weight of a first dimension in comparison with a second dimension need not be evaluated or determined in this module or at the analogous stage in a related process or method.
Additionally or alternatively, the candidate manager 102 may include a classifier 110. In various embodiments, the classifier 110 may classify each candidate. For example, the classifier 110 may use the dimensions associated with a candidate image to assign a score to the candidate image. In various embodiments, the classifier may be manually programmed, seeded with initial correlations between a feature and a score where the model is expanded with subsequent searches or additions, generated by a logic engine, or a combination thereof. Additionally or alternatively, the classifier may be developed, updated, and/or maintained offline, online, or a combination thereof. Additionally or alternatively, a regression model may be used to map a relative feature of an image reflected in a dimension to a score. In various embodiments, the scores may be aggregated for each candidate image to generate a candidate score.
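One hedged way to picture the classifier's mapping of dimensions to an aggregated candidate score is a simple weighted (linear) model, sketched below; the weight values and dimension names are illustrative seed values, not disclosed parameters.

```python
# Hedged sketch of a linear scoring model: each dimension maps to a partial
# score via a weight (a stand-in for a learned regression coefficient).
def score_candidate(dimensions: dict[str, float],
                    weights: dict[str, float]) -> float:
    """Aggregate per-dimension scores into a single candidate score."""
    return sum(weights.get(name, 0.0) * value
               for name, value in dimensions.items())

# Illustrative seed weights only; a trained model would supply real values.
seed_weights = {"visual_similarity": 0.6, "text_match": 0.3, "popularity": 0.1}
candidate_score = score_candidate(
    {"visual_similarity": 0.8, "text_match": 0.5, "popularity": 0.9},
    seed_weights,
)
```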
Additionally or alternatively, the candidate manager 102 may include a ranker 112. In various embodiments, the ranker 112 may rank each candidate. For example, the ranker 112 may rank each candidate image based at least in part on a candidate score.
Additionally or alternatively, the candidate manager 102 may include a selection process where candidates are one or more of selected, made available for selection, made unavailable for selection, removed from consideration, or a combination thereof. For example, the candidate manager 102 may include a reducer 114. In various embodiments, the reducer 114 may reduce the number of candidates considered. For example, the reducer 114 may remove candidate images from consideration based at least in part on their candidate scores. In various embodiments, a candidate with a candidate score below a threshold will be removed from consideration.
Additionally or alternatively, the candidate manager 102 may include a duplicate remover 116. In various embodiments, the duplicate remover 116 may be included as part of the selection process. In various embodiments, the duplicate remover 116 may reduce the number of candidates considered. For example, the duplicate remover 116 may remove duplicate images from the candidate images. Additionally or alternatively, the duplicate remover 116 may remove exact duplicates as well as candidate images that are similar to each other. For example, if the difference between a first and a second candidate image is below a threshold difference or within a similarity threshold, then the first and second candidate images may be considered sufficiently similar or duplicative of each other. In this example, the duplicate remover 116 may remove either the first or the second candidate image from consideration and/or make the candidate unavailable for selection.
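Taken together, the reducer 114 and duplicate remover 116 might operate as in the following sketch; the thresholds and the similarity function are assumptions for illustration, not the disclosed implementation.

```python
# Hedged sketch of the selection process: drop low-scoring candidates
# (reducer) and collapse near-duplicates (duplicate remover).
def reduce_and_deduplicate(candidates, score_threshold, similarity,
                           similarity_threshold):
    """candidates: list of dicts with 'score' and 'features'.
    similarity: callable returning a similarity value for two feature sets."""
    kept = [c for c in candidates if c["score"] >= score_threshold]  # reducer 114
    unique = []                                                      # duplicate remover 116
    for cand in sorted(kept, key=lambda c: c["score"], reverse=True):
        if not any(similarity(cand["features"], other["features"])
                   >= similarity_threshold for other in unique):
            unique.append(cand)  # keeps the higher-scoring of similar pairs
    return unique
```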
For example, as discussed above, in various embodiments, the content of the user input and the context of the user input are used to inform and determine the candidates to be presented. In various embodiments, the context and content of the user input are used to formulate a search to gather candidates, extract features, classify candidates, and/or rank candidates. For example, a user inputs the text “forbidden city” into a chat program. However, several lines or entries prior to entering the text “forbidden city,” the user and a third party input the term “moat.” In this example, the content of the input may be “forbidden city” and the context of the input may be a chat program and the textual input “moat.”
In this example, the candidate manager 102 may cause a search of images, videos, audio files, graphics, others, or a combination thereof to find a “moat” at, near, belonging to, or in the “forbidden city.” This example may differ from a search that does not take into account the context of the user input, but merely searches for the “forbidden city.” In this example, the search for “forbidden city” may or may not return an image of the Forbidden City containing its moat. Additionally or alternatively, the extractor 108 may extract features and create dimensions related to the term “moat.” Additionally or alternatively, the classifier 110 may map a higher score to an image containing a dimension related to a moat feature than to an image without a dimension related to a moat feature. Additionally or alternatively, the ranker 112 may rank a candidate image containing a moat higher than a candidate image without a moat. In this example, taking into account the previously used term “moat” as context for the input “forbidden city” may result in a more desirable set of candidate images when, for example, the user desires an image of the moat at the Forbidden City and not an image of the Forbidden City itself.
Additionally or alternatively, IME 100 may comprise the insertion manager 104. In various embodiments, the insertion manager 104 includes an inserter 118 that may provide candidates to a user for insertion. For example, the inserter 118 may cause a subset of the candidate images to be displayed to a user based at least in part on the user input, the context of the user input, the remaining candidates, or a combination thereof. For example, if the user is inputting text into a chat program, the inserter 118 may identify the chat program as the host program for the IME and display an appropriate number of candidate images at one time. Additionally or alternatively, the inserter 118 may show all of the available candidate images at the same time.
In the illustrated example, host application 202 includes a text insertion area, generally indicated by 204. Text insertion area 204 includes characters inserted directly into the host application by the user or via IME 206. Text insertion area 204 also includes an input indication represented by “|,” which represents an indication of where the candidates are to be inserted into host application 202 by, for example, insertion manager 104. The input indication may be, for example, a focus of a mouse. The input indication also indicates which of possibly many host applications running on the computing device is to receive the candidates inserted by IME 206. In one particular example, multiple host applications may utilize the features of IME 206 simultaneously and the user may switch between one or more of the host applications to receive the candidates by moving the input indication between the host applications.
In the illustrated example, IME 206 is operating to insert Chinese characters into host application 202 using pinyin.
IME 206 is shown as user interface 208, which includes composition window 210 for receiving user inputs, text candidate window 212 for presenting text candidates to the user, and alternate candidate window 214 for presenting alternate forms of candidates to the user. Text candidate window 212 is shown including previous and next candidate arrows 216, which the user can interact with to receive additional text candidates not currently shown in text candidate window 212. Alternate candidate window 214 is also shown including previous and next candidate arrows 218, which the user can interact with to receive additional alternate candidates not currently shown in alternate candidate window 214.
While various alternate candidate modes are illustrated, other alternate candidate modes may be used without departing from the scope of the present disclosure. For example, IME 206 may include an audio mode where, for example, an alternate candidate may include audio files, such as Moving Picture Experts Group (MPEG) MPEG-1 or MPEG-2 Audio Layer III (mp3) files. Additionally or alternatively, IME 206 may include a video mode where, for example, an alternate candidate may include videos. Additionally or alternatively, IME 206 may include a graphical text mode where, for example, an alternate candidate may include graphical text. An example of graphical text may include, but is not limited to, a graphics file (e.g., an image file) containing text. Additionally or alternatively, IME 206 may include an animated graphical image mode where, for example, an alternate candidate may include an animated graphical image. By interacting with the previous and next candidate arrows 216, the user may gain access to these additional rich candidate modes at the rich candidate mode menu. Additionally or alternatively, the IME 206 may take context of the user input into account and display one type before another type, or a combination of two or all possible types.
In the illustrated example, the user has entered the text “ha'ha.” As a result of the user entering text in this context, IME 206 is in alternate candidate mode. While in alternate candidate mode, text candidate window 212 continues to translate the text “ha'ha” from pinyin into Chinese characters. On the other hand, alternate candidate window 214 presents image search results for the text “ha'ha” to the user. As illustrated, alternate candidate window 214 presents options 2-5 including images of various cartoon animals laughing.
In various embodiments, the user may cause IME 206 to insert either the Chinese characters indicated in text candidate window 212 into host application 202 at text insertion area 204 by entering “1”, or an image as shown in alternate candidate window 214 by entering “2”, “3”, “4”, or “5”. Other user selection options may be used without departing from the scope of the present disclosure.
Illustrative Computing Device and Illustrative Operational Environment
In at least one configuration, the computing device 300 includes at least one processor 302 and system memory 304. The processor(s) 302 may execute one or more modules and/or processes to cause the computing device 300 to perform a variety of functions. In some embodiments, the processor(s) 302 may include a central processing unit (CPU), a graphics processing unit (GPU), both CPU and GPU, or other processing units or components known in the art. Additionally, each of the processor(s) 302 may possess its own local memory, which also may store program modules, program data, and/or one or more operating systems.
Depending on the exact configuration and type of the computing device 300, the system memory 304 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, miniature hard drive, memory card, or the like), or some combination thereof. The system memory 304 may include an operating system 306, one or more program modules 308, and may include program data 310. The operating system 306 includes a component-based framework 334 that supports components (including properties and events), objects, inheritance, polymorphism, reflection, and provides an object-oriented component-based application programming interface (API). The computing device 300 is shown with a very basic illustrative configuration demarcated by a dashed line 312. Again, a terminal may have fewer components but may interact with a computing device that may have such a basic configuration.
Program modules 308 may include, but are not limited to, an IME 334, applications 336, and/or other components 338. In various embodiments, the IME 334 may comprise a user interface (UI) 340, candidate manager 102, and/or insertion manager 104. In various embodiments, candidate manager 102 comprises an analyzer 106, an extractor 108, and/or a classifier 110.
The computing device 300 may have additional features and/or functionality. For example, the computing device 300 may also include additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Such additional storage is illustrated in
The storage devices and any associated computer-readable media may provide storage of computer readable instructions, data structures, program modules, and other data. Computer-readable media includes, at least, two types of computer-readable media, namely computer storage media and communication media.
Computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data.
Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that may be used to store information for access by a computing device.
In contrast, communication media may embody computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave, or other transmission mechanism. As defined herein, computer storage media does not include communication media.
Moreover, the computer-readable media may include computer-executable instructions that, when executed by the processor(s) 302, perform various functions and/or operations described herein.
The computing device 300 may also have input device(s) 318 such as a keyboard, a mouse, a pen, a voice input device, a touch input device, etc. Output device(s) 320, such as a display, speakers, a printer, etc. may also be included.
The computing device 300 may also contain communication connections 322 that allow the device to communicate with other computing devices 324, such as over a network. By way of example, and not limitation, communication media and communication connections include wired media such as a wired network or direct-wired connections, and wireless media such as acoustic, radio frequency (RF), infrared, and other wireless media. The communication connections 322 are some examples of communication media. Communication media may typically be embodied by computer readable instructions, data structures, program modules, etc.
The illustrated computing device 300 is only one example of a suitable device and is not intended to suggest any limitation as to the scope of use or functionality of the various embodiments described. Other well-known computing devices, systems, environments and/or configurations that may be suitable for use with the embodiments include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, game consoles, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, implementations using field programmable gate arrays (“FPGAs”) and application specific integrated circuits (“ASICs”), and/or the like.
The implementation and administration of a shared resource computing environment on a single computing device may enable multiple computer users to concurrently collaborate on the same computing task or share in the same computing experience without reliance on networking hardware such as, but not limited to, network interface cards, hubs, routers, servers, bridges, switches, and other components commonly associated with communications over the Internet, as well as without reliance on the software applications and protocols for communication over the Internet.
Additionally or alternatively, the computing device 300 implementing IME 334 may be in communication with one or more search engine computing devices 342 via, for example, network 328.
Communication connection(s) 322 are accessible by processor(s) 302 to communicate data to and from the one or more search engine computing devices 342 over a network, such as network 328. Search engine computing devices 342 may be configured to perform the search using one or more search engines 344. Search engines 344 may be a generic search engine such as Bing®, Google®, or Yahoo®, a combination of search engines, or a custom search engine configured to operate in conjunction with IME 334 (such as a translation engine). Search engines 344 may also be a specialized form of a search engine such as Bing® Maps or Google® image search.
It should be understood that IME 334 may be used in an environment or in a configuration of universal or specialized computer systems. Examples include a personal computer, a server computer, a handheld device or a portable device, a tablet device, a multi-processor system, a microprocessor-based system, a set-top box, a programmable consumer electronic device, a network PC, and a distributed computing environment including any system or device above.
Illustrative Processes
For ease of understanding, the processes discussed in this disclosure are delineated as separate operations represented as independent blocks. However, these separately delineated operations should not be construed as necessarily order dependent in their performance. The order in which the processes are described is not intended to be construed as a limitation, and any number of the described process blocks may be combined in any order to implement the process, or an alternate process. Moreover, it is also possible that one or more of the provided operations may be modified or omitted.
The processes are illustrated as a collection of blocks in logical flowcharts, which represent a sequence of operations that may be implemented in hardware, software, or a combination of hardware and software. For discussion purposes, the processes are described with reference to the system shown in
At 404, the IME may analyze the content of the input as well as the context of the input. The content and the context may, together or separately, be used by the IME to determine whether alternate candidates should be provided. For example, if an input is directly adjacent to a trigger phrase, that context may increase the likelihood that a user is interested in an alternate candidate and alternate candidates may be provided.
At 406, the IME may gather one or more candidates. For example, the received input may be formulated into a query and sent to a search engine. Additionally or alternatively, aspects of the context may influence the query by, for example, adding additional relevant query terms or excluding others. In various embodiments, the query will return results in the form of ranked results. The ranked results may include a number of top ranked results, a number of bottom or tail ranked results, and various duplicate and/or similar results. Additionally or alternatively, the results may also include any metadata associated with each candidate. Additionally or alternatively, the results may also include surrounding text of the candidate.
At 408, the IME may select one or more references from the candidates. The IME may select a reference based at least in part on the rank of a candidate returned from the query and the number of duplicate or similar candidates.
At 410, the IME may extract features from the references. The features extracted from the references may be reference features. In various embodiments, the reference features may be considered average features for the given set of candidates in light of the content and context of the input.
At 412, the IME may extract features from the candidate images. The features extracted from the candidate images may be considered candidate features. In various embodiments, the extracted candidate features are compared to the analogous extracted reference features. A difference between the analogous extracted features may be used to define a dimension associated with the candidate. In various embodiments, extracted features may include features contained in the reference itself; for example, if the reference is an image, the image may depict water, and the water feature may be extracted. Additionally or alternatively, the extracted features may include features contained in information associated with the candidate. For example, the image may contain metadata that may be examined and/or extracted. Additionally or alternatively, the image may have been surrounded by text that was returned as part of the query results. The surrounding text may be examined and/or extracted.
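The three feature sources named above (the media content itself, its metadata, and the surrounding text returned with the query results) might be gathered as in the brief sketch below; all field names are assumptions for illustration.

```python
# Hedged sketch of assembling candidate features from content, metadata,
# and surrounding text. Field names are illustrative assumptions.
def extract_candidate_features(candidate: dict) -> dict:
    """Gather features from the candidate's own content, its metadata, and
    the surrounding text returned with the query results."""
    return {
        "content": candidate.get("image_descriptor", []),  # e.g., visual features
        "metadata_terms": set(candidate.get("metadata", {}).get("tags", [])),
        "surrounding_terms": set(
            candidate.get("surrounding_text", "").lower().split()),
    }
```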
At 414, the IME may score each candidate. In various embodiments, a classifier scores each candidate based at least in part on the dimensions and extracted features. In various embodiments, the classifier may be an offline learned or trained model. Additionally or alternatively, the classifier may be manually seeded with initial settings and examples and trained by searching around the seeded material to expand the model coverage and/or increase the model's accuracy. In various embodiments, emotive words are the target for replacement by or supplementation with an alternate form. Additionally or alternatively, popular terms, phrases, topics, or a combination thereof, may be mapped using the seed and search method. The IME may use the classifier to map an extracted feature to a score. The score for each feature and/or dimension may be aggregated for each candidate to create a candidate score.
At 416, the IME may rank the candidates based at least in part on the candidate scores.
At 418, the IME may remove candidates based at least in part on a first criteria. In various embodiments, the first criteria comprises whether the candidate score is greater than a threshold score. When a candidate score is lower than the threshold score, the candidate is removed from consideration.
At 420, the IME may remove candidates based at least in part on a second criteria. In various embodiments, the second criteria comprises whether the candidate is duplicative of or similar to another candidate. When the candidate is duplicative of another candidate, the IME may remove either the candidate or the other candidate from consideration. Similarly, when the dissimilarity between a candidate and another candidate fails to exceed a threshold of distinction, the IME may remove either the candidate or the other candidate from consideration. In various embodiments, a comparison between a candidate and another candidate may be based at least in part on local features, global features, a pixel-level comparison, or a combination thereof.
At 422, the IME may provide one or more candidates for selection or insertion by a user into an application.
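The operations at 404 through 422 can be pictured as a single pipeline. The following is a hedged, non-limiting sketch only; every helper is injected as a parameter and is an assumption for illustration rather than the disclosed implementation.

```python
# Hedged end-to-end sketch of operations 404-422: analyze, gather, select
# references, extract/compare features, score, rank, filter, deduplicate,
# and provide candidates. All injected helpers are assumptions.
def provide_alternate_candidates(user_input, context, *, wants_alternates,
                                 gather, select_references, extract_dimensions,
                                 classify, is_duplicate, score_threshold=0.5,
                                 max_shown=5):
    if not wants_alternates(user_input, context):            # 404: analyze content/context
        return []
    candidates = gather(user_input, context)                  # 406: query a search engine
    references = select_references(candidates)                # 408: choose references
    for cand in candidates:                                    # 410-412: extract and compare
        cand["dimensions"] = extract_dimensions(cand, references)
        cand["score"] = classify(cand["dimensions"])           # 414: score each candidate
    ranked = sorted(candidates, key=lambda c: c["score"], reverse=True)  # 416: rank
    kept = [c for c in ranked if c["score"] >= score_threshold]          # 418: first criteria
    unique = []
    for cand in kept:                                          # 420: second criteria (duplicates)
        if not any(is_duplicate(cand, other) for other in unique):
            unique.append(cand)
    return unique[:max_shown]                                  # 422: provide for selection
```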
The subject matter described above can be implemented in hardware, software, or in both hardware and software. Although implementations have been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts are disclosed as example forms of implementing the claims. For example, the methodological acts need not be performed in the order or combinations described herein, and may be performed in any combination of one or more acts.
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/CN2012/080749 | 8/30/2012 | WO | 00 | 11/29/2012 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2014/032244 | 3/6/2014 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
4559604 | Ichikawa et al. | Dec 1985 | A |
5796866 | Sakurai et al. | Aug 1998 | A |
5873107 | Borovoy et al. | Feb 1999 | A |
5987415 | Breese et al. | Nov 1999 | A |
5995928 | Nguyen et al. | Nov 1999 | A |
6014638 | Burge et al. | Jan 2000 | A |
6076056 | Huang et al. | Jun 2000 | A |
6085160 | D'hoore et al. | Jul 2000 | A |
6092044 | Baker et al. | Jul 2000 | A |
6236964 | Tamura et al. | May 2001 | B1 |
6247043 | Bates et al. | Jun 2001 | B1 |
6363342 | Shaw et al. | Mar 2002 | B2 |
6377965 | Hachamovitch et al. | Apr 2002 | B1 |
6408266 | Oon | Jun 2002 | B1 |
6460015 | Hetherington et al. | Oct 2002 | B1 |
6731307 | Strubbe et al. | May 2004 | B1 |
6732074 | Kuroda | May 2004 | B1 |
6801893 | Backfried et al. | Oct 2004 | B1 |
6941267 | Matsumoto | Sep 2005 | B2 |
6963841 | Handal et al. | Nov 2005 | B2 |
7069254 | Foulger et al. | Jun 2006 | B2 |
7089504 | Froloff | Aug 2006 | B1 |
7099876 | Hetherington et al. | Aug 2006 | B1 |
7107204 | Liu et al. | Sep 2006 | B1 |
7165032 | Bellegarda | Jan 2007 | B2 |
7194538 | Rabe et al. | Mar 2007 | B1 |
7224346 | Sheng | May 2007 | B2 |
7277029 | Thiesson et al. | Oct 2007 | B2 |
7308439 | Baird et al. | Dec 2007 | B2 |
7353247 | Hough et al. | Apr 2008 | B2 |
7360151 | Froloff | Apr 2008 | B1 |
7370275 | Haluptzok et al. | May 2008 | B2 |
7389223 | Atkin et al. | Jun 2008 | B2 |
7447627 | Jessee et al. | Nov 2008 | B2 |
7451152 | Kraft et al. | Nov 2008 | B2 |
7490033 | Chen et al. | Feb 2009 | B2 |
7505954 | Heidloff et al. | Mar 2009 | B2 |
7512904 | Matthews et al. | Mar 2009 | B2 |
7555713 | Yang | Jun 2009 | B2 |
7562082 | Zhou | Jul 2009 | B2 |
7565157 | Ortega et al. | Jul 2009 | B1 |
7599915 | Hill et al. | Oct 2009 | B2 |
7676517 | Hurst-Hiller et al. | Mar 2010 | B2 |
7689412 | Wu et al. | Mar 2010 | B2 |
7725318 | Gavalda et al. | May 2010 | B2 |
7728735 | Aaron et al. | Jun 2010 | B2 |
7752034 | Brockett et al. | Jul 2010 | B2 |
7844599 | Kasperski et al. | Nov 2010 | B2 |
7881934 | Endo et al. | Feb 2011 | B2 |
7917355 | Wu et al. | Mar 2011 | B2 |
7917488 | Niu et al. | Mar 2011 | B2 |
7930676 | Thomas | Apr 2011 | B1 |
7953730 | Bleckner et al. | May 2011 | B1 |
7957955 | Christie et al. | Jun 2011 | B2 |
7957969 | Alewine et al. | Jun 2011 | B2 |
7983910 | Subramanian et al. | Jul 2011 | B2 |
8161073 | Connor | Apr 2012 | B2 |
8230336 | Morrill | Jul 2012 | B2 |
8285745 | Li | Oct 2012 | B2 |
8498864 | Liang et al. | Jul 2013 | B1 |
8539359 | Rapaport et al. | Sep 2013 | B2 |
8564684 | Bai | Oct 2013 | B2 |
8597031 | Cohen et al. | Dec 2013 | B2 |
8762356 | Kogan | Jun 2014 | B1 |
8996356 | Yang et al. | Mar 2015 | B1 |
20020005784 | Balkin et al. | Jan 2002 | A1 |
20020045463 | Chen et al. | Apr 2002 | A1 |
20020188603 | Baird et al. | Dec 2002 | A1 |
20030041147 | van den Oord et al. | Feb 2003 | A1 |
20030160830 | DeGross | Aug 2003 | A1 |
20030179229 | Van Erlach et al. | Sep 2003 | A1 |
20030220917 | Copperman | Nov 2003 | A1 |
20040128122 | Privault et al. | Jul 2004 | A1 |
20040220925 | Liu et al. | Nov 2004 | A1 |
20040243415 | Commarford et al. | Dec 2004 | A1 |
20050144162 | Liang | Jun 2005 | A1 |
20050203738 | Hwang | Sep 2005 | A1 |
20050216253 | Brockett | Sep 2005 | A1 |
20060026147 | Cone et al. | Feb 2006 | A1 |
20060167857 | Kraft et al. | Jul 2006 | A1 |
20060190822 | Basson et al. | Aug 2006 | A1 |
20060204142 | West et al. | Sep 2006 | A1 |
20060206324 | Skilling et al. | Sep 2006 | A1 |
20060242608 | Garside et al. | Oct 2006 | A1 |
20060248074 | Carmel et al. | Nov 2006 | A1 |
20070016422 | Mori et al. | Jan 2007 | A1 |
20070033269 | Atkinson et al. | Feb 2007 | A1 |
20070050339 | Kasperski et al. | Mar 2007 | A1 |
20070052868 | Chou et al. | Mar 2007 | A1 |
20070088686 | Hurst-Hiller et al. | Apr 2007 | A1 |
20070089125 | Claassen | Apr 2007 | A1 |
20070124132 | Takeuchi | May 2007 | A1 |
20070150279 | Gandhi et al. | Jun 2007 | A1 |
20070162281 | Saitoh et al. | Jul 2007 | A1 |
20070167689 | Ramadas et al. | Jul 2007 | A1 |
20070192710 | Platz et al. | Aug 2007 | A1 |
20070208738 | Morgan | Sep 2007 | A1 |
20070213983 | Ramsey | Sep 2007 | A1 |
20070214164 | MacLennan et al. | Sep 2007 | A1 |
20070233692 | Lisa et al. | Oct 2007 | A1 |
20070255567 | Bangalore et al. | Nov 2007 | A1 |
20080046405 | Olds et al. | Feb 2008 | A1 |
20080115046 | Yamaguchi | May 2008 | A1 |
20080167858 | Christie et al. | Jul 2008 | A1 |
20080171555 | Oh et al. | Jul 2008 | A1 |
20080189628 | Liesche et al. | Aug 2008 | A1 |
20080195645 | Lapstun et al. | Aug 2008 | A1 |
20080195980 | Morris | Aug 2008 | A1 |
20080208567 | Brockett et al. | Aug 2008 | A1 |
20080221893 | Kaiser | Sep 2008 | A1 |
20080288474 | Chin et al. | Nov 2008 | A1 |
20080294982 | Leung et al. | Nov 2008 | A1 |
20080312910 | Zhang | Dec 2008 | A1 |
20090002178 | Guday et al. | Jan 2009 | A1 |
20090043584 | Philips | Feb 2009 | A1 |
20090043741 | Kim | Feb 2009 | A1 |
20090077464 | Goldsmith et al. | Mar 2009 | A1 |
20090106224 | Roulland | Apr 2009 | A1 |
20090128567 | Shuster et al. | May 2009 | A1 |
20090154795 | Tan et al. | Jun 2009 | A1 |
20090187515 | Andrew | Jul 2009 | A1 |
20090187824 | Hinckley et al. | Jul 2009 | A1 |
20090210214 | Qian et al. | Aug 2009 | A1 |
20090216690 | Badger et al. | Aug 2009 | A1 |
20090222437 | Niu et al. | Sep 2009 | A1 |
20090249198 | Davis et al. | Oct 2009 | A1 |
20090313239 | Wen et al. | Dec 2009 | A1 |
20100005086 | Wang et al. | Jan 2010 | A1 |
20100122155 | Monsarrat | May 2010 | A1 |
20100125811 | Moore et al. | May 2010 | A1 |
20100138210 | Seo et al. | Jun 2010 | A1 |
20100146407 | Bokor et al. | Jun 2010 | A1 |
20100169770 | Hong et al. | Jul 2010 | A1 |
20100180199 | Wu et al. | Jul 2010 | A1 |
20100217581 | Hong | Aug 2010 | A1 |
20100217795 | Hong | Aug 2010 | A1 |
20100231523 | Chou | Sep 2010 | A1 |
20100245251 | Yuan et al. | Sep 2010 | A1 |
20100251304 | Donoghue et al. | Sep 2010 | A1 |
20100306139 | Wu et al. | Dec 2010 | A1 |
20100306248 | Bao et al. | Dec 2010 | A1 |
20100309137 | Lee | Dec 2010 | A1 |
20110014952 | Minton | Jan 2011 | A1 |
20110041077 | Reiner | Feb 2011 | A1 |
20110060761 | Fouts | Mar 2011 | A1 |
20110066431 | Ju et al. | Mar 2011 | A1 |
20110087483 | Hsieh et al. | Apr 2011 | A1 |
20110107265 | Buchanan et al. | May 2011 | A1 |
20110131642 | Hamura et al. | Jun 2011 | A1 |
20110137635 | Chalabi et al. | Jun 2011 | A1 |
20110161080 | Ballinger et al. | Jun 2011 | A1 |
20110161311 | Mishne et al. | Jun 2011 | A1 |
20110173172 | Hong et al. | Jul 2011 | A1 |
20110178981 | Bowen et al. | Jul 2011 | A1 |
20110184723 | Huang et al. | Jul 2011 | A1 |
20110188756 | Lee et al. | Aug 2011 | A1 |
20110191321 | Gade et al. | Aug 2011 | A1 |
20110201387 | Paek et al. | Aug 2011 | A1 |
20110202836 | Badger et al. | Aug 2011 | A1 |
20110202876 | Badger et al. | Aug 2011 | A1 |
20110219299 | Scalosub | Sep 2011 | A1 |
20110258535 | Adler, III et al. | Oct 2011 | A1 |
20110282903 | Zhang | Nov 2011 | A1 |
20110289105 | Hershowitz | Nov 2011 | A1 |
20110296324 | Goossens et al. | Dec 2011 | A1 |
20120016678 | Gruber et al. | Jan 2012 | A1 |
20120019446 | Wu et al. | Jan 2012 | A1 |
20120022853 | Ballinger et al. | Jan 2012 | A1 |
20120023103 | Soderberg et al. | Jan 2012 | A1 |
20120029902 | Lu et al. | Feb 2012 | A1 |
20120035932 | Jitkoff et al. | Feb 2012 | A1 |
20120036468 | Colley | Feb 2012 | A1 |
20120041752 | Wang et al. | Feb 2012 | A1 |
20120060113 | Sejnoha et al. | Mar 2012 | A1 |
20120060147 | Hong et al. | Mar 2012 | A1 |
20120078611 | Soltani et al. | Mar 2012 | A1 |
20120113011 | Wu et al. | May 2012 | A1 |
20120117506 | Koch et al. | May 2012 | A1 |
20120143897 | Wei et al. | Jun 2012 | A1 |
20120173222 | Wang et al. | Jul 2012 | A1 |
20120222056 | Donoghue et al. | Aug 2012 | A1 |
20120297294 | Scott et al. | Nov 2012 | A1 |
20120297332 | Changuion et al. | Nov 2012 | A1 |
20130016113 | Adhikari et al. | Jan 2013 | A1 |
20130054617 | Colman | Feb 2013 | A1 |
20130091409 | Jeffery | Apr 2013 | A1 |
20130132359 | Lee et al. | May 2013 | A1 |
20130159920 | Scott et al. | Jun 2013 | A1 |
20130298030 | Nahumi et al. | Nov 2013 | A1 |
20130346872 | Scott et al. | Dec 2013 | A1 |
20140040238 | Scott et al. | Feb 2014 | A1 |
20150106702 | Scott et al. | Apr 2015 | A1 |
20150121291 | Scott et al. | Apr 2015 | A1 |
20150370833 | Fey | Dec 2015 | A1 |
20160196150 | Jing et al. | Jul 2016 | A1 |
Number | Date | Country |
---|---|---|
1609764 | Apr 2005 | CN |
1851617 | Oct 2006 | CN |
1908863 | Feb 2007 | CN |
101183355 | May 2008 | CN |
101276245 | Oct 2008 | CN |
101286092 | Oct 2008 | CN |
101286093 | Oct 2008 | CN |
101286094 | Oct 2008 | CN |
101587471 | Nov 2009 | CN |
101661474 | Mar 2010 | CN |
102012748 | Apr 2011 | CN |
102056335 | May 2011 | CN |
102144228 | Aug 2011 | CN |
102193643 | Sep 2011 | CN |
102314441 | Jan 2012 | CN |
102314461 | Jan 2012 | CN |
2000148748 | May 2000 | JP |
2011507099 | Mar 2011 | JP |
2012008874 | Jan 2012 | JP |
2012094156 | May 2012 | JP |
WO2010105440 | Sep 2010 | WO |
Entry |
---|
Damper, “Self-Learning and Connectionist Approaches to Text-Phoneme Conversion”, retrieved on May 26, 2010 at <<ftp://ftp.cogsci.ed.ac.uk/pub/joe/newbull.ps>>, UCL Press, Connectionist Models of Memory and Language, 1995, pp. 117-144. |
“Database”, Microsoft Computer Dictionary, Fifth Edition, retrieved on May 13, 2011, at <<http://academic.safaribooksonline.com/book/communications/0735614954>>, Microsoft Press, May 1, 2002, 2 pages. |
“File”, Microsoft Computer Dictionary, Fifth Edition, retrieved on May 13, 2011, at <<http://academic.safaribooksonline.com/book/communications/0735614954>>, Microsoft Press, May 1, 2002, 2 pages. |
Kumar, “Google launched Input Method editor—type anywhere in your language”, retrieved at <<http://shoutingwords.com/google-launched-input-method-editor.html>>, Mar. 2010, 12 pages. |
Office action for U.S. Appl. No. 12/693,316, mailed on Oct. 30, 2013, Huang, et al., “Phonetic Suggestion Engine”, 24 pages. |
Office action for U.S. Appl. No. 12/693,316, mailed on Jun. 19, 2013, Huang et al., “Phonetic Suggestion Engine”, 20 pages. |
“Search Engine”, Microsoft Computer Dictionary, Mar. 2002 , Fifth Edition, pp. 589. |
Wikipedia, “Selection Based Search”, retrieved Mar. 23, 2012 at http://en.wikipedia.org/wiki/Selection based search, 3 pgs. |
Wikipedia, “Soundex”, retrieved on Jan. 20, 2010 at http://en.wikipedia.org/wiki/soundex, 3 pgs. |
Office action for U.S. Appl. No. 12/693,316, mailed on Oct. 16, 2014, Huang, et al., “Phonetic Suggestion Engine”, 24 pages. |
Office Action for U.S. Appl. No. 13/315,047, mailed on Oct. 2, 2014, Weipeng Liu, “Sentiment Aware User Interface Customization”, 12 pages. |
Millward, Steven, “Baidu Japan Acquires Simeji Mobile App Team, for added Japanese Typing fun,” Published Dec. 13, 2011, 3 pages. |
Final Office Action for U.S. Appl. No. 13/109,021, mailed on Jan. 11, 2013, Scott et al., “Network Search for Writing Assistance”, 16 pages. |
Lenssen, Philipp, “Google Releases Pinyin Converter,” Published Apr. 4, 2007, retrieved at <<http://blogoscoped.com/archive/2007-04-04-n49.html>>, 3 pages. |
Office action for U.S. Appl. No. 13/567,305, mailed on Jan. 30, 2014, Scott, et al., “Business Intelligent in-Document Suggestions”, 14 pages. |
Office action for U.S. Appl. No. 13/315,047, mailed on Feb. 12, 2014, Liu, et al., “Sentiment Aware User Interface Customization”, 14 pages. |
Office Action for U.S. Appl. No. 13/109,021, mailed on Aug. 21, 2012, Scott et al., “Network Search for Writing Assistance”, 19 pages. |
Office Action for U.S. Appl. No. 13/109,021, mailed on Sep. 25, 2013, Scott et al., “Network Search for Writing Assistance”, 18 pages. |
International Search Report & Written Opinion for PCT Patent Application No. PCT/US2013/053321, Mailed Date: Oct. 1, 2013, Filed Date: Aug. 2, 2013, 9 Pages. |
Office Action for U.S. Appl. No. 13/109,021, mailed on Mar. 11, 2014, Dyer, A.R., “Network Search for Writing Assistance,” 18 pages. |
Office Action for U.S. Appl. No. 13/109,021, mailed on Jun. 19, 2014, Dyer, A.R., “Network Search for Writing Assistance,” 42 pages. |
Office action for U.S. Appl. No. 13/586,267, mailed on Jul. 31, 2014, Scott et al., “Input Method Editor Application Platform”, 20 pages. |
Non-Final Office Action for U.S. Appl. No. 13/331,023, mailed Aug. 4, 2014, Matthew Robert Scott et al., “Scenario-Adaptive Input Method Editor”, 20 pages. |
U.S. Appl. No. 12/960,258, filed Dec. 3, 2010, Wei et al., “Wild Card Auto Completion,” 74 pages. |
U.S. Appl. No. 13/109,021, filed May 17, 2011, Matthew Robert Scott, “Network Search for Writing Assistance,” 43 pages. |
U.S. Appl. No. 13/331,023, filed Dec. 20, 2011, Tony Hou, Weipeng Liu, Weijiang Xu, and Xi Chen, “Scenario-Adaptive Input Method Editor,” 57 pages. |
Ciccolini, Ramiro, “Baidu IME More literate in Chinese input,” Published Sep. 15, 2011, <<http://www.itnews-blog.com/it/81630.html>> 4 pages. |
Millward, “Baidu Japan Acquires Simeji Mobile App Team, for added Japanese Typing fun,” Published Dec. 13, 2011, <<http://www.techinasia.com/baidu-japan-simeiji>>, 3 pages. |
Ciccolini, “Baidu IME More Literate in Chinese Input,” Published Sep. 15, 2011, <<http://www.itnews-blog.com/it/81630.html>>, 4 pages. |
Gamon et al., “Using Statistical Techniques and Web Search to Correct ESL Errors,” Published 2009, retrieved from <<http://research.microsoft.com/pubs/81312/Calico—published.pdf>>, CALICO Journal, vol. 26, No. 3, 2009, 21 pages. |
“Google launched Input Method editor—type anywhere in your language,” published Mar. 2, 2010, retrieved at <<http://shoutingwords.com/google-launched-input-method-editor.html>>, 12 pages. |
Lenssen, “Google Releases Pinyin Converter,” Published Apr. 4, 2007 <<http://blogoscoped.com/archive/2007-04-04-n49.html>>, 3 pages. |
“Google Scribe,” retrieved at <<http://www.scribe.googlelabs.com/>>, retrieved date: Feb. 3, 2011, 1 page. |
Google Transliteration Input Method (IME) Configuration, published Feb. 5, 2010, retrieved at <<http://www.technicstoday.com/2010/02/google-transliteration-input-method-ime-configuration/>>, pp. 1-13. |
Komasu et al., “Corpus-based Predictive Text Input,” Proceedings of the Third International Conference on Active Media Technology, May 2005, 6 pages. |
Lo et al., “Cross platform CJK input Method Engine,” IEEE International Conference on Systems, Man and Cybernetics, Oct. 6, 2002, retrieved at <<http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1175680>>, pp. 1-6. |
“Microsoft Research ESL Assistant,” retrieved at <<http://www.eslassistant.com/>>, retrieved date Feb. 3, 2011, 1 page. |
Mohan et al., “Input Method Configuration Overview,” Jun. 3, 2011, retrieved at <<http://gameware.autodesk.com/documents/gfx—4.0—ime.pdf>>, pp. 1-9. |
Scott et al., “Engkoo: Mining the Web for Language Learning,” in Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Systems Demonstrations, Jun. 21, 2011, 6 pages. |
Sowmya et al., “Transliteration Based Text Input Methods for Telugu,” <<http://content.imamu.edu.sa/Scholars/it/VisualBasic/2009—53.pdf>>, Proceedings of 22nd International Conference on Computer Processing of Oriental Languages. Language Technology for the Knowledge-based Economy (ICCPOL), Mar. 2009, pp. 122-132. |
Suematsu et al., “Network-Based Context-Aware Input Method Editor,” Proceedings: Sixth International Conference on Networking and Services (ICNS), Mar. 7, 2010, retrieved at <<http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5460679>>, pp. 1-6. |
Suzuki et al., “A Comparative Study on Language Model Adaptation Techniques Using New Evaluation Metrics,” in Proceedings of Human Language Technology Conference on Empirical Method in Natural Language Processing, Oct. 6, 2005, 8 pages. |
Windows XP Chinese Pinyin Setup, published Apr. 15, 2006, retrieved at <<http://www.pinyinjoe.com/pinyin/pinyin—setup.htm>>, pp. 1-10. |
Office action for U.S. Appl. No. 13/315,047, mailed on Apr. 24, 2014, Liu et al., “Sentiment Aware User Interface Customization”, 13 pages. |
Office action for U.S. Appl. No. 12/693,316, mailed on May 19, 2014, Huang et al., “Phonetic Suggestion Engine”, 22 pages. |
“Prose”, Dictionary.com, Jun. 19, 2014, 2 pgs. |
Office action for U.S. Appl. No. 13/586,267, mailed on Jan. 2, 2015, Scott et al., “Input Method Editor Application Platform”, 19 pages. |
Office action for U.S. Appl. No. 13/331,023, mailed on Feb. 12, 2015, Scott et al, “Scenario-Adaptive Input Method Editor”, 20 pages. |
European Office Action mailed Jun. 18, 2015 for European patent application No. 12879676.0, a counterpart foreign application of U.S. Appl. No. 13/635,306, 5 pages. |
Miessler, “7 Essential Firefox Quicksearches”, Retrieved from <<https:danielmiessler.com/blog/7-essential-firefox-quicksearches/>>, Published Aug. 19, 2007, 2 pages. |
Office action for U.S. Appl. No. 13/331,023, mailed on Jun. 26, 2015, Scott et al., “Scenario-Adaptive Input Method Editor”, 23 pages. |
Office action for U.S. Appl. No. 13/635,306, mailed on Aug. 14, 2015, Scott et al., “Input Method Editor”, 26 pages. |
Office Action for U.S. Appl. No. 13/109,021, mailed on Sep. 30, 2014, Dyer, A.R., “Network Search for Writing Assistance,” 17 pages. |
PCT International Preliminary Report on Patentability mailed Mar. 12, 2015 for PCT Application No. PCT/CN2012/080749, 8 pages. |
Supplementary European Search Report mailed Jul. 16, 2015 for European patent application No. 12880149.5, 5 pages. |
Supplementary European Search Report mailed Sep. 14, 2015 for European patent application No. 12879804.8, 5 pages. |
Office action for U.S. Appl. No. 13/315,047, mailed on Sep. 24, 2015, Liu et al., “Sentiment Aware User Interface Customization”, 12 pages. |
Office action for U.S. Appl. No. 13/635,219, mailed on Sep. 29, 2015, Scott et al., “Cross Lingual Input Method Editor”, 14 pages. |
Supplementary European Search Report mailed Nov. 12, 2015 for European patent application No. 12880149.5, 7 pages. |
Guo et al., “NaXi Pictographs Input Method and WEFT”, Journal of Computers, vol. 5, No. 1, Jan. 2010, pp. 117-124. |
Office action for U.S. Appl. No. 13/586,267 mailed on Nov. 6, 2015, Scott et al., “Input Method Editor Application Platform”, 22 pages. |
European Office Action mailed Oct. 8, 2015 for European patent application No. 12879804.8, a counterpart foreign application of U.S. Appl. No. 13/586,267, 9 pages. |
Ben-Haim, et al., “Improving Web-based Image Search via Content Based Clustering”, Proceedings of the 2006 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW '06), IEEE, Jun. 17, 2006, 6 pages. |
Berg, et al., “Animals on the Web”, Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '06), vol. 2, IEEE, Jun. 17, 2006, pp. 1463-1470. |
Partial Supplementary European Search Report mailed Oct. 26, 2015 for European patent application No. 12883902.4, 7 pages. |
Office action for U.S. Appl. No. 13/331,023 mailed on Nov. 20, 2015, Scott et al., “Scenario-Adaptive Input Method Editor”, 25 pages. |
U.S. Appl. No. 13/635,219, filed Sep. 14, 2011, Scott, et al., “Cross-Lingual Input Method Editor”. |
Dinamik-Bot, et al., “Input method”, retrieved on May 6, 2015 at <<http://en.wikipedia.org/w/index.php?title=Input—method&oldid=496631911>>, Wikipedia, the free encyclopedia, Jun. 8, 2012, 4 pages. |
Engkoo Pinyin Redefines Chinese Input, Published on: May 13, 2013, Available at: http://research.microsoft.com/en-us/news/features/engkoopinyinime-051313.aspx. |
“English Assistant”, Published on: Apr. 19, 2013, Available at: http://bing.msn.cn/pinyin/. |
Supplementary European Search Report mailed May 20, 2015 for European Patent Application No. 12879676.0, 3 pages. |
“Innovative Chinese Engine”, Published on: May 2, 2013, Available at: http://bing.msn.cn/pinyin/help.shtml. |
“Input Method (IME)”, Retrieved on: Jul. 3, 2013, Available at: http://www.google.co.in/inputtools/cloud/features/input-method.html. |
International Search Report & Written Opinion for PCT Patent Application No. PCT/CN2013/081156, mailed May 5, 2014; filed Aug. 9, 2013, 14 pages. |
Office action for U.S. Appl. No. 13/635,219, mailed on Mar. 13, 2015, Scott et al., “Cross-Lingual Input Method Editor”, 21 pages. |
Office action for U.S. Appl. No. 13/635,306, mailed on Mar. 27, 2015, Scott et al., “Input Method Editor”, 18 pages. |
Office action for U.S. Appl. No. 13/315,047, mailed on Apr. 28, 2015, Liu et al., “Sentiment Aware User Interface Customization”, 12 pages. |
Office action for U.S. Appl. No. 13/586,267, mailed on May 8, 2015, Scott et al., “Input Method Editor Application Platform”, 18 pages. |
European Search Report mailed February 18, 2016 for European patent application No. 12883902.4, 7 pages. |
European Office Action mailed Mar. 1, 2016 for European Patent Application No. 12883902.4, a counterpart foreign application of U.S. Appl. No. 13/701,008, 8 pages. |
Office action for U.S. Appl. No. 13/635,306, mailed on Feb. 25, 2016, Scott et al., “Input Method Editor”, 29 pages. |
PCT International Preliminary Report on Patentability mailed Feb. 18, 2016 for PCT Application No. PCT/CN2013/081156, 8 pages. |
Translated Chinese Office Action mailed Jun. 3, 2016 for Chinese Patent Application No. 201280074382.1, a counterpart foreign application of U.S. Appl. No. 13/635,219, 18 pages. |
Office action for U.S. Appl. No. 13/635,219, mailed on Mar. 24, 2016, Scott et al., “Cross-Lingual Input Method Editor”, 29 pages. |
Office action for U.S. Appl. No. 13/586,267, mailed on Jun. 7, 2016, Scott et al., “Input Method Editor Application Platform”, 24 pages. |
Chinese Office Action mailed Jun. 28, 2016 for Chinese Patent Application No. 201280074281.4, a counterpart foreign application of U.S. Appl. No. 13/586,267. |
Supplementary European Search Report mailed Jul. 6, 2016 for European patent application No. 13891201.9, 4 pages. |
Translated Japanese Office Action mailed May 24, 2016 for Japanese patent application No. 2015-528828, a counterpart foreign application of U.S. Appl. No. 13/701,008, 17 pages. |
European Office Action mailed Jul. 19, 2016 for European Patent Application No. 13891201.9, a counterpart foreign application of U.S. Appl. No. 14/911,247, 7 pages. |
European Office Action mailed Jul. 19, 2016 for European patent application No. 12880149.5, a counterpart foreign application of U.S. Appl. No. 13/635,219, 7 pages. |
Office action for U.S. Appl. No. 13/635,306, mailed on Jul. 28, 2016, Scott et al., “Input Method Editor”, 24 pages. |
Final Office Action for U.S. Appl. No. 13/635,219, mailed on Aug. 10, 2016, Matthew Robert Scott, “Cross-Lingual Input Method Editor”, 29 pages. |
Chinese Office Action mailed Jan. 3, 2017 for Chinese patent application No. 201280074383.6, a counterpart foreign application of U.S. Appl. No. 13/635,306. |
Chinese Office Action mailed Feb. 3, 2017 for Chinese patent application No. 201280074382.1, a counterpart foreign application of U.S. Appl. No. 13/635,219. |
European Office Action mailed Dec. 22, 2016 for European patent application No. 12880149.5, a counterpart foreign application of U.S. Appl. No. 13/635,219, 11 pages. |
Japanese Office Action mailed Oct. 31, 2016 for Japanese Patent Application No. 2015-528828, a counterpart foreign application of U.S. Appl. No. 13/701,008. |
Office action for U.S. Appl. No. 13/635,219, mailed on Nov. 14, 2016, Scott et al., “Cross-Lingual Input Method Editor”, 27 pages. |
Office action for U.S. Appl. No. 13/701,008, mailed on Nov. 30, 2016, Wang et al., “Feature-Based Candidate Selection”, 21 pages. |
Chinese Office Action dated Mar. 24, 2017 for Chinese Patent Application No. 201280074281.4, a counterpart foreign application of U.S. Appl. No. 13/586,267, 28 pgs. |
Chinese Office Action dated Jun. 19, 2017 for Chinese Patent Application No. 201280075557.0, a counterpart foreign application of U.S. Appl. No. 13/701,008. |
Number | Date | Country | |
---|---|---|---|
20150161126 A1 | Jun 2015 | US |