This non-provisional utility application claims priority to UK patent application number 1610984.5 entitled “SUPPRESSION OF INPUT IMAGES” and filed on Jun. 23, 2016, which is incorporated herein in its entirety by reference.
Computing devices, such as mobile phones, portable and tablet computers, wearable computers, head-worn computers, game consoles, and the like are often deployed with soft keyboards for text input and/or other interaction with the computing device. A soft keyboard is a keyboard displayed on a screen or other surface, where user input associated with the displayed keys triggers input of text characters to a computing device. When a user operates the soft keyboard the computing device employs text prediction to predict and offer candidate words or phrases to the user and also to predict and offer candidate emoji to the user. The term “emoji” as used herein refers to ideograms, smileys, pictographs, emoticons and other graphic representations. Emoji are often used in place of words or phrases, or in conjunction with words or phrases, but this is not essential.
The Unicode (6.0) standard allocates 722 code points as descriptions of emojis (examples include U+1F60D: Smiling face with heart-shaped eyes and U+1F692: Fire engine). Specified images are used to render each of these Unicode characters so that they may be sent and received. Although it is popular to input emojis, it remains difficult to do so, because the user has to discover appropriate emojis and, even knowing the appropriate emoji, has to navigate through a great number of possible emojis to find the one they want to input.
The embodiments described below are not limited to implementations which solve any or all of the disadvantages of known processes and/or apparatus for inputting images such as emoji to electronic devices.
The following presents a simplified summary of the disclosure in order to provide a basic understanding to the reader. This summary is not intended to identify key features or essential features of the claimed subject matter nor is it intended to be used to limit the scope of the claimed subject matter. Its sole purpose is to present a selection of concepts disclosed herein in a simplified form as a prelude to the more detailed description that is presented later.
A computing device is described which has a memory storing at least one indicator of image use; a user interface which receives user input; and a processor configured to trigger prediction, from the user input, of a plurality of candidate images for input to the computing device. The processor is configured to at least partially suppress the prediction of the plurality of images using the indicator of image use.
In this way it is possible, but not essential, to automatically control whether or not images such as emoji are available as candidates for entry to the computing device by using the at least one indicator. It is also possible, but not essential, to automatically control other factors such as how many images are available as candidates for entry and/or how often images are available as candidates for entry. The user is thus more easily able to enter data into the computing device.
Many of the attendant features will be more readily appreciated as the same becomes better understood by reference to the following detailed description considered in connection with the accompanying drawings.
The present description will be better understood from the following detailed description read in light of the accompanying drawings, wherein:
Like reference numerals are used to designate like parts in the accompanying drawings.
The detailed description provided below in connection with the appended drawings is intended as a description of the present examples and is not intended to represent the only forms in which the present examples are constructed or utilized. The description sets forth the functions of the example and the sequence of operations for constructing and operating the example. However, the same or equivalent functions and sequences may be accomplished by different examples.
In the examples described herein an end user is able to insert relevant images to an electronic device in appropriate situations, and is otherwise not burdened with candidate image predictions in situations when he or she is unlikely to input an image to the computing device. This is achieved in a behind the scenes manner, without the need for manual switching on or off of candidate image prediction functionality by the user. Examples below also describe how the quantity and/or frequency of candidate image predictions available to a user to select from is dynamically adjusted in an automatic manner according to likelihood of a user inputting an image. The computing device has access to one or more indicators of image use and these enable it to assess the likelihood of a user inputting an image.
The indicators of image use are described in more detail below. In summary, an indicator of image use is a measure of how likely a user is to input any image (such as any emoji) to the computing device in a particular situation or context. This is in contrast to a measure of how likely a user is to input a particular image. By using one or more of the indicators the computing device enables more effective input of images. For example, one or more of the indicators are used to suppress candidate image predictions produced by a predictor in situations where the user is unlikely to want to input an image. The suppression occurs automatically so that no manual input is needed from the user to adjust the prediction functionality, and the suppression enables resources of the computing device to be allocated to other tasks. In some examples the indicators are learnt as described in more detail below.
Each indicator is a statistical representation of observed image use. For example, an indicator is one or more numerical values, such as a ratio, mean, median, mode, average, variance or other statistic describing observed image use. The indicators are application specific in some examples, that is, they are statistics describing observed image use in conjunction with a particular application executing on the computing device. In some examples the indicators are user specific, that is they are statistics describing observed image use by a particular user. Indicators which are combinations of application specific and user specific are used in some examples. Indicators are used which are any combination of one or more of: application specific, user specific, field specific, enterprise specific, recipient specific, user age group specific, user gender specific, topic specific, population specific, community specific, language specific, geographical region specific, time zone specific, time of year specific.
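By way of illustration only, the following sketch shows how such indicators might be maintained as ratios computed from counts of observed image use; the class and method names (ImageUseIndicators, record and so on) are hypothetical and are not taken from the examples above.

```python
from collections import defaultdict

class ImageUseIndicators:
    """Hypothetical store of image-use statistics, kept per application
    and per user. An indicator here is the ratio of image candidates
    selected to image candidates offered, as described above."""

    def __init__(self):
        # counts[(user_id, app_id)] = [selected_count, offered_count]
        self.counts = defaultdict(lambda: [0, 0])

    def record(self, user_id: str, app_id: str, offered: int, selected: int) -> None:
        entry = self.counts[(user_id, app_id)]
        entry[0] += selected
        entry[1] += offered

    def app_specific_ratio(self, user_id: str, app_id: str) -> float:
        selected, offered = self.counts[(user_id, app_id)]
        return selected / offered if offered else 0.0

    def general_ratio(self, user_id: str) -> float:
        selected = sum(v[0] for (u, _), v in self.counts.items() if u == user_id)
        offered = sum(v[1] for (u, _), v in self.counts.items() if u == user_id)
        return selected / offered if offered else 0.0
```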
The indicators are calculated in advance in some cases by observing large amounts of data from large populations of users. In some cases the indicators are dynamically learnt during operation of the end user computing device. The indicators are learnt at the end user computing device and/or at a cloud service computing device or other remote computing device. The indicators are sent between computing devices and in some cases are stored in association with a user profile such that user specific indicators are available through a cloud service to whichever end user device a user is operating.
More detail about how indicators are learnt is given later in the document with reference to
End user devices such as mobile telephones 124, 114, tablet computers 116, wearable computers 118, laptop computers or other end user electronic devices are connected to a communications network 102 via wired or wireless links. The communications network 102 is the internet, an intranet, or any wired or wireless communications network. Also connected to the communications network is a prediction engine 100 comprising an image prediction component 106 which has been trained to predict, for given user input, a plurality of images which are relevant to the given user input. In some examples the prediction engine 100 also has a word/phrase prediction component 104 which is configured to predict, given user input, text characters, words or phrases, which are likely to follow the given user input. The prediction engine is implemented using any of software, hardware and firmware and has a communications interface arranged to receive user input 112 from an end user computing device 124, 114, 116, 118 and to send one or more predictions 110 to the end user computing devices. The user input 112 is of any type such as speech input, gesture input, keystroke input, touch input, eye movement input, combinations of one or more different types of user input, or other user input.
The prediction engine 100 also comprises an indicator updater 108 in some cases. The indicator updater 108 is described in more detail with reference to
The predictions comprise images and, in some examples, text characters, words or phrases. In some examples the predictions are ranked according to their probability or likelihood of being relevant to the user input 112. In various examples the quantity and/or frequency of predicted image candidates returned by the prediction engine 100 to an end user device in response to an instance of user input is controlled on the basis of one or more of the indicators. The instance of user input, in the case of text input, is a key stroke, character, phoneme, morpheme, word, phrase, sentence or other unit of text.
Once the end user computing device receives one or more predictions it outputs the predictions. For example, to a panel 122 above a soft keyboard. In the example of
A prediction tool at the end user computing device (or at the prediction engine 100) has access to one or more of the indicators and in some examples is able to suppress the availability of image candidates for user selection. The prediction tool is software, hardware or firmware at the end user computing device and/or at the prediction engine 100. In some examples the prediction tool is configured to control the quantity or the frequency of image candidates for user selection, on the basis of one or more of the indicators.
It is noted that the deployment of
In some cases the display is a screen 302; in other cases it is a virtual reality or augmented reality display which projects into the environment or into the user's eye. The computing device 300 has one or more sensors 306 such as a touch screen, a microphone, cameras, or other sensors which detect user input to enable a user to input text and make selections regarding criteria for image input. The sensors provide input to a device operating system 308 which is connected to the user interface system 310. The user interface system implements a soft keyboard at the computing device in some examples where the soft keyboard has prediction capability. The device operating system 308 controls a renderer 304 configured to render graphical content such as a graphical user interface to optional screen 302 or to any other means for displaying graphical content to an end user. The end user computing device 300 has a memory 316 which stores indicators of emoji/image use 320 and optionally other data, one or more processors 316 and a communications interface 318. The end user computing device has various other components which are not illustrated for clarity and are described in more detail below with reference to
Alternatively, or in addition, the functionality of the prediction tool 312 and indicator updater 322 is performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that are optionally used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), Graphics Processing Units (GPUs).
Through the use of the prediction engine 100 of
In the examples described herein the user does not have to provide shorthand text that identifies a particular emoji and does not need to type in an exact description of an emoji. In this way the end user is able to insert relevant images to an electronic device with minimal effort.
The prediction tool 400 selects 402 one or more stored indicators. The stored indicators are available at the computing device (see 320 of
The prediction tool 400 requests 404 prediction candidates from the prediction engine 100 using suppression. For example the prediction tool 400 sends a request message to the prediction engine 100 where the request message comprises detail about the user input and also comprises the selected stored indicator(s). The selected stored indicator(s) are used to suppress or control availability of image prediction candidates to the user. This is achieved in any of a variety of different ways illustrated at boxes 406, 408, 410 or hybrids of one or more of these ways.
Suppression through control 406 of the prediction engine is achieved by inputting the selected indicator(s) to the prediction engine 100 together with other inputs based on the user input data such that the output of the prediction engine is modified (as compared with the situation where only the other inputs are made to the prediction engine). For example, the proportion of text predictions and the proportion of image predictions in the total number of predictions is adjusted.
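A minimal sketch of this form of control follows, assuming the prediction engine returns separately ranked lists of word and image candidates as (candidate, score) pairs; the function name and the simple linear mapping from indicator to image share are illustrative assumptions only.

```python
def blend_predictions(word_candidates, image_candidates, indicator, total_slots=6):
    """Adjust the proportion of image predictions in the combined output
    according to an image-use indicator in the range 0..1.

    word_candidates / image_candidates: lists of (candidate, score) pairs,
    each already ranked by the prediction engine.
    """
    # Illustrative assumption: the share of slots given to images grows
    # linearly with the indicator, and is zero when the indicator is zero.
    image_slots = round(total_slots * max(0.0, min(1.0, indicator)))
    word_slots = total_slots - image_slots
    blended = word_candidates[:word_slots] + image_candidates[:image_slots]
    # Return a single ranked list for display.
    return sorted(blended, key=lambda pair: pair[1], reverse=True)

# Example: with a low indicator most slots go to words.
words = [("great", 0.4), ("good", 0.3), ("fine", 0.1)]
emoji = [("\U0001F60D", 0.35), ("\U0001F525", 0.2)]
print(blend_predictions(words, emoji, indicator=0.15, total_slots=3))
```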
Suppression through use of a filter 408 is achieved by post-processing the output of the prediction engine 100 as described with reference to
Suppression through amplification or reduction 410 is achieved by post-processing the output of the prediction engine 100 as described with reference to
After the suppression stage the prediction tool outputs 412 candidate image predictions at a quantity (which may be zero in some cases) and/or a rate (or frequency, such as the number of times candidate images are available over time) influenced by the suppression process. In this way image prediction candidates are available to the user for selection and input to the electronic device as and when they are required. A trade-off between use of display area for text candidate predictions and for image candidate predictions is carefully managed in an automatic manner so that burden on the user is reduced and more efficient and accurate entry of image data (and text data where appropriate) is achieved. Where the total number of candidate predictions available for selection by a user at any one time is a maximum of three, the process of
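To picture the frequency aspect, the following sketch gates whether image candidates appear at all for a given instance of user input, so that over many instances the rate at which images are offered tracks the indicator; the use of a pseudo-random draw as the gate is purely an illustrative assumption.

```python
import random

def image_candidates_allowed(indicator: float, rng=random) -> bool:
    """Decide, for one instance of user input, whether image candidates
    are made available at all. Over many instances the frequency with
    which images appear tracks the indicator (a value in 0..1)."""
    return rng.random() < max(0.0, min(1.0, indicator))

def present_candidates(word_candidates, image_candidates, indicator, slots=3):
    if not image_candidates_allowed(indicator):
        return word_candidates[:slots]          # fully suppressed this time
    mixed = word_candidates + image_candidates
    mixed.sort(key=lambda pair: pair[1], reverse=True)
    return mixed[:slots]
```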
The computing device selects one or more indicators from a store of image use indicators as described above. For example, the computing device uses an identifier of the in-focus application, available to it from an operating system of the computing device, to select an image use indicator associated with the application identifier. The prediction tool of the computing device checks 506 if the indicator meets criteria. For example, if the indicator is below a threshold, this indicates that emojis are rarely observed in use with this application. In this case the prediction tool of the computing device switches off the emoji prediction capability of the prediction engine. This is achieved by deactivating part of the prediction engine, or by filtering out emoji predictions generated by the prediction engine. If the indicator does not meet the criteria at check 506 the process repeats.
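A minimal sketch of this check is given below, assuming the indicators are held in a dictionary keyed by application identifier and that the prediction engine exposes a hypothetical set_emoji_enabled method; the threshold value is likewise illustrative.

```python
EMOJI_THRESHOLD = 0.05   # illustrative value; the description does not fix one

def update_emoji_capability(in_focus_app_id, indicators, prediction_engine):
    """Switch emoji prediction off when the application-specific indicator
    shows that emoji are rarely selected with the in-focus application."""
    indicator = indicators.get(in_focus_app_id)
    if indicator is None:
        return                                  # no evidence yet; leave as-is
    if indicator < EMOJI_THRESHOLD:
        # Either deactivate the emoji part of the prediction engine...
        prediction_engine.set_emoji_enabled(False)
    else:
        # ...or leave it on; filtering its output is an alternative.
        prediction_engine.set_emoji_enabled(True)

# Usage with made-up data and a stand-in engine object.
app_indicators = {"com.example.chat": 0.42, "com.example.banking": 0.01}

class _FakeEngine:
    def set_emoji_enabled(self, enabled: bool) -> None:
        print("emoji prediction enabled:", enabled)

update_emoji_capability("com.example.banking", app_indicators, _FakeEngine())
```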
The application specific ratio 600 is a ratio of the number of image prediction candidates which have been selected in connection with the application to the total number of image prediction candidates which have been available for selection in connection with the application. The general ratio 602 is a ratio of the number of image prediction candidates which have been selected to the total number of image prediction candidates which have been available for selection. The ratios 600, 602 are user specific in some examples although this is not essential.
The prediction engine outputs prediction candidates which are shown in the uppermost table of dotted region 604. In this example there are three word candidates and three emoji candidates and each candidate has an associated statistical value generated by the prediction engine 100. The candidates are ranked in the table according to the statistical values as the statistical values represent likelihood that the candidate will be selected by a user.
The ratios 600, 602 (or other indicator(s)) are combined to form a numerical value called a multiplier. In the example of
Without use of the multiplier the candidates of the uppermost table produce a keyboard display 608 with the most likely word in the center of the candidate row, and with word 2 and emoji 1 either side of the most likely word, word 1. With use of the multiplier the candidates of the lowermost table produce a keyboard display 608 in which the most likely candidate in the center of the candidate row is emoji 1, with emoji 2 and word 1 either side.
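The following sketch illustrates the multiplier mechanism; the way the two ratios are combined (the application-specific ratio taken relative to the general ratio) is an assumption made only for this example, as is the particular set of statistical values.

```python
def compute_multiplier(app_ratio: float, general_ratio: float) -> float:
    """Combine two image-use indicators into a multiplier. Taking the
    application-specific ratio relative to the general ratio is an
    illustrative assumption; it exceeds 1 when emoji are selected more
    often with this application than in general."""
    if general_ratio <= 0.0:
        return 1.0
    return app_ratio / general_ratio

def rerank_with_multiplier(candidates, multiplier):
    """candidates: list of (candidate, statistical_value, is_emoji).
    Emoji candidates have their statistical values scaled by the
    multiplier before the whole list is re-ranked."""
    scaled = [
        (cand, value * multiplier if is_emoji else value, is_emoji)
        for cand, value, is_emoji in candidates
    ]
    return sorted(scaled, key=lambda item: item[1], reverse=True)

# After scaling, emoji 1 outranks word 1, which is what moves it to the
# centre of the candidate row in the example above.
candidates = [("word 1", 0.30, False), ("word 2", 0.25, False),
              ("emoji 1", 0.20, True), ("emoji 2", 0.18, True)]
print(rerank_with_multiplier(candidates, compute_multiplier(0.6, 0.3)))
```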
If an emoji candidate is selected 712 with a long press 710 (indicating that the emoji is to be blacklisted) then an instance of negative evidence 716 is observed and the indicator updater 720 updates the indicators 704. If no emoji candidate is selected 712 and the user accesses an emoji panel 714, by actively selecting an emoji panel, then an instance of positive evidence is observed and is used by the indicator updater 720 to update the indicators 704. If no emoji is selected 712 and no emoji panel is accessed 714 then an instance of negative evidence 716 is observed and the indicator updater 720 updates the indicators 704 accordingly. The indicator updater uses update rules to update the indicators according to instances of positive or negative evidence. A time decay process 706 is applied to the indicators 704 in some cases to enable changes in user behavior over time to be taken into account.
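As a rough sketch of one possible updater, the indicator below is maintained as a ratio of exponentially decayed counts of positive and negative evidence, which also gives the time-decay behaviour; the class name, the decay constant and the update rule are assumptions, not details taken from the description above.

```python
class IndicatorUpdater:
    """Hypothetical updater: the indicator is the share of positive
    evidence among exponentially decayed evidence counts."""

    def __init__(self, decay: float = 0.99):
        self.decay = decay          # applied per update; illustrative value
        self.positive = 0.0
        self.negative = 0.0

    def update(self, positive_evidence: bool) -> None:
        # Older evidence is discounted so changes in behaviour show through.
        self.positive *= self.decay
        self.negative *= self.decay
        if positive_evidence:
            self.positive += 1.0
        else:
            self.negative += 1.0

    @property
    def indicator(self) -> float:
        total = self.positive + self.negative
        return self.positive / total if total else 0.0

# Example of the evidence rules described above.
updater = IndicatorUpdater()
updater.update(positive_evidence=True)    # emoji panel actively opened
updater.update(positive_evidence=False)   # long press to blacklist, or no emoji activity
print(updater.indicator)
```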
More detail about the prediction engine 100 of
The prediction engine 100 comprises a language model in some examples. The prediction engine comprises a search engine in some examples. The prediction engine comprises a classifier in some examples.
An example in which the prediction engine 100 comprises a language model is now described. The prediction engine 100 comprises an image language model to generate image predictions and, optionally, word prediction(s). The image language model may be a generic image language model, for example a language model based on the English language, or may be an application-specific image language model, e.g. a language model trained on short message service messages or email messages, or any other suitable type of language model. The prediction engine 100 may comprise any number of additional language models, which may be text-only language models or image language models.
If the prediction engine 100 comprises one or more additional language models, the prediction engine 100 comprises a multi-language model (Multi-LM) to combine the image predictions and/or word predictions sourced from each of the language models, to generate final image predictions and/or final word predictions that may be provided to a user interface for display and user selection. The final image predictions are preferably a set (i.e. a specified number) of the overall most probable predictions.
If the additional language model is a standard word-based language model, it is used alongside the image-based language model, such that the prediction engine 100 generates an image prediction from the image language model and a word prediction from the word-based language model. If preferred, the image/word based language model may also generate word predictions which are used by the Multi-LM to generate a final set of word predictions. Since the additional language model of this embodiment can predict words only, the Multi-LM is not needed to output final image predictions. The word-based language model 104 of
If the additional language model 104 of
An example of an image language model is now described. There are two possible inputs into a given language model, a current term input and a context input. The language model may use either or both of the possible inputs. The current term input comprises information the system has about the term the system is trying to predict, e.g. the word the user is attempting to enter (e.g. if the user has entered “I am working on ge”, the current term input 11 is ‘ge’). This could be a sequence of multi-character keystrokes, individual character keystrokes, the characters determined from a continuous touch gesture across a touchscreen keypad, or a mixture of input forms. The context input comprises the sequence of terms entered so far by the user, directly preceding the current term (e.g. “I am working”), and this sequence is split into ‘tokens’ by the Multi-LM or a separate tokenizer. If the system is generating a prediction for the nth term, the context input will contain the preceding n−1 terms that have been selected and input into the system by the user. The n−1 terms of context may comprise a single word, a sequence of words, or no words if the current word input relates to a word beginning a sentence. A language model may comprise an input model (which takes the current term input as input) and a context model (which takes the context input as input).
For example, the language model comprises a trie (an example of an input model) and a word-based n-gram map (an example of a context model) to generate word predictions from current input and context respectively. The language model comprises an intersection to compute a final set of word predictions from the predictions generated by the trie and n-gram map. The trie can be a standard trie or an approximate trie which is queried with the direct current word-segment input. Alternatively, the trie can be a probabilistic trie which is queried with a KeyPressVector generated from the current input. The language model can also comprise any number of filters to generate the final set of word predictions. If desired, the intersection of the language model is configured to employ a back-off approach if a candidate predicted by the trie has not been predicted by the n-gram map, rather than retaining only candidates generated by both. Each time the system has to back off on the context searched for, the intersection mechanism applies a ‘back-off’ penalty to the probability (which may be a fixed penalty, e.g. applied by multiplying by a fixed value). In this embodiment, the context model (e.g. the n-gram map) may comprise unigram probabilities with the back-off penalties applied.
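The following greatly simplified sketch shows the intersection of an input model and a context model with a back-off penalty; plain dictionaries stand in for the trie and the n-gram map, the context is restricted to a single preceding word, and the fixed penalty value is an illustrative assumption.

```python
BACKOFF_PENALTY = 0.1   # illustrative fixed multiplicative penalty

# Minimal input model: map from word to score, filtered by the typed prefix.
def trie_lookup(trie: dict, prefix: str):
    return {w: p for w, p in trie.items() if w.startswith(prefix)}

def predict(current_input, context, trie, ngram_map, unigrams):
    """Intersect trie candidates (from the current, partial word) with
    n-gram candidates (from the preceding context), backing off to
    penalised unigram probabilities when no n-gram evidence exists."""
    candidates = trie_lookup(trie, current_input)
    context_key = tuple(context[-1:])           # bigram context for simplicity
    contextual = ngram_map.get(context_key, {})
    results = {}
    for word, input_score in candidates.items():
        if word in contextual:
            results[word] = input_score * contextual[word]
        else:
            # Back-off: keep the candidate but penalise it.
            results[word] = input_score * unigrams.get(word, 0.0) * BACKOFF_PENALTY
    return sorted(results.items(), key=lambda kv: kv[1], reverse=True)

trie = {"get": 1.0, "german": 1.0, "geared": 1.0}
ngram_map = {("on",): {"getting": 0.2, "german": 0.05}}
unigrams = {"get": 0.01, "german": 0.002, "geared": 0.0005}
print(predict("ge", ["I", "am", "working", "on"], trie, ngram_map, unigrams))
```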
In an example, the language model includes a word→image correspondence map, which maps each word of the language model to one or more relevant images/labels, e.g. if the word prediction is ‘pizza’, the language model outputs an image of a pizza (e.g. the pizza emoji) as the image prediction.
In some examples the word to image correspondence map is not needed since the n-gram map of the language model is trained on source data comprising images embedded in sections of text. In this case, the emojis are treated like words in the n-gram map when generating the language model, i.e. the n-gram map comprises emojis in the contexts in which they have been identified. The n-gram map comprises the probabilities associated with sequences of words and emojis, where emojis and words are treated in the same manner. In some cases the n-gram map with emojis is used without the trie so that images are predicted without the need for the current input. In some cases the n-gram map with emojis is used with the trie so that the intersection is computed to yield the word predictions and the image predictions (without the need for the correspondence map).
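A minimal sketch of training such a map on text with embedded emoji follows; bigrams and maximum-likelihood estimates are used purely to keep the example short.

```python
from collections import Counter, defaultdict

def train_ngram_with_emoji(corpus_sentences):
    """Build a bigram map from source text in which emoji appear inline,
    treating emoji tokens exactly like word tokens."""
    counts = defaultdict(Counter)
    for tokens in corpus_sentences:
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return {
        prev: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
        for prev, nxts in counts.items()
    }

# Emoji are just tokens in the training data, so they are predicted from
# context without any word-to-image correspondence map.
corpus = [["love", "pizza", "\U0001F355"], ["love", "you", "\U0001F60D"],
          ["love", "pizza", "tonight"]]
model = train_ngram_with_emoji(corpus)
print(model["pizza"])   # pizza emoji and 'tonight' each with probability 0.5
```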
In the case that a search engine is used as the prediction engine, the search engine has an image database comprising a statistical model associated with each image. The statistical models have been trained on sections of text associated with the particular image for that model. The statistical model is a text language model in some examples.
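The sketch below illustrates this arrangement with a smoothed unigram text model per image, the user's text being scored under each model; the smoothing scheme and the example data are assumptions made for illustration.

```python
from collections import Counter
import math

def train_image_model(texts):
    """Unigram language model over the text sections associated with one image."""
    counts = Counter(word for text in texts for word in text.lower().split())
    total = sum(counts.values())
    vocab = len(counts)
    # Add-one smoothing so unseen words do not zero out a score.
    return lambda word: (counts[word] + 1) / (total + vocab + 1)

def rank_images(user_text, image_models):
    """Score the user's text under each image's statistical model and
    return the images ranked by log-likelihood."""
    words = user_text.lower().split()
    scores = {
        image: sum(math.log(model(w)) for w in words)
        for image, model in image_models.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

models = {
    "\U0001F355": train_image_model(["pizza night", "ordering pizza for dinner"]),
    "\U0001F692": train_image_model(["fire engine on the way", "call the fire brigade"]),
}
print(rank_images("fancy some pizza later", models))
```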
In some examples the prediction engine comprises a classifier which has been trained on text data that has been pre-labeled with images. Any suitable type of machine learning classifier may be used which identifies to which of a set of categories a new observation belongs, on the basis of a training set of data containing observations or instances whose category membership is known. In some cases a neural network classifier is used.
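As a sketch only, the example below trains a naive Bayes text classifier (standing in for whatever classifier, such as a neural network, is actually used) on text snippets pre-labeled with images; it relies on scikit-learn and entirely made-up training data.

```python
# Requires scikit-learn; a naive Bayes text classifier stands in here for
# the classifier described above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data: text snippets pre-labeled with the image
# (emoji) that accompanied them.
texts = ["pizza for dinner tonight", "grabbing a slice of pizza",
         "there is a fire engine outside", "huge fire down the road"]
labels = ["\U0001F355", "\U0001F355", "\U0001F692", "\U0001F692"]

classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(texts, labels)
print(classifier.predict(["who wants pizza"]))   # predicts the pizza emoji
```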
Computing-based device 800 comprises one or more processors 802 which are microprocessors, controllers or any other suitable type of processors for processing computer executable instructions to control the operation of the device in order to input images such as emojis to the device, where the images are relevant to text input by the user. In some examples, for example where a system on a chip architecture is used, the processors 802 include one or more fixed function blocks (also referred to as accelerators) which implement a part of the method of any of
The computer executable instructions are provided using any computer-readable media that is accessible by computing based device 800. Computer-readable media includes, for example, computer storage media such as memory 812 and communications media. Computer storage media, such as memory 812, includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or the like. Computer storage media includes, but is not limited to, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM), electronic erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that is used to store information for access by a computing device. In contrast, communication media embody computer readable instructions, data structures, program modules, or the like in a modulated data signal, such as a carrier wave, or other transport mechanism. As defined herein, computer storage media does not include communication media. Therefore, a computer storage medium should not be interpreted to be a propagating signal per se. Although the computer storage media (memory 812) is shown within the computing-based device 800 it will be appreciated that the storage is, in some examples, distributed or located remotely and accessed via a network or other communication link (e.g. using communication interface 808).
The computing-based device 800 also comprises an input/output controller 810 arranged to output display information to a display device which may be separate from or integral to the computing-based device 800. The display information may provide a graphical user interface. The input/output controller 810 is also arranged to receive and process input from one or more devices, such as a user input device 814 (e.g. a mouse, keyboard, camera, microphone or other sensor). In some examples the user input device 814 detects voice input, user gestures or other user actions and provides a natural user interface (NUI). This user input may be used to select candidate images for input, input text, set criteria, configure rules and for other purposes. In an embodiment the display device also acts as the user input device 814 if it is a touch sensitive display device. The input/output controller 810 outputs data to devices other than the display device in some examples, e.g. a locally connected printing device.
Any of the input/output controller 810, display device and the user input device 814 may comprise NUI technology which enables a user to interact with the computing-based device in a natural manner, free from artificial constraints imposed by input devices such as mice, keyboards, remote controls and the like. Examples of NUI technology that are provided in some examples include but are not limited to those relying on voice and/or speech recognition, touch and/or stylus recognition (touch sensitive displays), gesture recognition both on screen and adjacent to the screen, air gestures, head and eye tracking, voice and speech, vision, touch, gestures, and machine intelligence. Other examples of NUI technology that are used in some examples include intention and goal understanding systems, motion gesture detection systems using depth cameras (such as stereoscopic camera systems, infrared camera systems, red green blue (rgb) camera systems and combinations of these), motion gesture detection using accelerometers/gyroscopes, facial recognition, three dimensional (3D) displays, head, eye and gaze tracking, immersive augmented reality and virtual reality systems and technologies for sensing brain activity using electric field sensing electrodes (electro encephalogram (EEG) and related methods).
Alternatively or in addition to the other examples described herein, examples include any combination of the following:
A computing device comprising:
a memory storing at least one indicator of image use, where image use is entry of images to a computing device;
a user interface which receives user input;
a processor configured to trigger prediction, from the user input, of a plurality of candidate images for input to the computing device; and
wherein the processor is configured to at least partially suppress availability of the candidate images for selection by a user, using the indicator of image use.
A computing device as described above wherein the at least one indicator of image use is a statistic describing at least observations of images input to the computing device.
A computing device as described above wherein the at least one indicator of image use is a statistic describing at least observations of candidate images available for selection by a user.
A computing device as described above wherein the at least one indicator of image use is any one or more of: user specific, application specific, field specific, time specific, recipient specific.
A computing device as described above wherein the processor is configured to dynamically compute the at least one indicator of image use during operation of the computing device.
A computing device as described above wherein the processor is configured to receive context about the user input and to select, using the context, the at least one indicator of image use from a plurality of indicators of image use.
A computing device as described above wherein the processor is configured to trigger prediction by sending details of the user input and the indicator of image use to a prediction engine and receiving in response the plurality of candidate images.
A computing device as described above wherein the processor is configured to at least partially suppress availability of the candidate images by inputting the at least one indicator to a prediction engine.
A computing device as described above wherein the processor is configured to suppress availability of the candidate images by switching off image prediction capability.
A computing device as described above wherein the processor is configured to at least partially suppress availability of the candidate images by filtering the candidate images.
A computing device as described above wherein the processor is configured to control availability of the candidate images by multiplying statistical values of the candidate images by a multiplier computed from the at least one indicator.
A computing device as described above wherein the at least one indicator is a ratio of observations of images input to the computing device to observations of candidate images available for selection by a user.
A computing device as described above wherein the user interface is a soft keyboard having text prediction and image prediction capability.
A computer-implemented method comprising:
storing at least one indicator of image use, where image use is entry of images to a computing device;
receiving user input;
triggering prediction, from the user input, of a plurality of candidate images for input to the computing device; and at least partially suppressing availability of the candidate images for selection by a user, using the indicator of image use.
A method as described above wherein the at least one indicator of image use is a statistic describing at least observations of images input to the computing device.
A method as described above wherein the at least one indicator of image use is a statistic describing at least observations of candidate images available for selection by a user.
A method as described above wherein the at least one indicator of image use is any one or more of: user specific, application specific, field specific, time specific, recipient specific.
A method as described above comprising dynamically computing the at least one indicator of image use during operation of the computing device.
A method as described above comprising receiving context about the user input and selecting, using the context, the at least one indicator of image use from a plurality of indicators of image use.
A method as described above comprising triggering prediction by sending details of the user input and the indicator of image use to a prediction engine and receiving in response the plurality of candidate images.
A method as described above comprising at least partially suppressing availability of the candidate images by inputting the at least one indicator to the prediction engine.
A method as described above comprising suppressing availability of the candidate images by switching off image prediction capability.
A method as described above comprising at least partially suppressing availability of the candidate images by filtering the candidate images.
A method as described above comprising controlling availability of the candidate images by multiplying statistical values of the candidate images by a multiplier computed from the at least one indicator.
A method as described above wherein the at least one indicator is a ratio of observations of images input to the computing device to observations of candidate images available for selection by a user.
A method as described above comprising implementing the user interface as a soft keyboard having text prediction and image prediction capability.
A computing device comprising:
means for storing at least one indicator of image use, where image use is entry of images to a computing device;
means for receiving user input;
means for triggering prediction, from the user input, of a plurality of candidate images for input to the computing device; and means for at least partially suppressing availability of the candidate images for selection by a user, using the indicator of image use.
For example, the means for storing at least one indicator of image use is the memory 812 of
A computing device comprising:
a processor which implements a keyboard having text prediction and emoji prediction capability; and
a memory storing at least one indicator of emoji use; and
wherein the processor is configured to control the emoji prediction capability of the keyboard on the basis of the indicator.
A computer-implemented method comprising:
executing a keyboard having text prediction and emoji prediction capability;
storing at least one indicator of emoji use; and
controlling the emoji prediction capability of the keyboard on the basis of the indicator.
A computing device comprising:
means for executing a keyboard having text prediction and emoji prediction capability;
means for storing at least one indicator of emoji use; and
means for controlling the emoji prediction capability of the keyboard on the basis of the indicator.
For example, the means for executing a keyboard with prediction is the user interface and prediction tool 820 of
A computing device comprising:
a processor which implements a soft keyboard having text prediction and emoji prediction capability; and
a memory storing a plurality of applications executable on the computing device, the memory storing an application specific indicator of emoji use for each of the applications; and
wherein the processor is configured to automatically switch on or off the emoji prediction capability of the keyboard on the basis of the application specific indicators and an indication of which of the plurality of applications is currently in focus.
A computer-implemented method comprising:
executing a keyboard having text prediction and emoji prediction capability;
storing a plurality of applications executable on the computing device, and storing an application specific indicator of emoji use for each of the applications; and
automatically switching on or off the emoji prediction capability of the keyboard on the basis of the application specific indicators and an indication of which of the plurality of applications is currently in focus.
A computing device comprising:
means for executing a keyboard having text prediction and emoji prediction capability;
means for storing a plurality of applications executable on the computing device, and storing an application specific indicator of emoji use for each of the applications; and
means for automatically switching on or off the emoji prediction capability of the keyboard on the basis of the application specific indicators and an indication of which of the plurality of applications is currently in focus.
For example, the means for executing a keyboard with prediction is the user interface and prediction tool 820 of
The term ‘computer’ or ‘computing-based device’ is used herein to refer to any device with processing capability such that it executes instructions. Those skilled in the art will realize that such processing capabilities are incorporated into many different devices and therefore the terms ‘computer’ and ‘computing-based device’ each include personal computers (PCs), servers, mobile telephones (including smart phones), tablet computers, set-top boxes, media players, games consoles, personal digital assistants, wearable computers, and many other devices.
The methods described herein are performed, in some examples, by software in machine readable form on a tangible storage medium e.g. in the form of a computer program comprising computer program code means adapted to perform all the operations of one or more of the methods described herein when the program is run on a computer and where the computer program may be embodied on a computer readable medium. The software is suitable for execution on a parallel processor or a serial processor such that the method operations may be carried out in any suitable order, or simultaneously.
This acknowledges that software is a valuable, separately tradable commodity. It is intended to encompass software, which runs on or controls “dumb” or standard hardware, to carry out the desired functions. It is also intended to encompass software which “describes” or defines the configuration of hardware, such as HDL (hardware description language) software, as is used for designing silicon chips, or for configuring universal programmable chips, to carry out desired functions.
Those skilled in the art will realize that storage devices utilized to store program instructions are optionally distributed across a network. For example, a remote computer is able to store an example of the process described as software. A local or terminal computer is able to access the remote computer and download a part or all of the software to run the program. Alternatively, the local computer may download pieces of the software as needed, or execute some software instructions at the local terminal and some at the remote computer (or computer network). Those skilled in the art will also realize that by utilizing conventional techniques known to those skilled in the art that all, or a portion of the software instructions may be carried out by a dedicated circuit, such as a digital signal processor (DSP), programmable logic array, or the like.
Any range or device value given herein may be extended or altered without losing the effect sought, as will be apparent to the skilled person.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages. It will further be understood that reference to ‘an’ item refers to one or more of those items.
The operations of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate. Additionally, individual blocks may be deleted from any of the methods without departing from the scope of the subject matter described herein. Aspects of any of the examples described above may be combined with aspects of any of the other examples described to form further examples without losing the effect sought.
The term ‘comprising’ is used herein to mean including the method blocks or elements identified, but that such blocks or elements do not comprise an exclusive list and a method or apparatus may contain additional blocks or elements.
It will be understood that the above description is given by way of example only and that various modifications may be made by those skilled in the art. The above specification, examples and data provide a complete description of the structure and use of exemplary embodiments. Although various embodiments have been described above with a certain degree of particularity, or with reference to one or more individual embodiments, those skilled in the art could make numerous alterations to the disclosed embodiments without departing from the spirit or scope of this specification.
Number | Date | Country | Kind
1610984.5 | Jun. 2016 | GB | national