Predictive emoji keyboards suggest emoji based on what the user has typed. The words and phrases typed by the user are used as inputs to a prediction model, and the prediction model generates one or more predicted emoji to be presented to the user.
Typically the prediction model is large, and calculating predictions using the model is Random Access Memory (RAM)-intensive. This makes the model unsuitable for a personal computing device, which has limited hard drive space and limited RAM. As a result, the prediction model is stored on a server and the computing device requests predictions over a communications network. In this approach, predictions are not available to the computing device until they have been received from the server, which introduces a delay due to the lag involved in network communication. If the delay is long enough, it is noticeable to the user.
The embodiments described below are not limited to implementations which solve any or all of the disadvantages of known techniques.
The following presents a simplified summary of the disclosure in order to provide a basic understanding to the reader. This summary is not intended to identify key features or essential features of the claimed subject matter nor is it intended to be used to limit the scope of the claimed subject matter. Its sole purpose is to present a selection of concepts disclosed herein in a simplified form as a prelude to the more detailed description that is presented later.
The description relates to predicting terms based on text inputted by a user. One example can include a computing device comprising a memory storing text that the user has inputted to the computing device. The computing device also includes a processor configured to send, over a communications network, the text to a remote prediction engine having been trained to predict terms from text. The processor is also configured to send the text to a local prediction engine stored at the computing device. The processor is configured to monitor for a local predicted term from the local prediction engine and to monitor for a remote predicted term from the remote prediction engine, in response to the sent text. The computing device also includes a user interface configured to present a final predicted term to a user of the computing device such that the user is able to select the final predicted term to enter it into the computing device. The processor is configured to form the final predicted term using either the remote predicted term or the local predicted term on the basis of a time interval running from the time at which the user inputted the text.
Many of the attendant features will be more readily appreciated as the same becomes better understood by reference to the following detailed description considered in connection with the accompanying drawings.
The present description will be better understood from the following detailed description read in light of the accompanying drawings, wherein:
Like reference numerals are used to designate like parts in the accompanying drawings.
The detailed description provided below in connection with the appended drawings is intended as a description of the present examples and is not intended to represent the only forms in which the present examples are constructed or utilized. The description sets forth the functions of the examples and the sequence of operations for constructing and operating the examples. However, the same or equivalent functions and sequences may be accomplished by different examples.
The present disclosure describes a technique for forming a final predicted term based on text input from a user. The final predicted term is formed using a term predicted remotely or a term predicted locally.
Referring to the accompanying drawings, a computing device 20 includes a local prediction engine 22 and communicates over a communications network 28 with a server 24 that includes a remote prediction engine 26.
The remote prediction engine 26 of the server 24 uses a remote prediction model. This is suitably a large file that supports a large vocabulary of input text, and the remote prediction engine 26 is RAM-intensive to run.
By contrast, the local prediction engine 22 of the computing device 20 uses a local prediction model which is related to the remote prediction model but is generally smaller, for example a subset of the remote prediction model. This subset may exclude rarely used words and therefore may not be able to predict emoji for rare input text, but the smaller model is faster to download during app installation, and executing the local prediction engine using it is less RAM-intensive. The local prediction model may have a special format arranged to reduce the RAM required to use it.
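By way of illustration only, the following TypeScript sketch shows one hypothetical way such a subset could be derived, by pruning rarely used entries from the remote model. The entry type, field names and frequency threshold are assumptions made for this sketch; the disclosure does not specify a model format.

```typescript
// Hypothetical sketch: deriving a smaller local model from the remote
// model by dropping rarely used entries. The types and threshold are
// illustrative assumptions, not details of the disclosure.
interface ModelEntry {
  inputText: string;  // word or phrase the user may type
  emoji: string;      // emoji predicted for that input
  frequency: number;  // how often the input occurs in training data
}

function pruneToLocalModel(
  remoteModel: ModelEntry[],
  minFrequency: number,
): ModelEntry[] {
  // Keep only common entries, so the local model is faster to download
  // and less RAM-intensive; rare inputs are left to the remote engine.
  return remoteModel.filter((entry) => entry.frequency >= minFrequency);
}
```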
The predictions from the server 24 are considered to be better predictions, but they generally take longer than those generated locally at the computing device 20 because of the lag involved in network communication.
The present disclosure includes a technique for combining the two sources of predictions: the local predictions generated at the computing device 20 and the remote predictions generated at the server 24.
Referring to the drawings, in a first method 30, the computing device 20 receives 31 input text from the user. In a first process, the computing device 20 sends 32 the input text over the communications network 28 to the remote prediction engine 26 and monitors 33 for a remote predicted term in response. In a second process, the computing device 20 sends 34 the input text to the local prediction engine 22 and monitors 35 for a local predicted term in response.
Having performed the first and second processes, the computing device 20 forms 36 a final predicted term on the basis of a time interval running from the time at which the user inputted the input text. An example of this will be described below. After forming the final predicted term, the computing device 20 presents 37 the final predicted term to the user, for example in a similar way as shown for the predicted terms 14, 16 and 18.
A further method 40 according to the disclosure will now be described.
In method 40, the steps 31 to 35 are the same as method 30, so the description of these steps will not be repeated. In the method 40, after or during the monitoring steps 33 and 35, the computing device 20 forms 42 a final predicted term on the basis of a remote predicted term received from the server 24 over the communications network 28 if the remote predicted term is received by the computing device 20 in a time interval running from the time the user inputted the input text.
If a remote predicted term is not received by the computing device 20 in the time interval, but a local predicted term is received from the local prediction engine 22 of the computing device 20 in the time interval, the computing device 20 forms 44 a final predicted term on the basis of the local predicted term.
If a remote predicted term is not received in the time interval and a local predicted term is not received in the time interval, the computing device 20 forms 46 a final predicted term on the basis of the first predicted term to arrive, i.e. whichever of a remote predicted term or a local predicted term arrives first. It is noted that if the remote prediction fails, then, assuming the local prediction does not fail, the local predicted term will arrive first (and the remote predicted term will not arrive at all). In this case the computing device 20 forms 46 a final predicted term on the basis of the local predicted term. This scenario provides a back-up in case the computing device 20 loses connection to the communications network 28.
Finally, the computing device 20 presents 48 the final predicted term to the user.
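By way of illustration only, the selection logic of the method 40 can be sketched in TypeScript as follows. The promise-based prediction interfaces, the helper and function names, and the default 450 millisecond interval are assumptions made for this sketch (the interval value matches the suitable time interval discussed below); a failed prediction is modelled as a promise that never resolves, matching the behaviour described above.

```typescript
// Illustrative sketch of the selection logic of method 40. The types,
// names and promise-based engine interfaces are assumptions made for
// this example; they are not specified by the disclosure.
type Prediction = { term: string; source: "remote" | "local" };

// Resolve to null if the prediction has not arrived within `ms`.
function withDeadline(
  prediction: Promise<Prediction>,
  ms: number,
): Promise<Prediction | null> {
  const timeout = new Promise<null>((resolve) =>
    setTimeout(() => resolve(null), ms),
  );
  return Promise.race([prediction, timeout]);
}

// Assumed to be invoked at the time the user inputs the text, so that
// the interval runs from the time of user input.
async function formFinalPrediction(
  remote: Promise<Prediction>, // from the remote prediction engine 26
  local: Promise<Prediction>,  // from the local prediction engine 22
  intervalMs = 450,
): Promise<Prediction> {
  // Model a failed prediction as one that never arrives, so that the
  // other source's term is the first (and only) term to arrive.
  const never = new Promise<Prediction>(() => {});
  const safeRemote = remote.catch(() => never);
  const safeLocal = local.catch(() => never);

  // Step 42: use the remote predicted term if it arrives in the interval.
  const remoteInTime = await withDeadline(safeRemote, intervalMs);
  if (remoteInTime !== null) return remoteInTime;

  // Step 44: otherwise use a local predicted term received in the
  // interval (a zero-delay race succeeds only if it already arrived).
  const localInTime = await withDeadline(safeLocal, 0);
  if (localInTime !== null) return localInTime;

  // Step 46: otherwise take whichever predicted term arrives first,
  // e.g. the local term when the network connection has been lost.
  return Promise.race([safeRemote, safeLocal]);
}
```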
This approach makes use of both remote and local sources of prediction terms and strikes a balance between displaying predictions to the user within a reasonable time period and avoiding changing the displayed predictions (e.g. from local to remote) once they have been presented to the user. A suitable time interval is 0.45 seconds. This is short enough for the user not to be aware that they are waiting for something. It also results in approximately 75% of the presented predictions being remote predictions and approximately 25% being local predictions. Other suitable time intervals can be used, for example time intervals within the range 80 milliseconds to 1000 milliseconds.
The time interval may be dynamic, varying based on parameters such as the speed or type of the connection of the computing device to a communications network, the location of the computing device, or user preferences. In some examples, an input means is provided enabling the user to “upgrade” to remote predictions if the computing device initially presents local predicted terms when implementing the method 40.
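As a minimal sketch of one such dynamic policy, assuming a hypothetical connection-type signal and illustrative interval values chosen with the range discussed above in mind:

```typescript
// Hypothetical sketch: choosing the time interval from the connection
// type. The connection categories and the values are illustrative
// assumptions; the disclosure does not prescribe a particular policy.
type ConnectionType = "wifi" | "cellular" | "offline";

function predictionIntervalMs(connection: ConnectionType): number {
  switch (connection) {
    case "wifi":
      return 450; // fast link: wait longer for the better remote term
    case "cellular":
      return 250; // slower link: fall back to local predictions sooner
    case "offline":
      return 0;   // no link: present local predictions immediately
  }
}
```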
A computing device 50 suitable for implementing the methods 30 and 40 is shown in the accompanying drawings.
A further method 70 according to the disclosure will now be described.
Finally, the computing device 20 presents 78 the final predicted term to the user.
A further method 80 according to the disclosure will now be described.
The technique disclosed herein could be used for any predictive keyboard, whether for words, emoji or both, or any other kind of data. In the case of predicting words, the format of the model would include delimiters to distinguish between input text and predicted text.
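By way of illustration only, one hypothetical delimited entry format and parser might look as follows; the tab delimiter and the field layout are assumptions made for this sketch, not details given by the disclosure.

```typescript
// Hypothetical sketch: a word-prediction model entry in which a tab
// character delimits the input text from the predicted text. The
// delimiter choice and layout are assumptions, not the disclosure's.
function parseModelEntry(line: string): { input: string; predicted: string } {
  const [input, predicted] = line.split("\t");
  return { input, predicted };
}

// Example entry: the input phrase "happy" maps to the predicted
// next word "birthday".
const entry = parseModelEntry("happy\tbirthday");
```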
In the above description, the techniques are implemented using instructions provided in the form of stored software. Alternatively, or in addition, the functionality described herein is performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that are optionally used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), and Graphics Processing Units (GPUs).
The term ‘computer’ or ‘computing-based device’ is used herein to refer to any device with processing capability such that it executes instructions. Those skilled in the art will realize that such processing capabilities are incorporated into many different devices and therefore the terms ‘computer’ and ‘computing-based device’ each include personal computers (PCs), servers, mobile telephones (including smart phones), tablet computers, set-top boxes, media players, games consoles, personal digital assistants, wearable computers, and many other devices.
The methods described herein are performed, in some examples, by software in machine readable form on a tangible storage medium e.g. in the form of a computer program comprising computer program code means adapted to perform all the operations of one or more of the methods described herein when the program is run on a computer and where the computer program may be embodied on a computer readable medium. The software is suitable for execution on a parallel processor or a serial processor such that the method operations may be carried out in any suitable order, or simultaneously.
This acknowledges that software is a valuable, separately tradable commodity. It is intended to encompass software which runs on or controls “dumb” or standard hardware to carry out the desired functions. It is also intended to encompass software which “describes” or defines the configuration of hardware, such as HDL (hardware description language) software, as is used for designing silicon chips, or for configuring universal programmable chips, to carry out desired functions.
Those skilled in the art will realize that storage devices utilized to store program instructions are optionally distributed across a network. For example, a remote computer is able to store an example of the process described as software. A local or terminal computer is able to access the remote computer and download a part or all of the software to run the program. Alternatively, the local computer may download pieces of the software as needed, or execute some software instructions at the local terminal and some at the remote computer (or computer network). Those skilled in the art will also realize that, by utilizing conventional techniques, all or a portion of the software instructions may be carried out by a dedicated circuit, such as a digital signal processor (DSP), programmable logic array, or the like.
Any range or device value given herein may be extended or altered without losing the effect sought, as will be apparent to the skilled person.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages. It will further be understood that reference to ‘an’ item refers to one or more of those items.
The operations of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate. Additionally, individual blocks may be deleted from any of the methods without departing from the scope of the subject matter described herein. Aspects of any of the examples described above may be combined with aspects of any of the other examples described to form further examples without losing the effect sought.
The term ‘comprising’ is used herein to mean including the method blocks or elements identified, but that such blocks or elements do not comprise an exclusive list and a method or apparatus may contain additional blocks or elements.
The term ‘subset’ is used herein to refer to a proper subset such that a subset of a set does not comprise all the elements of the set (i.e. at least one of the elements of the set is missing from the subset).
It will be understood that the above description is given by way of example only and that various modifications may be made by those skilled in the art. The above specification, examples and data provide a complete description of the structure and use of exemplary embodiments. Although various embodiments have been described above with a certain degree of particularity, or with reference to one or more individual embodiments, those skilled in the art could make numerous alterations to the disclosed embodiments without departing from the spirit or scope of this specification.
The methods herein, which involve input text from users in their daily lives, may and should be enacted with utmost respect for personal privacy. Accordingly, the methods presented herein are fully compatible with opt-in participation of the persons being observed. In embodiments where personal data is collected on a local system and transmitted to a remote system for processing, that data can be anonymized in a known manner.
This non-provisional utility application claims priority to U.S. application Ser. No. 62/376,178 entitled “Remote And Local Predictions” and filed on 17 Aug. 2016, which is incorporated herein in its entirety by reference.
Number | Date | Country
---|---|---
62376178 | Aug 2016 | US
 | Number | Date | Country
---|---|---|---
Parent | 15358633 | Nov 2016 | US
Child | 17400940 | | US