1. Technical Field
The invention relates to data input devices. More particularly, the invention relates to a spell-check mechanism for a keyboard system having automatic correction capability.
2. Description of the Prior Art
Classic spell-check (“Edit Distance”) techniques for transposed/added/dropped characters have a relatively long history. See, for example, Kukich, K., Techniques for Automatically Correcting Words in Text, ACM Computing Surveys, Vol. 24, No. 4 (December 1992); Peterson, J. L., Computer Programs for Detecting and Correcting Spelling Errors, Communications of the ACM, Vol. 23, No. 12 (December 1980); and Daciuk, J., Spelling Correction, in Incremental Construction of Finite-State Automata and Transducers, and Their Use in the Natural Language Processing (1998).
But classic spell-check techniques can only handle a certain number of differences between the typed word and the intended correct word. Because the best correction candidate is presumed to be the one with the fewest changes, spell-check algorithms are confounded by, for example, a typist unknowingly shifting finger position on the keyboard, or tapping hurriedly and inaccurately on a touchscreen keyboard, and thus typing almost every letter wrong.
To limit the amount of computational processing, particularly on lower-performance mobile devices, implementations of the classic algorithms make assumptions or impose constraints to reduce the ambiguity and thus the number of candidate words being considered. For example, they may rely on the initial letters of the word being correct or severely limit the size of the vocabulary.
Another form of automatic error correction, useful both for keyboards on touch-sensitive surfaces and for standard phone keypads, calculates the distances between each input location and nearby letters and compares the entire input sequence against possible words. The word whose letters are the closest to the input locations, combined with the highest frequency and/or recency of use of the word, is the best correction candidate. This technique easily corrects both shifted fingers and hurried tapping. It can also offer reasonable word completions even if the initial letters are not all entered accurately.
The following patent publications describe the use of a “SloppyType” engine for disambiguating and auto-correcting ambiguous keys, soft keyboards, and handwriting recognition systems: Robinson; B. Alex, Longe; Michael R., Keyboard System With Automatic Correction, U.S. Pat. No. 6,801,190 (Oct. 5, 2004), U.S. Pat. No. 7,088,345 (Aug. 8, 2006), and U.S. Pat. No. 7,277,088 (Oct. 2, 2007); Robinson et al, Handwriting And Voice Input With Automatic Correction, U.S. Pat. No. 7,319,957 (Jan. 15, 2008), and U.S. patent application Ser. No. 11/043,525 (filed Jan. 25, 2005). See also, Vargas; Garrett R., Adjusting keyboard, U.S. Pat. No. 5,748,512 (May 5, 1998).
In addition, the following publications cover combinations of manual and vocal input for text disambiguation: Longe, et al., Multimodal Disambiguation of Speech Recognition, U.S. patent application Ser. No. 11/143,409 (filed Jun. 1, 2005); and Stephanick, et al, Method and Apparatus Utilizing Voice Input to Resolve Ambiguous Manually Entered Text Input, U.S. patent application Ser. No. 11/350,234 (filed Feb. 7, 2006).
The “SloppyType” technology referenced above uses distance-based error correction on full words. Assuming that the length of the input sequence equals the length of the intended word and that each input location is in the proper order helps compensate for the increased ambiguity introduced by considering multiple nearby letters for each input. But in addition to minor targeting errors, people also transpose keys, double-tap keys, miss a key completely, or misspell a word when typing.
It would be advantageous to provide a mechanism for addressing all forms of typing errors in a way that offers both accurate corrections and acceptable performance.
An embodiment of the invention provides improvements over standard edit distance spell-check algorithms by incorporating probability-based regional auto-correction algorithms and data structures. An embodiment of the invention provides helpful word completions in addition to typing corrections. The invention also provides strategies for optimization and for ordering results of different types. Many embodiments of the invention are particularly well suited for use with ambiguous keypads, reduced QWERTY keyboards, and other input systems for mobile devices.
The careful combination of edit distance techniques with regional auto-correction techniques creates new, even better results for the user. Thus, an incorrectly typed word can be corrected to the intended word, or a word completion can be offered, regardless of the kind of typing error. Text entry on the ubiquitous phone keypad, already aided by input disambiguation systems, is further enhanced by the ability to correct typing errors. A series of optimizations in retrieval, filtering, and ranking keeps the ambiguity manageable and the processing time within required limits.
For purposes of the discussion herein, the following terms have the meaning associated therewith:
Edit Distance (also “standard” E.D.)—the well-documented algorithm to compare two strings and determine the minimum number of changes necessary to make one the same as the other.
The following abbreviations may be used herein and in the Figures:
Enhanced Edit Distance, or Set-Edit-Distance (or “fuzzy compare”)—the subject of this patent; improved E.D. using a set of letters (with optional probabilities for each) to represent each input rather than a single letter as in standard E.D., plus other optimizations.
Mode—an operational state; for this invention, one of two states: “exact” (only using the exact-tap letter/value from each input event to match each candidate word, as with standard E.D.) or “regional”/“set-based” (using multiple letters/values per input); the mode may be either user- or system-specified.
Regional input—a method (or event) including nearby/surrounding letters (with optional probabilities) in addition to the letter/key actually tapped/pressed.
Set-based—the use of multiple character values, rather than just one, to represent each input; each set member may have a different relative probability; a set may also include, e.g. the accented variations of the base letter shown on a key.
“Classic compare”, “classic match,” SloppyType, or “regional correction”—full-word matching using auto-correction considering nearby letters, supra; generally, the number of inputs equals the number of letters in each candidate word (or word stem of a completed word).
Filter or Screen—a rule for short-circuiting the full comparison or retrieval process by identifying and eliminating words that ultimately are not added to the selection list anyway.
KDB—Keyboard Database; the information about the keyboard layout, level of ambiguity surrounding each letter, and nearby letters for each letter.
LDB—Linguistic Database, i.e. the main vocabulary for a language.
Word tap frequency—the contribution of physical distance from pressed keys to the likelihood that the word is the target word.
Discussion
An embodiment of the invention provides an adaptation of standard edit distance spell-check algorithms that works with probability-based auto-correction algorithms and data structures for ambiguous keypads and other predictive text input systems. The invention also provides strategies for optimization and for ordering results of different types.
Upon receiving these inputs, the system performs incremental filtering and edit distance and regional/probability calculations (130), discarding any word that does not meet minimum thresholds for similarity with the inputs. Then the system compares the results for the input sequence and dictionary inputs with other top matches in a word choice list and discards the word if it is ranked too low on the list (140). The lowest-ranked word in the list is dropped if the list is full, and the word is inserted into the list based on ranking (150). The list is then presented to the user.
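The list handling in steps (140) and (150) can be sketched as follows; this is a minimal illustration in Python, and the list capacity and score values are invented for the example, not taken from the text.

```python
import bisect

MAX_LIST_SIZE = 5  # hypothetical capacity of the word choice list

def insert_candidate(word_list, word, score):
    """Keep word_list sorted best-first; entries are (-score, word) so that
    bisect's ascending order puts the highest-scoring word at index 0."""
    entry = (-score, word)
    if len(word_list) >= MAX_LIST_SIZE:
        if entry >= word_list[-1]:
            return word_list          # ranked too low: discard (step 140)
        word_list.pop()               # drop the lowest-ranked word (step 150)
    bisect.insort(word_list, entry)   # insert by ranking (step 150)
    return word_list
```

Filling the list past its capacity discards the lowest-ranked candidate, so only the top-ranked words are ever presented.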
Edit Distance Combined with Regional Correction
Edit-Distance is the number of operations required to turn one string into another string. Essentially, this is the number of edits one might have to make, e.g. manually with a pen, to fix a misspelled word. For example, to fix an input word “ressumt” to a target word “result”, two edits must be made: an ‘s’ must be removed, and the ‘m’ must be changed to an ‘l’. Thus, “result” is edit-distance 2 from “ressumt”.
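The “ressumt” to “result” example can be reproduced with the textbook dynamic-programming form of edit distance, in which insert, delete, and substitute each cost one edit (Python is used here purely for illustration):

```python
def edit_distance(a, b):
    """Minimum number of insertions, deletions, and substitutions
    needed to turn string a into string b."""
    prev = list(range(len(b) + 1))           # distances for the empty prefix of a
    for i, ca in enumerate(a, 1):
        cur = [i]                            # deleting i characters from a
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            cur.append(min(prev[j - 1] + cost,   # substitute (or match)
                           prev[j] + 1,          # delete from a
                           cur[j - 1] + 1))      # insert into a
        prev = cur
    return prev[-1]
```

For the example above, `edit_distance("ressumt", "result")` evaluates to 2: one deletion and one substitution.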
A common technique to determine the edit-distance between an input word and a target word uses a matrix as a tool. (See
That idea is now extended to ambiguous input, where each input corresponds to a set of characters rather than a single character. One example of this is a text entry system on a mobile phone that allows a user to press keys corresponding to the characters the user wants to input, with the system resolving the ambiguity inherent in the fact that keys have multiple characters associated with them. The new term “Set-Edit-Distance” refers to the extension of the edit-distance idea to ambiguous input. To illustrate set-edit-distance, suppose that a user of a mobile phone text entry system presses the keys (7,3,7,7,8,6,8) while attempting to enter the word ‘result.’ Spell correction on this ambiguous system looks for words that have the smallest set-edit-distance to the input key sequence. The technique is similar to that for edit-distance, but instead of comparing a character in the target word to a character in the input sequence, the character in the target word is compared against the set of characters represented by the input key. If the target character is in the input set, the set-edit-distance does not increase. If the target character is not in the input set, the set-edit-distance does increase according to a standard rule. A matrix corresponding to set-edit-distance is shown in
The example in
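Assuming the standard phone keypad layout, the keypad example above can be sketched by modifying the edit-distance dynamic program so that a zero-cost match occurs whenever the target character is any member of the pressed key's letter set:

```python
# Standard phone keypad mapping (an assumption of this sketch).
KEYPAD = {'2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
          '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz'}

def set_edit_distance(key_sequence, word):
    """Edit distance where an input matches at zero cost if the target
    character is any member of the key's character set."""
    sets = [KEYPAD[k] for k in key_sequence]
    prev = list(range(len(word) + 1))
    for i, charset in enumerate(sets, 1):
        cur = [i]
        for j, ch in enumerate(word, 1):
            cost = 0 if ch in charset else 1   # set membership, not equality
            cur.append(min(prev[j - 1] + cost, prev[j] + 1, cur[j - 1] + 1))
        prev = cur
    return prev[-1]
```

For the key sequence (7,3,7,7,8,6,8), ‘result’ scores a set-edit-distance of 2 (one deletion and one substitution), while the exact keys for ‘result’, (7,3,7,8,5,8), score 0.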
In such an extended system, the input sequence may be represented as an array of one or more character+probability pairs. The probability reflects the likelihood that the character identified by the system is what the user intended. Such input sequences are described in Robinson et al., Handwriting And Voice Input With Automatic Correction, U.S. Pat. No. 7,319,957 (Jan. 15, 2008) and Robinson et al., Handwriting And Voice Input With Automatic Correction, U.S. patent application Ser. No. 11/043,525 (filed Jan. 25, 2005), each of which is incorporated herein in its entirety by this reference thereto. The probability may be based upon one or more of the following:
Therefore, set-edit-distance is the standard edit distance applied to ambiguous sets, where penalties are assigned to each difference between an entered and a target vocabulary word. Instead of asking “Is this letter different?” the invention asks “Is this letter one of the possible candidates in the probability set?”
Thus, an embodiment applies the following algorithm:
A number of values are calculated or accumulated for the matching and word list ordering steps:
Tap frequency (TF) of the word or stem may be calculated as:
TF = probability of letter 1 × probability of letter 2 × . . . × probability of letter n. (1)
This is similar to the standard probability set auto-correction calculations, except that where the edit distance algorithm creates alternatives, the largest calculated frequency among those alternatives is chosen.
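Equation (1) amounts to a product of per-input probabilities. A minimal sketch, with probability values invented for illustration:

```python
import math

def tap_frequency(letter_probabilities):
    """Equation (1): the product of the per-letter probabilities.
    When edit-distance alternatives exist, the largest such product
    among the alternatives would be kept."""
    return math.prod(letter_probabilities)
```

For example, per-letter probabilities of 0.8, 0.9, and 0.7 give a tap frequency of 0.504.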
The example in
Stem edit-distance is an edit distance value for the explicitly-entered or most probable characters, commonly the exact-tap value from each input probability set, compared with the corresponding letters of a longer target word. In this case, the most probable character from each input for a touchscreen QWERTY keyboard is the exact-tap letter. Because the letter ‘s’ in the third position of the target word is not the same as the exact-tap value for the third input in
The sets for stem set-edit-distance can also be language specific. For example, accented variants of a character in French may be members of the same set.
An embodiment of the invention also provides a number of innovative strategies for tuning the ordering of words in the selection list to mirror the intent or entry style of the user. For example, the results may be biased in one of two ways:
An embodiment of the invention provides typing correction and spell-check features that allow such systems as those which incorporate the “SloppyType” technology described above to be more useful to all typists, particularly on non-desktop devices. A “SloppyType” system provides an enhanced text entry system that uses word-level disambiguation to automatically correct inaccuracies in user keystroke entries. Specifically, a “SloppyType” system provides a text entry system comprising: (a) a user input device comprising a touch sensitive surface including an auto-correcting keyboard region comprising a plurality of the characters of an alphabet, wherein each of the plurality of characters corresponds to a location with known coordinates in the auto-correcting keyboard region, wherein each time a user contacts the user input device within the auto-correcting keyboard region, a location associated with the user contact is determined and the determined contact location is added to a current input sequence of contact locations; (b) a memory containing a plurality of objects, wherein each object is a string of one or a plurality of characters forming a word or a part of a word, wherein each object is further associated with a frequency of use; (c) an output device with a text display area; and (d) a processor coupled to the user input device, memory, and output device, said processor comprising: (i) a distance value calculation component which, for each determined contact location in the input sequence of contacts, calculates a set of distance values between the contact locations and the known coordinate locations corresponding to one or a plurality of characters within the auto-correcting keyboard region; (ii) a word evaluation component which, for each generated input sequence, identifies one or a plurality of candidate objects in memory, and for each of the one or a plurality of identified candidate objects, evaluates each identified candidate object by calculating a 
matching metric based on the calculated distance values and the frequency of use associated with the object, and ranks the evaluated candidate objects based on the calculated matching metric values; and (iii) a selection component for (1) identifying one or a plurality of candidate objects according to their evaluated ranking, and (2) presenting the identified objects to the user, enabling the user to select one of the presented objects for output to the text display area on the output device.
Optimizations
Theoretically, any word in a vocabulary could be considered to be a correction, given a large enough edit distance score. However, database processing must occur in real-time as the user is typing, and there is a limit to the available processing power and working memory, especially for mobile devices. Thus, it is important to optimize all parts of the combined edit distance algorithms and eliminate processing steps when possible. For example, a first-level criterion for discarding a possible word match is allowing only one edit/correction for every three actual inputs, up to a maximum of three edits against any one compared word.
Other performance enhancements can include, for example (without limitation):
Word Frequency may be approximated, based on Zipf's Law, which states that given some corpus of natural language utterances, the frequency of any word is inversely proportional to its rank in the frequency table. Thus, the most frequent word occurs approximately twice as often as the second most frequent word, which occurs twice as often as the fourth most frequent word, etc. In an embodiment, the approximation is used, rather than a value stored for each word in a vocabulary database:
Fn = F1/n (the frequency of the nth word is the frequency of the 1st word divided by its rank) (2)
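Equation (2) means no per-word frequency needs to be stored; a one-line sketch, with the first word's frequency normalized to 1.0 for illustration:

```python
def approx_frequency(rank, f1=1.0):
    """Zipf approximation per equation (2): the nth-ranked word's
    frequency is the 1st word's frequency divided by n."""
    return f1 / rank
```

Consistent with the text, the second most frequent word is approximated as occurring twice as often as the fourth most frequent word.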
Other tunable configuration parameters may include:
Spell correction on a large word list is a very CPU-intensive task, even more so when memory is limited. Thus, to reach acceptable performance, the whole system must be optimized based on the spell correction characteristics chosen. The resulting system thus becomes quite inflexible from a feature perspective. Without specific optimizations, performance may be an order of magnitude or two worse.
Spell correction performance depends mostly on the following:
Each of these elements is described in more detail in the following sections.
Spell Correction Properties
Allowed Edits
The number of allowed edits is a very important performance factor. The more edits allowed, the more ambiguity in the compare, and thus many more words match and go into the selection list for prioritization. If the compare is too generous, the effect is that too many unwanted words get into the list.
In a preferred embodiment, the number of allowed edits is related to input length and one edit is granted for every third input up to a maximum of three. This parameter of one edit per three inputs is assumed throughout the examples below.
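The preferred embodiment's edit allowance (one edit per three inputs, capped at three) can be expressed directly:

```python
def allowed_edits(input_length):
    """One edit granted for every third input, up to a maximum of three,
    per the preferred embodiment described above."""
    return min(input_length // 3, 3)
```

So two inputs allow no edits, nine inputs allow the full three, and longer inputs stay capped at three.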
Modes and Filters
Modes and filters are used to control the result set as well as performance. Two examples of modes are exact input and regional. On a touchscreen soft keyboard, for example, the user can tap exactly on the desired letter as well as indicating an approximate region of letters. In exact input mode, only the exact-tap letter from each user input is considered. In regional mode, some or all of the nearby letters indicated by each user input are considered.
Spell correction against exact input reduces ambiguity and makes the candidates look more like what is entered (even if what is entered is incorrect). It is effective with KDBs that feature exact-tap values, such as touchscreen soft keyboards. 12-key systems (for standard phone keypads) may have no useful exact-tap value; each keypress may be represented by the key's digit instead of one of the letters, and there is no way to intuit that one letter on each key is more likely than the others to be the intended one.
Unfortunately for 12-key systems, the KDBs behave like a generous regional-mode layout, i.e. each input produces at least 3 letters per set, often many more when accented vowels are included, while not having an exact-tap value that can be used for exact input mode and filtering.
A filter is a screening function that ends further consideration of a candidate word if it does not meet established minimum criteria. For example, the ONE/TWO filters are mostly for performance improvement, making the first character in the word correlate stronger with the first or second input and rejecting any candidate words that do not conform.
The “Fuzzy Compare” Function
The fuzzy compare function allows a certain difference between the input and the word being compared, the edit distance. The idea is to calculate the edit distance and then based on the value either pass or reject the word.
Calculating the exact edit distance is expensive performance-wise. A solution is to place a screening mechanism before the real calculation. It is acceptable to under-reject within reason, but over-rejection should be avoided if at all possible. Words that pass through screening because of under-rejection are taken out later, after the real distance calculation.
The quick screening is crucial for maintaining acceptable performance on each keypress. Potentially a huge number of words can come in for screening, and normally only a fraction gets through. Thus, for good performance, everything before the screening must also be very efficient. Work done after the screening is less important performance-wise, but there is still a fair amount of data coming through, especially for certain input combinations where thousands of words make it all the way into the selection list insert function.
In one or more embodiments, spell correction works alongside the probability set comparison logic of regional auto-correction. There are words that are accepted by set comparisons but not accepted based on the spell correction calculation. This is the case for regional input when spell correction is set up in exact input mode or when using exact filters. Word completion is also simpler for the classic compare, whereas it costs edits in spell correction.
In the preferred embodiment, the fuzzy compare steps are:
These steps are illustrated as a flow diagram in
Screening for classic compare and dealing with word completions, etc., is placed at step 2 before further spell correction calculations. That takes all the “classic” complexity out of the subsequent code. It also means that when spell correction is turned off, all other calculations can be skipped.
The algorithm is pictured as comparing two words against each other. In most embodiments this is generalized so that one word corresponds to the input symbols. In the sample matrices in the figures referenced below, the input sequence is shown vertically. Thus, rather than each input word position being a single character as with standard Edit Distance, it is really a set of characters corresponding to ambiguous or regional input. A compare yields a match if any of the characters in the set is a match.
1. Screen for Too Short Words
If a word is too short even for spell correction, that is, shorter than the input length minus the available edit distance, then it can be rejected immediately.
2. Screen for Set-Based Matches
This is an iteration over the input sequence, verifying that each position is a match to the corresponding position in the compared word; i.e. each letter in the candidate word must be present in the corresponding input set.
If there is a non-match and the word is too long for spell correction, i.e. if it is longer than the input length plus the available edit distance, then it can be rejected immediately.
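Steps 1 and 2 can be combined into a quick screening predicate. This is a sketch; representing each input set as a string of candidate letters is a simplification assumed for illustration:

```python
def screen_length_and_sets(input_sets, word, edits):
    """Step 1: reject words shorter than the input length minus the
    available edits. Step 2: on a set-based mismatch, also reject words
    longer than the input length plus the available edits."""
    if len(word) < len(input_sets) - edits:            # step 1: too short
        return False
    mismatch = any(j >= len(word) or word[j] not in s
                   for j, s in enumerate(input_sets))
    if mismatch and len(word) > len(input_sets) + edits:  # step 2: too long
        return False
    return True   # survives screening; full calculation still required
```

A word that is longer than the input but matches every input set survives as a completion candidate, as in the classic compare.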
3. Calculate Stem Edit-Distance
This is an iteration over all symbols in the input sequence, and is only performed when there is a set-based match. Every difference from an exact-tap value increases the stem distance; e.g. the candidate word “tomorrow” might have a stem distance of 0 for an exact-tap input of “tom” and 1 for “tpm”. The word tap frequency is also calculated during the iteration.
If it is a valid classic match, the “fuzzy compare” of the candidate word is complete at this point. The candidate word is inserted into the selection list.
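A much-simplified sketch of the stem distance idea, counting positional differences between the exact-tap letters and the leading letters of the candidate word; the full embodiment uses an edit-distance calculation rather than this plain positional compare, so this reproduces only the “tom”/“tpm” versus “tomorrow” example:

```python
def stem_distance(exact_taps, word):
    """Count how many exact-tap letters differ from the corresponding
    letters of the (possibly longer) candidate word's stem."""
    stem = word[:len(exact_taps)]
    return sum(1 for a, b in zip(exact_taps, stem) if a != b)
```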
4. Screen for ONE/TWO
This is a quick check to see if the first character in the word matches the first ONE or TWO input symbols. If not, then the word is rejected.
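Step 4 can be written as a small predicate; as above, each input set is represented here as a string of candidate letters, which is an illustrative assumption:

```python
def passes_one_two(input_sets, word, two=True):
    """ONE/TWO filter: the word's first character must appear in the
    first input set (ONE) or in either of the first two sets (TWO)."""
    n = 2 if two else 1
    return any(word[0] in s for s in input_sets[:n])
```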
5. Screen for Set-Edit-Distance
Conceptually this is a very simple task because enhanced edit distance follows the traditional definition using insert, delete, and substitute, plus transpose (the last is commonly included for text entry correction). Doing it in an efficient way is much harder, though.
The traditional way of calculating edit distance is using a matrix. An example is shown in
To find the values based on the cell that is being calculated, i.e. the cell marked with ‘X’ in
This is computationally a very expensive way to calculate the distance, especially with long words. In one embodiment, a maximum allowable edit distance is set so that 1% or less of the words pass that limit. If the allowed distance is too high, the whole word list might make it into the selection list, and the whole idea of spell correction is lost. Thus, initially the exact distance is not of concern; rather, just whether the result is below or above the rejection limit. For the few words that pass this test, more effort can then be spent on calculating exact distance, frequency, etc.
The goal of the screening step is to, as quickly as possible, prove that the resulting distance is above the rejection limit.
Consider the situation when the compared words match, except for length, as shown in
This initial matrix can be used when calculating any two words. Only the values in cells that are actually chosen for comparison need be updated along the way. The goal becomes to push the lower right cell above its rejection limit. To do so, it must be proven that any of the cells it relies on to get this value actually has a higher value, and so on recursively.
For this example, with length difference 3 and the first character not matching (changing the first ‘x’ to ‘y’ in
The result is that the center diagonal and those toward the diagonal with the result value get increased values. This happens every time the last cell that supports the lowest value in another cell is increased as a result of a completed compare mismatch.
The matrices shown only describe what happens when there is a word length difference. If the length difference is zero, the center diagonal becomes the main one, and the support, i.e. a cell value high enough to affect the calculation, must come from both sides of the result diagonal to prove a reject.
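A generic way to prove quickly that a distance exceeds the rejection limit is to evaluate only a diagonal band of the matrix and stop as soon as every cell in a row exceeds the limit (row minima in the edit-distance matrix never decrease). This band-pruning sketch is a common simplification, not the exact cell-update scheme of the figures:

```python
def exceeds_limit(a, b, limit):
    """Return True once it is proven that the edit distance of a and b
    exceeds limit; only cells with |i - j| <= limit are evaluated."""
    if abs(len(a) - len(b)) > limit:
        return True                    # length difference alone exceeds limit
    INF = limit + 1                    # any value above the limit; cap here
    prev = [j if j <= limit else INF for j in range(len(b) + 1)]
    for i, ca in enumerate(a, 1):
        cur = [i if i <= limit else INF]
        lo, hi = max(1, i - limit), min(len(b), i + limit)
        for j in range(1, len(b) + 1):
            if j < lo or j > hi:       # outside the band
                cur.append(INF)
                continue
            cost = 0 if ca == b[j - 1] else 1
            cur.append(min(prev[j - 1] + cost, prev[j] + 1,
                           cur[j - 1] + 1, INF))
        if min(cur) > limit:
            return True                # cannot recover: reject early
        prev = cur
    return prev[-1] > limit
```

Words passing this screen still need the exact distance calculated; under-rejection here is acceptable, over-rejection is not.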
Diagonals in computations make data access patterns harder to optimize (accessing actual memory corresponding to the positions). Operating in a rotated/transformed matrix space is a further optimization; see
6. Screen for Position-Locked Characters
Because a full classic compare was not performed on a spell correction candidate, there is still a need to verify input symbols that have locked positions, i.e. not allowed to move or change value. This is just an iteration over input symbols with locked positions, checking that they match. If not, then the word is rejected.
7. Calculate Set-Edit-Distance and Frequency
The algorithm to screen for edit distance can be modified to calculate the edit distance and other things such as word frequency. It should not, however, be merged into the screening code. That code has to be kept separate and optimized for pure screening. A different version is applied to the words that pass the screening, one that is more exhaustive because it has to evaluate different cells and pick the best choices for low distance and high frequency. It also has to deal with things such as possible locked symbol values (just value, not position).
The candidate is rejected if the set-edit-distance value exceeds a certain threshold.
8. Calculate Stem Edit-Distance
This is also a modified copy of the screening algorithm, for two reasons:
First, the stem distance can be very different because it is always based on the exact match. Thus, the value can become higher than the intended maximum for distance. Distance values higher than the maximum might not be fully accurate because of algorithm optimizations, but they are still good enough.
Second, the stem distance is also different in that it might not take into account the full length of the candidate word. To be compatible with non spell corrected words, the stem distance calculation will stop at the length of the input. Some additional checking is needed around the end cell to get the minimum value depending on inserts and deletes.
Low Level LDB Search Function
The fuzzy compare function can be made very efficient in screening and calculation, but that alone is not enough for good performance, particularly on embedded platforms. Depending on the input, almost all words in a vocabulary can be potential spell correction candidates. This usually happens when entering the 9th and 10th inputs in most languages, when one edit is allowed per three inputs.
At input length 9, all words with length 6-12 are potential spell correction candidates, and everything longer than 12 is a potential completion candidate. For example, at input length 9, over 70% of a Finnish vocabulary might be considered for comparison based on spell correction and another 20% based on word completion. This creates significant efficiency problems, since spell correction requires the most computational effort. The following strategies seek to increase the efficiency of the database retrieval process by integrating one or more of the screening functions described earlier.
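The three length categories follow directly from the input length and the edit allowance; a sketch assuming one edit per three inputs, capped at three:

```python
def length_category(word_length, input_length):
    """Classify a vocabulary word at the current input length into the
    three LDB search categories used below."""
    edits = min(input_length // 3, 3)   # one edit per three inputs, max three
    if word_length < input_length - edits:
        return "too short"
    if word_length > input_length + edits:
        return "long"
    return "spell correction"
```

At input length 9 this reproduces the figures above: lengths 6-12 are spell correction candidates, anything shorter is skipped, and anything longer is a completion candidate.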
Search Strategy for No Spell Correction
The preferred embodiment of the vocabulary database, as described in Unruh; Erland, Kay; David. Jon, Efficient Storage and Search Of Word Lists and Other Text, U.S. patent application Ser. No. 11/379,354 (filed Apr. 19, 2006) which is incorporated by reference, is designed and optimized for searching words without spell correction. The whole input length is directly mapped to interval streams and the sparsest streams are visited first to aid quick jumping in the word list. Once there is a match, completion characters can be picked up from streams not mapped to the input.
With this strategy, too-short words are automatically skipped because they do not have characters matching the corresponding input.
Search Strategy for Spell Correction
With spell correction, the words in the LDB fall into three categories depending on the input length. These are:
Each of these categories is described in the following sections.
Too Short Words
These can easily be skipped over by checking the interval stream corresponding to the last character in the shortest allowed word. For example, if the minimum length is 6, then the 6th interval stream must not be empty (have the terminating zero); if it is empty, then it is possible to jump directly to the end of the interval.
Long Words
Just as a special interval stream can be used to check for too-short words, another stream can be used to check for long words. For example, if the maximum length is 12, then the 13th stream decides whether a word is long or not.
Long words can be handled exactly the same way as if spell correction was turned off. Streams mapped to the input can be used for jumping and the completion part is picked up from the rest of the streams.
Spell Correction Words
Unlike the previous two categories, which can be searched efficiently, all words that fall into this category basically have to be sent on for edit distance calculation. That is not feasible performance-wise, thus a screening function is needed at the LDB search level. As long as it provides a performance gain, this screening can be quite under-rejecting.
A complicating factor is that the spell correction modes and filters might operate in exact mode while the input is still set-based; thus non-spell correction candidates might be set-based matches while spell correction ones cannot use set-based information. The consequence is that any screening process must adhere to the set-based comparison logic as well.
An aspect of the LDB retrieval screening function for a preferred embodiment is illustrated in
Many of the screening functions from the fuzzy compare function may be adapted and integrated into the database retrieval process, as described in the following paragraphs.
Filter ONE/TWO
Filter ONE and TWO can be used for jumping. If interval stream zero (first character in the word) does not match the corresponding input (first or second input, depending on the filter) a jump can take place.
If the filter setting (exact input or regional) does not match the set-based comparison logic, then it must be accompanied by a failing stream. The resulting jump is limited to the shorter of the two (nearest end in one of the two streams). This filter only applies to spell correction candidates.
Input Based Screening
Even though the available edits can make words match that look quite different from the input, there are still limitations on what can match. A limited number of available edits means that only a limited number of inserts and deletes can be applied, and thus there is a limit to how far away a character in a word can be from its input-related stream and still count as a match.
This screening can be applied independent of filters, but the filters can be made part of the screening in an efficient way. The screening must be very fast, so the complexity must be kept low.
To reject a word, one miss more than the available number of edits is needed. For example, for edit distance 3, 4 misses must be found. If there are 9 inputs and the compared word has length 6, compare up to length 9, because positions 7, 8, and 9 hold the zero termination code, which always fails to compare with any input union. If the word is longer than the input, compare up to the length of the word.
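The miss-counting test above can be sketched in code. The following is a minimal illustration, not the patent's implementation, assuming each input position is represented as a set (union) of plausible characters and that positions past the end of the word hold a zero termination code:

```python
def screen_word(word, input_sets, max_edits):
    """Reject `word` once the misses exceed the available edit budget.

    `input_sets` is a hypothetical representation: one set (union) of
    plausible characters per input position.
    """
    compare_len = max(len(word), len(input_sets))
    misses = 0
    for i in range(compare_len):
        # Past the end of the word, the zero termination code never matches.
        ch = word[i] if i < len(word) else "\0"
        union = input_sets[i] if i < len(input_sets) else set()
        if ch not in union:
            misses += 1
            if misses > max_edits:  # e.g. 4 misses reject at edit distance 3
                return False        # rejected by screening
    return True                     # survives; goes on to edit distance calculation
```

With 9 inputs and a 6-letter word, positions 7 through 9 compare the terminator against real input unions and always miss, exactly as described above.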
Length-Independent Screening
One solution to screening when the word length is not predetermined is to set up a second, fabricated input that can be used for screening matching. It is fabricated so that every position becomes a union of the surrounding original positions.
For input length 9, the union map looks like that shown in
If any character in the word fails to match the union it counts as a miss and thus calls for a potential edit. With enough misses the word can be discarded by this screening.
If a word is shorter than the input, then that difference can be subtracted from the available edits immediately, and the comparison only needs to check the available positions. Thus, if the length difference equals the number of available edits, only one position has to fail to reject the word.
The same restrictions apply here as they did for the filters. If there is exact/regional significance, then a rejection must be accompanied by a failing set-based interval stream.
The longest possible jump is to the nearest end of a failing interval stream, whether union or set-based.
Because there is a requirement for a failing set-based stream to exist to be able to make a jump, there is no need to further restrict the jump with regards to change in word length category.
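A sketch of this length-independent scheme follows, assuming the fabricated union at each position simply spans the edit budget on either side of that position (the actual union map referenced above may differ):

```python
def fabricate_unions(input_sets, max_edits):
    """Build a second, fabricated input for screening: position i becomes
    the union of the original positions within `max_edits` of i, since
    that is how far inserts and deletes can shift a character.
    (Hypothetical sketch; window widths are an assumption.)"""
    n = len(input_sets)
    unions = []
    for i in range(n):
        lo, hi = max(0, i - max_edits), min(n, i + max_edits + 1)
        u = set()
        for s in input_sets[lo:hi]:
            u |= s
        unions.append(u)
    return unions


def screen_with_unions(word, unions, max_edits):
    """Count misses against the fabricated unions. A word shorter than
    the input has the length difference subtracted from the edit budget
    immediately, as described above."""
    budget = max_edits - max(0, len(unions) - len(word))
    if budget < 0:
        return False  # length difference alone exceeds the available edits
    misses = sum(1 for i, ch in enumerate(word[:len(unions)])
                 if ch not in unions[i])
    return misses <= budget
```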
Length-Dependent Screening
In the preferred embodiment of length-dependent screening, calculating the length of the compared word makes it possible to restrict the unions to what is applicable for that length. For example, for length 6 and input length 9, the union map looks like that of
This features more limited unions, but with the added cost of finding the word length to choose the unions. It also limits the possible jump length to within a chunk of words of the same length because, as soon as the length changes, so do the unions. Thus, it is also a requirement to minimize the number of word length changes throughout the LDB.
Apart from having length-dependent patterns, the description of length-independent screening applies here as well.
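Under the same assumptions as the previous sketch, a hypothetical length-dependent variant might narrow each union once the word length is known: the length difference consumes inserts or deletes up front, so each word position can only drift by whatever edit budget remains.

```python
def fabricate_unions_for_length(input_sets, word_len, max_edits):
    """Build a union map restricted to one word length (illustrative sketch;
    the actual per-length union map may differ)."""
    n = len(input_sets)
    delta = n - word_len               # inputs consumed by deletes (if positive)
    spare = max_edits - abs(delta)     # edits remaining after the length change
    unions = []
    for i in range(word_len):
        if delta >= 0:                 # word shorter than (or equal to) the input
            lo, hi = i - spare, i + delta + spare
        else:                          # word longer than the input
            lo, hi = i + delta - spare, i + spare
        u = set()
        for s in input_sets[max(0, lo):min(n, hi + 1)]:
            u |= s
        unions.append(u)
    return unions
```

With the same four inputs as before but the word length fixed at 3 and one available edit, each union covers two positions instead of three, at the cost of computing the word length first.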
Selection List Ordering Strategies and Algorithms
The result of the combined algorithms is a list of word choices for selection that includes, in most likely order, either: 1. the word that the user has already typed, if the input sequence is complete; or 2. the word that the user has begun to type, if the input sequence represents the stem of a word or phrase.
The word list sort order may be based on factors of regional probability, edit distance, word recency/frequency (as stored in each database), word length, and/or stem edit distance. Word list ordering may also depend on which of two or more different list profiles or strategies is being used. For example:
Full-Word Priority
Note that the order of evaluation is as above; e.g., criterion 3 is only considered if criterion 2 is the same for the compared items. Because of this, for example, spell corrections on custom user words can appear ahead of regional corrections for standard vocabulary words.
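This ordered-criteria evaluation is exactly what a lexicographic tuple sort provides: a later criterion only matters when all earlier criteria tie. The criteria and values below are illustrative stand-ins, not the patent's actual list:

```python
# Hypothetical candidates; `source_rank` 0 might denote a custom user word,
# 1 a standard vocabulary word.
candidates = [
    {"word": "toast", "edit_distance": 1, "source_rank": 1, "frequency": 100},
    {"word": "test",  "edit_distance": 0, "source_rank": 1, "frequency": 500},
    {"word": "text",  "edit_distance": 1, "source_rank": 0, "frequency": 900},
]

ordered = sorted(
    candidates,
    key=lambda c: (
        c["edit_distance"],   # criterion 1: fewer edits first
        c["source_rank"],     # criterion 2: considered only on ties in 1
        -c["frequency"],      # criterion 3: considered only on ties in 1 and 2
    ),
)
# A correction on a user word (source_rank 0) thus outranks a standard-word
# correction with the same edit distance, regardless of frequency.
```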
Word Completions Promoted
Because stem edit distance is the first criterion, completion the second, and so on, the word list effectively gets segmented as:
The system may allow the basic strategy to be specified. It may also automatically adapt the ordering based on recognized patterns of word selection, over and above the frequency/recency information recorded in the source databases. For example, the system may detect that most of the time the user selects a word completion whose first letters exactly match the input so far, and so may shift the word list ordering bias towards the “Completions Promoted” profile.
Other Features and Applications
Auto-substitution, e.g. macros: regional and spell correction may both apply to the shortcut, while word completion can apply to the expanded text. Thus, if an input sequence approximately matches both the shortcut and the stem of the expanded text, the ranking of the macro may be increased. Macros may be predefined or user-definable.
Keyword flagging, for advertising purposes, could benefit from auto-substitution and/or spell correction. For example, if the word in the mobile message was text slang or misspelled, the invention could still find a valid sponsored keyword.
An embodiment of the invention could be applied to an entire message buffer, i.e., in batch mode, whether its text was originally entered ambiguously or explicitly, e.g. via multi-tap, or was received as a message or file from another device.
The spell-corrected word choice can become the basis for further inputs, word completions, etc., if the input method permits auto-extending a word choice, including build-around rules with punctuation, etc. In one embodiment, a cascading menu pops up with a list of word completions for the selected word or stem.
The invention can also be applied to ambiguous entry for search and discovery. For example, if the user's input sequence is not closely matched by the content of the mobile device or the contents of server-based search engines, one or more spell-corrected interpretations which do result in matches may be offered.
While the examples above illustrate the invention's use with Latin-based languages, other embodiments may address the particular needs of other alphabets or scripts.
Although the invention is described herein with reference to the preferred embodiment, one skilled in the art will readily appreciate that other applications may be substituted for those set forth herein without departing from the spirit and scope of the present invention. Accordingly, the invention should only be limited by the Claims included below.
This application claims priority to U.S. provisional patent application Ser. No. 60/887,748, filed 1 Feb. 2007, the entirety of which is incorporated herein by this reference thereto.
Number | Name | Date | Kind |
---|---|---|---|
3980869 | Lombardino et al. | Sep 1976 | A |
4286329 | Goertzel et al. | Aug 1981 | A |
4365235 | Greanias et al. | Dec 1982 | A |
4439649 | Cecchi | Mar 1984 | A |
4454592 | Cason et al. | Jun 1984 | A |
4559598 | Goldwasser et al. | Dec 1985 | A |
4561105 | Crane et al. | Dec 1985 | A |
4573196 | Crane et al. | Feb 1986 | A |
4689768 | Heard et al. | Aug 1987 | A |
4710758 | Mussler et al. | Dec 1987 | A |
4725694 | Auer et al. | Feb 1988 | A |
4782464 | Gray et al. | Nov 1988 | A |
4783758 | Kucera | Nov 1988 | A |
4783761 | Gray et al. | Nov 1988 | A |
4891777 | Lapeyre | Jan 1990 | A |
4891786 | Goldwasser | Jan 1990 | A |
5109352 | O'Dell | Apr 1992 | A |
5127055 | Larkey | Jun 1992 | A |
5187480 | Thomas et al. | Feb 1993 | A |
5224179 | Denker et al. | Jun 1993 | A |
5261112 | Futatsugi et al. | Nov 1993 | A |
5305205 | Weber et al. | Apr 1994 | A |
5317507 | Gallant | May 1994 | A |
5347295 | Agulnick et al. | Sep 1994 | A |
5457454 | Sugano | Oct 1995 | A |
5462711 | Ricottone | Oct 1995 | A |
5533147 | Arai et al. | Jul 1996 | A |
5561446 | Montlick | Oct 1996 | A |
5572423 | Church | Nov 1996 | A |
5574482 | Niemeier | Nov 1996 | A |
5577170 | Karow | Nov 1996 | A |
5583946 | Gourdol | Dec 1996 | A |
5586198 | Lakritz | Dec 1996 | A |
5612690 | Levy | Mar 1997 | A |
5616031 | Logg | Apr 1997 | A |
5649223 | Freeman | Jul 1997 | A |
5664896 | Blumberg | Sep 1997 | A |
5675361 | Santilli | Oct 1997 | A |
5734749 | Yamada et al. | Mar 1998 | A |
5734750 | Arai et al. | Mar 1998 | A |
5745719 | Falcon | Apr 1998 | A |
5748512 | Vargas | May 1998 | A |
5754686 | Harada et al. | May 1998 | A |
5784008 | Raguseo | Jul 1998 | A |
5796867 | Chen et al. | Aug 1998 | A |
5798760 | Vayda et al. | Aug 1998 | A |
5799269 | Schabes et al. | Aug 1998 | A |
5805911 | Miller | Sep 1998 | A |
5812696 | Arai et al. | Sep 1998 | A |
5812697 | Sakai et al. | Sep 1998 | A |
5818437 | Grover et al. | Oct 1998 | A |
5828999 | Bellegarda et al. | Oct 1998 | A |
5870492 | Shimizu et al. | Feb 1999 | A |
5896321 | Miller et al. | Apr 1999 | A |
5917476 | Czerniecki | Jun 1999 | A |
5917889 | Brotman et al. | Jun 1999 | A |
5920303 | Baker et al. | Jul 1999 | A |
5923793 | Ikebata | Jul 1999 | A |
5926566 | Wang et al. | Jul 1999 | A |
5928588 | Chen et al. | Jul 1999 | A |
5933526 | Sklarew | Aug 1999 | A |
5937420 | Karow et al. | Aug 1999 | A |
5952942 | Balakrishnan et al. | Sep 1999 | A |
5953541 | King et al. | Sep 1999 | A |
5956021 | Kubota et al. | Sep 1999 | A |
5963671 | Comerford et al. | Oct 1999 | A |
5973676 | Kawakura | Oct 1999 | A |
6002390 | Masui | Dec 1999 | A |
6002799 | Sklarew | Dec 1999 | A |
6005495 | Connolly et al. | Dec 1999 | A |
6008799 | Van Kleeck | Dec 1999 | A |
6009444 | Chen | Dec 1999 | A |
6011554 | King et al. | Jan 2000 | A |
6018708 | Dahan et al. | Jan 2000 | A |
6028959 | Wang et al. | Feb 2000 | A |
6037942 | Millington | Mar 2000 | A |
6041137 | Van Kleeck | Mar 2000 | A |
6044165 | Perona et al. | Mar 2000 | A |
6052130 | Bardon et al. | Apr 2000 | A |
6054941 | Chen | Apr 2000 | A |
6075469 | Pong | Jun 2000 | A |
6088649 | Kadaba et al. | Jul 2000 | A |
6094197 | Buxton et al. | Jul 2000 | A |
6098034 | Razin et al. | Aug 2000 | A |
6104317 | Panagrossi | Aug 2000 | A |
6104384 | Moon et al. | Aug 2000 | A |
6111573 | Mccomb et al. | Aug 2000 | A |
6130962 | Sakurai | Oct 2000 | A |
6144764 | Yamakawa et al. | Nov 2000 | A |
6148104 | Wang | Nov 2000 | A |
6157379 | Singh | Dec 2000 | A |
6169538 | Nowlan et al. | Jan 2001 | B1 |
6172625 | Jin et al. | Jan 2001 | B1 |
6204848 | Nowlan et al. | Mar 2001 | B1 |
6212297 | Sklarew | Apr 2001 | B1 |
6215485 | Phillips | Apr 2001 | B1 |
6223059 | Haestrup | Apr 2001 | B1 |
6275611 | Parthasarathy | Aug 2001 | B1 |
6278445 | Tanaka et al. | Aug 2001 | B1 |
6285768 | Ikeda | Sep 2001 | B1 |
6286064 | King et al. | Sep 2001 | B1 |
6307548 | Flinchem et al. | Oct 2001 | B1 |
6307549 | King et al. | Oct 2001 | B1 |
6314418 | Namba | Nov 2001 | B1 |
6320943 | Borland | Nov 2001 | B1 |
6346894 | Connolly et al. | Feb 2002 | B1 |
6362752 | Guo et al. | Mar 2002 | B1 |
6392640 | Will | May 2002 | B1 |
6424743 | Ebrahimi | Jul 2002 | B1 |
6437709 | Hao | Aug 2002 | B1 |
6448987 | Easty et al. | Sep 2002 | B1 |
6453079 | Mcinerny | Sep 2002 | B1 |
6489951 | Wong et al. | Dec 2002 | B1 |
6493464 | Hawkins et al. | Dec 2002 | B1 |
6502118 | Chatterjee | Dec 2002 | B1 |
6542170 | Williams et al. | Apr 2003 | B1 |
6549219 | Selker | Apr 2003 | B2 |
6567072 | Watanabe | May 2003 | B2 |
6585162 | Sandbach et al. | Jul 2003 | B2 |
6611252 | Dufaux | Aug 2003 | B1 |
6616703 | Nakagawa | Sep 2003 | B1 |
6643647 | Natori | Nov 2003 | B2 |
6654733 | Goodman et al. | Nov 2003 | B1 |
6686852 | Guo | Feb 2004 | B1 |
6686907 | Su et al. | Feb 2004 | B2 |
6711290 | Sparr et al. | Mar 2004 | B2 |
6757544 | Rangarjan et al. | Jun 2004 | B2 |
6765554 | Millington | Jul 2004 | B2 |
6765567 | Roberson et al. | Jul 2004 | B1 |
6801190 | Robinson et al. | Oct 2004 | B1 |
6801659 | O'Dell | Oct 2004 | B1 |
6807529 | Johnson et al. | Oct 2004 | B2 |
6819315 | Toepke et al. | Nov 2004 | B2 |
6820075 | Shanahan et al. | Nov 2004 | B2 |
6829607 | Tafoya et al. | Dec 2004 | B1 |
6864809 | O'Dell et al. | Mar 2005 | B2 |
6904402 | Wang et al. | Jun 2005 | B1 |
6912581 | Johnson et al. | Jun 2005 | B2 |
6947771 | Guo et al. | Sep 2005 | B2 |
6955602 | Williams | Oct 2005 | B2 |
6956968 | O'Dell et al. | Oct 2005 | B1 |
6970599 | Longe et al. | Nov 2005 | B2 |
6973332 | Mirkin et al. | Dec 2005 | B2 |
6982658 | Guo | Jan 2006 | B2 |
6990534 | Mikhailov et al. | Jan 2006 | B2 |
7020270 | Ghassabian | Mar 2006 | B1 |
7020849 | Chen | Mar 2006 | B1 |
7030863 | Longe et al. | Apr 2006 | B2 |
7057607 | Mayoraz et al. | Jun 2006 | B2 |
7075520 | Williams | Jul 2006 | B2 |
7088345 | Robinson et al. | Aug 2006 | B2 |
7088861 | Van Meurs | Aug 2006 | B2 |
7095403 | Lyustin et al. | Aug 2006 | B2 |
7107204 | Liu et al. | Sep 2006 | B1 |
7117144 | Goodman et al. | Oct 2006 | B2 |
7139430 | Sparr et al. | Nov 2006 | B2 |
7149550 | Kraft et al. | Dec 2006 | B2 |
7151533 | Van | Dec 2006 | B2 |
7155683 | Williams | Dec 2006 | B1 |
7162305 | Tong et al. | Jan 2007 | B2 |
7177797 | Micher et al. | Feb 2007 | B1 |
7224989 | Kraft | May 2007 | B2 |
7256769 | Pun et al. | Aug 2007 | B2 |
7257528 | Ritchie et al. | Aug 2007 | B1 |
7272564 | Phillips et al. | Sep 2007 | B2 |
7275029 | Gao et al. | Sep 2007 | B1 |
7277088 | Robinson et al. | Oct 2007 | B2 |
7283999 | Ramesh et al. | Oct 2007 | B1 |
7286115 | Longe | Oct 2007 | B2 |
7293231 | Gunn et al. | Nov 2007 | B1 |
7313277 | Morwing et al. | Dec 2007 | B2 |
7349576 | Holtsberg | Mar 2008 | B2 |
7385531 | Zhang et al. | Jun 2008 | B2 |
7389235 | Dvorak | Jun 2008 | B2 |
7437001 | Morwing et al. | Oct 2008 | B2 |
7453439 | Kushler et al. | Nov 2008 | B1 |
7466859 | Chang et al. | Dec 2008 | B2 |
7584173 | Bax et al. | Sep 2009 | B2 |
7720682 | Stephanick et al. | May 2010 | B2 |
7750891 | Stephanick et al. | Jul 2010 | B2 |
7778818 | Longe et al. | Aug 2010 | B2 |
7821503 | Stephanick et al. | Oct 2010 | B2 |
7920132 | Longe et al. | Apr 2011 | B2 |
20010033295 | Phillips | Oct 2001 | A1 |
20010048425 | Partridge | Dec 2001 | A1 |
20020093491 | Allen et al. | Jul 2002 | A1 |
20020122072 | Selker | Sep 2002 | A1 |
20020135499 | Guo | Sep 2002 | A1 |
20020135561 | Rojewski | Sep 2002 | A1 |
20020145587 | Watanabe | Oct 2002 | A1 |
20020163544 | Baker et al. | Nov 2002 | A1 |
20020168107 | Tang et al. | Nov 2002 | A1 |
20020188448 | Goodman et al. | Dec 2002 | A1 |
20030006956 | Wu et al. | Jan 2003 | A1 |
20030011574 | Goodman | Jan 2003 | A1 |
20030023426 | Pun et al. | Jan 2003 | A1 |
20030033288 | Shanahan et al. | Feb 2003 | A1 |
20030048257 | Mattila | Mar 2003 | A1 |
20030054830 | Williams et al. | Mar 2003 | A1 |
20030144830 | Williams | Jul 2003 | A1 |
20030179930 | O'Dell et al. | Sep 2003 | A1 |
20030184451 | Li | Oct 2003 | A1 |
20030234766 | Hildebrand | Dec 2003 | A1 |
20040153963 | Simpson et al. | Aug 2004 | A1 |
20040153975 | Williams et al. | Aug 2004 | A1 |
20040163032 | Guo et al. | Aug 2004 | A1 |
20040243389 | Thomas et al. | Dec 2004 | A1 |
20040260694 | Chaudhuri et al. | Dec 2004 | A1 |
20050060138 | Wang et al. | Mar 2005 | A1 |
20050114770 | Sacher et al. | May 2005 | A1 |
20050135678 | Wecker et al. | Jun 2005 | A1 |
20050169527 | Longe et al. | Aug 2005 | A1 |
20050174333 | Robinson et al. | Aug 2005 | A1 |
20050190970 | Griffin | Sep 2005 | A1 |
20050210383 | Cucerzan et al. | Sep 2005 | A1 |
20050223308 | Gunn et al. | Oct 2005 | A1 |
20060062461 | Longe et al. | Mar 2006 | A1 |
20060129928 | Qiu | Jun 2006 | A1 |
20060136408 | Weir et al. | Jun 2006 | A1 |
20060155536 | Williams et al. | Jul 2006 | A1 |
20060158436 | LaPointe et al. | Jul 2006 | A1 |
20060173807 | Weir et al. | Aug 2006 | A1 |
20060176283 | Suraqui | Aug 2006 | A1 |
20060190819 | Ostergaard et al. | Aug 2006 | A1 |
20060193519 | Sternby | Aug 2006 | A1 |
20060236239 | Simpson et al. | Oct 2006 | A1 |
20060239560 | Sternby | Oct 2006 | A1 |
20060247915 | Bradford et al. | Nov 2006 | A1 |
20060274051 | Longe et al. | Dec 2006 | A1 |
20070016616 | Brill et al. | Jan 2007 | A1 |
20070040813 | Kushler et al. | Feb 2007 | A1 |
20070050360 | Hull et al. | Mar 2007 | A1 |
20070094718 | Simpson | Apr 2007 | A1 |
20070203879 | Templeton-Steadman et al. | Aug 2007 | A1 |
20070203894 | Jones et al. | Aug 2007 | A1 |
20070276653 | Greenwald et al. | Nov 2007 | A1 |
20070276814 | Williams | Nov 2007 | A1 |
20070285397 | LaPointe et al. | Dec 2007 | A1 |
20080100579 | Robinson et al. | May 2008 | A1 |
20080130996 | Sternby | Jun 2008 | A1 |
20080133222 | Kogan et al. | Jun 2008 | A1 |
20080291059 | Longe | Nov 2008 | A1 |
20090007001 | Morin et al. | Jan 2009 | A1 |
20090037399 | Bartz et al. | Feb 2009 | A1 |
20090089665 | White et al. | Apr 2009 | A1 |
20090105959 | Braverman et al. | Apr 2009 | A1 |
20090226098 | Takahashi et al. | Sep 2009 | A1 |
20090234826 | Bidlack | Sep 2009 | A1 |
20090284471 | Longe et al. | Nov 2009 | A1 |
20100082343 | Levit et al. | Apr 2010 | A1 |
20100257478 | Longe et al. | Oct 2010 | A1 |
20100325136 | Chaudhuri et al. | Dec 2010 | A1 |
20110193797 | Unruh | Aug 2011 | A1 |
20110234524 | Longe et al. | Sep 2011 | A1 |
Number | Date | Country |
---|---|---|
1116335 | Feb 1996 | CN |
1190205 | Aug 1998 | CN |
1232204 | Oct 1999 | CN |
1358299 | Jul 2002 | CN |
1606753 | Apr 2005 | CN |
3401942 | Nov 1984 | DE |
0114250 | Aug 1984 | EP |
0739521 | Oct 1996 | EP |
0762265 | Mar 1997 | EP |
0858023 | Aug 1998 | EP |
0961208 | Dec 1999 | EP |
1018679 | Jul 2000 | EP |
1085401 | Mar 2001 | EP |
1168780 | Jan 2002 | EP |
1355225 | Oct 2003 | EP |
2824979 | Nov 2002 | FR |
05-7010832 | Jan 1982 | JP |
60-204065 | Oct 1985 | JP |
60204065 | Oct 1985 | JP |
62065136 | Mar 1987 | JP |
1023021 | Jan 1989 | JP |
1047565 | Feb 1989 | JP |
05-027896 | Feb 1993 | JP |
1993081482 | Apr 1993 | JP |
05-233600 | Sep 1993 | JP |
6083512 | Mar 1994 | JP |
1994083512 | Mar 1994 | JP |
1994083816 | Mar 1994 | JP |
7094376 | Apr 1995 | JP |
1995146918 | Jun 1995 | JP |
1996305701 | Nov 1996 | JP |
8319721 | Dec 1996 | JP |
09-185612 | Jul 1997 | JP |
9185612 | Jul 1997 | JP |
10-143309 | May 1998 | JP |
10135399 | May 1998 | JP |
10143309 | May 1998 | JP |
10-154144 | Jun 1998 | JP |
10154144 | Jun 1998 | JP |
10-275046 | Oct 1998 | JP |
10275046 | Oct 1998 | JP |
11021274 | Jan 1999 | JP |
11028406 | Feb 1999 | JP |
1999338858 | Dec 1999 | JP |
2001043205 | Feb 2001 | JP |
2001043205 | Mar 2001 | JP |
2001282778 | Oct 2001 | JP |
2002244803 | Aug 2002 | JP |
2003005888 | Jan 2003 | JP |
2003500771 | Jan 2003 | JP |
2003533816 | Nov 2003 | JP |
20010107388 | Dec 2001 | KR |
20020004419 | Jan 2002 | KR |
498264 | Aug 2002 | TW |
9705541 | Feb 1997 | WO |
9816889 | Apr 1998 | WO |
9915952 | Apr 1999 | WO |
0072300 | Nov 2000 | WO |
0074240 | Dec 2000 | WO |
0188680 | Nov 2001 | WO |
01088680 | Nov 2001 | WO |
03021788 | Mar 2003 | WO |
2004111812 | Dec 2004 | WO |
2004111871 | Dec 2004 | WO |
2006026908 | Mar 2006 | WO |
Number | Date | Country | |
---|---|---|---|
20080189605 A1 | Aug 2008 | US |
Number | Date | Country | |
---|---|---|---|
60887748 | Feb 2007 | US |