1. Technical Field
The invention relates to text input technology. More specifically, the invention relates to text entry solutions for wireless communication devices that have limited keypads.
2. Description of the Prior Art
For many years, portable computers have been getting smaller and smaller. The principal size-limiting component in the effort to produce a smaller portable computer has been the keyboard. If standard typewriter-size keys are used, the portable computer must be at least as large as the keyboard. Miniature keyboards have been used on portable computers, but the miniature keyboard keys have been found to be too small to be easily or quickly manipulated by a user. Incorporating a full-size keyboard in a portable computer also hinders true portable use of the computer. Most portable computers cannot be operated without placing the computer on a flat work surface to allow the user to type with both hands. A user cannot easily use a portable computer while standing or moving.
Presently, tremendous growth in the wireless industry has spawned reliable, convenient, and very popular mobile devices for the average consumer, such as cell phones, PDAs, and MP3 players. Handheld wireless communications and computing devices requiring text input are becoming smaller still. Further, advances in portable wireless technologies have created demand for small, portable two-way messaging systems, both SMS and e-mail, and for mobile Web browsing. Wireless communications device manufacturers also desire to provide devices that the consumer can operate with the same hand that is holding the device.
Disambiguation Background
Prior development work has considered use of a keyboard that has a reduced number of keys. As suggested by the keypad layout of a touch-tone telephone, many of the reduced keyboards have used a 3-by-4 array of keys. Each key in the array of keys contains multiple characters. There is therefore ambiguity as a user enters a sequence of keys, because each keystroke may indicate one of several letters. Several approaches have been suggested for resolving the ambiguity of the keystroke sequence, referred to as disambiguation.
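Purely for illustration, the following minimal Python sketch shows the ambiguity such a reduced keyboard creates: it assumes the familiar ITU-T E.161 letter-to-key assignment and a small hypothetical word list, and returns every listed word that a given key sequence could represent.

```python
# Minimal sketch (not from the patent): word-level disambiguation of an ambiguous
# key sequence, assuming the ITU-T E.161 keypad layout and a hypothetical word list.

KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}
KEY_OF = {letter: key for key, letters in KEYPAD.items() for letter in letters}

WORDS = ["good", "home", "gone", "hood", "hoof", "in", "inn", "ion"]

def key_sequence(word: str) -> str:
    """Map a word to the ambiguous key sequence that would be pressed to enter it."""
    return "".join(KEY_OF[ch] for ch in word)

def candidates(sequence: str) -> list[str]:
    """Every listed word whose key sequence exactly matches the keys pressed so far."""
    return [w for w in WORDS if key_sequence(w) == sequence]

print(candidates("4663"))  # ['good', 'home', 'gone', 'hood', 'hoof'] -- one sequence, many words
```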
One suggested approach for unambiguously specifying characters entered on a reduced keyboard requires the user to enter, on average, two or more keystrokes to specify each letter. The keystrokes may be entered either simultaneously (chording) or in sequence (multiple-stroke specification). Neither chording nor multiple-stroke specification has produced a keyboard having adequate simplicity and efficiency of use. Multiple-stroke specification is inefficient, and chording is complicated to learn and use.
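By way of comparison, a multiple-stroke (multi-tap) scheme removes the ambiguity at the cost of extra keystrokes. The sketch below, again assuming the hypothetical keypad shown above, decodes an entry in which repeated presses of a key cycle through its letters and a pause (written here as a space) commits the current letter.

```python
KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}

def multitap_decode(taps: str) -> str:
    """Decode a multi-tap entry: each space-separated group of identical digits selects
    one letter, where the digit picks the key and the repetition count picks the letter."""
    word = []
    for group in taps.split():
        key, presses = group[0], len(group)
        word.append(KEYPAD[key][presses - 1])
    return "".join(word)

print(multitap_decode("44 666 6 33"))  # 'home' -- eight presses for a four-letter word
```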
Other suggested approaches for determining the correct character sequence that corresponds to an ambiguous keystroke sequence are summarized by J. L. Arnott and M. Y. Javed in “Probabilistic Character Disambiguation for Reduced Keyboards Using Small Text Samples,” Journal of the International Society for Augmentative and Alternative Communication (hereinafter the “Arnott article”). The Arnott article notes that the majority of disambiguation approaches employ known statistics of character sequences in the relevant language to resolve character ambiguity in a given context. The article also references research on word-based disambiguation systems and their respective advantages and disadvantages.
T9® Text Input is the leading commercial product offering word-level disambiguation for reduced keyboards, as taught in U.S. Pat. No. 5,818,437 and subsequent patents. Ordering the ambiguous words by frequency of use reduces the efficiency problems identified by previous research, and the ability to add new words makes it even easier to use over time. Input sequences may be interpreted simultaneously as words, word stems, and/or completions, numbers, and unambiguous character strings based on stylus tap location or keying patterns, such as multi-tap.
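The frequency ordering and stem/completion behavior described above might be pictured as follows; the vocabulary and frequency counts are hypothetical, and the sketch is not the commercial implementation.

```python
KEYPAD = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
          "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}
KEY_OF = {c: k for k, cs in KEYPAD.items() for c in cs}

# Hypothetical lexicon: word -> frequency of use.
LEXICON = {"good": 120, "home": 95, "gone": 60, "hood": 12, "goodbye": 30}

def seq(word: str) -> str:
    return "".join(KEY_OF[c] for c in word)

def interpret(sequence: str):
    """Exact matches ordered by frequency, plus longer words whose stems match the sequence."""
    exact = sorted((w for w in LEXICON if seq(w) == sequence),
                   key=lambda w: -LEXICON[w])
    completions = sorted((w for w in LEXICON
                          if seq(w).startswith(sequence) and seq(w) != sequence),
                         key=lambda w: -LEXICON[w])
    return exact, completions

print(interpret("4663"))  # (['good', 'home', 'gone', 'hood'], ['goodbye'])
```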
Another commonly used keyboard for small devices consists of a touch-sensitive panel on which some type of keyboard overlay has been printed, or a touch-sensitive screen with a keyboard overlay displayed. Depending on the size and nature of the specific keyboard, either a finger or a stylus can be used to interact with the panel or display screen in the area associated with the key or letter that the user intends to activate. Due to the reduced size of many portable devices, a stylus is often used to attain sufficient accuracy in activating each intended key.
The system described in U.S. Pat. No. 6,801,190 uses word-level auto-correction to resolve the accuracy problem and permit rapid entry on small keyboards. Because tap locations are presumed to be inaccurate, there is some ambiguity as to what the user intended to type. The user is presented with one or more interpretations of each keystroke sequence corresponding to a word, such that the user can easily select the desired interpretation. This approach enables the system to use the information contained in the entire sequence of keystrokes to resolve what the user's intention was for each character of the sequence.
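One way to picture word-level auto-correction of inexact taps is to score whole dictionary words by how far each recorded tap lies from the corresponding key center, rather than committing to a single key per tap. The key coordinates and word list below are hypothetical, and the sketch is only a simplified reading of the approach, not the patented implementation.

```python
import math

# Hypothetical key-center coordinates for part of an on-screen QWERTY keyboard.
KEY_CENTERS = {
    "t": (4.5, 0.0), "y": (5.5, 0.0), "u": (6.5, 0.0), "i": (7.5, 0.0), "o": (8.5, 0.0),
    "g": (4.8, 1.0), "h": (5.8, 1.0), "j": (6.8, 1.0),
    "e": (2.5, 0.0),
}

WORDS = ["the", "tie", "toe", "tho"]

def word_cost(word: str, taps: list[tuple[float, float]]) -> float:
    """Total distance between the taps and the key centers of the word's letters."""
    return sum(math.dist(tap, KEY_CENTERS[ch]) for tap, ch in zip(taps, word))

def interpretations(taps: list[tuple[float, float]]) -> list[str]:
    """Whole-word interpretations of the tap sequence, best (lowest total error) first."""
    return sorted((w for w in WORDS if len(w) == len(taps)),
                  key=lambda w: word_cost(w, taps))

# Three slightly inaccurate taps: near 't', between 'h' and 'j', near 'e'.
print(interpretations([(4.6, 0.2), (6.2, 0.9), (2.4, 0.1)]))  # 'the' ranks first
```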
Handwriting recognition is another approach that has been taken to solve the text input problem on small devices that have a touch-sensitive screen or pad that detects motion of a finger or stylus. Writing on the touch-sensitive panel or display screen generates a stream of data input indicating the contact points. The handwriting recognition software analyzes the geometric characteristics of the stream of data input to determine each character or word. Due to differences in individual writing styles and the limitations of handwriting technology on mobile devices, however, recognition accuracy is less than perfect, resulting in character ambiguity even though current handwriting systems do not typically reveal that ambiguity to the user.
A Need for Improvements to Current Disambiguation Methodologies
A specific challenge facing word-based disambiguation is that of providing sufficient feedback to the user about the keystrokes being input, particularly for the novice or infrequent user who is unfamiliar with the reduced keyboard layout or the disambiguation system. With an ordinary typewriter or word processor, each keystroke represents a unique character which can be displayed to the user as soon as it is entered. But with word-level disambiguation, for example, this is often not possible, because each entry represents multiple characters, and any sequence of entries may match multiple words or word stems. The ambiguity may confuse the first-time user if the system displays characters that change as each key is pressed: the user does not know that the system offers the desired word at the end, and he may needlessly attempt to fix each character before proceeding. Ambiguity is especially a problem when, for example, the user makes a spelling or entry error and is not aware of such error until the complete sequence is entered and the desired result is not presented. Displaying word stems that match the partial sequence reduces this problem, by showing when the user is on the right track towards the desired word, but does not eliminate it entirely, especially if some stems are concealed due to display space constraints.
U.S. Pat. No. 5,818,437 describes a keystroke window (102) that displays keycap icons representing the ambiguous key sequence, which confirms that the user pressed the intended keys. But the display provides neither an explanation of how the words are being assembled from that key sequence nor feedback that the user is on the right track.
Moreover, some alphabets, such as Thai and Arabic, contain more letters than the alphabet for English, which leads to even greater ambiguity on a reduced number of keys. Efficient and confident input of these languages demands a mechanism for reducing the appearance of that ambiguity when needed.
An apparatus and method for providing visual indication of character ambiguity and ensuing reduction of such ambiguity during text entry are described. An application text entry field is presented in a display screen, into which the user enters text by means of a reduced keyboard and a disambiguating system. The default or most likely word construct for the current key sequence may be presented at the insertion point of the text entry field. An indication of ambiguity is presented in the display screen to communicate to the user the possible characters associated with each key. A word choice list field may also be present to display at least one word construct matching the current key sequence.
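The following sketch renders, for a given key sequence, the three display elements just described: the default (most frequent) word at the insertion point, an ambiguity field listing the possible characters for each key pressed, and a word choice list of the remaining matches. It assumes the same hypothetical keypad and lexicon used in the earlier sketches and is illustrative only.

```python
KEYPAD = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
          "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}
KEY_OF = {c: k for k, cs in KEYPAD.items() for c in cs}
LEXICON = {"good": 120, "home": 95, "gone": 60, "hood": 12}

def seq(word: str) -> str:
    return "".join(KEY_OF[c] for c in word)

def render(sequence: str) -> None:
    matches = sorted((w for w in LEXICON if seq(w) == sequence), key=lambda w: -LEXICON[w])
    default = matches[0] if matches else ""
    # One group of possible characters per keypress, shown side by side.
    ambiguity_field = "   ".join(" ".join(KEYPAD[k]) for k in sequence)
    print("text entry field :", default + "|")       # default word at the insertion point
    print("ambiguity field  :", ambiguity_field)     # possible characters for each key
    print("word choice list :", ", ".join(matches))  # all word constructs matching the keys

render("4663")
# text entry field : good|
# ambiguity field  : g h i   m n o   m n o   d e f
# word choice list : good, home, gone, hood
```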
It should be appreciated and understood by one of ordinary skill in the art that the discussion herein applies to characters and sequences of characters which, when combined, make a linguistic object or part of an object. A typical example of a character is a letter, digit, punctuation mark, or any other symbol from a language. A typical example of an object or part of an object is a word or part of a word. However, the discussion herein applies equally to elements of other alphabetic, ideographic, and phonetic systems, such as Chinese zhuyin, Japanese kana, and Korean jamos. Also, the objects do not have to be linguistic, because the disambiguating system claimed herein can be used to look up icons, phone numbers, or inventory records, as long as some type of symbolic string representation is present. Therefore, terms such as letter, word, word stem, and the like are not limited to those applications; they are used to facilitate ease of reading and understanding the discussion herein.
When text is entered using a disambiguating text input system, the display often changes with each keystroke such as, for example, initially showing all possible ambiguities and further showing the ongoing convergence to a less ambiguous or even non-ambiguous solution, as illustrated in
As illustrated in
For example, as shown in
Next, if the user depresses the “6” key, corresponding characters “m”, “n”, “o” are displayed horizontally in the field 702, following the previously displayed characters “g”, “h”, and “i”, and word choices formed as a result of the successive depression of the “4” and “6” keys are displayed in field 703 of the succession screen
In the alternate embodiment illustrated in
In the alternate embodiment illustrated in
In one embodiment, the word choice list field 1003 is a scrolling field, which facilitates selection of a most likely word choice and enables the user to scroll up or down through the word choice list. The ambiguity field is updated as the user scrolls through the word choice list, to reflect the sequence of characters in the highlighted word choice.
In alternate embodiments, the ambiguity field 1002 may be displayed within or layered above the application text entry field 1001, or without the word choice list field displayed, and may include the sequence of numerals associated with each key depressed by the user.
U.S. patent application Ser. No. 10/176,933, which application is incorporated herein in its entirety by this reference thereto, relates to a method and apparatus for explicit character filtering in ambiguous text entry. The invention provides embodiments including various explicit character selection techniques, such as 2-key selection and long-pressing, and means for matching words in a database using various methodologies.
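As an illustrative reading (not the referenced application's actual implementation), explicit filtering can be pictured as pinning one position of the ambiguous sequence to a single character, for example one chosen by a 2-key method or a long press, and discarding every candidate word that disagrees:

```python
from typing import Dict, List, Optional

KEYPAD = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
          "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}
KEY_OF = {c: k for k, cs in KEYPAD.items() for c in cs}
LEXICON = ["good", "home", "gone", "hood", "hoof"]

def seq(word: str) -> str:
    return "".join(KEY_OF[c] for c in word)

def matches(sequence: str, explicit: Optional[Dict[int, str]] = None) -> List[str]:
    """Candidate words for an ambiguous sequence; `explicit` maps positions in the
    sequence to characters the user has entered explicitly."""
    explicit = explicit or {}
    return [w for w in LEXICON
            if seq(w) == sequence
            and all(w[i] == ch for i, ch in explicit.items())]

print(matches("4663"))                     # ['good', 'home', 'gone', 'hood', 'hoof']
print(matches("4663", explicit={2: "n"}))  # third character pinned to 'n' -> ['gone']
```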
Because explicit entry of characters reduces the ambiguity of an ambiguous key sequence, the effect can be reflected in the ambiguity field as shown in
In the sequence illustrated in
In one embodiment of the invention, the user is able to interact directly with the ambiguity field and select the desired character from each set of ambiguous characters. For example, on a touch-screen displaying the embodiment in
This becomes another means of explicitly entering a character. The system may offer a means to switch to a temporary mode for this explicit character entry. Depending on the letter chosen and the ambiguity of the matching word choices, the remaining ambiguity may be reduced significantly. For example, if at the step of
In another embodiment, the system determines which keypress of the current sequence is the most ambiguous, which may be defined as the keypress whose explicit entry would most reduce the number of word choices, or may be determined from the weighted frequency of the words and stems beginning with the sequence. The highlighted candidate character, or the entire ambiguous set, is marked with a different color or font attribute. Alternatively, the system may present the most ambiguous set in a list from which the user may select, automatically or upon a user action. This helps the user learn and appreciate when it may be beneficial to use an explicit method during entry. In another embodiment, each set of ambiguous characters is color-graded to indicate the characters' relative ambiguities.
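Under the first definition above, one plausible way to identify the most ambiguous keypress is to ask, for each position, how many candidates would on average remain if the user specified that position explicitly, and to pick the position with the lowest expectation. The sketch below does this with a hypothetical candidate list; it is one possible reading of the embodiment, not a prescribed implementation.

```python
from collections import Counter

KEYPAD = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
          "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}
KEY_OF = {c: k for k, cs in KEYPAD.items() for c in cs}
LEXICON = ["good", "home", "gone", "hood", "hoof"]

def seq(word: str) -> str:
    return "".join(KEY_OF[c] for c in word)

def most_ambiguous_position(sequence: str) -> int:
    """Index of the keypress whose explicit entry would most reduce the word choices,
    measured as the expected number of candidates left after pinning that position.
    Assumes the sequence matches at least one word in the lexicon."""
    candidates = [w for w in LEXICON if seq(w) == sequence]
    best_pos, best_expected = 0, float("inf")
    for i in range(len(sequence)):
        counts = Counter(w[i] for w in candidates)
        expected = sum(n * n for n in counts.values()) / len(candidates)
        if expected < best_expected:
            best_pos, best_expected = i, expected
    return best_pos

print(most_ambiguous_position("4663"))  # 3: pinning the last keypress prunes the list the most
```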
For instance, if the user enters the sequence corresponding to “466” and the matching words/stems are:
Then, if the user selects “n”, the list changes to:
When the user subsequently presses the “3” key, only:
In a further embodiment, the user may temporarily connect an external keyboard to the device, e.g. via a Bluetooth wireless connection, and type the specific letter for each of one or more selected ambiguous inputs or complete the current word using only unambiguous inputs. The system may switch to a temporary mode for explicit character entry automatically when an external keyboard is detected. The reduction in ambiguity from any such inputs would again be reflected in the ambiguity field.
Some languages and key layouts may assign more characters per key than the device can display for each keypress, so a modified presentation of the ambiguity field must be used. In one embodiment, the characters on each key are sorted, first by those which are valid and second by frequency of use, and a predetermined number of them are displayed. In another embodiment, the character class, vowel group, or diacritic marks are indicated generically so that each need not be shown separately. With the horizontal format in
In another embodiment of the invention the ambiguity is only partially reduced by explicit entry. For example, after the user presses the “3” key on a phone that supports multiple European languages the system may display the set of characters “d”, “e”, “f”, “é”, “è”, and “ë” in the ambiguity field. If, through a means as described above, the user explicitly enters an ‘e’ but does not have the means or desire to specify the accent, the system reduces the ambiguous set displayed for that input to “e”, “é”, “è”, and “ë”. The remaining ambiguity based on the matching word choices may still be reduced significantly, which is reflected in a reduction in ambiguity among the other inputs shown in the ambiguity field.
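The two presentation refinements described in the preceding paragraphs, showing only the most relevant characters of an oversized set (valid characters first, then by frequency, truncated to the available space) and only partially reducing a set when the user specifies a base letter but not its accent, might be sketched together as follows. The key layout, validity set, and frequency table are hypothetical.

```python
import unicodedata
from typing import Optional

# Hypothetical multilingual '3' key, per-character frequencies, and display width.
KEY_3 = "deféèêë"
CHAR_FREQ = {"e": 90, "d": 35, "f": 20, "é": 8, "è": 5, "ê": 3, "ë": 1}

def base_letter(ch: str) -> str:
    """Strip diacritics, so that 'é', 'è', 'ê', and 'ë' all reduce to 'e'."""
    return unicodedata.normalize("NFD", ch)[0]

def ambiguity_cell(key_chars: str, valid: set, explicit_base: Optional[str] = None,
                   max_shown: int = 4) -> str:
    """Characters displayed for one keypress in the ambiguity field."""
    chars = list(key_chars)
    if explicit_base is not None:
        # Partial explicit entry: keep only characters sharing the chosen base letter.
        chars = [c for c in chars if base_letter(c) == explicit_base]
    # Valid characters first, then by frequency of use; truncate to the available space.
    chars.sort(key=lambda c: (c not in valid, -CHAR_FREQ.get(c, 0)))
    shown = chars[:max_shown]
    return " ".join(shown) + (" ..." if len(chars) > max_shown else "")

valid = {"d", "e", "é"}   # characters that still lead to matching word choices
print(ambiguity_cell(KEY_3, valid))                      # 'e d é f ...'
print(ambiguity_cell(KEY_3, valid, explicit_base="e"))   # 'e é è ê ...' (accent still ambiguous)
```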
As further illustrated in
Although the invention is described herein with reference to the preferred embodiment, one skilled in the art will readily appreciate that other applications may be substituted for those set forth herein without departing from the spirit and scope of the present invention.
Accordingly, the invention should only be limited by the Claims included below.
This application is a Continuation-in-Part of co-pending application, U.S. Ser. No. 10/176,933, filed on Jun. 20, 2002, and entitled “EXPLICIT CHARACTER FILTERING OF AMBIGUOUS TEXT ENTRY”. This co-pending application is incorporated herein in its entirety by this reference thereto. This application also claims priority from U.S. Provisional Patent Application Ser. No. 60/625,378, filed on Nov. 5, 2004, and entitled “VISUAL INDICATION OF CHARACTER AMBIGUITY AND THE ENSUING REDUCTION OF AMBIGUITY DURING (T9) TEXT ENTRY,” which is also incorporated herein in its entirety by this reference thereto.
Number | Name | Date | Kind |
---|---|---|---|
3967273 | Knowlton | Jun 1976 | A |
4164025 | Dubnowski et al. | Aug 1979 | A |
4191854 | Coles | Mar 1980 | A |
4339806 | Yoshida | Jul 1982 | A |
4360892 | Endfield | Nov 1982 | A |
4396992 | Hayashi et al. | Aug 1983 | A |
4427848 | Tsakanikas | Jan 1984 | A |
4442506 | Endfield | Apr 1984 | A |
4464070 | Hanft et al. | Aug 1984 | A |
4481508 | Kamei et al. | Nov 1984 | A |
4544276 | Horodeck | Oct 1985 | A |
4586160 | Amano et al. | Apr 1986 | A |
4649563 | Riskin | Mar 1987 | A |
4661916 | Baker et al. | Apr 1987 | A |
4669901 | Feng | Jun 1987 | A |
4674112 | Kondraske et al. | Jun 1987 | A |
4677659 | Dargan | Jun 1987 | A |
4679951 | King et al. | Jul 1987 | A |
4744050 | Hirosawa et al. | May 1988 | A |
4754474 | Feinson | Jun 1988 | A |
RE32773 | Goldwasser et al. | Oct 1988 | E |
4791556 | Vilkaitis | Dec 1988 | A |
4807181 | Duncan, IV et al. | Feb 1989 | A |
4817129 | Riskin | Mar 1989 | A |
4866759 | Riskin | Sep 1989 | A |
4872196 | Royer et al. | Oct 1989 | A |
4891786 | Goldwasser | Jan 1990 | A |
4969097 | Levin | Nov 1990 | A |
5018201 | Sugawara | May 1991 | A |
5031206 | Riskin | Jul 1991 | A |
5041967 | Ephrath et al. | Aug 1991 | A |
5067103 | Lapeyre | Nov 1991 | A |
5109352 | O'Dell | Apr 1992 | A |
5128672 | Kaehler | Jul 1992 | A |
5131045 | Roth | Jul 1992 | A |
5133012 | Nitta | Jul 1992 | A |
5163084 | Kim et al. | Nov 1992 | A |
5200988 | Riskin | Apr 1993 | A |
5210689 | Baker et al. | May 1993 | A |
5218538 | Zhang | Jun 1993 | A |
5229936 | Decker et al. | Jul 1993 | A |
5255310 | Kim et al. | Oct 1993 | A |
5258748 | Jones | Nov 1993 | A |
5288158 | Matias | Feb 1994 | A |
5289394 | Lapeyre | Feb 1994 | A |
5303299 | Hunt et al. | Apr 1994 | A |
5305205 | Weber et al. | Apr 1994 | A |
5339358 | Danish et al. | Aug 1994 | A |
5371851 | Pieper et al. | Dec 1994 | A |
5388061 | Hankes | Feb 1995 | A |
5392338 | Danish et al. | Feb 1995 | A |
5535421 | Weinreich | Jul 1996 | A |
5559512 | Jasinski et al. | Sep 1996 | A |
5642522 | Zaenen et al. | Jun 1997 | A |
5664896 | Blumberg | Sep 1997 | A |
5680511 | Baker et al. | Oct 1997 | A |
5748512 | Vargas | May 1998 | A |
5786776 | Kisaichi et al. | Jul 1998 | A |
5797098 | Schroeder et al. | Aug 1998 | A |
5805911 | Miller | Sep 1998 | A |
5818437 | Grover et al. | Oct 1998 | A |
5825353 | Will | Oct 1998 | A |
5828991 | Skiena et al. | Oct 1998 | A |
5847697 | Sugimoto | Dec 1998 | A |
5855000 | Waibel et al. | Dec 1998 | A |
5896321 | Miller et al. | Apr 1999 | A |
5917890 | Brotman et al. | Jun 1999 | A |
5917941 | Webb et al. | Jun 1999 | A |
5926566 | Wang et al. | Jul 1999 | A |
5936556 | Sakita | Aug 1999 | A |
5937380 | Segan | Aug 1999 | A |
5937422 | Nelson et al. | Aug 1999 | A |
5945928 | Kushler et al. | Aug 1999 | A |
5952942 | Balakrishnan et al. | Sep 1999 | A |
5953541 | King et al. | Sep 1999 | A |
5960385 | Skiena et al. | Sep 1999 | A |
5963671 | Comerford et al. | Oct 1999 | A |
5999950 | Krueger et al. | Dec 1999 | A |
6005498 | Yang et al. | Dec 1999 | A |
6009444 | Chen | Dec 1999 | A |
6011554 | King et al. | Jan 2000 | A |
6041323 | Kubota | Mar 2000 | A |
6044347 | Abella et al. | Mar 2000 | A |
6054941 | Chen | Apr 2000 | A |
6073101 | Maes | Jun 2000 | A |
6098086 | Krueger et al. | Aug 2000 | A |
6104317 | Panagrossi | Aug 2000 | A |
6120297 | Morse, III et al. | Sep 2000 | A |
6130628 | Schneider-Hufschmidt et al. | Oct 2000 | A |
6169538 | Nowlan et al. | Jan 2001 | B1 |
6172625 | Jin et al. | Jan 2001 | B1 |
6178401 | Franz et al. | Jan 2001 | B1 |
6204848 | Nowlan et al. | Mar 2001 | B1 |
6208966 | Bulfer | Mar 2001 | B1 |
6219731 | Gutowitz | Apr 2001 | B1 |
6223059 | Haestrup | Apr 2001 | B1 |
6246761 | Cuddy | Jun 2001 | B1 |
6286064 | King et al. | Sep 2001 | B1 |
6304844 | Pan et al. | Oct 2001 | B1 |
6307548 | Flinchem et al. | Oct 2001 | B1 |
6307549 | King et al. | Oct 2001 | B1 |
6346894 | Connolly et al. | Feb 2002 | B1 |
6362752 | Guo et al. | Mar 2002 | B1 |
6363347 | Rozak | Mar 2002 | B1 |
6377965 | Hachamovitch et al. | Apr 2002 | B1 |
6392640 | Will | May 2002 | B1 |
6421672 | McAllister et al. | Jul 2002 | B1 |
6424743 | Ebrahimi | Jul 2002 | B1 |
6466232 | Newell et al. | Oct 2002 | B1 |
6502118 | Chatterjee | Dec 2002 | B1 |
6542170 | Williams et al. | Apr 2003 | B1 |
6559778 | Hillmering | May 2003 | B1 |
6567075 | Baker et al. | May 2003 | B1 |
6574597 | Mohri et al. | Jun 2003 | B1 |
6584179 | Fortier et al. | Jun 2003 | B1 |
6633846 | Bennett et al. | Oct 2003 | B1 |
6636162 | Kushler et al. | Oct 2003 | B1 |
6646573 | Kushler et al. | Nov 2003 | B1 |
6665640 | Bennett et al. | Dec 2003 | B1 |
6684185 | Junqua et al. | Jan 2004 | B1 |
6686852 | Guo | Feb 2004 | B1 |
6711290 | Sparr et al. | Mar 2004 | B2 |
6728348 | Denenberg et al. | Apr 2004 | B2 |
6734881 | Will | May 2004 | B1 |
6738952 | Yamamuro | May 2004 | B1 |
6751605 | Gunji et al. | Jun 2004 | B2 |
6757544 | Rangarajan et al. | Jun 2004 | B2 |
6801190 | Robinson et al. | Oct 2004 | B1 |
6801659 | O'Dell | Oct 2004 | B1 |
6807529 | Johnson et al. | Oct 2004 | B2 |
6864809 | O'Dell et al. | Mar 2005 | B2 |
6885317 | Gutowitz | Apr 2005 | B1 |
6912581 | Johnson et al. | Jun 2005 | B2 |
6920420 | Lin | Jul 2005 | B2 |
6934564 | Laukkanen et al. | Aug 2005 | B2 |
6947771 | Guo et al. | Sep 2005 | B2 |
6955602 | Williams | Oct 2005 | B2 |
6956968 | O'Dell et al. | Oct 2005 | B1 |
6973332 | Mirkin et al. | Dec 2005 | B2 |
6982658 | Guo | Jan 2006 | B2 |
6985933 | Singhal et al. | Jan 2006 | B1 |
7006820 | Parket et al. | Feb 2006 | B1 |
7013258 | Su et al. | Mar 2006 | B1 |
7020849 | Chen | Mar 2006 | B1 |
7027976 | Sites | Apr 2006 | B1 |
7030863 | Longe | Apr 2006 | B2 |
7057607 | Mayoraz et al. | Jun 2006 | B2 |
7061403 | Fux | Jun 2006 | B2 |
7075520 | Williams | Jul 2006 | B2 |
7095403 | Lyustin et al. | Aug 2006 | B2 |
7098896 | Kushler et al. | Aug 2006 | B2 |
7139430 | Sparr et al. | Nov 2006 | B2 |
7152213 | Pu et al. | Dec 2006 | B2 |
7224292 | Lazaridis et al. | May 2007 | B2 |
7256769 | Pun et al. | Aug 2007 | B2 |
7257528 | Ritchie et al. | Aug 2007 | B1 |
7272564 | Phillips et al. | Sep 2007 | B2 |
7313277 | Morwing et al. | Dec 2007 | B2 |
7349576 | Hotsberg | Mar 2008 | B2 |
7386454 | Gopinath et al. | Jun 2008 | B2 |
7389235 | Dvorak | Jun 2008 | B2 |
7395203 | Wu et al. | Jul 2008 | B2 |
7437001 | Morwing et al. | Oct 2008 | B2 |
7466859 | Chang et al. | Dec 2008 | B2 |
7598890 | Park et al. | Oct 2009 | B2 |
7626574 | Kim | Dec 2009 | B2 |
7679534 | Kay et al. | Mar 2010 | B2 |
7881936 | Longe et al. | Feb 2011 | B2 |
8095364 | Longe et al. | Jan 2012 | B2 |
20010040517 | Kisaichi et al. | Nov 2001 | A1 |
20020019731 | Masui et al. | Feb 2002 | A1 |
20020038207 | Mori et al. | Mar 2002 | A1 |
20020072395 | Miramontes | Jun 2002 | A1 |
20020097227 | Chu et al. | Jul 2002 | A1 |
20020119788 | Parupudi et al. | Aug 2002 | A1 |
20020126097 | Savolainen | Sep 2002 | A1 |
20020135499 | Guo | Sep 2002 | A1 |
20020145587 | Watanabe | Oct 2002 | A1 |
20020152075 | Kung et al. | Oct 2002 | A1 |
20020188448 | Goodman et al. | Dec 2002 | A1 |
20020196163 | Bradford et al. | Dec 2002 | A1 |
20030011574 | Goodman | Jan 2003 | A1 |
20030023420 | Goodman | Jan 2003 | A1 |
20030023426 | Pun et al. | Jan 2003 | A1 |
20030036411 | Kraft | Feb 2003 | A1 |
20030054830 | Williams et al. | Mar 2003 | A1 |
20030078038 | Kurosawa et al. | Apr 2003 | A1 |
20030088398 | Guo et al. | May 2003 | A1 |
20030095102 | Kraft et al. | May 2003 | A1 |
20030101060 | Bickley | May 2003 | A1 |
20030104839 | Kraft et al. | Jun 2003 | A1 |
20030119561 | Hatch et al. | Jun 2003 | A1 |
20030144830 | Williams | Jul 2003 | A1 |
20030179930 | O'Dell et al. | Sep 2003 | A1 |
20030193478 | Ng | Oct 2003 | A1 |
20030212563 | Ju et al. | Nov 2003 | A1 |
20040049388 | Roth et al. | Mar 2004 | A1 |
20040052355 | Awada et al. | Mar 2004 | A1 |
20040067762 | Balle | Apr 2004 | A1 |
20040104896 | Suraqui | Jun 2004 | A1 |
20040127197 | Roskind | Jul 2004 | A1 |
20040127198 | Roskind et al. | Jul 2004 | A1 |
20040135774 | La Monica | Jul 2004 | A1 |
20040153963 | Simpson et al. | Aug 2004 | A1 |
20040153975 | Williams et al. | Aug 2004 | A1 |
20040155869 | Robinson et al. | Aug 2004 | A1 |
20040163032 | Guo et al. | Aug 2004 | A1 |
20040169635 | Ghassabian | Sep 2004 | A1 |
20040201607 | Mulvey et al. | Oct 2004 | A1 |
20040203656 | Andrew et al. | Oct 2004 | A1 |
20040243257 | Theimer | Dec 2004 | A1 |
20040259598 | Wagner et al. | Dec 2004 | A1 |
20050017954 | Kay et al. | Jan 2005 | A1 |
20050114770 | Sacher et al. | May 2005 | A1 |
20060007162 | Kato | Jan 2006 | A1 |
20060010206 | Apacible et al. | Jan 2006 | A1 |
20060028450 | Suraqui | Feb 2006 | A1 |
20060129928 | Qiu | Jun 2006 | A1 |
20060136408 | Weir et al. | Jun 2006 | A1 |
20060155536 | Williams et al. | Jul 2006 | A1 |
20060158436 | LaPointe et al. | Jul 2006 | A1 |
20060173807 | Weir et al. | Aug 2006 | A1 |
20060190822 | Basson et al. | Aug 2006 | A1 |
20060193519 | Sternby | Aug 2006 | A1 |
20060236239 | Simpson et al. | Oct 2006 | A1 |
20060239560 | Sternby | Oct 2006 | A1 |
20070094718 | Simpson | Apr 2007 | A1 |
20070203879 | Templeton-Steadman et al. | Aug 2007 | A1 |
20070276814 | Williams | Nov 2007 | A1 |
20070285397 | LaPointe et al. | Dec 2007 | A1 |
20080130996 | Sternby | Jun 2008 | A1 |
Number | Date | Country |
---|---|---|
0313975 | May 1989 | EP |
0319193 | Jun 1989 | EP |
0464726 | Jan 1992 | EP |
0540147 | May 1993 | EP |
0651315 | May 1995 | EP |
0660216 | Jun 1995 | EP |
0732646 | Sep 1996 | EP |
0751469 | Jan 1997 | EP |
1031913 | Aug 2000 | EP |
1035712 | Sep 2000 | EP |
1256875 | Nov 2002 | EP |
1296216 | Mar 2003 | EP |
1320023 | Jun 2003 | EP |
1324573 | Jul 2003 | EP |
1341156 | Sep 2003 | EP |
1347361 | Sep 2003 | EP |
1347362 | Sep 2003 | EP |
1522920 | Apr 2005 | EP |
2298166 | Aug 1996 | GB |
2383459 | Jun 2003 | GB |
61-282965 | Dec 1986 | JP |
A 1990-117218 | May 1990 | JP |
03-141460 | Jun 1991 | JP |
A 1993-265682 | Oct 1993 | JP |
8006939 | Jan 1996 | JP |
A 1997-114817 | May 1997 | JP |
A 1997-212503 | Aug 1997 | JP |
11-312046 | Nov 1999 | JP |
2000-508093 | Jun 2000 | JP |
2001509290 | Jul 2001 | JP |
2001-224075 | Aug 2001 | JP |
2001-251395 | Sep 2001 | JP |
2002-014956 | Jan 2002 | JP |
2002-141996 | May 2002 | JP |
A 2002-351862 | Dec 2002 | JP |
2003-116179 | Apr 2003 | JP |
2003-196273 | Jul 2003 | JP |
476033 | Feb 2002 | TW |
559783 | Nov 2003 | TW |
WO8200442 | Feb 1982 | WO |
WO9007149 | Jun 1990 | WO |
WO9627947 | Sep 1996 | WO |
WO9704580 | Feb 1997 | WO |
WO9705541 | Feb 1997 | WO |
WO-9833111 | Jul 1998 | WO |
WO 0035091 | Jun 2000 | WO |
WO03058420 | Jul 2003 | WO |
WO03060451 | Jul 2003 | WO |
WO2004003721 | Jan 2004 | WO |
WO2004110040 | Dec 2004 | WO |
WO2006026908 | Mar 2006 | WO |
Entry |
---|
Ajioka, Y., Anzai, Y. “Prediction of Next Alphabets and Words of Four Sentences by Adaptive Injunctions” IJCNN-91-Seattle: Int'l Joint Conference on Neural Networks (Cat. No. 91CH3049-4) p. 897, vol. 2; IEEE, NY, NY 1991 USA. |
Martin, T.Azvine, B., “Learning User Models for an Intelligent Telephone Assistant”; Proceedings Joint 9th IFSA World Congress and 20th NAFIPS Intnl. Conf. (Cat. No. 01TH8569) Part vol. 2, p. 669-74 vol. 2; IEEE 2001, Piscataway, NJ, USA. |
Yang, Y., Pedersen, J., “A Comparative Study on Feature Selection in Text Categorization”; 1997; Proceedings of ICML'1997, pp. 412-420. |
Kronlid, F., Nilsson, V. “TreePredict, Improving Text Entry on PDA's”; 2001; Proceedings of the Conference on Human Factors in Computing Systems (CHI2001), ACM press, pp. 441-442. |
Zernik, U., “Language Acquisition: Coping with Lexical Gaps”, Aug. 22-27, 1998; Proceedings of the 12th International Conference on Computational Linguistics, Budapest, Hungary. pp. 796-800. |
Gavalda, M. “Epiphenomenal Grammar Acquisition with GSG”; May 2000; Proceedings of the Workshop on Conversational Systems of the 6th Conf. on Applied Natural Language Processing and the 1st Conf. of the N. American Chapter of the Assoc. for Computational Linguistics (ANLP/NAACL-2000), Seattle, Washington. |
Cockburn, A., Siresena, “Evaluating Mobile Text Entry with Fastap™ Keyboard”; 2003; People and Computers XVII (vol. 2): British Computer Society Conference on Human Computer Interaction. Bath, England. pp. 77-80. |
Butts, L., Cockburn, A., “An Evaluation of Mobile Phone Text Input Methods”, University of Canterbury, Dept of Computer Science, Christchurch, New Zealand AUIC2002, Melbourne Australia, Conferences in Research and Practice in Information Technology, vol. 7; Copyright 2001, Australian Computer Society. |
Shieber, S., Baker, E., “Abbreviated Text Input”, Harvard University, Cambridge, MA, USA shieber@deas.harvard.edu ellie@eecs.harvard.edu; IUI'03, Jan. 12-15, 2003, ACM 1-58113-586-06/03/0001. |
Rosa, J. “Next Word Prediction in a Connectional Distributed Representation System”; 2002 IEEE Intl. Conference on Systems, Man and Cybernetics; Conf. Proceedings (Cat. No. 02CH37349) Part vol. 3, p. 6, Yasmine Hammamet, Tunisia, Oct. 2002. |
Rosa, J., “A Biologically Motivated Connectionist System for Predicting the Next Word in Natural Language Sentences”, 2002 IEEE Intl. Conference on Systems, Man and Cybernetics; Conf. Proceedings (Cat. No. 02CH37349) Part vol. 4, p. 6, Yasmine Hammamet, Tunisia, Oct. 2002. |
Masui, “POBox: An Efficient Text Input Method for Handheld and Ubiquitous Computers”; Sony Computer Science Labs Inc., 3-14-13 Higashi-Gotanda, Shinagawa, Tokyo 141-0022, Japan. |
Swiffin, A.L., et al., “PAL: An Effort Efficient Portable Communication Aid and Keyboard Emulator,” RESNA 8th Annual Conference, Memphis, Tennessee, 1985, pp. 197, 199. |
Witten, I.H., Principles of Computer Speech, New York: Academic Press, (1982), pp. 246-253. |
Dey, A.K. and Abowd, G. D. (1999). Towards a better understanding of context and context-awareness. GVU Technical Report GIT-GVU-99-2, GVU Center, 1999. |
Coppola, P. et al, Mobe: a framework for context-aware mobile applications. In: Proc. of Workshop on Context Awareness for Proactive Systems (CAPS2005), Helsinki University Press, 2005; ISBN:952-10-2518-2. |
Schmidt, A. et al; Advanced Interaction in Context, In Proceedings of First International Symposium of Handheld and Ubiquitous Computing, pp. 89-101, Karlsruhe, Germany, Sep. 1999. |
Siewiorek, D.P., et al, SenSay: A context-aware mobile phone. In proceedings of the 7th International Symposium on Wearable Computers, pp. 248-249, IEEE Press, 2003. |
Motorola Lexicus Technologies & SOK's iTAP page; Sep. 2002, retrieved from: www.motorola.com/lexicus/html/itap—FAQ.html. |
MacKenzie, et al; “Text Entry for Mobile Computing: Models and Methods, Theory and Practice”;Sep. 2002; retrieved from website www.yorku.ca/mack/hci3.html. |
Arnott, J.L., et al; Probabilistic Character Disambiguation for Reduced Keyboards Using Small Text Samples; Dept. Math & comp. Sci.; Univ of Dundee, Dundee, Tayside, Scotland; AAC Augmentative and Alternative Communication ; vol. 8, Sep. 1992; Copyright 1992 by ISAAC. |
Demasco, Patrick W., et al., “Generating Text From Compressed Input: An Intelligent Interface for People with Severe Motor Impairments”, Communications of the ACM, vol. 35 No. 5, May 1992, pp. 68-78. |
James, Christina L., et al., “Text Input for Mobile Devices: Comparing Model Prediction to Actual Performance”, SIGCHI '01, Seattle, WA, Mar. 31-Apr. 4, 2001, pp. 365-371 [ACM 1-58113-327-8/01/0003]. |
MacKenzie, I. Scott, et al., “LetterWise: Prefix-based Disambiguation for Mobile Text Input”, UIST '01, Orlando, FL, Nov. 11-14, 2001, pp. 111-120 [ACM 1-58113-438-x/01/11]. |
Levine, S.H., et al., “Multi-Character Key Text Entry Using Computer Disambiguation,” RESNA 10th Annual Conference, San Jose, California, 1987, pp. 177-178. |
Xu, Jinxi, et al., “Corpus-Based Stemming Using Cooccurrence of Word Variants”, ACM Transactions on Information Systems, vol. 16 No. 1, Jan. 1998, pp. 61-81 [ACM 1046-8188/98/0100-0061]. |
Press Release from Tegic Communications, “America Online, Inc. Acquires Tegic Communications”, Dec. 1, 1999, pp. 1-3 (downloaded from: www.tegic.com/pressreleases/pr—aolacquisition.html). |
News Release from Zi Corporation, “Zi Claims Second Patent Victory Against Tegic Communications, a unit of AOL Time Warner”, Mar. 14, 2002, pp. 1-2 (downloaded from: www.zicorp.com/pressreleases/031402.html). |
Summary Judgment Orders, Zi Corporation, Inc. v. Tegic Communications, Inc., Mar. 13, 2002, pp. 1-7 (downloaded from: www.zicorp.com/pressreleases/031402.html). |
Silfverberg, Miika, et al., “Bringing Text Input Beyond the Desktop”, CHI 2000, The Hague, Amsterdam, Apr. 1-6, 2000, pp. 9-16 [ACM 1-58113-216-6/00/04]. |
“Latest Philips Wireless Handset Ships With T9 Text Input in China”, Business Wire, Nov. 9, 1999, pp. 1-2 (downloaded from: www.businesswire.com/webbox/bx.110999/193130342.htm). |
Tygran, Amalyan, “T9 or Text Predicative Input in Mobile Telephones”, Business Wire, Jul. 23, 2001, pp. 1-5 (downloaded from: web.archive.org/wweb/20010723054055/http://www.digit-life.com/articles/mobilet9/). |
James, Christina, et al., “Bringing Text Input Beyond the Desktop”, CHI 2000, Seattle, WA, Apr. 1-6, 2000, pp. 49-50. |
Kushler, Cliff, “AAC Using a Reduced Keyboard”, downloaded from: www.dinf.ne.jp/doc/english/Us—Eu/conf/csun—98/csun98—140.htm, Web Posted Mar. 3, 1998, pp. 1-4. |
Sugimoto, Masakatsu, “Single-Hand Input Scheme for English and Japanese Text”, Fujitsu Sci. Tech.J., vol. 33 No. 2, Dec. 1997, pp. 189-195. |
http://www.pinyin.info/readings/texts/ideographic—myth.html. The Ideographic Myth. 1984. |
http://www.ling.upenn.edu/courses/Fall—2003/ling001/reading—writing.html. What is writing? Linguistics 001. Lecture 19. Reading and Writing 2003. |
Making Multi-tap Intelligent; retrieved Feb. 7, 2006 from website: http://www.zicorp.com/ezitap.htm. |
Tapless ppd Gen3.0; retrieved Feb. 7, 2006 from website: http://www.tapless.biz/. |
WordLogic for Handheld Computers—http://web.archive.org/web/20051030092534/www.wordlogic.com/products-predictive-keyboard-handheld-prediction.asp ; Oct. 30, 2005; retrieved from webarchive.org. |
Welcome to the Nuance Corporate Website; retrieved on Feb. 7, 2006 from website: http://www.nuance.com/. |
Suhm B., et al. “Multimodal Error Correction for Speech User Interfaces” ACM Transactions on Computer-Human Interaction, vol. 8. Mar. 2001. |
Oviatt,S. “Mutual Disambiguation of Recognition Errors in a Multimodal Architecture.” Chi 99. May 15-29, 1999. |
Foulds, R., et al. “Lexical Prediction Techniques Applied to Reduce Motor Requirements for Augmentative Communication,” RESNA 10th Annual Conference, San Jose, California, 1987, pp. 115-117. |
Foulds, R., et al., “Statistical Disambiguation of Multi-Character Keys Applied to Reduce Motor Requirements for Augmentative and Alternative Communication,” AAC Augmentative and Alternative Communication (1987), pp. 192-195. |
IBM Technical Disclosure Bulletin, “Speed Keyboard for Data Processor,” vol. 23, 3 pages, Jul. 1980. IBM Corp., 1993. |
Kamphuis, H., et al., “Katdas; A Small Number of Keys Direct Access System,” RESNA 12th Annual Conference, New Orleans, Louisiana, 1989, pp. 278-279. |
King, M.T., “JustType-Efficient Communication with Eight Keys,” Proceedings of the RESNA '95 Annual Conference, Vancouver, BC, Canada, 1995, 3 pages. |
Kreifeldt, J.G., et al., “Reduced Keyboard Designs Using Disambiguation,” Proceedings of the Human Factors Society 33rd Annual Meeting, 1989, pp. 441-444. |
Levine, S.H., “An Adaptive Approach to Optimal Keyboard Design for Nonvocal Communication,” IEEE, 1985, pp. 334-337. |
Levine, S.H., et al., “Adaptive Technique for Customized Interface Design With Application to Nonvocal Communication,” RESNA 9th Annual Conference, Minneapolis, Minnesota, 1986, pp. 399-401. |
Levine, S.H., et al., “Computer Disambiguation of Multi-Character Key Text Entry: An Adaptive Design Approach,” IEEE, 1986, pp. 298-301. |
Matias, E., et al., “Half-QWERTY: Typing With One Hand Using Your Two-Handed Skills,” Conference Companion, CHI '94 (Apr. 24-28, 1994), pp. 51-52. |
Minneman, S.L., “A Simplified Touch-Tone Telecommunication Aid for Deaf and Hearing Impaired Individuals,” RESNA 8th Annual Conference, Memphis Tennessee, 1985, pp. 209-211. |
Oommen, B.J., et al., “Correction to ‘An Adaptive Learning Solution to the Keyboard Optimization Problem’.” IEEE Transactions on Systems, Man and Cybernetics, vol. 22, No. 5 (Oct. 1992) pp. 1233-1243. |
Smith, Sidney L., et al., “Alphabetic Data Entry Via the Touch-Tone Pad: A Comment,” Human Factors, 13(2), Apr. 1971, pp. 189-190. |
Sugimoto, M., et al., “SHK: Single Hand Key Card for Mobile Devices,” CHI 1996 (Apr. 13-18, 1996), pp. 7-8. |
Swiffin, A.L., et al., “Adaptive and Predictive Techniques in a Communications Prosthesis,” AAC Augmentative and Alternative Communication, (1987), pp. 181-191. |
Oommen, B. John, et al.; “String Taxonomy Using Learning Automata”; Apr. 1997; IEEE Transactions on Systems, Man and Cybernetics—Part B: Cybernetics, vol. 27 No. 20 pp. 354-365. |
Lesher, Gregory W. et al.; “Optimal Character Arrangements for Ambiguous Keyboards”; Dec. 1998; IEEE Transactions on Rehabilitation Engineering, vol. 6, No. 4, pp. 415-423. |
http://pitecan.com/OpenPOBox/info/index.html; Jul. 23, 2001. |
Number | Date | Country | |
---|---|---|---|
20050283358 A1 | Dec 2005 | US |
Number | Date | Country | |
---|---|---|---|
60625378 | Nov 2004 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 10176933 | Jun 2002 | US |
Child | 11213131 | US |