1. Field
The disclosed and claimed concept relates generally to handheld electronic devices and, more particularly, to a method of learning a context for a character segment during text input.
2. Description of the Related Art
Numerous types of handheld electronic devices are known. Examples of such handheld electronic devices include, for instance, personal data assistants (PDAs), handheld computers, two-way pagers, cellular telephones, and the like. Many handheld electronic devices also feature wireless communication capability, although many such handheld electronic devices are stand-alone devices that are functional without communication with other devices.
In certain circumstances, a handheld electronic device having a keypad of Latin letters can be employed to enter text in languages that are not based upon Latin letters. For instance, pinyin Chinese is a type of phonetic Chinese “alphabet” which enables transcription between Latin text and Standard Mandarin text. Pinyin Chinese can thus enable the input of Standard Mandarin characters by entering Latin letters. A “pin” is a phonetic sound, oftentimes formed from a plurality of Latin letters, and each pin is associated with one or more Standard Mandarin characters. More than four hundred pins exist, and each pin typically corresponds with a plurality of different Standard Mandarin characters. While methods and devices for text input such as pinyin Chinese text input have been generally effective for their intended purposes, such methods and devices have not been without limitation.
Generally each Standard Mandarin character is itself a Chinese word. Moreover, a given Standard Mandarin character in combination with one or more other Standard Mandarin characters can constitute a different word. An exemplary pin could be phonetically characterized as “da”, which would be input on a Latin keyboard by actuating the <D> key followed by an actuation of the <A> key. However, the pin “da” corresponds with a plurality of different Chinese characters. Moreover, the pin “da” can be a single syllable represented by a character within a Chinese word having a plurality of syllables, with each syllable being represented by a Standard Mandarin character. As such, substantial difficulty exists in determining which specific Standard Mandarin character should be output in response to an input of a pin when the pin corresponds with a plurality of Standard Mandarin characters.
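By way of illustration only, the following sketch (in Python, which the disclosed device need not employ) shows how a single pin can map to a plurality of Standard Mandarin characters; the particular characters listed are merely examples and not the device's stored linguistic data.

```python
# Illustrative only: a few Standard Mandarin characters that share the pin
# "da"; an actual device would store more than four hundred pins, each
# associated with its own set of characters.
PIN_TO_CHARACTERS = {
    "da": ["大", "打", "答", "达"],
    "ma": ["妈", "马", "吗", "码"],
}

# Keying <D> then <A> yields the pin "da", which by itself does not
# determine which of the associated characters the user intended.
print(PIN_TO_CHARACTERS["da"])
```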
Numerous methodologies have been developed to assist in generating a character interpretation for a series of pins that have been input on a device. For instance, an exemplary algorithm would be the “simple maximum matching” algorithm, which is one algorithm among many, both simple and complex, of the well known Maximum Matching Algorithm. A given device may have stored thereon a number of Chinese words comprised of one or more Chinese characters, and the algorithm(s) executed on the device may employ such linguistic data to develop the best possible character interpretation of a series of input pins.
In response to the inputting of a sequence of pins, the aforementioned simple maximum matching algorithm might generate a character interpretation comprising the largest Chinese words, i.e., the words having the greatest quantity of Standard Mandarin characters. For example, the algorithm might, as a first step, obtain the largest Chinese word having characters that correspond with the pins at the beginning of the pin sequence. As a second step, the algorithm might obtain the largest Chinese word having characters that correspond with the pins in the sequence that immediately follow the previous word. This is repeated until Chinese words have been obtained for all of the pins in the input sequence. The result is then output.
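By way of illustration only, the following Python sketch shows the greedy, longest-first selection just described, assuming that the raw inputs have already been segmented into pins and that stored Chinese words are keyed by their pin sequences; the tiny word list is purely illustrative and not the device's actual linguistic data.

```python
# A minimal sketch of the "simple maximum matching" step described above.
WORDS = {
    ("da", "xue"): "大学",             # "university"
    ("da", "xue", "sheng"): "大学生",  # "university student"
    ("sheng", "huo"): "生活",          # "life"
    ("da",): "大",
    ("sheng",): "生",
    ("huo",): "活",
}
MAX_WORD_PINS = 3  # longest stored word, measured in pins

def simple_maximum_matching(pins):
    """Greedily take the longest stored word at each position in the pin sequence."""
    output = []
    i = 0
    while i < len(pins):
        for length in range(min(MAX_WORD_PINS, len(pins) - i), 0, -1):
            word = WORDS.get(tuple(pins[i:i + length]))
            if word is not None:
                output.append(word)
                i += length
                break
        else:
            # No stored word matches even a single pin; skip it (a simplification).
            i += 1
    return "".join(output)

print(simple_maximum_matching(["da", "xue", "sheng", "huo"]))  # -> "大学生活"
```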
Numerous other algorithms are employed, individually or in combination, with the objective of providing as a proposed output a character interpretation that matches what the user originally intended. It would be desirable to provide an improved method and handheld electronic device that facilitate the input of text.
A full understanding of the disclosed and claimed concept can be obtained from the following Description when read in conjunction with the accompanying drawings, in which similar numerals refer to similar parts throughout the specification.
An improved handheld electronic device 4 in accordance with the disclosed and claimed concept is indicated generally in the accompanying figures. The exemplary handheld electronic device 4 comprises an input apparatus 8, a display 32, and a processor apparatus 16.
The handheld electronic device and the associated method described herein advantageously enable the input of text. The exemplary device and method are described herein in terms of pinyin Chinese, but it is understood that the teachings herein can be employed in conjunction with other types of text input, and can be employed in conjunction with other languages such as Japanese and Korean, without limitation.
The input apparatus 8 comprises a keypad 20 and a thumbwheel 24. The keypad 20 in the exemplary embodiment depicted herein is a Latin keypad comprising a plurality of keys 26 that are each actuatable to input to the processor apparatus 16 the Latin character indicated thereon. The thumbwheel 24 is rotatable to provide navigational and other input to the processor apparatus 16, and additionally is translatable in the direction of the arrow 28 shown in the figures to provide further input to the processor apparatus 16.
Examples of other input members not expressly depicted herein would include, for instance, a mouse or trackball for providing navigational inputs, such as could be reflected by movement of a cursor on the display 32, and other inputs such as selection inputs. Still other exemplary input members would include a touch-sensitive display, a stylus pen for making menu input selections on a touch-sensitive display displaying menu options and/or soft buttons of a graphical user interface (GUI), hard buttons disposed on a case of the handheld electronic device 4, and so on. Examples of other output devices would include a touch-sensitive display, an audio speaker, and so on.
An exemplary mouse or trackball would likely advantageously be of a type that provides various types of navigational inputs. For instance, a mouse or trackball could provide navigational inputs in both vertical and horizontal directions with respect to the display 32, which can facilitate input by the user.
The processor apparatus 16 comprises a processor 36 and a memory 40. The processor 36 may be, for example and without limitation, a microprocessor (μP) that interfaces with the memory 40. The memory 40 can be any one or more of a variety of types of internal and/or external storage media such as, without limitation, RAM, ROM, EPROM(s), EEPROM(s), and the like that provide a storage register for data storage such as in the fashion of an internal storage area of a computer, and can be volatile memory or nonvolatile memory.
The memory 40 is depicted schematically in the figures as having stored therein a number of objects 44 and a number of routines 48 that are executable on the processor 36.
The objects 44 comprise a plurality of raw inputs 52, a plurality of characters 56, a plurality of combination objects 60, a plurality of generic segments 64, a number of candidates 68, and a number of learned segments 72. As employed herein, the expression “a number of” and variations thereof shall refer broadly to a nonzero quantity, including a quantity of one. The exemplary memory 40 is depicted as having stored therein at least a first candidate 68 and at least a first learned segment 72, although it is understood that the memory 40 need not at all times comprise candidates 68 and/or learned segments 72. For instance, the handheld electronic device 4, when new, may not yet have stored in the memory 40 any candidates 68 or any learned segments 72, it being understood that one or more candidates 68 and/or learned segments 72 can become stored in the memory 40 with use of the handheld electronic device 4.
The raw inputs 52 and characters 56 may be stored in a table wherein each raw input 52 is associated with one or more of the characters 56. In the exemplary embodiment described herein, the exemplary language is Chinese, and thus each raw input 52 would be a pin in the scheme of pinyin Chinese. Associated with each such raw input 52, i.e., pin, would be one or more characters 56, i.e., Standard Mandarin characters.
The generic segments 64 each comprise a plurality of the characters 56. In the present exemplary embodiment, each possible two-character permutation of the Standard Mandarin characters is stored as a generic segment 64. Additionally, other Chinese words comprising three or more Standard Mandarin characters are each stored as a generic segment 64, based upon prevalent usage within the language. In the exemplary embodiment depicted herein, the generic segments 64 are each at most six Standard Mandarin characters in length, although only an extremely small number of generic segments 64 comprise six Standard Mandarin characters.
As will be described in greater detail below, the candidates 68 are each a series of Standard Mandarin characters that were the subject of an initial portion of a learning cycle, i.e., an object for which the learning cycle has not yet been completed. The learned segments 72 are each a plurality of Standard Mandarin characters which resulted from candidates 68 which went through an entire learning cycle. As a general matter, the generic segments 64 are inviolate, i.e., are not capable of being changed by the user, but the candidates 68 and the learned segments 72 are changeable based upon, for instance, usage of the handheld electronic device 4.
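Purely as an illustrative sketch, and not as the device's actual data layout, the objects 44 described above might be organized along the following lines; every class and field name in this Python sketch is an assumption made for clarity.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class GenericSegment:          # inviolate; provided with the device
    characters: str
    frequency: int             # relative frequency value, e.g. 0..7

@dataclass
class LearnedSegment:          # created by a completed learning cycle
    characters: str
    frequency: int

@dataclass
class Memory:
    raw_inputs: Dict[str, List[str]] = field(default_factory=dict)   # pin -> characters 56
    generic_segments: List[GenericSegment] = field(default_factory=list)
    candidates: List[str] = field(default_factory=list)              # first half of a learning cycle
    learned_segments: List[LearnedSegment] = field(default_factory=list)
    combination_objects: List[Tuple[object, object]] = field(default_factory=list)

# A new device may hold no candidates or learned segments at all; they
# accumulate only with use of the device.
memory = Memory(raw_inputs={"da": ["大", "打", "答", "达"]})
```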
The routines 48 advantageously comprise a segment learning routine which enables the learning and storage of the learned segments 72, which facilitates text input. Specifically, the generic segments 64 provide a statistically-based solution to a text input, but the learned segments 72 advantageously provide a more customized user experience by providing additional segments, i.e., the learned segments 72, in response to certain inputs. This provides to the user a character interpretation that is more likely to be the character interpretation intended by the user than if the character interpretation were based solely on the generic segments 64.
An exemplary flowchart depicting certain aspects of an improved method of learning character segments during text input is shown in the figures. Actuations of the input members, such as the keys 26 and the thumbwheel 24, are detected, as at 104. It is then determined, as at 108, whether a given input member actuation constitutes an edit input. If it does not, the actuation is added to a sequence of inputs that is obtained, as at 112.
Portions of the sequence of inputs obtained at 112 are then compared, as at 116, with various stored objects 44 in the memory 40 to obtain a character interpretation of the input sequence. That is, one or more of the raw inputs 52, characters 56, combination objects 60, generic segments 64, candidates 68, and learned segments 72 are consulted to determine the series of Standard Mandarin characters that are most likely to be the interpretation desired by the user. The input routine may employ algorithms from the Maximum Matching Algorithm, and/or other algorithms, for instance, to facilitate the identification of appropriate objects 44 from which to generate the character interpretation. The character interpretation is then output, as at 120.
Such an exemplary output of a character interpretation is depicted generally in the figures.
If it was determined at 108 that the current input member actuation was an edit input, processing would continue to 124, where a character learning string would be generated. An editing input is depicted generally in the figures.
In the depicted example, the user has employed an editing input to replace one of the output characters 56 with a replacement character 296, i.e., a character 56 different than the character 56 that was initially output at that position. The character learning string generated at 124 comprises the replacement character 296 together with a number of the characters 56 that precede and/or follow the replacement character 296 in the output character interpretation.
After the character learning string has been generated at 124, it is then determined at 128 whether or not any portion of the character learning string matches a portion of a candidate 68. In this regard, a “portion” comprises the replacement character 296 and at least one character adjacent thereto in the character learning string. It is determined at 128 whether these characters match a set of adjacent characters in one of the candidates 68.
If it is determined at 128 that no such match exists between a portion of the character learning string and a portion of a candidate 68, the character learning string is itself stored, as at 132, as a candidate 68. Processing thereafter continues at 104 where additional input member actuations can be detected.
If it is determined at 128 that the replacement character 296 and at least one character adjacent thereto in the character learning string match an adjacent plurality of characters in one of the candidates 68, the set of matched characters is learned, as at 136. If the set of matched characters is five characters in length or less, it is stored as a learned segment 72. However, if the set of matched characters is more than five characters in length, the set of matched characters is stored, by way of a combination object 60, as a learned segment 72 plus another object, either a character 56, a generic segment 64, or another learned segment 72. That is, some of the Standard Mandarin characters 56 in the set of matched characters are compared with various objects 44 to identify a matching object 44. Since the generic segments 64 comprise each two-character permutation of the Standard Mandarin characters, at least the two initial characters of the set of matched characters can be stored in the form of a reference or pointer to the preexisting generic segment 64. The other characters 56 in the set of matched characters, i.e., the characters 56 other than the characters 56 for which a preexisting object 44 was identified, are stored as the learned segment 72. The resultant combination object 60 would, in the exemplary embodiment, include pointers to both the identified preexisting object 44 and the newly stored learned segment 72.
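The following Python sketch illustrates the storage rule just described under simplifying assumptions; it considers only the case in which the first two characters of a long match are referenced as a preexisting generic segment 64, and its container and function names are hypothetical.

```python
generic_segments = {"AB", "CD"}     # tiny stand-in; the device stores every two-character permutation
learned_segments = []
combination_objects = []            # (reference to an existing object, new learned segment)

def learn(matched_characters: str) -> None:
    """Store a matched set of characters per the rule described above."""
    if len(matched_characters) <= 5:
        learned_segments.append(matched_characters)
        return
    # Longer match: reference the first two characters as a preexisting
    # generic segment and store the remainder as the new learned segment.
    prefix, remainder = matched_characters[:2], matched_characters[2:]
    learned_segments.append(remainder)
    combination_objects.append((("generic", prefix), ("learned", remainder)))

learn("ABCDE")        # five characters: stored directly as a learned segment
learn("ABCDEF")       # six characters: split into "AB" (generic) plus "CDEF" (learned)
print(learned_segments, combination_objects)
```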
After the set of matched characters has been “learned”, such as described above, the candidate 68 from which the matching characters were identified is deleted, as at 140. Processing thereafter returns to 104 where additional input member actuations can be detected.
The identification at 128 of a set of characters in the character learning string that match a set of characters in a candidate 68 can occur in any of a variety of fashions. In the exemplary embodiment depicted herein, the replacement character 296 in the character learning string plus at least one adjacent character in the character learning string must match a corresponding set of adjacent characters in a candidate 68. This can be accomplished, for example, by identifying among the candidates 68 all of the candidates 68 which comprise, as one of the characters thereof, the replacement character 296. The characters in the character learning string that precede and that follow the replacement character 296 are compared with the characters in such a candidate 68 that are correspondingly positioned with respect to the character thereof that matches the replacement character 296. In the depicted exemplary embodiment, the comparison occurs one character at a time, alternating between the characters that precede and that follow the replacement character 296, in a direction progressing generally outwardly from the replacement character 296.
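A sketch of this outward, alternating comparison is set forth below; it assumes that the character learning string and the candidate 68 are plain character strings and that the position of the replacement character 296 in each is already known, and the function and parameter names are hypothetical.

```python
def matched_span(learning_string: str, repl_index: int,
                 candidate: str, cand_index: int) -> str:
    """Grow the match outwardly from the replacement character, alternating
    between the preceding and the following character, stopping in each
    direction at the first mismatch or at either string's boundary."""
    before, after = 1, 1                 # next offset to test on each side
    grow_before, grow_after = True, True
    while grow_before or grow_after:
        if grow_before:
            i, j = repl_index - before, cand_index - before
            if i >= 0 and j >= 0 and learning_string[i] == candidate[j]:
                before += 1
            else:
                grow_before = False
        if grow_after:
            i, j = repl_index + after, cand_index + after
            if (i < len(learning_string) and j < len(candidate)
                    and learning_string[i] == candidate[j]):
                after += 1
            else:
                grow_after = False
    # The matched portion of the character learning string, including the
    # replacement character itself.
    return learning_string[repl_index - (before - 1):repl_index + after]

# Hypothetical example: the third character of the learning string is the
# replacement character, and it appears as the second character of a candidate.
print(matched_span("ABXDE", 2, "BXDZ", 1))   # -> "BXD"
```

A caller would treat a returned span consisting of only the replacement character 296 itself as no match, since at least one adjacent character must also match.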
For example, the character learning string generated from the edit input depicted in the figures could be compared with each candidate 68 that comprises the replacement character 296, with the comparison progressing one character at a time outwardly from the replacement character 296 until, in each direction, a character of the character learning string fails to match the correspondingly positioned character of the candidate 68.
The result is a set of characters from the character learning string for which a matching series of characters was found within one of the candidates 68. The set of matched characters is stored, as indicated above, and the candidate 68 from which the matching characters were identified is deleted, as at 140.
Upon such storage of the matched characters as a learned segment 72 and/or a combination object 60, the learned segment 72 and/or the combination object 60 can be employed in conjunction with further text input to generate proposed character interpretations of sequences of inputs. Since the user has twice indicated a preference for the set of matched characters, i.e., the characters were stored initially as a candidate 68 and thereafter appeared within a character learning string that matched the candidate 68, the set of matched characters reflects a usage that the user is likely to desire again.
It is noted that the generic segments 64 and the learned segments 72 each comprise, in addition to the characters 56 thereof, a relative frequency value. In the exemplary depicted embodiment, the frequency value has a value between zero and seven, with higher values being indicative of relatively more frequent use. The learned segments 72 are each given a relatively high frequency value. As such, when at 116 a character interpretation of an input sequence is obtained, a preference will exist, as a general matter, for the learned segments 72 when both a learned segment 72 and a generic segment 64 would constitute a valid character interpretation of a given set of adjacent inputs. As such, as the user continues to use the handheld electronic device 4, progressively greater quantities of learned segments 72 are stored, and character interpretations of input sequences progressively have a greater likelihood of being the character interpretation intended by the user.
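By way of a simplified illustration, the following sketch shows how a higher relative frequency value can tip the selection toward a learned segment 72 when both it and a generic segment 64 would be valid interpretations of the same adjacent inputs; the structures, values, and example characters are assumptions made for illustration.

```python
# Both segments below could interpret the pins "da", "yi"; the learned
# segment carries the higher relative frequency value (0..7 here) and wins.
segments_for_same_inputs = [
    {"characters": "大衣", "kind": "generic segment 64", "frequency": 4},
    {"characters": "大意", "kind": "learned segment 72", "frequency": 7},
]

best = max(segments_for_same_inputs, key=lambda seg: seg["frequency"])
print(best["characters"], best["kind"])   # the learned segment 72 is preferred
```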
Learned segments 72 and combination objects 60 can additionally be derived from text received in other fashions on the handheld electronic device 4. For instance, the exemplary handheld electronic device 4 can receive messages, such as email messages or messages sent via the short message service (SMS). As can be understood from the flowchart depicted in the figures, a string of reference characters is obtained from such a received message, as at 304, and the characters of the string are converted into a string of raw inputs 52, as at 312.
The string of raw inputs 52 is then compared, as at 316, with certain of the objects 44 in the memory 40 in order to obtain a character interpretation of the raw inputs 52. It is then determined, as at 318, whether any portion of the character interpretation differs from the string of reference characters received at 304 and converted into raw inputs 52 at 312. If it is determined at 318 that the character interpretation is the same as the received string of reference characters, the character interpretation is ignored, as at 322. Processing thereafter continues, as at 312, where additional characters, if any, are converted into raw inputs 52 for further processing as indicated above.
If it is determined at 318 that some of the characters 56 of the character interpretation differ from the characters in the string of characters obtained at 304, a character learning string is generated, as at 324. The character learning string generated at 324 comprises the characters in the string of characters obtained at 304 which were identified as differing between the character interpretation and the received string of reference characters. If desired, the character learning string can additionally include one or more characters in the string of characters that precede and/or follow the differing characters.
Once the character learning string has been generated, as at 324, it is determined at 328 whether at least a portion of the character learning string matches at least a portion of a candidate 68. This occurs in a fashion similar to the processing at 128. If no such match is found at 328, the character learning string is stored, as at 332, as a candidate 68. If, however, a set of matching characters is identified at 328, the matching characters are stored, as at 336, as at least one of a learned segment 72 and a combination object 60, in a fashion similar to the processing at 136. The candidate 68 from which the match was identified is then deleted, as at 340. After the processing at 332 or at 340, processing continues at 312 where additional characters can be converted into raw inputs 52.
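The following Python sketch walks through the received-text flow at 304 through 340 under heavy simplifications: the character-to-pin table and the interpreter are trivial stand-ins for the stored objects 44, the learning string keeps only the differing characters, and the match test at 328 is reduced to a substring check.

```python
CHAR_TO_PIN = {"大": "da", "学": "xue", "生": "sheng", "活": "huo"}

def interpret(pins):
    # Stand-in for the comparison at 316; deliberately "wrong" about the first pin.
    stand_in = {"da": "打", "xue": "学", "sheng": "生", "huo": "活"}
    return "".join(stand_in[p] for p in pins)

candidates, learned_segments = [], []

def process_received_text(reference: str) -> None:
    pins = [CHAR_TO_PIN[ch] for ch in reference]        # 312: convert to raw inputs 52
    interpretation = interpret(pins)                    # 316: regenerate an interpretation
    if interpretation == reference:                     # 318 / 322: nothing to learn
        return
    # 324: keep the characters that differ (neighbouring characters omitted here).
    learning_string = "".join(r for r, i in zip(reference, interpretation) if r != i)
    for candidate in candidates:                        # 328: simplified substring test
        if learning_string and learning_string in candidate:
            learned_segments.append(learning_string)    # 336: learn the matched characters
            candidates.remove(candidate)                # 340: delete the candidate
            return
    candidates.append(learning_string)                  # 332: store as a new candidate

process_received_text("大学生活")    # first occurrence becomes a candidate 68
process_received_text("大学生活")    # second occurrence becomes a learned segment 72
print(candidates, learned_segments)  # -> [] ['大']
```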
It thus can be seen that received text can be employed to learn new learned segments 72 and/or combination objects 60 in a fashion similar to the way in which learned segments 72 and combination objects 60 were learned during text input, as depicted generally in the figures.
One of the routines 48 additionally provides a context learning feature when a plurality of adjacent characters 56 in a character interpretation are replaced with an existing segment, either a generic segment 64 or a learned segment 72, or are replaced with individual characters 56. Such a context learning feature is depicted as a flowchart in the figures.
Such an operation is depicted, for example, in the figures, wherein a plurality of adjacent characters 56 in an output character interpretation are replaced with a replacement segment 596, and wherein the replaced characters are preceded in the character interpretation by a character 556 or by another segment. In response to such a replacement, a new combination object 60 is generated and stored in the memory 40, the new combination object 60 comprising the preceding character 556, or the preceding segment, together with the replacement segment 596.
The new combination object 60 thus can be employed by the input routine to determine whether a preference exists for one segment in the context of another object 44. For instance, the replacement segment 596 portion of the new combination object 60 might be selected over another segment that is a valid character interpretation of a part of a sequence of inputs when it follows the same character 556 or the other segment which preceded the replacement segment 596 during the aforementioned context learning operation. The combination objects 60 thus provide a further level of customization for the user, and facilitate providing a character interpretation that matches the user's original intention.
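As a simplified sketch, a combination object 60 produced by the context learning operation might be consulted as follows when a later sequence of inputs admits more than one valid segment; the names and the example characters here are illustrative assumptions.

```python
combination_objects = [
    # Learned by the context operation above: the segment "意见" was chosen
    # immediately after the character "大".
    {"preceding": "大", "segment": "意见"},
]

def choose_segment(preceding, valid_segments):
    """Prefer a segment that a combination object associates with the object
    immediately preceding it; otherwise fall back to the first valid segment."""
    for combo in combination_objects:
        if combo["preceding"] == preceding and combo["segment"] in valid_segments:
            return combo["segment"]
    return valid_segments[0]

# "一见" and "意见" could both interpret the pins "yi", "jian"; following "大",
# the previously learned context tips the choice toward "意见".
print(choose_segment("大", ["一见", "意见"]))   # -> "意见"
```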
As noted above, the context learning feature can be initiated when a plurality of adjacent characters 56 in a character interpretation are replaced with other individual characters 56. If a particular character 56 in a string of characters is replaced with another particular character 56 as a result of an editing input, a character learning string is generated, as at 124 in the flowchart described above, and processing continues in the fashion set forth above.
While specific embodiments of the disclosed and claimed concept have been described in detail, it will be appreciated by those skilled in the art that various modifications and alternatives to those details could be developed in light of the overall teachings of the disclosure. Accordingly, the particular arrangements disclosed are meant to be illustrative only and not limiting as to the scope of the disclosed and claimed concept which is to be given the full breadth of the claims appended and any and all equivalents thereof.
The application is a continuation application of U.S. patent application Ser. No. 11/427,971 filed Jun. 30, 2006, the entire contents of which are incorporated herein by reference.