The subject matter of this application is generally related to input editing interfaces.
A computer device can be configured to receive input of text and characters from a computer keyboard. Modern computer keyboards are composed of rectangular or near-rectangular keys, and characters, such as the letters A-Z in the English alphabet, are usually engraved or printed on the keys. In most cases, each press of a key corresponds to typing a single character.
Traditional computer keyboards may sometimes be too large for portable devices, such as cellular phones, MPEG-1 Audio Layer 3 (MP3) players, or personal digital assistants (PDAs). Some portable devices include a smaller version of the traditional computer keyboard or use a virtual keyboard to receive user input. A virtual keyboard can take the form of a software application, or a feature of a software application, that simulates a computer keyboard. For example, in a stylus-operated PDA or a touch-sensitive display on a communication device, a virtual keyboard can be used by a user to input text by selecting or tapping keys of the virtual keyboard.
These smaller keyboards and virtual keyboards may have keys that correspond to more than one character. For example, some of the keys can, by default, correspond to a common character in the English language, for example, the letter “a,” and may also correspond to other additional characters, such as another letter or the letter with an accent option, e.g., the character “ä,” or other characters with accent options. Because of the physical limitations (e.g., size) of the virtual keyboard, a user may find it difficult to type characters not readily available on the virtual keyboard.
Input methods for devices having multi-language environments can present unique challenges with respect to input and spelling correction, which may need to be tailored to the selected language to ensure accuracy and an efficient workflow.
Text input is corrected on a touch-sensitive display by presenting a list of candidate words in the interface which can be selected by touch input. The candidate list can include candidate words having two or more character types (e.g., Roman, kana, kanji). In one aspect, the candidate list can be scrolled using a finger gesture. When a user's finger traverses a candidate word, the position of the candidate word is adjusted (e.g., offset from the touch input), so that the candidate word is not obscured by the user's finger. When the touch is released, the candidate word is inserted into a document being edited. In another aspect, characters can be erased by touching a key (e.g., a backspace or delete key) and making a sliding, swiping, or other finger gesture. A number of characters proportional to a distance (e.g., a linear distance) of the finger gesture across the display are erased. If there are characters in a text input area, those characters are erased first, followed by characters in the document being edited. In another aspect, in a Japanese language environment, auto-correcting is performed to account for possible typographical errors on input.
Other implementations are disclosed, including implementations directed to systems, methods, apparatuses, computer-readable mediums, and user interfaces.
The virtual keyboard 102 can be displayed in various layouts based on a user selection. For example, the user can select to display one of a number of virtual keyboard layouts using an action button 120 or other finger gesture. As shown, the virtual keyboard 102 is an English keyboard layout (e.g., QWERTY). The keyboard layout, however, can be configured based on a selected language, such as Japanese, French, German, Italian, etc. In a Japanese language environment, the user can switch between a kana keyboard, a keyboard for Roman characters, and a keyboard for kanji symbols.
A user can interact with the virtual keyboard 102 to enter text into a document (e.g., text document, instant message, email, address book) in the editing region 106. As the user enters characters, an input correction process is activated which can detect text input errors and display candidate words 112 in the input region 108. Any number of candidate words 112 can be generated. A group of displayed candidate words 112 can include candidate words 112 having characters in two or more character types (e.g., Roman, kana, kanji). In some implementations, additional candidate words 112 can be displayed by touching arrows 114 or another user interface element, which causes a new page of candidate words 112 to be displayed in the input region 108. In some implementations, the candidate list can be determined based on the user-selected language and statistics (e.g., a user dictionary or a history of user typing data for the user-selected language). An example method of determining correction options for virtual keyboards is described in U.S. patent application Ser. No. 11/228,737, for “Activating Virtual Keys of a Touch-screen Virtual Keyboard,” which patent application is incorporated by reference herein in its entirety.
In some implementations, a candidate word look-up takes place using an auto-correcting search. In performing the auto-correcting search, a list of candidate words can be produced based on the text input and accounting for possible typographical errors in the text input.
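The auto-correcting search described above can be sketched as follows — a minimal illustration, assuming a small edit-distance tolerance and a placeholder word list; the dictionary contents and ranking are hypothetical, not part of the disclosure:

```python
# Sketch of an auto-correcting candidate look-up: dictionary words within a
# small edit distance of the text input are offered as candidates, so
# possible typographical errors in the input are accounted for.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def candidate_words(text_input: str, dictionary, max_typos: int = 1):
    """Return dictionary words within max_typos edits, closest first."""
    scored = [(edit_distance(text_input, w), w) for w in dictionary]
    return [w for d, w in sorted(scored) if d <= max_typos]

# Hypothetical example: "touky" is two edits from "toukyou", so allow 2 typos.
words = ["toukyou", "touki", "tokyo", "kyouto"]
print(candidate_words("touky", words, max_typos=2))
```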
In the example shown, the user has selected a candidate word 110 to replace “touky” in a Japanese language environment. Selection of the candidate word 110 is made by the user touching the candidate word 110 with one or more fingers. When the user releases the touch, the selected candidate word 110 is inserted into the document in the editing region 106. In some implementations, when the user touches the candidate word 110, the candidate word 110 is displayed in a different position on the touch-sensitive display 104 (e.g., an offset position) to avoid the user's finger obscuring the candidate word 110. A user can scroll the candidate list by swiping a finger over the candidate words 112. As the finger traverses each candidate word 112, the candidate word is displayed at the different position. For example, the user can run their index finger over the candidate words 112 in the input region 108 until the user reaches the candidate word 110. When the user releases the touch, the candidate word 110 is inserted into the document being edited.
If the text input includes an incorrect character or if the text input is ambiguous, then a candidate list of possibly correct candidate words is determined (206) and displayed on the touch-sensitive display (208) to the user. For example, in a Japanese language environment, if the text input is a phonetic spelling in Roman characters of a Japanese character, the candidate list can include candidate words having two or more character types (e.g., kanji and kana). Even if the text input does not include an incorrect character, there can be ambiguity in the conversion from the Roman characters to Japanese characters. To account for this ambiguity, the process 200 includes determining a candidate list of multiple possibly correct candidate words, allowing the user to select the intended Roman to Japanese conversion if it is in the candidate list. Any number of candidate words can be included in the candidate list. The list can be displayed in a dedicated region (e.g., an input region 108) of a touch-sensitive display, for example.
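The mixed-type candidate list for ambiguous Roman-to-Japanese conversion can be sketched as follows — a minimal illustration in which the conversion table is a hypothetical placeholder; a real system would draw on a full dictionary and the user's typing history:

```python
# Sketch of building a candidate list with two or more character types:
# one romaji reading can map to a kana spelling and to kanji words, so
# every known conversion is offered alongside the raw input.

CONVERSIONS = {
    "toukyou": ["とうきょう", "東京"],  # kana and kanji readings
    "kyouto": ["きょうと", "京都"],
}

def candidate_list(romaji: str):
    """Return the romaji input plus every known conversion of it."""
    return [romaji] + CONVERSIONS.get(romaji, [])

print(candidate_list("toukyou"))  # romaji, kana, and kanji candidates
```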
The user can scroll the candidate list with the user's finger. When the finger is over (or proximate to) a candidate word, the candidate word can be displayed in a different position of the touch-sensitive display, offset from the original location of the candidate word to prevent the user's finger from obscuring the selected candidate word. After touch input (e.g., one or more touches or a finger gesture) is obtained for the selected candidate word (210), the selected candidate word is inserted into the document being edited (212).
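The offset display of the candidate word under the finger can be sketched as follows — a minimal illustration with hypothetical coordinate values and offset distance; a real implementation would use the UI toolkit's hit-testing and drawing APIs:

```python
# Sketch of offsetting a candidate word away from the touch point so the
# fingertip does not obscure it: the word is drawn above the touch,
# clamped so it never leaves the top edge of the display.

def display_position(touch_x: float, touch_y: float,
                     offset_y: float = 40.0, top_margin: float = 0.0):
    """Place the candidate above the touch point, clamped to the screen."""
    y = touch_y - offset_y                # draw above the fingertip
    return (touch_x, max(y, top_margin))  # never off the top edge

print(display_position(100.0, 300.0))  # → (100.0, 260.0)
print(display_position(100.0, 20.0))   # clamped to the top margin
```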
In the example shown, the user can erase characters in text input by touching a backspace or delete key 116, then sliding their finger from the key 116 towards the opposite end of the virtual keyboard 102. As the user slides their finger, a number of characters proportional to the distance traversed by the finger across the touch-sensitive display 104 are erased. If there are characters in the input region 108 (e.g., characters currently being added to a document), those characters can be erased first. When the characters in the input region 108 are exhausted, characters in the editing region 106 can be erased (e.g., characters in a word previously entered in the document).
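The proportional erase described above can be sketched as follows — a minimal illustration in which the chars-per-pixel ratio is a hypothetical tuning constant:

```python
# Sketch of the proportional-erase gesture: the number of erased characters
# is proportional to the linear distance the finger travels from the delete
# key. Characters in the input region are consumed before characters in the
# editing region (the document being edited).

def erase_for_drag(distance_px: float, input_text: str, document_text: str,
                   px_per_char: float = 20.0):
    """Return (input_text, document_text) after erasing proportionally."""
    n = int(distance_px // px_per_char)    # characters to erase
    from_input = min(n, len(input_text))   # input region erased first
    from_doc = min(n - from_input, len(document_text))
    return (input_text[:len(input_text) - from_input],
            document_text[:len(document_text) - from_doc])

# Dragging 100 px erases 5 characters: 3 from the input region, then 2
# from the document being edited.
print(erase_for_drag(100.0, "abc", "hello"))  # → ('', 'hel')
```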
The memory 420 stores information within the architecture 400. In some implementations, the memory 420 is a computer-readable medium. In other implementations, the memory 420 is a volatile memory unit. In yet other implementations, the memory 420 is a non-volatile memory unit.
The storage device 430 is capable of providing mass storage for the architecture 400. In some implementations, the storage device 430 is a computer-readable medium. In various different implementations, the storage device 430 may be a floppy disk device, a hard disk device, an optical disk device, or a tape device.
The input/output device 440 provides input/output operations for the architecture 400. In some implementations, the input/output device 440 includes a keyboard and/or pointing device. In other implementations, the input/output device 440 includes a display unit for displaying graphical user interfaces.
The features described can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. The features can be implemented in a computer program product tangibly embodied in an information carrier, e.g., in a machine-readable storage device or in a propagated signal, for execution by a programmable processor; and method steps can be performed by a programmable processor executing a program of instructions to perform functions of the described implementations by operating on input data and generating output. The described features can be implemented advantageously in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. A computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
Suitable processors for the execution of a program of instructions include, by way of example, both general and special purpose microprocessors, and the sole processor or one of multiple processors of any kind of computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memories for storing instructions and data. Generally, a computer will also include, or be operatively coupled to communicate with, one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits).
To provide for interaction with a user, the features can be implemented on a computer having a display device such as a CRT (cathode ray tube) or LCD (liquid crystal display) monitor for displaying information to the user and a keyboard and a pointing device such as a mouse or a trackball by which the user can provide input to the computer.
The features can be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or any combination of them. The components of the system can be connected by any form or medium of digital data communication such as a communication network. Examples of communication networks include, e.g., a LAN, a WAN, a wireless network, and the computers and networks forming the Internet.
The computer system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network, such as the networks described above.
In a Japanese language environment, user interface elements (e.g., popup menus or heads-up displays) tied to virtual keyboard keys can be used to select unambiguous characters. There can be one key for each consonant and one for the vowels. In one implementation, if a user touches and slides on a key of a virtual keyboard, a popup menu opens that lets the user select a syllable with that consonant (or none) and the appropriate vowel. Dragging on the “k” (ka) key lets the user select ka, ki, ku, ke, or ko. Dragging on the vowel key lets the user select a, i, u, e, or o, and so forth.
While the user slides horizontally to select a vowel, changing the direction of the drag to vertical lets the user select variants. For example, if the user starts on the “k” (ka) key and slides right, the user sees options for ka, ki, ku, ke, and ko. If the user slides down, the options change to ga, gi, gu, ge, and go, and the user can slide horizontally again to select one of these syllables starting with the “g” consonant. The user can also slide up, giving up to x rows (e.g., three rows) of options for each popup menu (e.g., unshifted, shift down, shift up).
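The drag-to-syllable mapping described above can be sketched as follows — a minimal illustration covering only the “k” key; the table layout and row indices are hypothetical:

```python
# Sketch of mapping a drag on a consonant key to a concrete syllable:
# the horizontal drag index selects the vowel, and a vertical change of
# direction selects a variant row (e.g., the voiced "g" row under "k").

SYLLABLE_ROWS = {
    "k": {
        0: ["ka", "ki", "ku", "ke", "ko"],  # unshifted row
        1: ["ga", "gi", "gu", "ge", "go"],  # slide down: voiced variants
    },
}

def syllable_for_drag(key: str, vowel_index: int, variant_row: int = 0) -> str:
    """Map a drag on a consonant key to a concrete syllable."""
    return SYLLABLE_ROWS[key][variant_row][vowel_index]

print(syllable_for_drag("k", 2))                 # → 'ku'
print(syllable_for_drag("k", 0, variant_row=1))  # → 'ga'
```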
If the user taps a key, the user gets a wildcard (ambiguous) character that can match anything the user could produce using that key. Tapping the “k” (ka) key gives the user a wildcard that matches any syllable in the ka row, and all such syllables are considered at that position. The wildcard character can be converted to an unambiguous syllable or character by sliding on it, in exactly the same way as the user can slide on a key.
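The wildcard matching described above can be sketched as follows — a minimal illustration in which the key-to-syllable table covers only two keys and is a hypothetical placeholder:

```python
# Sketch of the wildcard (ambiguous) character: a tapped key stands for
# any syllable producible from that key (including voiced variants), and
# a word matches a tap sequence if every syllable is producible from the
# key tapped at that position.

KEY_SYLLABLES = {
    "k": {"ka", "ki", "ku", "ke", "ko", "ga", "gi", "gu", "ge", "go"},
    "t": {"ta", "chi", "tsu", "te", "to", "da", "ji", "zu", "de", "do"},
}

def matches(wildcards, word_syllables) -> bool:
    """True if each syllable is producible from the key at that position."""
    return (len(wildcards) == len(word_syllables) and
            all(s in KEY_SYLLABLES[k]
                for k, s in zip(wildcards, word_syllables)))

# Tapping "k" then "t" matches ka-ta but not ta-ka.
print(matches(["k", "t"], ["ka", "ta"]))  # → True
print(matches(["k", "t"], ["ta", "ka"]))  # → False
```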
A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made. For example, elements of one or more implementations may be combined, deleted, modified, or supplemented to form further implementations. Logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. In addition, other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Accordingly, other implementations are within the scope of the following claims.
This application claims priority to U.S. Provisional Patent Application Ser. No. 60/972,185 filed Sep. 13, 2007, and entitled “Input Methods for Device Having Multi-Language Environment,” the contents of which are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
5379057 | Clough et al. | Jan 1995 | A |
5535119 | Ito et al. | Jul 1996 | A |
5675362 | Clough et al. | Oct 1997 | A |
5959629 | Masui | Sep 1999 | A |
6115053 | Perlin | Sep 2000 | A |
6278968 | Franz et al. | Aug 2001 | B1 |
6323846 | Westerman et al. | Nov 2001 | B1 |
6570557 | Westerman et al. | May 2003 | B1 |
6661409 | Demartines et al. | Dec 2003 | B2 |
6677932 | Westerman | Jan 2004 | B1 |
6766179 | Shiau et al. | Jul 2004 | B1 |
6888536 | Westerman et al. | May 2005 | B2 |
7030863 | Longe et al. | Apr 2006 | B2 |
7096432 | Huapaya et al. | Aug 2006 | B2 |
7147562 | Ohara et al. | Dec 2006 | B2 |
7619677 | Matsuda et al. | Nov 2009 | B2 |
20020167545 | Kang et al. | Nov 2002 | A1 |
20020168107 | Tang et al. | Nov 2002 | A1 |
20030024375 | Sitrick | Feb 2003 | A1 |
20030100965 | Sitrick et al. | May 2003 | A1 |
20030160817 | Ishida et al. | Aug 2003 | A1 |
20030216913 | Keely | Nov 2003 | A1 |
20040140956 | Kushler et al. | Jul 2004 | A1 |
20040230912 | Clow et al. | Nov 2004 | A1 |
20050024341 | Gillespie et al. | Feb 2005 | A1 |
20050099408 | Seto et al. | May 2005 | A1 |
20050152600 | Chen et al. | Jul 2005 | A1 |
20050174333 | Robinson et al. | Aug 2005 | A1 |
20060053387 | Ording | Mar 2006 | A1 |
20060085757 | Andre et al. | Apr 2006 | A1 |
20060117067 | Wright et al. | Jun 2006 | A1 |
20060144211 | Yoshimoto | Jul 2006 | A1 |
20060274051 | Longe et al. | Dec 2006 | A1 |
20070024736 | Matsuda et al. | Feb 2007 | A1 |
20070120822 | Iso | May 2007 | A1 |
20070198950 | Dodge et al. | Aug 2007 | A1 |
20080030481 | Gunn et al. | Feb 2008 | A1 |
20080072156 | Sitrick | Mar 2008 | A1 |
20080094356 | Ording et al. | Apr 2008 | A1 |
20090037837 | Raghunath et al. | Feb 2009 | A1 |
20090051661 | Kraft et al. | Feb 2009 | A1 |
20090058823 | Kocienda | Mar 2009 | A1 |
20090193361 | Lee et al. | Jul 2009 | A1 |
20090225041 | Kida et al. | Sep 2009 | A1 |
20090226091 | Goldsmith et al. | Sep 2009 | A1 |
20090265669 | Kida et al. | Oct 2009 | A1 |
Number | Date | Country |
---|---|---|
1949158 | Apr 2007 | CN |
1 698 982 | Sep 2006 | EP |
08-272787 | Oct 1996 | JP |
10-049272 | Feb 1998 | JP |
2000-112636 | Apr 2000 | JP |
2002-108543 | Apr 2002 | JP |
03-314276 | Aug 2002 | JP |
2002-0325965 | Nov 2002 | JP |
2005-092441 | Apr 2005 | JP |
WO 0074240 | Dec 2000 | WO |
WO 2005064587 | Jul 2005 | WO |
WO 2007037809 | Apr 2007 | WO |
WO 2007047188 | Apr 2007 | WO |
WO 2007070223 | Jun 2007 | WO |
WO 2009032483 | Mar 2009 | WO |
WO 2009111138 | Sep 2009 | WO |
Entry |
---|
T. Masui, “An Efficient Text Input Method for Pen-based Computers,” Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI '98), Apr. 1998, ACM Press, pp. 328-335. |
T. Masui, “POBox: An Efficient Text Input Method for Handheld and Ubiquitous Computers,” Proceedings of the International Symposium on Handheld and Ubiquitous Computing (HUC '99), Sep. 1999, pp. 289-300. |
C. Liu et al., “Online Recognition of Chinese Characters: The State-of-the-Art,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, No. 2, Feb. 2004, pp. 198-213. |
H. Sacher, “Interactions in Chinese: Designing Interfaces for Asian Languages,” Interactions Magazine, vol. 5, Issue 5, Sep.-Oct. 1998, pp. 28-38. |
International Search Report and Written Opinion, dated Apr. 29, 2009, issued in International Application No. PCT/US2009/033696. |
International Search Report and Written Opinion, dated Feb. 18, 2009, issued in International Application No. PCT/US2009/072803. |
Invitation to Pay Fees and Partial International Search Report, dated Nov. 11, 2008, issued in International Application No. PCT/US2009/072803. |
Kida et al., “Language Input Interface on a Device”, U.S. Appl. No. 12/107,711, filed Apr. 22, 2008. |
Goldsmith et al., “Identification of Candidate Characters for Text Input”, U.S. Appl. No. 12/167,044, filed Jul. 2, 2008. |
Chou, “Zhuyin Input Interface on a Device”, U.S. Appl. No. 12/476,121, filed Jun. 1, 2009. |
Authorized officer Philippe Becamel, International Preliminary Report on Patentability in PCT/US2009/033696 mailed Sep. 16, 2010, 7 pages. |
International Preliminary Report on Patentability in International Application No. PCT/US2009/072803 mailed Mar. 18, 2010. |
Translated First Office Action dated Jul. 29, 2010 issued in Chinese Application No. 200910118235.X, 9 pages. |
Number | Date | Country | |
---|---|---|---|
20090077464 A1 | Mar 2009 | US |
Number | Date | Country | |
---|---|---|---|
60972185 | Sep 2007 | US |