INPUT METHODS FOR DEVICE HAVING MULTI-LANGUAGE ENVIRONMENT

Abstract
Text input is corrected on a touch-sensitive display by presenting a list of candidate words in the interface which can be selected by touch input. The candidate list can include candidate words having two or more character types (e.g., Roman, kana, kanji). In one aspect, the candidate list can be scrolled using a finger gesture. When a user's finger traverses a candidate word and the touch is released, the candidate word is inserted into a document being edited. In another aspect, characters can be erased by touching a key (e.g., a backspace or delete key) and making a sliding, swiping, or other finger gesture. A number of characters proportional to a distance (e.g., a linear distance) of the finger gesture across the display are erased. If there are characters in a text input area, those characters are erased first, followed by characters in the document being edited.
Description
TECHNICAL FIELD

The subject matter of this application is generally related to input editing interfaces.


BACKGROUND

A computer device can be configured to receive input of text and characters from a computer keyboard. Modern computer keyboards are composed of rectangular or near-rectangular keys, and characters, such as the letters A-Z in the English alphabet, are usually engraved or printed on the keys. In most cases, each press of a key corresponds to typing of a single character.


Traditional computer keyboards may sometimes be too large for portable devices, such as cellular phones, MPEG-1 Audio Layer 3 (MP3) players, or personal digital assistants (PDAs). Some portable devices include a smaller version of the traditional computer keyboard or use a virtual keyboard to receive user input. A virtual keyboard can take the form of a software application, or a feature of a software application, that simulates a computer keyboard. For example, in a stylus-operated PDA or a touch-sensitive display on a communication device, a virtual keyboard can be used by a user to input text by selecting or tapping keys of the virtual keyboard.


These smaller keyboards and virtual keyboards may have keys that correspond to more than one character. For example, some of the keys can, by default, correspond to a common character in the English language, for example, the letter “a,” and may also correspond to other additional characters, such as another letter or the letter with an accent option, e.g., the character “ä,” or other characters with accent options. Because of the physical limitations (e.g., size) of the virtual keyboard, a user may find it difficult to type characters not readily available on the virtual keyboard.


Input methods for devices having multi-language environments can present unique challenges with respect to input and spelling correction, which may need to be tailored to the selected language to ensure accuracy and an efficient workflow.


SUMMARY

Text input is corrected on a touch-sensitive display by presenting a list of candidate words in the interface which can be selected by touch input. The candidate list can include candidate words having two or more character types (e.g., Roman, kana, kanji). In one aspect, the candidate list can be scrolled using a finger gesture. When a user's finger traverses a candidate word, the position of the candidate word is adjusted (e.g., offset from the touch input), so that the candidate word is not obscured by the user's finger. When the touch is released, the candidate word is inserted into a document being edited. In another aspect, characters can be erased by touching a key (e.g., a backspace or delete key) and making a sliding, swiping, or other finger gesture. A number of characters proportional to a distance (e.g., a linear distance) of the finger gesture across the display are erased. If there are characters in a text input area, those characters are erased first, followed by characters in the document being edited. In another aspect, in a Japanese language environment, auto-correcting is performed to account for possible typographical errors on input.


Other implementations are disclosed, including implementations directed to systems, methods, apparatuses, computer-readable mediums, and user interfaces.





DESCRIPTION OF DRAWINGS


FIG. 1 shows an example portable device for receiving text input.



FIG. 2 is a flow diagram of an example process for correcting input in a multi-language environment.



FIG. 3 is a flow diagram of an example process for erasing characters in a multi-language environment.



FIG. 4 is a block diagram of an example system architecture for performing the operations described in reference to FIGS. 1-3.



FIG. 5 is a flow diagram of an example process for displaying selectable character options for a document being edited.





DETAILED DESCRIPTION
Input Editing User Interface


FIG. 1 shows an example portable device 100 for receiving text input. The portable device 100 can be a phone, a media player, an email device, or any other portable device capable of receiving text input. The device 100 includes a virtual keyboard 102, an editing region 106, and an input region 108. Each of these regions can be part of a touch-sensitive display 104. In some implementations, the touch-sensitive display 104 can be a multi-touch-sensitive display for receiving multi-touch input or finger gestures. A multi-touch-sensitive display 104 can, for example, process multiple simultaneous touch points, including processing data related to the pressure, degree, and/or position of each touch point. Such processing facilitates gestures and interactions with multiple fingers, chording, and other interactions. Some examples of multi-touch-sensitive display technology are described in U.S. Pat. Nos. 6,323,846, 6,570,557, 6,677,932, and U.S. Patent Publication No. 2002/0015024A1, each of which is incorporated by reference herein in its entirety.


The virtual keyboard 102 can be displayed in various layouts based on a user selection. For example, the user can select to display one of a number of virtual keyboard layouts using an action button 120 or other finger gesture. As shown, the virtual keyboard 102 is an English keyboard layout (e.g., QWERTY). The keyboard layout, however, can be configured based on a selected language, such as Japanese, French, German, Italian, etc. In a Japanese language environment, the user can switch between a kana keyboard, a keyboard for Roman characters, and a keyboard for kanji symbols.


A user can interact with the virtual keyboard 102 to enter text into a document (e.g., a text document, instant message, email, address book) in the editing region 106. As the user enters characters, an input correction process is activated which can detect text input errors and display candidate words 112 in the input region 108. Any number of candidate words 112 can be generated. A group of displayed candidate words 112 can include candidate words 112 having characters in two or more character types (e.g., Roman, kana, kanji). In some implementations, additional candidate words 112 can be displayed by touching arrows 114 or another user interface element, which causes a new page of candidate words 112 to be displayed in the input region 108. In some implementations, the candidate list can be determined based on the user-selected language and statistics (e.g., a user dictionary or a history of user typing data for the user-selected language). An example method of determining correction options for virtual keyboards is described in U.S. patent application Ser. No. 11/228,737, for “Activating Virtual Keys of a Touch-screen Virtual Keyboard,” which patent application is incorporated by reference herein in its entirety.


In some implementations, a candidate word look-up takes place using an auto-correcting search. In performing the auto-correcting search, a list of candidate words can be produced based on the text input and accounting for possible typographical errors in the text input.
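

The mechanics of the auto-correcting search are not spelled out in this description. As a minimal sketch, assuming a plain edit-distance comparison against a word list (the dictionary, error threshold, and ranking below are illustrative assumptions rather than the described implementation), a typo-tolerant candidate lookup might look like this:

    # Illustrative sketch only: a typo-tolerant candidate lookup built on
    # Levenshtein edit distance. The word list is a hypothetical stand-in;
    # kana and kanji entries could be keyed by their readings the same way.

    def edit_distance(a, b):
        """Levenshtein distance computed by dynamic programming."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                # delete ca
                               cur[j - 1] + 1,             # insert cb
                               prev[j - 1] + (ca != cb)))  # substitute
            prev = cur
        return prev[-1]

    def candidate_words(text_input, dictionary, max_errors=2):
        """Dictionary words within max_errors edits of the input, closest first."""
        scored = sorted((edit_distance(text_input, w), w) for w in dictionary)
        return [w for d, w in scored if d <= max_errors]

    print(candidate_words("touky", ["toukyou", "tokyo", "touki", "kyouto"]))
    # -> ['touki', 'tokyo', 'toukyou']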


Candidate Word Lists

In the example shown, the user has selected a candidate word 110 to replace “touky” in a Japanese language environment. Selection of the candidate word 110 is made by the user touching the candidate word 110 with one or more fingers. When the user releases the touch, the selected candidate word 110 is inserted into the document in the editing region 106. In some implementations, when the user touches the candidate word 110, the candidate word 110 is displayed in a different position on the touch-sensitive display 104 (e.g., an offset position) to avoid the user's finger obscuring the candidate word 110. A user can scroll the candidate list by swiping a finger over the candidate words 112. As the finger traverses each candidate word 112, the candidate word is displayed at the different position. For example, the user can run their index finger over the candidate words 112 in the input region 108 until the user reaches the candidate word 110. When the user releases the touch, the candidate word 110 is inserted into the document being edited.



FIG. 2 is a flow diagram of an example process 200 for correcting input in a multi-language environment. In some implementations, the process 200 begins when text input is obtained for a document being edited on a touch-sensitive display (202). The text input can be obtained as one or more touches or a finger gesture (e.g., on a virtual keyboard). Some or all of the text input can be, for example, in Roman characters or in Japanese characters (e.g., kana or kanji). The process 200 then determines if the text input includes one or more incorrect characters (204). For example, a language dictionary, statistics, and/or fuzzy logic can be used to determine incorrect text input.


If the text input includes an incorrect character or if the text input is ambiguous, then a candidate list of possibly correct candidate words is determined (206) and displayed on the touch-sensitive display (208) to the user. For example, in a Japanese language environment, if the text input is a phonetic spelling in Roman characters of a Japanese character, the candidate list can include candidate words having two or more character types (e.g., kanji and kana). Even if the text input does not include an incorrect character, there can be ambiguity in the conversion from the Roman characters to Japanese characters. To account for this ambiguity, the process 200 includes determining a candidate list of multiple possibly correct candidate words, allowing the user to select the intended Roman to Japanese conversion if it is in the candidate list. Any number of candidate words can be included in the candidate list. The list can be displayed in a dedicated region (e.g., an input region 108) of a touch-sensitive display, for example.
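

The description does not specify how the conversion ambiguity is enumerated. One way to picture it is a sketch that lists every kana reading reachable by segmenting the Roman-character input against a syllable table; the table here is a small illustrative fragment, and a production converter would use a complete table plus candidate ranking:

    # Illustrative sketch: enumerate every kana reading reachable by
    # segmenting a Roman-character input. ROMAJI is a tiny fragment of a
    # real syllable table.
    ROMAJI = {
        "ka": "か", "ni": "に", "n": "ん", "i": "い",
        "to": "と", "u": "う", "kyo": "きょ", "kyou": "きょう",
    }

    def kana_readings(s):
        """Return all kana strings produced by segmenting s with ROMAJI."""
        if not s:
            return [""]
        readings = []
        for n in range(1, min(len(s), 4) + 1):  # table keys are 1-4 letters
            kana = ROMAJI.get(s[:n])
            if kana is not None:
                readings.extend(kana + rest for rest in kana_readings(s[n:]))
        return readings

    # "kani" is genuinely ambiguous: ka+n+i (かんい) or ka+ni (かに), so both
    # would appear as candidate words for the user to choose between.
    print(kana_readings("kani"))  # -> ['かんい', 'かに']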


The user can scroll the candidate list with the user's finger. When the finger is over (or proximate to) a candidate word, the candidate word can be displayed in a different position on the touch-sensitive display, offset from the original location of the candidate word to prevent the user's finger from obscuring the selected candidate word. After touch input (e.g., one or more touches or a finger gesture) is obtained for the selected candidate word (210), the selected candidate word is inserted into the document being edited (212).
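

Neither the hit-testing nor the offset geometry is specified. A minimal sketch, assuming candidates laid out in equal-width cells and a fixed vertical offset for the redrawn word (both assumptions), follows:

    # Illustrative sketch: map a touch to the candidate it covers and
    # redraw that candidate above the fingertip. Equal-width cells and
    # the 40-point offset are assumptions, not specified values.

    def candidate_under_finger(touch_x, candidates, cell_width=60.0):
        """Return the candidate word whose cell contains touch_x, if any."""
        index = int(touch_x // cell_width)
        return candidates[index] if 0 <= index < len(candidates) else None

    def offset_position(touch_x, touch_y, dy=-40.0):
        """Position at which to redraw the traversed candidate word."""
        return (touch_x, touch_y + dy)

    words = ["東京", "とうきょう", "冬季", "投棄"]
    print(candidate_under_finger(130.0, words))  # third cell -> '冬季'
    print(offset_position(130.0, 300.0))         # drawn 40 points above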


Erasing Characters

In the example shown, the user can erase characters in text input by touching a backspace or delete key 116, then sliding their finger from the key 116 towards the opposite end of the virtual keyboard 102. As the user slides their finger, a number of characters proportional to the distance traversed by the finger across the touch-sensitive display 104 are erased. If there are characters in the input region 108 (e.g., characters currently being added to a document), those characters can be erased first. When the characters in the input region 108 are exhausted, characters in the editing region 106 can be erased (e.g., characters in a word previously entered in the document).
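

The description leaves the bookkeeping open; a minimal sketch of this two-stage erase order, with the two regions modeled as simple character buffers (an assumption for illustration), might read:

    # Illustrative sketch of the two-stage erase order. The regions are
    # modeled as plain character buffers for demonstration.

    def erase_characters(input_region, editing_region, count):
        """Remove count characters, draining input_region before editing_region."""
        for _ in range(count):
            if input_region:
                input_region.pop()
            elif editing_region:
                editing_region.pop()
            else:
                break  # nothing left to erase

    pending, document = list("touky"), list("Dear Tokyo ")
    erase_characters(pending, document, 8)  # 5 pending chars, then 3 more
    print("".join(pending), "|", "".join(document))  # -> ' | Dear Tok'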



FIG. 3 is a flow diagram of an example process 300 for erasing characters in a multi-language environment. In some implementations, the process 300 begins by generating a user interface on a touch-sensitive display for editing text input (302). The user interface can include a virtual keyboard, an editing region, and a text input region. A finger touch and gesture is detected starting from a key on the virtual keyboard (e.g., a backspace key, a delete key), indicating the intention of a user to erase one or more characters of text input displayed in the input region (304). In some implementations, the gesture can be a finger sliding or swiping from the touched key across the touch-sensitive display. The sliding or swiping can be in any desired direction on the touch-sensitive display. A distance for which the swipe or gesture will result in the erasure of characters (e.g., linear distance traversed by the finger across the display) can be bounded by the visual borders of the virtual keyboard displayed on the touch-sensitive display or any other desired boundaries. The number of characters erased due to the gesture can be proportional to the linear distance traversed by the finger across the touch-sensitive display (306). In some implementations, the characters displayed in the input region are erased first, followed by characters in the editing region, as described in reference to FIG. 1.
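

The proportionality constant and bounding are implementation choices. A sketch of the distance-to-count mapping, with an assumed pixels-per-character ratio and the keyboard width as the bound, follows:

    # Illustrative sketch: convert swipe distance to an erase count. The
    # pixels-per-character ratio is an assumed tuning constant; the count
    # stops growing at the keyboard's visual border.

    def chars_to_erase(distance_px, keyboard_width_px, px_per_char=25.0):
        """Erase count proportional to distance, bounded by the keyboard width."""
        bounded = max(0.0, min(distance_px, keyboard_width_px))
        return int(bounded // px_per_char)

    print(chars_to_erase(130.0, 320.0))  # -> 5
    print(chars_to_erase(900.0, 320.0))  # clamped at the border -> 12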


Example System Architecture


FIG. 4 is a block diagram of an example system architecture 400 for performing the various operations described in reference to FIGS. 1-3. For example, the architecture 400 may be included in the portable device 100, described in reference to FIG. 1. The architecture 400 includes a processor 410, a memory 420, a storage device 430, and an input/output device 440. Each of the components 410, 420, 430, and 440 is interconnected using a system bus 450. The processor 410 is capable of processing instructions for execution within the architecture 400. In some implementations, the processor 410 is a single-threaded processor. In other implementations, the processor 410 is a multi-threaded processor. The processor 410 is capable of processing instructions stored in the memory 420 or on the storage device 430 to display graphical information for a user interface on the input/output device 440.


The memory 420 stores information within the architecture 400. In some implementations, the memory 420 is a computer-readable medium. In other implementations, the memory 420 is a volatile memory unit. In yet other implementations, the memory 420 is a non-volatile memory unit.


The storage device 430 is capable of providing mass storage for the architecture 400. In some implementations, the storage device 430 is a computer-readable medium. In various different implementations, the storage device 430 may be a floppy disk device, a hard disk device, an optical disk device, or a tape device.


The input/output device 440 provides input/output operations for the architecture 400. In some implementations, the input/output device 440 includes a keyboard and/or pointing device. In other implementations, the input/output device 440 includes a display unit for displaying graphical user interfaces.


The features described can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. The features can be implemented in a computer program product tangibly embodied in an information carrier, e.g., in a machine-readable storage device or in a propagated signal, for execution by a programmable processor; and method steps can be performed by a programmable processor executing a program of instructions to perform functions of the described implementations by operating on input data and generating output. The described features can be implemented advantageously in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. A computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.


Suitable processors for the execution of a program of instructions include, by way of example, both general and special purpose microprocessors, and the sole processor or one of multiple processors of any kind of computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memories for storing instructions and data. Generally, a computer will also include, or be operatively coupled to communicate with, one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits).


To provide for interaction with a user, the features can be implemented on a computer having a display device, such as a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user, and a keyboard and a pointing device, such as a mouse or a trackball, by which the user can provide input to the computer.


The features can be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or any combination of them. The components of the system can be connected by any form or medium of digital data communication such as a communication network. Examples of communication networks include, e.g., a LAN, a WAN, a wireless network, and the computers and networks forming the Internet.


The computer system can include clients and servers. A client and server are generally remote from each other and typically interact through a network, such as the communication networks described above. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.


Other Implementations

In a Japanese language environment, user interface elements (e.g., popup menus or head-up displays) tied to virtual keyboard keys can be used to select unambiguous characters. There can be one key for every consonant and one for the vowels. In one implementation, if a user touches and slides on a key of a virtual keyboard, a popup menu is opened that lets the user select a syllable with that consonant (or none), and the appropriate vowel. Dragging on the “k” (ka) key lets the user select ka, ki, ku, ke, or ko. Dragging on the vowel key lets the user select a, i, u, e, or o, and so forth.


While the user slides horizontally to select a vowel, changing the direction of the drag to vertical lets the user select variants. For example, if the user starts on the “k” (ka) key and slides right, the user sees options for ka, ki, ku, ke, and ko. If the user slides down, the options change to ga, gi, gu, ge, and go, and the user can slide horizontally again to select one of these syllables starting with the “g” consonant. The user can also slide up, giving up to x rows (e.g., three rows) of options for each popup menu (e.g., unshifted, shift down, shift up).
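

The drag thresholds and row layout are not specified. The following sketch, covering only the “k” key with assumed cell sizes and only the unshifted and voiced rows, illustrates how horizontal motion could pick the vowel and vertical motion the variant row:

    # Illustrative sketch: resolve a drag from the "k" key into a syllable.
    # Only the unshifted and voiced rows are included; cell size and the
    # vertical threshold are arbitrary demonstration values.
    SYLLABLES = {
        ("k", 0): ["か", "き", "く", "け", "こ"],  # unshifted row
        ("k", 1): ["が", "ぎ", "ぐ", "げ", "ご"],  # slide down: voiced row
    }

    def popup_selection(key, drag_dx, drag_dy, cell_px=40.0):
        """Horizontal travel picks the vowel; vertical travel picks the row."""
        row = 1 if drag_dy > cell_px else 0
        col = max(0, min(4, int(drag_dx // cell_px)))
        return SYLLABLES[(key, row)][col]

    print(popup_selection("k", drag_dx=90.0, drag_dy=0.0))   # -> く
    print(popup_selection("k", drag_dx=90.0, drag_dy=60.0))  # -> ぐ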


If the user taps a key, the user gets a wildcard (ambiguous) character that can match anything the user could produce using that key. Tapping the “k” (ka) key gives the user a wildcard that matches ka or any other syllable that key can produce; all of those syllables are considered at that position. The wildcard character can be converted to an unambiguous syllable or character by sliding on it, in exactly the same way as the user can slide on a key.
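

How the wildcard participates in matching is left open. As a sketch, a tapped key can be modeled as the set of syllables it can produce, with candidate words filtered position by position; the key sets and example words below are illustrative assumptions:

    # Illustrative sketch: a tapped key acts as a wildcard over the set of
    # syllables it can produce, and candidate words are filtered position
    # by position. Key sets and examples are demonstration fragments.
    KEY_SYLLABLES = {
        "k": {"か", "き", "く", "け", "こ", "が", "ぎ", "ぐ", "げ", "ご"},
        "t": {"た", "ち", "つ", "て", "と", "だ", "ぢ", "づ", "で", "ど"},
    }

    def matches(tapped_keys, word_syllables):
        """True if every syllable is one its corresponding key can produce."""
        return (len(tapped_keys) == len(word_syllables) and
                all(s in KEY_SYLLABLES[k]
                    for k, s in zip(tapped_keys, word_syllables)))

    # Tapping "t" then "k" keeps both たけ and とき as live candidates until
    # the user disambiguates by sliding on one of the wildcards.
    print(matches(["t", "k"], ["た", "け"]))  # -> True
    print(matches(["t", "k"], ["と", "き"]))  # -> True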



FIG. 5 is a flow diagram of an example process 500 for displaying selectable character options for a document being edited. In some implementations, the process 500 begins by generating a user interface on a touch-sensitive display for selecting characters for a document being edited on the touch-sensitive display (502). The user interface can include a virtual keyboard. A touch input is detected starting from a key of the virtual keyboard, where the key is associated with a consonant or vowels (504). In some implementations, the touch input can be a finger sliding or swiping from the touched key across the touch-sensitive display. A user interface element is displayed on the touch-sensitive display, where the user interface element (e.g., a popup menu) includes multiple character options for the consonant or vowels associated with the key (506). Each character option is selectable by a user. In some implementations, at least some of the character options are in Japanese. In some implementations, a dragging or sliding finger gesture is detected (508). The finger gesture can indicate an intention of a user to select one of the character options. Upon detection of the finger gesture, the selected character option can be inserted into the document being edited (510).


A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made. For example, elements of one or more implementations may be combined, deleted, modified, or supplemented to form further implementations. Logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. In addition, other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Accordingly, other implementations are within the scope of the following claims.

Claims
  • 1. A method comprising: obtaining text input for a document being edited on a touch-sensitive display; determining if the text input includes an incorrect character; if the text input includes an incorrect character or if the text input is ambiguous, determining a list of possibly correct candidate words; displaying the list of candidate words on the touch-sensitive display; obtaining touch input selecting one of the candidate words; and inserting the candidate word into the document being edited.
  • 2. The method of claim 1, where at least some of the text input is in Japanese.
  • 3. The method of claim 1, where the list of candidate words includes candidate words having characters in two or more character types.
  • 4. The method of claim 1, where the list of candidate words is determined based on one or more of a user-selected language or statistics.
  • 5. The method of claim 1, where the list of candidate words is determined using an auto-correcting search, which accounts for possible typographical errors in the text input.
  • 6. The method of claim 1, where obtaining touch input selecting one of the candidate words further comprises: detecting a finger gesture touching or traversing one or more candidate words in the list of candidate words.
  • 7. The method of claim 6, further comprising: for each candidate word touched or traversed by the detected finger gesture, displaying the candidate word in a different position on the touch-sensitive display than an original position where the candidate word was displayed before detecting the finger gesture.
  • 8. A method comprising: generating a user interface on a touch-sensitive display for editing text input, the user interface including a virtual keyboard, an editing region and an input region; detecting a finger gesture starting from a key on the virtual keyboard indicating an intention of a user to erase one or more characters of text input displayed in the input region; and erasing a number of characters proportional to a distance traversed by the finger across the touch-sensitive display.
  • 9. The method of claim 8, where the characters displayed in the input region are erased first, followed by characters in the editing region.
  • 10. The method of claim 8, where the number of characters erased is proportional to the distance traversed by the finger bounded by a visual border of the virtual keyboard.
  • 11. A method comprising: generating a user interface on a touch-sensitive display for selecting characters for a document being edited on the touch-sensitive display, the user interface including a virtual keyboard; detecting touch input starting from a key of the virtual keyboard, the key associated with a consonant or vowels; and displaying on the touch-sensitive display a user interface element with a plurality of character options for the consonant or vowels associated with the key, each character option selectable by a user.
  • 12. The method of claim 11, further comprising: detecting a dragging or sliding finger gesture indicating an intention of a user to select one of the character options; and inserting the selected character option into the document being edited.
  • 13. The method of claim 11, where at least some of the character options are in Japanese.
  • 14. A computer-readable medium having instructions stored thereon, which, when executed by a processor, causes the processor to perform operations comprising: obtaining text input for a document being edited on a touch-sensitive display; determining if the text input includes an incorrect character; if the text input includes an incorrect character or if the text input is ambiguous, determining a list of possibly correct candidate words; displaying the list of candidate words on the touch-sensitive display; obtaining touch input selecting one of the candidate words; and inserting the candidate word into the document being edited.
  • 15. A computer-readable medium having instructions stored thereon, which, when executed by a processor, causes the processor to perform operations comprising: generating a user interface on a touch-sensitive display for editing text input, the user interface including a virtual keyboard, an editing region and an input region; detecting a finger gesture starting from a key on the virtual keyboard indicating an intention of a user to erase one or more characters of text input displayed in the input region; and erasing a number of characters proportional to a distance traversed by the finger across the touch-sensitive display.
  • 16. A computer-readable medium having instructions stored thereon, which, when executed by a processor, causes the processor to perform operations comprising: generating a user interface on a touch-sensitive display for selecting characters for a document being edited on the touch-sensitive display, the user interface including a virtual keyboard; detecting touch input starting from a key of the virtual keyboard, the key associated with a consonant or vowels; and displaying on the touch-sensitive display a user interface element with a plurality of character options for the consonant or vowels associated with the key, each character option selectable by a user.
  • 17. A system comprising: a processor; and memory coupled to the processor and storing instructions, which, when executed by the processor, causes the processor to perform operations comprising: obtaining text input for a document being edited on a touch-sensitive display; determining if the text input includes an incorrect character; if the text input includes an incorrect character or if the text input is ambiguous, determining a list of possibly correct candidate words; displaying the list of candidate words on the touch-sensitive display; obtaining touch input selecting one of the candidate words; and inserting the candidate word into the document being edited.
  • 18. A system comprising: a processor; and memory coupled to the processor and storing instructions, which, when executed by the processor, causes the processor to perform operations comprising: generating a user interface on a touch-sensitive display for editing text input, the user interface including a virtual keyboard, an editing region and an input region; detecting a finger gesture starting from a key on the virtual keyboard indicating an intention of a user to erase one or more characters of text input displayed in the input region; and erasing a number of characters proportional to a distance traversed by the finger across the touch-sensitive display.
  • 19. A system comprising: a processor; and memory coupled to the processor and storing instructions, which, when executed by the processor, causes the processor to perform operations comprising: generating a user interface on a touch-sensitive display for selecting characters for a document being edited on the touch-sensitive display, the user interface including a virtual keyboard; detecting touch input starting from a key of the virtual keyboard, the key associated with a consonant or vowels; and displaying on the touch-sensitive display a user interface element with a plurality of character options for the consonant or vowels associated with the key, each character option selectable by a user.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority to U.S. Provisional Patent Application Ser. No. 60/972,185 filed Sep. 13, 2007, and entitled “Input Methods for Device Having Multi-Language Environment,” the contents of which are incorporated herein by reference.

Provisional Applications (1)
Number Date Country
60972185 Sep 2007 US