This invention generally relates to electronic devices.
Input Method Editors (IMEs) for Japanese (e.g., via romaji), Chinese (e.g., via pinyin), and other languages make frequent use of user menu selections to disambiguate possible character combinations based on phonetic input. For some entries, this disambiguation can occur for every character or short phrase, which can greatly slow down input or frustrate users. For example, existing approaches present menus from which items are selected by pressing the number associated with the menu item or through the use of arrow keys.
Input devices including proximity sensor devices (e.g., touchpads or touch sensor devices) are widely used in a variety of electronic systems. A proximity sensor device typically includes a sensing region, often demarked by a surface, in which the proximity sensor device determines the presence, location and/or motion of one or more input objects. Proximity sensor devices may be used to provide interfaces for the electronic system. For example, proximity sensor devices are often used as input devices for larger computing systems such as opaque touchpads either integrated in, or peripheral to, notebook or desktop computers. Proximity sensor devices are also often used in smaller computing systems such as touch screens integrated in cellular phones.
Methods and apparatus for menu navigation and selection are described. The apparatus comprises a keyboard having a plurality of keys for a user to enter information by interacting with one or more of the plurality of keys, and a multi-function key having a touch sensitive portion for the user to enter position information via a touch or gesture input or to enter information via other interaction with the multi-function key. A processing system is coupled to the keyboard for processing the user entered information and user entered position information from the keyboard, and a display is coupled to the processing system for displaying the user entered information and a menu of options related to the user entered information. The user enters position information to navigate through the menu of options and selects an option from the menu of options by interacting with the multi-function key.
The method comprises receiving information input from one or more of the plurality of keys and displaying a menu of options related to the received information on a display. The user navigates through the menu of options using position information received from the touch sensitive portion of the multi-function key. Thereafter, the user selects an option from the menu of options via a user interaction with the multi-function key.
Example embodiments of the present invention will hereinafter be described in conjunction with the appended drawings, which are not to scale unless otherwise noted and in which like designations denote like elements.
Example embodiments of the present invention will hereinafter be described in conjunction with the drawings which are not to scale unless otherwise noted and where like designations denote like elements. The following detailed description is merely exemplary in nature and is not intended to limit the invention or the application and uses of the invention.
Various embodiments of the present invention provide input devices and methods that facilitate improved usability.
In
Sensing region 120 encompasses any space above, around, in and/or near the input device 100 in which the input device 100 is able to detect user input (e.g., user input provided by one or more input objects 140). The sizes, shapes, and locations of particular sensing regions may vary widely from embodiment to embodiment. In some embodiments, the sensing region 120 extends from a surface of the input device 100 in one or more directions into space until signal-to-noise ratios prevent sufficiently accurate object detection. The distance to which this sensing region 120 extends in a particular direction, in various embodiments, may be on the order of less than a millimeter, millimeters, centimeters, or more, and may vary significantly with the type of sensing technology used and the accuracy desired. Thus, some embodiments sense input that comprises no contact with any surfaces of the input device 100, contact with an input surface (e.g., a touch surface) of the input device 100, contact with an input surface of the input device 100 coupled with some amount of applied force or pressure, and/or a combination thereof. In various embodiments, input surfaces may be provided by surfaces of casings within which the sensor electrodes reside, by face sheets applied over the sensor electrodes or any casings, etc. In some embodiments, the sensing region 120 has a rectangular shape when projected onto an input surface of the input device 100.
The input device 100 may utilize any combination of sensor components and sensing technologies to detect user input in the sensing region 120. The input device 100 comprises one or more sensing elements for detecting user input. As several non-limiting examples, the input device 100 may use capacitive, elastive, resistive, inductive, magnetic, acoustic, ultrasonic and/or optical techniques.
Some implementations are configured to provide images that span one, two, three, or higher dimensional spaces. Some implementations are configured to provide projections of input along particular axes or planes.
In some resistive implementations of the input device 100, a flexible and conductive first layer is separated by one or more spacer elements from a conductive second layer. During operation, one or more voltage gradients are created across the layers. Pressing the flexible first layer may deflect it sufficiently to create electrical contact between the layers, resulting in voltage outputs reflective of the point(s) of contact between the layers. These voltage outputs may be used to determine positional information.
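By way of a non-limiting illustration, the following minimal sketch (Python is used here only for readability) shows how such voltage outputs might be converted into positional information; the function name, the 12-bit ADC range, and the per-axis sampling scheme are assumptions made for this example rather than requirements of any embodiment.

```python
def resistive_position(adc_x, adc_y, adc_max=4095):
    """Estimate the point of contact on a 4-wire resistive sensor.

    adc_x and adc_y are hypothetical ADC samples taken while a voltage
    gradient is driven across the first layer and then the second layer;
    the contact point acts as a voltage divider, so the normalized
    reading approximates the contact coordinate along each axis.
    """
    return adc_x / adc_max, adc_y / adc_max


# Example: mid-scale readings map to roughly the center of the sensor.
print(resistive_position(2048, 1024))  # -> (~0.500, ~0.250)
```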
In some inductive implementations of the input device 100, one or more sensing elements pick up loop currents induced by a resonating coil or pair of coils. Some combination of the magnitude, phase, and frequency of the currents may be used to determine positional information.
In some capacitive implementations of the input device 100, voltage or current is applied to create an electric field. Nearby input objects cause changes in the electric field, and produce detectable changes in capacitive coupling that may be detected as changes in voltage, current, or the like.
Some capacitive implementations utilize arrays or other regular or irregular patterns of capacitive sensing elements to create electric fields. In some capacitive implementations, separate sensing elements may be ohmically shorted together to form larger sensor electrodes. Some capacitive implementations utilize resistive sheets, which may be uniformly resistive.
Some capacitive implementations utilize “self capacitance” (or “absolute capacitance”) sensing methods based on changes in the capacitive coupling between sensor electrodes and an input object. In various embodiments, an input object near the sensor electrodes alters the electric field near the sensor electrodes, thus changing the measured capacitive coupling. In one implementation, an absolute capacitance sensing method operates by modulating sensor electrodes with respect to a reference voltage (e.g., system ground), and by detecting the capacitive coupling between the sensor electrodes and input objects.
Some capacitive implementations utilize “mutual capacitance” (or “transcapacitance”) sensing methods based on changes in the capacitive coupling between sensor electrodes. In various embodiments, an input object near the sensor electrodes alters the electric field between the sensor electrodes, thus changing the measured capacitive coupling. In one implementation, a transcapacitive sensing method operates by detecting the capacitive coupling between one or more transmitter sensor electrodes (also “transmitter electrodes” or “transmitters”) and one or more receiver sensor electrodes (also “receiver electrodes” or “receivers”). Transmitter sensor electrodes may be modulated relative to a reference voltage (e.g., system ground) to transmit transmitter signals. Receiver sensor electrodes may be held substantially constant relative to the reference voltage to facilitate receipt of resulting signals. A resulting signal may comprise effect(s) corresponding to one or more transmitter signals, and/or to one or more sources of environmental interference (e.g., other electromagnetic signals). Sensor electrodes may be dedicated transmitters or receivers, or may be configured to both transmit and receive.
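As a non-limiting illustration of how resulting signals might be turned into usable measurements, the sketch below (Python, for readability only) forms a simple "delta" image from hypothetical transcapacitive measurements and a baseline; the names and the sign convention (coupling decreases under an input object) are assumptions for this example.

```python
def transcap_delta_image(raw, baseline):
    """Form a simple transcapacitive "delta" image (illustrative only).

    raw and baseline are hypothetical 2-D lists of coupling measurements,
    indexed as [transmitter][receiver].  An input object near a
    transmitter/receiver crossing reduces the coupling, so the delta
    (baseline minus raw) grows where a finger is present.
    """
    return [
        [b - r for r, b in zip(raw_row, base_row)]
        for raw_row, base_row in zip(raw, baseline)
    ]
```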
In
The processing system 110 may be implemented as a set of modules that handle different functions of the processing system 110. Each module may comprise circuitry that is a part of the processing system 110, firmware, software, or a combination thereof. In various embodiments, different combinations of modules may be used. Example modules include hardware operation modules for operating hardware such as sensor electrodes and display screens, data processing modules for processing data such as sensor signals and positional information, and reporting modules for reporting information. Further example modules include sensor operation modules configured to operate sensing element(s) to detect input, identification modules configured to identify gestures such as mode changing gestures, and mode changing modules for changing operation modes.
In some embodiments, the processing system 110 responds to user input (or lack of user input) in the sensing region 120 directly by causing one or more actions. Example actions include changing operation modes, as well as graphical user interface (GUI) actions such as cursor movement, selection, menu navigation, and other functions. In some embodiments, the processing system 110 provides information about the input (or lack of input) to some part of the electronic system (e.g., to a central processing system of the electronic system that is separate from the processing system 110, if such a separate central processing system exists). In some embodiments, some part of the electronic system processes information received from the processing system 110 to act on user input, such as to facilitate a full range of actions, including mode changing actions and GUI actions.
For example, in some embodiments, the processing system 110 operates the sensing element(s) of the input device 100 to produce electrical signals indicative of input (or lack of input) in the sensing region 120. The processing system 110 may perform any appropriate amount of processing on the electrical signals in producing the information provided to the electronic system. For example, the processing system 110 may digitize analog electrical signals obtained from the sensor electrodes. As another example, the processing system 110 may perform filtering or other signal conditioning. As yet another example, the processing system 110 may subtract or otherwise account for a baseline, such that the information reflects a difference between the electrical signals and the baseline. As yet further examples, the processing system 110 may determine positional information, recognize inputs as commands, recognize handwriting, and the like.
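The following non-limiting sketch (Python, for readability only) strings these steps together for a hypothetical one-dimensional profile of per-electrode readings; the smoothing kernel, the threshold, and the function name are assumptions made only for illustration.

```python
def process_profile(samples, baseline, threshold=10):
    """Toy processing chain for a one-dimensional sensing profile.

    samples and baseline are hypothetical per-electrode readings.  The chain
    mirrors the steps described above: light smoothing as a stand-in for
    signal conditioning, baseline subtraction, then a simple positional
    estimate (index of the peak delta) if anything rises above a noise
    threshold.
    """
    last = len(samples) - 1
    smoothed = [
        (samples[max(i - 1, 0)] + samples[i] + samples[min(i + 1, last)]) / 3
        for i in range(len(samples))
    ]
    deltas = [s - b for s, b in zip(smoothed, baseline)]
    peak = max(range(len(deltas)), key=deltas.__getitem__)
    return peak if deltas[peak] > threshold else None  # None = no input
```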
“Positional information” as used herein broadly encompasses absolute position, relative position, velocity, acceleration, and other types of spatial information. Exemplary “zero-dimensional” positional information includes near/far or contact/no contact information. Exemplary “one-dimensional” positional information includes positions along an axis. Exemplary “two-dimensional” positional information includes motions in a plane. Exemplary “three-dimensional” positional information includes instantaneous or average velocities in space. Further examples include other representations of spatial information. Historical data regarding one or more types of positional information may also be determined and/or stored, including, for example, historical data that tracks position, motion, or instantaneous velocity over time.
Some embodiments include buttons 130 that may be used to select or activate certain functions of the processing system 110. In some embodiments, the buttons 130 represent the functions provided by a left or right mouse click as is conventionally known.
It should be understood that while many embodiments of the invention are described in the context of a fully functioning apparatus, the mechanisms of the present invention are capable of being distributed as a program product (e.g., software) in a variety of forms. For example, the mechanisms of the present invention may be implemented and distributed as a software program on information bearing media that are readable by electronic processors (e.g., non-transitory computer-readable and/or recordable/writable information bearing media readable by the processing system 110). Additionally, the embodiments of the present invention apply equally regardless of the particular type of medium used to carry out the distribution. Examples of non-transitory, electronically readable media include various discs, memory sticks, memory cards, memory modules, and the like. Electronically readable media may be based on flash, optical, magnetic, holographic, or any other storage technology.
A “multi-function key” is used herein to indicate a key capable of detecting and distinguishing between two, three, or more types of input or user interaction with the multi-function key. Some multi-function keys are capable of sensing multiple levels of key depression, key depression force, the location of a touch or gesture on the key surface, etc. Some multi-function keys are capable of sensing and distinguishing between a non-press touch or gesture on a key and a press on the key or a press/release interaction.
Multi-function keys having a touch sensitive portion may be configured with sensor systems using any appropriate technology, including any one or combination of technologies described in this detailed description section or by the references noted in the background section. As a specific example, in some embodiments, a sensor system for a spacebar comprises a capacitive sensing system capable of detecting touch on the spacebar and presses of the spacebar. As another specific example, in some embodiments, a sensor system for a spacebar comprises a capacitive sensing system capable of detecting touch on the spacebar and a resistive membrane switch system capable of detecting presses of the spacebar. Alternately, a touch sensitive area could be located on a bezel of a keyboard adjacent to the spacebar key. In those embodiments having a touchpad located adjacent to the spacebar, the touchpad could be used as a menu navigating touch surface. Generally, for improved ergonomics, some embodiments are configured to facilitate menu navigation and menu option selection without requiring a user's hands to leave the typing ready position.
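For illustration only, the short Python sketch below combines readings from two such sensor subsystems (a capacitive touch detector and a membrane press switch) into a single classified event; the function and parameter names are hypothetical.

```python
def classify_spacebar_event(cap_touched, switch_closed):
    """Combine two hypothetical sensor readings for a multi-function spacebar.

    cap_touched   -- capacitive sensor reports a finger on the keycap
    switch_closed -- membrane switch reports the key is pressed

    Returns "press", "touch", or "none".  A press implies contact, so it
    takes priority over a bare touch.
    """
    if switch_closed:
        return "press"
    if cap_touched:
        return "touch"
    return "none"
```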
Multi-function keys can be used to enhance user interfaces, such as by improving ergonomics, speeding up information entry, providing more intuitive operation, etc. For example, multi-function keys configured in keypads and keyboards that are capable of detecting and distinguishing between non-press touch input and press input may enable both navigation of a menu and selection of menu options using the same key.
“Non-press touch input” is used herein to indicate input approximating a user contacting a key surface but not pressing the key surface sufficiently to cause press input. “Press input” is used herein to indicate a user pressing a key surface sufficiently to trigger the main entry function of the key (e.g., to trigger alphanumeric entry for alphanumeric keys). In some embodiments, the sensor system is configured to consider the following as non-press touch input: inputs that lightly touch but do not significantly press the key surface, inputs that press on the key surface slightly, or a combination of these.
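The following is a minimal, non-limiting sketch (Python) of how a sensor system might apply these definitions to a single key; the normalized travel measure and the 0.5 cutoff are assumptions chosen for the example, not values required by any embodiment.

```python
def classify_key_input(contact, depression, press_threshold=0.5):
    """Classify user interaction with a multi-function key (illustrative).

    contact         -- True if an input object is sensed on the key surface
    depression      -- normalized key travel, 0.0 (up) to 1.0 (fully pressed)
    press_threshold -- assumed travel needed to trigger the key's main entry
                       function; light contact and slight presses below it
                       count as non-press touch input
    """
    if depression >= press_threshold:
        return "press input"
    if contact:
        return "non-press touch input"
    return "no input"
```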
Most of the examples below discuss enhanced input possible with multi-function spacebars. However, other embodiments may enable similar functions using other keys such as shift, control, alt, tab, enter, backspace, function, numeric, or any other appropriate key. Further, some keyboard or keypad embodiments may each comprise multiple multi-function keys.
According to fundamental embodiments, as the user interacts with the plurality of keys 204, the processing system (110 in
While
The general input sequence illustrated by
In
Other embodiments contemplate additional advantages afforded by a multiple user interaction with the multi-function key. In
Still other embodiments contemplate further advantages afforded by multiple user interactions with the multi-function key. For example, the user may navigate the menu 412 via interacting (e.g., touch or gesture) with one hand (typically the dominant hand) and interact with the multi-function key with the other hand (typically the non-dominant hand) to modify navigation of the menu 412 presented on the display 414. As a non-limiting example, the left hand 408 could enter a touch or press interaction with the multi-function key (spacebar 406 in this example) to modify how the menu 412 is navigated by the right hand 410 interaction (e.g., touch or gesture). Non-limiting examples of menu navigation modification include changing the scrolling speed, changing from scrolling menu options to scrolling menu pages, and changing from vertical to horizontal menu navigation (or vice versa). Still further, multiple user interactions with the multi-function key can combine the features of menu modification and menu navigation modification, providing, for example, combined menu scroll and menu zoom functions, or a combined menu dimension change (e.g., 1-D to 2-D, or vice versa) and menu navigation change from vertical to horizontal (or vice versa). Generally, any menu modification and/or navigation modification may be realized for any particular implementation.
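As a purely illustrative sketch (Python), one of the modifications described above, switching from scrolling individual options to scrolling whole pages while the multi-function key is held by the other hand, might be expressed as follows; the function name, page size, and units are assumptions.

```python
def menu_scroll_step(gesture_delta, modifier_held, page_size=8):
    """Two-handed menu navigation modification (illustrative only).

    gesture_delta -- signed number of options a one-handed touch or gesture
                     would normally scroll
    modifier_held -- True while the other hand holds the multi-function key
    page_size     -- assumed number of menu options per page

    Holding the key switches the navigation from option-by-option scrolling
    to page-by-page scrolling.
    """
    return gesture_delta * (page_size if modifier_held else 1)
```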
Many variations of the approach discussed above are possible. As one example of variations contemplated by the present disclosure, some embodiments use similar techniques to enable non-IME input. For example, some embodiments use similar techniques to enable vertical menu scrolling instead of horizontal menu scrolling.
In
As further examples of contemplated variations, some embodiments comprise processing systems (110 in
Some embodiments also respond to certain non-press touch input differently. For example, a “flick” is a short-duration, single-direction, short-distance stroke in which lift-off of the input object from the touch surface occurs while the input object is still exhibiting significant lateral motion. In some embodiments, a flick-type non-press touch input on the spacebar or another key causes faster scrolling or value adjustment, increases the discrete amounts associated with the scrolling or value adjustment (e.g., scrolling by pages instead of individual entries), causes continued scrolling or value adjustment after finger lift-off, or a combination of these. Some embodiments continue this scrolling or value adjustment at a constant rate until an event (e.g., typing on a keyboard, or touch-down of an input object on the key surface) changes the rate to zero. Some embodiments continue the scrolling or value adjustment at a rate that decreases to zero over time.
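The following non-limiting Python sketch shows one way continued scrolling after a flick could decay to zero over time; the tick period, decay rate, and stop_event hook are assumptions made for this example.

```python
import time


def flick_scroll(initial_rate, decay_per_second=2.0, stop_event=None):
    """Yield scroll increments after a flick, decaying the rate to zero.

    initial_rate     -- items per second at finger lift-off (hypothetical units)
    decay_per_second -- assumed linear decay applied to the rate each second
    stop_event       -- optional callable; returning True (e.g., on typing or a
                        new touch-down on the key surface) halts the scroll

    Yields the number of items to advance on each 50 ms tick.
    """
    tick = 0.05
    rate = initial_rate
    while rate > 0:
        if stop_event is not None and stop_event():
            return
        yield rate * tick              # items to advance this tick
        rate -= decay_per_second * tick
        time.sleep(tick)
```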
Some embodiments provide continued scrolling or value adjustment (“edge motion”) in response to non-press touch input being stationary on the spacebar (or other key) surface if the non-press touch input immediately prior to becoming stationary fulfills particular criteria. For example, if the non-press touch interaction has traveled in a direction for a certain distance, exhibited certain speed, velocity, or position histories, reached particular locations on the spacebar, or a combination of these, before becoming stationary, “edge motion” may occur. Such “edge motion” may continue until the input object providing the relevant non-press touch input lifts from the key surface, or until some other event signals that the “edge motion” should end.
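One non-limiting way such criteria might be checked is sketched below in Python; the coordinate system, distances, and edge-zone width are all assumptions made for the example.

```python
def should_start_edge_motion(path, spacebar_length=100.0,
                             min_travel=30.0, edge_zone=10.0):
    """Decide whether a now-stationary touch should trigger "edge motion".

    path            -- x positions (hypothetical units along the spacebar)
                       recorded before the finger stopped moving
    spacebar_length -- assumed length of the key in the same units
    min_travel      -- assumed minimum distance traveled in one direction
    edge_zone       -- assumed width of the region near either end of the key
    """
    if len(path) < 2:
        return False
    travel = path[-1] - path[0]
    near_edge = path[-1] <= edge_zone or path[-1] >= spacebar_length - edge_zone
    return abs(travel) >= min_travel and near_edge
```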
In some embodiments, the multi-function key is configured to sense motion of the non-press touch input along the shorter dimension (as viewed from a top plan view) of the spacebar instead, or in addition to, non-press touch input along the longer dimension of the spacebar. In some of these embodiments, vertical scrolling occurs in response to this shorter-dimension motion.
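A minimal, non-limiting Python sketch of mapping the two motion components to scrolling directions follows; the axis convention and the dominance rule are assumptions for illustration.

```python
def scroll_from_spacebar_motion(dx, dy):
    """Map 2-D motion on the spacebar to a scrolling action (illustrative).

    dx -- motion along the spacebar's longer dimension
    dy -- motion along its shorter dimension (top plan view)

    Whichever component dominates wins: long-axis motion produces horizontal
    scrolling and short-axis motion produces vertical scrolling.
    """
    if abs(dx) >= abs(dy):
        return ("horizontal", dx)
    return ("vertical", dy)
```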
In some embodiments, pressing and holding the spacebar for longer than a threshold time period causes an applicable list to scroll at a defined rate. In response to a release of the spacebar from the pressed position, selection of the then-highlighted item occurs.
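A non-limiting Python sketch of this press-and-hold behavior follows; the hold threshold, scroll period, class name, and wrap-around behavior are assumptions for illustration.

```python
class HoldToScrollMenu:
    """Press-and-hold scrolling with select-on-release (illustrative only).

    hold_threshold -- assumed seconds a press must last before scrolling starts
    scroll_period  -- assumed seconds between highlight advances while held
    """

    def __init__(self, options, hold_threshold=0.5, scroll_period=0.3):
        self.options = options
        self.hold_threshold = hold_threshold
        self.scroll_period = scroll_period
        self.highlight = 0

    def on_spacebar_held(self, held_seconds):
        """Advance the highlight once the press has lasted long enough."""
        if held_seconds >= self.hold_threshold:
            steps = int((held_seconds - self.hold_threshold) / self.scroll_period)
            self.highlight = steps % len(self.options)

    def on_spacebar_released(self):
        """Releasing the key selects the then-highlighted option."""
        return self.options[self.highlight]


# Usage: hold for 1.2 s, then release to select the highlighted candidate.
menu = HoldToScrollMenu(["candidate A", "candidate B", "candidate C"])
menu.on_spacebar_held(1.2)
print(menu.on_spacebar_released())
```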
In various embodiments, interaction with the spacebar is treated as relative motion (e.g., relative to an initial touchdown location on the spacebar) or with absolute mapping. In the case of absolute mapping, a processing system (110 in
Other Multi-Function Key Applications
Multi-function keys have many other uses. Some examples include:
Overall presence detection (of a user near the keyboard). In some embodiments, in response to detecting the user's presence near the keyboard or hands over the keyboard, the system can turn on a backlight or wake up.
Accidental Contact Mitigation. In some embodiments, in response to detecting fingers over the “F” and “J” keys (or some other keys) of the keyboard, the system does not respond to some or all of the input received on an associated touchpad near the keyboard.
Partial Key Press Detection. In some embodiments, the system can detect partial presses, and determines that a key has been pressed when the key depression is past a static or dynamic threshold. For example, some embodiments use 90% depression as a static “pressed” threshold. The system may be configured with hysteresis, such that a lower percentage of press (e.g., 85%) is associated with releasing the press.
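For illustration, the static 90%/85% example above can be expressed as the following Python sketch; the class name and the normalized travel representation are assumptions.

```python
class KeyPressDetector:
    """Partial-press detection with hysteresis (illustrative only).

    A key is reported as pressed once depression reaches press_at (90% in
    the example above) and reported as released only when depression falls
    back below release_at (85%).
    """

    def __init__(self, press_at=0.90, release_at=0.85):
        self.press_at = press_at
        self.release_at = release_at
        self.pressed = False

    def update(self, depression):
        """depression: normalized key travel, 0.0 (up) to 1.0 (bottomed out)."""
        if not self.pressed and depression >= self.press_at:
            self.pressed = True
        elif self.pressed and depression < self.release_at:
            self.pressed = False
        return self.pressed
```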
Edge gestures. Some embodiments are configured to detect non-press touch input over much or all of the keyboard. Some of these embodiments are configured to respond to input over the keyboard in the following way: left-to-right, right-to-left, top-to-bottom, or bottom-to-top swipes each trigger a function. These functions may be the same or differ between these different types of swipes.
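A non-limiting Python sketch of classifying such swipes by direction follows; the coordinate convention (x increasing rightward, y increasing downward) and the minimum travel are assumptions for this example.

```python
def classify_keyboard_swipe(start, end, min_travel=20.0):
    """Classify a swipe over the keyboard into one of four directions.

    start and end are (x, y) positions in hypothetical keyboard coordinates.
    Each direction can then be bound to its own function, or several
    directions to the same function.
    """
    dx, dy = end[0] - start[0], end[1] - start[1]
    if max(abs(dx), abs(dy)) < min_travel:
        return None  # too short to count as a swipe
    if abs(dx) >= abs(dy):
        return "left-to-right" if dx > 0 else "right-to-left"
    return "top-to-bottom" if dy > 0 else "bottom-to-top"
```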
Thus, the embodiments and examples set forth herein were presented in order to best explain the present invention and its particular application and to thereby enable those skilled in the art to make and use the invention. However, those skilled in the art will recognize that the foregoing description and examples have been presented for the purposes of illustration and example only. The description as set forth is not intended to be exhaustive or to limit the invention to the precise form disclosed.
This application claims the benefit of U.S. Provisional Application No. 61/814,980 filed Apr. 23, 2013.