This invention generally relates to electronic devices, and more specifically relates to input devices such as proximity sensor devices.
Proximity sensor devices (also commonly called touchpads or touch sensor devices) are widely used in a variety of electronic systems. A proximity sensor device typically includes a sensing region, often demarked by a surface, which uses capacitive, resistive, inductive, optical, acoustic and/or other technology to determine the presence, location and/or motion of one or more fingers, styli, and/or other objects. The proximity sensor device, together with finger(s) and/or other object(s), may be used to provide an input to the electronic system. For example, proximity sensor devices are used as input devices for larger computing systems, such as those found integral within notebook computers or peripheral to desktop computers. Proximity sensor devices are also used in smaller systems, including handheld systems such as personal digital assistants (PDAs), remote controls, digital cameras, video cameras, and communication systems such as wireless telephones and text messaging systems. Increasingly, proximity sensor devices are used in media systems, such as CD, DVD, MP3, video or other media recorders or players.
Many electronic systems include a user interface (UI) and an input device for interacting with the UI (e.g., interface navigation). A typical UI includes a screen for displaying graphical and/or textual elements. The increasing use of this type of UI has led to a rising demand for proximity sensor devices as pointing devices. In these applications, the proximity sensor device may function as a value adjustment device, cursor control device, selection device, scrolling device, graphics/character/handwriting input device, menu navigation device, gaming input device, button input device, keyboard and/or other input device. One common application for a proximity sensor device is as a touch screen. In a touch screen, the proximity sensor is combined with a display screen for displaying graphical and/or textual elements. Together, the proximity sensor and display screen function as the user interface.
There is a continuing need for improvements in input devices. In particular, there is a continuing need for improvements in the usability of proximity sensors as input devices in UI applications.
Systems and methods for controlling multiple degrees of freedom of a display, including rotational degrees of freedom, are disclosed.
A program product is disclosed. The program product comprises a sensor program for controlling multiple degrees of freedom of a display in response to user input in a sensing region separate from the display, and computer-readable media bearing the sensor program. The sensor program is configured to: receive indicia indicative of user input by one or more input objects in the sensing region; indicate a quantity of translation along a first axis of the display in response to a determination that the user input comprises motion of a single input object having a component in a first direction; and indicate rotation about the first axis of the display in response to a determination that the user input comprises contemporaneous motion of multiple input objects having a component in a second direction. The second direction may be any direction not parallel to the first direction, including substantially orthogonal to the first direction. The quantity of translation along the first axis of the display may be based on an amount of the component in the first direction. The rotation about the first axis of the display may be based on an amount of the component in the second direction.
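As a rough illustration of this mapping, the following minimal sketch (in Python, with hypothetical names and data shapes; the first and second directions are assumed orthogonal, and per-object motion is assumed to be pre-resolved into components along them) shows one way a sensor program could choose between translation and rotation indications:

```python
# Minimal sketch of the described mapping; names and data shapes are
# hypothetical. Each path is a per-object motion vector (d1, d2), where
# d1 is the component along the first direction and d2 the component
# along the second direction (assumed orthogonal here).

def interpret_frame(paths):
    """Map input-object motion to a display command tuple."""
    if len(paths) == 1:
        d1, _ = paths[0]
        # Single object moving with a component in the first direction:
        # translate along the first axis by an amount based on d1.
        return ("translate", "axis1", d1)
    if len(paths) > 1:
        # Contemporaneous motion of multiple objects with a component
        # in the second direction: rotate about the first axis by an
        # amount based on that component (mean of components, as one
        # possible choice).
        d2 = sum(p[1] for p in paths) / len(paths)
        return ("rotate", "axis1", d2)
    return None
```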
A method for controlling multiple degrees of freedom of a display using a single contiguous sensing region of a sensing device is disclosed. The single contiguous sensing region is separate from the display. The method comprises: detecting a gesture in the single contiguous sensing region; causing rotation about a first axis of the display if the gesture is determined to comprise multiple input objects concurrently traveling along a second direction; causing rotation about a second axis of the display if the gesture is determined to comprise multiple input objects concurrently traveling along a first direction; and causing rotation about a third axis of the display if the gesture is determined to be another type of gesture that comprises multiple input objects. The first direction may be nonparallel to the second direction.
A proximity sensing device having a single contiguous sensing region is disclosed. The single contiguous sensing region is usable for controlling multiple degrees of freedom of a display separate from the single contiguous sensing region. The proximity sensing device comprises: a plurality of sensor electrodes configured for detecting input objects in the single contiguous sensing region; and a controller in communicative operation with the plurality of sensor electrodes. The controller is configured to: receive indicia indicative of one or more input objects performing a gesture in the single contiguous sensing region; cause rotation about a first axis of the display if the gesture is determined to comprise multiple input objects concurrently traveling along a second direction; cause rotation about a second axis of the display if the gesture is determined to comprise multiple input objects concurrently traveling along a first direction; and cause rotation about a third axis of the display if the gesture is determined to be another type of gesture that comprises multiple input objects. The first direction may be nonparallel to the second direction.
The preferred exemplary embodiment of the present invention will hereinafter be described in conjunction with the appended drawings, where like designations denote like elements, and:
FIGS. 17a-17c show input devices with change-in-input-object-count continuation control capability, in accordance with embodiments of the invention;
The following detailed description is merely exemplary in nature and is not intended to limit the invention or the application and uses of the invention. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary or the following detailed description.
Various aspects of the present invention provide input devices and methods that facilitate improved usability. Specifically, the input devices and methods relate user input to the input devices and resulting actions on displays. As one example, user input in the sensing regions of the input devices, and the methods of processing that user input, allow users to interact with electronic systems, thus providing more enjoyable user experiences and improved performance.
As discussed, embodiments of this invention may be used for multi-dimensional navigation and control. Some embodiments enable multiple degrees of freedom (e.g. six degrees of freedom, or 6 DOF, in 3D space) control using input by a single object to a proximity sensor. In 3D space, “six degrees of freedom” usually refers to the motions available to a rigid body. This includes the ability to translate along three axes (e.g. move forward/backward, up/down, left/right) and to rotate about the three axes (e.g. roll, yaw, pitch). Other embodiments enable multiple degree of freedom control using simultaneous input by multiple objects to a proximity sensor. These can facilitate user interaction for various computer applications, including three dimensional (3D) computer graphics applications. Embodiments of this invention enable not only control of multiple DOF using proximity sensors, but also a broad array of 3D related or other commands. The 3D related or other commands may be available in other modes, which may be switched to with various mode switching inputs, including input with multiple objects or specific gestures.
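For illustration only, one conventional way to represent such a six-degree-of-freedom state is a simple record with three translational and three rotational components; the field names below follow the axis-to-rotation naming used later in this description and are assumptions, not part of the original text:

```python
from dataclasses import dataclass

# Illustrative only: one way to represent the six degrees of freedom
# (translation along and rotation about three axes) named in the text.
@dataclass
class Pose6DOF:
    x: float = 0.0      # translation along Axis 1
    y: float = 0.0      # translation along Axis 2
    z: float = 0.0      # translation along Axis 3
    pitch: float = 0.0  # rotation about Axis 1
    yaw: float = 0.0    # rotation about Axis 2
    roll: float = 0.0   # rotation about Axis 3
```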
Turning now to the figures, FIG. 1 shows an exemplary input device 116 in operable communication with an exemplary electronic system 100.
The elements communicatively coupled to the electronic system, and the parts of the electronic system itself, may communicate via any combination of buses, networks, and other wired or wireless interconnections. For example, an input device may be in operable communication with its associated electronic system through any type of interface or connection. To list several non-limiting examples, available interfaces and connections include I2C, SPI, PS/2, Universal Serial Bus (USB), Bluetooth, RF, IRDA, and any other type of wired or wireless connection.
The various elements (e.g. processors, memory, etc.) of the electronic system may be implemented as part of the input device associated with it, as part of a larger system, or as a combination thereof. Additionally, the electronic system could be a host or a slave to the input device. Accordingly, the various embodiments of the electronic system may include any type of processor, memory, or display, as needed.
Returning now to FIG. 1, the input device 116 is shown with a sensing region 118.
Sensing regions with rectangular two-dimensional projected shape are common, and many other shapes are possible. For example, depending on the design of the sensor array and surrounding circuitry, shielding from any input objects, and the like, sensing regions may be made to have two-dimensional projections of other shapes. Similar approaches may be used to define the three-dimensional shape of the sensing region. For example, any combination of sensor design, shielding, signal manipulation, and the like may effectively define a sensing region 118 that extends some distance away from the sensor.
In operation, the input device 116 suitably detects one or more input objects (e.g. the input object 114) within the sensing region 118. The input device 116 thus includes a sensor (not shown) that utilizes any combination of sensor components and sensing technologies to implement one or more sensing regions (e.g. sensing region 118) and detect user input such as the presence of object(s). Input devices may include any number of structures, such as one or more sensor electrodes, one or more other electrodes, or other structures adapted to detect object presence. As several non-limiting examples, input devices may use capacitive, resistive, inductive, surface acoustic wave, and/or optical techniques. Many of these techniques are advantageous over ones requiring moving mechanical structures (e.g. mechanical switches) as they may have a substantially longer usable life.
For example, sensor(s) of the input device 116 may use multiple arrays or other patterns of capacitive sensor electrodes to support any number of sensing regions 118. As another example, the sensor may use capacitive sensing technology in combination with resistive sensing technology to support the same sensing region or different sensing regions. Examples of the types of technologies that may be used to implement the various embodiments of the invention may be found in U.S. Pat. Nos. 5,543,591, 5,648,642, 5,815,091, 5,841,078, and 6,249,234.
In some resistive implementations of input devices, a flexible and conductive top layer is separated by one or more spacer elements from a conductive bottom layer. A voltage gradient is created across the layers. Pressing the flexible top layer in such implementations generally deflects it sufficiently to create electrical contact between the top and bottom layers. These resistive input devices then detect the position of an input object by detecting the voltage output due to the relative resistances between driving electrodes at the point of contact of the object.
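As a hedged sketch of the arithmetic involved, assuming an idealized four-wire sensor with a uniform voltage gradient across the driven layer, the contact position along one axis can be recovered as a simple ratio:

```python
def resistive_position(v_measured, v_drive, length):
    """Idealized 4-wire resistive readout: the contact point divides the
    driven layer into two resistances proportional to distance, so the
    measured voltage is a linear function of position along the axis.
    All values here are illustrative assumptions."""
    return length * (v_measured / v_drive)

# Example: a 3.3 V gradient across a 100 mm layer; reading 1.1 V at the
# contact implies a touch about 33 mm from the grounded edge.
print(resistive_position(1.1, 3.3, 100.0))  # ~33.3
```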
In some inductive implementations of input devices, the sensor picks up loop currents induced by a resonating coil or pair of coils, and uses some combination of the magnitude, phase, and/or frequency to determine distance, orientation, or position.
In some capacitive implementations of input devices, a voltage is applied to create an electric field across a sensing surface. These capacitive input devices detect the position of an object by detecting changes in capacitance caused by the changes in the electric field due to the object. The sensor may detect changes in voltage, current, or the like.
As an example, some capacitive implementations utilize resistive sheets, which may be uniformly resistive. The resistive sheets are electrically (usually ohmically) coupled to electrodes that receive signals from the resistive sheet. In some embodiments, these electrodes may be located at corners of the resistive sheet, provide current to the resistive sheet, and detect current drawn away by input objects via capacitive coupling to the resistive sheet. In other embodiments, these electrodes are located at other areas of the resistive sheet, and drive or receive other forms of electrical signals. Depending on the implementation, sometimes the sensor electrodes are considered to be the resistive sheets, the electrodes coupled to the resistive sheets, or the combinations of electrodes and resistive sheets.
As another example, some capacitive implementations utilize transcapacitive sensing methods based on the capacitive coupling between sensor electrodes. Transcapacitive sensing methods are sometimes also referred to as “mutual capacitance sensing methods.” In one embodiment, a transcapacitive sensing method operates by detecting the electric field coupling one or more transmitting electrodes with one or more receiving electrodes. Proximate objects may cause changes in the electric field, and produce detectable changes in the transcapacitive coupling. Sensor electrodes may transmit as well as receive, either simultaneously or in a time multiplexed manner. Sensor electrodes that transmit are sometimes referred to as the “transmitting sensor electrodes,” “driving sensor electrodes,” “transmitters,” or “drivers”—at least for the duration when they are transmitting. Other names may also be used, including contractions or combinations of the earlier names (e.g. “driving electrodes” and “driver electrodes”). Sensor electrodes that receive are sometimes referred to as “receiving sensor electrodes,” “receiver electrodes,” or “receivers”—at least for the duration when they are receiving. Similarly, other names may also be used, including contractions or combinations of the earlier names. In one embodiment, a transmitting sensor electrode is modulated relative to a system ground to facilitate transmission. In another embodiment, a receiving sensor electrode is not modulated relative to system ground to facilitate receipt.
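A minimal sketch of how a transcapacitive image might be formed, assuming raw per-(transmitter, receiver) measurements and a stored no-touch baseline (both hypothetical data shapes), follows:

```python
# Hedged sketch of transcapacitive ("mutual capacitance") imaging: each
# transmitter is driven in turn while all receivers are measured, and
# an input object appears as a change from the no-touch baseline.
def delta_image(baseline, measured):
    """Per-(transmitter, receiver) change in coupling; `baseline` and
    `measured` are equally sized 2D lists of raw capacitance values."""
    return [
        [base - meas for base, meas in zip(b_row, m_row)]
        for b_row, m_row in zip(baseline, measured)
    ]
```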
In FIG. 1, the input device 116 also includes a processing system 119 in communication with the sensor.
The processing system 119 may provide electrical or electronic indicia based on positional information of input objects (e.g. input object 114) to the electronic system 100. In some embodiments, input devices use associated processing systems to provide electronic indicia of positional information to electronic systems, and the electronic systems process the indicia to act on inputs from users. One example system response is moving a cursor or other object on a display, and the indicia may be processed for any other purpose. In such embodiments, a processing system may report positional information to the electronic system constantly, when a threshold is reached, in response to criteria such as an identified stroke of object motion, or based on any number and variety of criteria. In some other embodiments, processing systems may directly process the indicia to accept inputs from the user, and cause changes on displays or some other actions without interacting with any external processors.
In this specification, the term “processing system” is defined to include one or more processing elements that are adapted to perform the recited operations. Thus, a processing system (e.g. the processing system 119) may comprise all or part of one or more integrated circuits, firmware code, and/or software code that receive electrical signals from the sensor and communicate with its associated electronic system (e.g. the electronic system 100). In some embodiments, all processing elements that comprise a processing system are located together, in or near an associated input device. In other embodiments, the elements of a processing system may be physically separated, with some elements close to an associated input device, and some elements elsewhere (such as near other circuitry for the electronic system). In this latter embodiment, minimal processing may be performed by the processing system elements near the input device, and the majority of the processing may be performed by the elements elsewhere, or vice versa.
Furthermore, a processing system (e.g. the processing system 119) may be physically separate from the part of the electronic system (e.g. the electronic system 100) that it communicates with, or the processing system may be implemented integrally with that part of the electronic system. For example, a processing system may reside at least partially on one or more integrated circuits designed to perform other functions for the electronic system aside from implementing the input device.
In some embodiments, the input device is implemented with other input functionality in addition to any sensing regions. For example, the input device 116 of FIG. 1 may be implemented with buttons or other input structures located near the sensing region 118.
Likewise, any positional information determined by the processing system may be any suitable indicia of object presence. For example, processing systems may be implemented to determine “zero-dimensional” 1-bit positional information (e.g. near/far or contact/no contact) or “one-dimensional” positional information as a scalar (e.g. position or motion along a sensing region). Processing systems may also be implemented to determine multi-dimensional positional information as a combination of values (e.g. two-dimensional horizontal/vertical axes, three-dimensional horizontal/vertical/depth axes, angular/radial axes, or any other combination of axes that span multiple dimensions), and the like. Processing systems may also be implemented to determine information about time or history.
Furthermore, the term “positional information” as used herein is intended to broadly encompass absolute and relative position-type information, and also other types of spatial-domain information such as velocity, acceleration, and the like, including measurement of motion in one or more directions. Various forms of positional information may also include time history components, as in the case of gesture recognition and the like. As will be described in greater detail below, positional information from processing systems may be used to facilitate a full range of interface inputs, including use of the proximity sensor device as a pointing device for cursor control, scrolling, and other functions.
In some embodiments, an input device such as the input device 116 is adapted as part of a touch screen interface. Specifically, a display screen is overlapped by at least a portion of a sensing region of the input device, such as the sensing region 118. Together, the input device and the display screen provide a touch screen for interfacing with an associated electronic system. The display screen may be any type of electronic display capable of displaying a visual interface to a user, and may include any type of LED (including organic LED (OLED)), CRT, LCD, plasma, EL or other display technology. When so implemented, the input devices may be used to activate functions on the electronic systems. In some embodiments, touch screen implementations allow users to select functions by placing one or more objects in the sensing region proximate an icon or other user interface element indicative of the functions. The input devices may be used to facilitate other user interface interactions, such as scrolling, panning, menu navigation, cursor control, parameter adjustments, and the like. The input devices and display screens of touch screen implementations may share physical elements extensively. For example, some display and sensing technologies may utilize some of the same electrical components for displaying and sensing.
It should be understood that while many embodiments of the invention are described herein in the context of a fully functioning apparatus, the mechanisms of the present invention are capable of being distributed as a program product in a variety of forms. For example, the mechanisms of the present invention may be implemented and distributed as a sensor program on computer-readable media. Additionally, the embodiments of the present invention apply equally regardless of the particular type of computer-readable medium used to carry out the distribution. Examples of computer-readable media include various discs, memory sticks, memory cards, memory modules, and the like. Computer-readable media may be based on flash, optical, magnetic, holographic, or any other storage technology.
Referring now to FIG. 2, an exemplary arrangement is shown in which user input to a touch sensor 216 is provided to a 3D application program 214 through a kernel mode driver 210 and a multi-dimensional command driver 212.
The kernel mode driver 210 is typically part of the operating system, and includes a device driver module (not shown) that acquires data from the touch sensor 216. For example, a MICROSOFT WINDOWS operating system may provide built-in kernel mode drivers for acquiring data packets of particular types from input devices. Any of the communications and connections discussed above can be used in transferring data between the kernel mode driver 210 and the touch sensor 216, and oftentimes USB or PS/2 is used.
The multi-dimensional command driver 212, which may also include a device driver module (not shown), receives the data from the touch sensor 216. The multi-dimensional command driver 212 also usually executes the following computational steps. The multi-dimensional command driver 212 interprets the user input, such as a multi-finger gesture. For example, the multi-dimensional command driver 212 may determine the number of finger touch points by counting the number of input objects sensed or by distinguishing finger touches from touches by other objects. As other examples, the multi-dimensional command driver 212 may determine local positions or trajectories of each object sensed or a subset of the objects sensed. For example, a subset of the objects may consist of a specific type of input object, such as fingers. As another example, the multi-dimensional command driver 212 may identify particular gestures such as finger taps.
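The following sketch illustrates interpretation steps of this kind; the packet format, thresholds, and helper names are assumptions for illustration, not the driver's actual interface:

```python
import math

# Illustrative sketch of the interpretation steps attributed to the
# multi-dimensional command driver; the packet format is hypothetical.
def interpret(packets):
    """`packets` maps an object id to its time-ordered (x, y) samples."""
    count = len(packets)                      # number of objects sensed
    trajectories = {}
    for obj_id, samples in packets.items():
        (x0, y0), (x1, y1) = samples[0], samples[-1]
        trajectories[obj_id] = (x1 - x0, y1 - y0)  # net local motion
    # A short, nearly motionless contact may be classified as a tap;
    # the thresholds here are arbitrary placeholders.
    taps = [o for o, (dx, dy) in trajectories.items()
            if math.hypot(dx, dy) < 2.0 and len(packets[o]) < 10]
    return count, trajectories, taps
```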
The multi-dimensional command driver 212 of FIG. 2 then translates the interpreted user input into multi-dimensional commands for the 3D application program 214.
If the 3D application program 214 does not recognize the touch sensor data as standard input data, then the multi-dimensional command driver 212 or another part of the system may translate the data for the 3D application program 214. For example, the multi-dimensional command driver 212 may send specific messages to the operating system, which then directs the 3D application program 214 to execute the multi-dimensional commands. These specific messages may emulate messages of keyboards, mice, or some other device that the operating system understands. In such a case, the 3D application program 214 processes the directions from the operating system as if they were from the emulated device(s). This approach enables the control of the 3D application program 214 (e.g. to update a 3D rendering process) according to user inputs understood by the multi-dimensional command driver 212, even if the 3D application program 214 is not specifically programmed to operate with the multi-dimensional command driver 212 or the touch sensor 216.
Although sensing regions and displays are in a separate, non-overlapping configuration in most embodiments, the sensing region of the input device 316 may be overlapped with the display that it is configured to control in some embodiments.
The input device 316 can be used for mouse-equivalent 2D commands. The notebook computer may have other input options that are not shown, such as keys typically found in keyboards, mechanical or capacitive switches, and buttons associated with the input device 316 for emulating left and right mouse buttons. The input device 316 generally accepts input by a single finger for 2D control, although it may accept single-finger input for controlling degrees of freedom in other dimensional spaces (e.g. in a single dimension, in three dimensions, or in some other number of dimensions). In some embodiments, mode switching input to the input device 316 or some other part of the system 300 is used to switch between 2D and 3D control modes, or between different 3D control modes.
In a 3D control mode, the input device 316 may be used to control multiple degrees of freedom of a display shown by the display screen. The multiple degrees of freedom controlled may be within any reference system associated with the display. Three such reference systems are shown in FIG. 3. Reference system 320 is a screen-based system, with three orthogonal axes defined relative to the display screen 312.
Reference system 322 also has three orthogonal axes (Axis 1″, Axis 2″, and Axis 3″) that define a 3D coordinate system. Reference system 322 is a viewpoint-based system. That is, 3D control commands using reference system 322 control how the associated viewpoint moves. As the viewpoint rotates, for example, the reference system 322 also rotates.
Reference system 324 has three orthogonal axes (Axis 1, Axis 2, and Axis 3) that define a 3D coordinate system. Reference system 324 is an object-based system, as indicated by the controlled object 318. Here, controlled object 318 is part or all of a display. Specifically, controlled object 318 is shown as a box with differently-shaded sides presented by display screen 312. 3D control commands using reference system 324 control how the controlled object 318 moves. As controlled object 318 rotates, for example, the reference system 324 also rotates. That is, the reference system 324 rotates with the controlled object 318.
In some cases where the reference system is mapped to a Cartesian system, Axis 1 may be associated with “X,” Axis 2 may be associated with “Z,” and Axis 3 may be associated with “Y.” In some of those cases, rotation about Axis 1 may be referred to as “Pitch” or “rotation about the X-axis,” rotation about Axis 2 may be referred to as “Yaw” or “rotation about the Z-axis,” and rotation about Axis 3 may be referred to as “Roll” or “rotation about the Y-axis.”
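For reference, that mapping can be summarized in a small lookup table (a sketch; the naming is only one possible convention and is not from the original text):

```python
# Sketch of the Cartesian mapping described above; the axis names and
# tuple contents are illustrative.
AXIS_TO_CARTESIAN = {
    "Axis 1": ("X", "Pitch"),  # rotation about Axis 1 = pitch
    "Axis 2": ("Z", "Yaw"),    # rotation about Axis 2 = yaw
    "Axis 3": ("Y", "Roll"),   # rotation about Axis 3 = roll
}
```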
Although the above examples use reference systems with orthogonal axes, other reference systems with non-orthogonal axes may be used, as long as the axes define a 3D space.
The discussion that follows often uses object-based reference systems for ease and clarity of explanation. However, other reference systems, including those based on display screens (e.g. reference system 320) or viewpoints (e.g. reference system 322), can also be used. Similarly, although system 300 is shown as a notebook computer, the embodiments described below can be implemented in any appropriate electronic system.
Some embodiments enable users to define or modify the types of inputs that would cause particular degree of freedom responses. For example, various embodiments enable users to swap the type of gesture that causes rotation about one axis with one or more of the types of gesture that cause rotation about the other two axes. As a specific example, in some cases of 3D navigation in computer graphics applications, rotation about Axis 2 or its analog may be used rarely. It may be useful to enable users or applications to re-associate the gesture usually associated with rotation about Axis 2 (e.g. motion of multiple objects along Dir 1) with rotation about Axis 3. This different association may be preferred by some users for efficiency, ergonomics, or other reasons.
FIGS. 6a-6c illustrate two different ways that multiple input objects may move in sensing regions to cause translation along Axis 3.
FIG. 6c shows an alternate input usable by some embodiments for causing translation along Axis 3.
Some embodiments use the pinching gestures for controlling translation along Axis 3, some embodiments use the movement of four input objects for controlling translation along Axis 3, and some embodiments use both. Thus, in operation, the input device 316 may indicate translation along a third axis of the display. The third axis may be substantially orthogonal to the display. This indication may be provided in response to a determination that the user input comprises a change in separation distance of multiple input objects. Alternatively, this indication may be provided in response to a determination that the user input comprises four input objects simultaneously moving in a trajectory that brings them closer to or further away from the display screen.
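A minimal sketch of the pinch-based variant, assuming two tracked input objects and a hypothetical gain parameter, might compute the Axis 3 translation from the change in separation distance:

```python
import math

# Hedged sketch: translation along Axis 3 driven by the change in
# separation distance of two input objects (a pinch), per the text.
def axis3_translation(prev_positions, curr_positions, gain=1.0):
    """Positions are (x, y) pairs for two input objects; a growing
    separation maps to translation one way, a shrinking separation
    to the other. The gain is an illustrative assumption."""
    def separation(pts):
        (x0, y0), (x1, y1) = pts
        return math.hypot(x1 - x0, y1 - y0)
    return gain * (separation(curr_positions) - separation(prev_positions))
```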
Again, although the above discusses control of translational degrees of freedom using object-based reference systems (with Axis 1, Axis 2, and Axis 3), that is done for clarity of explanation. Analogies can be drawn for other reference systems, such that the same or similar input results in translation along axes of those other reference systems instead. For example, the same input may result in translation along axes of reference systems based on one or more viewpoints (e.g. reference system 322 of FIG. 3).
User input does not always involve object motion exactly parallel to the reference directions or reference axes. When faced with such input, the system may respond in a variety of ways.
FIG. 7a shows an input object 730 moving along a path 731 not parallel to either Dir 1 or Dir 2. Instead, path 731 has components along both Dir 1 and Dir 2.
FIG. 7c shows an alternate response to the input depicted in FIG. 7a.
FIG. 8a shows an input object 830 moving in a path 831 that is not linear. Instead, the path 831 has a direction that changes over time, such that a squiggly path is traced by the input object 830. With some embodiments, the system may respond by determining a predominant direction of travel, and producing translation of the controlled object 318 along the axis associated with the predominant direction. This is shown in FIG. 8b.
The amount of the input's component in Dir 2 may be determined from the separate components that the different input objects 930 and 932 have along Dir 2. For example, the amount of the input's component may be a mean, max, min, or some other function or selection of the separate components of paths 931 and 933. The relationship between the amount of the component in the second direction and the rotation may involve any appropriate aspect of the rotation, including quantity, speed, or direction. The relationship may also be linear (e.g. proportional), piecewise linear (e.g. different proportional relationships over different ranges), or non-linear (e.g. exponential, curvy, or stair-stepped increases as components reach different levels).
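The following sketch combines both ideas: aggregating the per-object components along Dir 2 and applying a piecewise-linear mapping to a rotation quantity. The aggregation modes and the breakpoint values are illustrative assumptions:

```python
# Sketch of aggregating per-object components along Dir 2 and mapping
# the result to a rotation quantity; the mapping shape is illustrative.
def rotation_about_axis1(components_dir2, mode="mean"):
    agg = {
        "mean": lambda c: sum(c) / len(c),
        "max": max,
        "min": min,
    }[mode](components_dir2)
    # Example piecewise-linear mapping: a gentler response for small
    # components, a steeper response past a (hypothetical) breakpoint.
    if abs(agg) <= 10:
        return 0.5 * agg
    sign = 1 if agg > 0 else -1
    return sign * (5 + 2.0 * (abs(agg) - 10))  # continuous at |agg| == 10
```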
Embodiments of the invention may use any or all of the different ways of causing rotation about Axis 3 as discussed above. Whatever the method used, most embodiments would cause rotation about Axis 3 in the opposite direction (e.g. negative rotation about Axis 3) if the input objects are moved in an opposite way. One example is moving input objects 1130 and 1132 clockwise instead of counterclockwise. Another example is moving input object 1136 clockwise instead of counterclockwise. Yet another example is holding input object 1136 substantially still while moving input object 1134. A further example is moving input objects 1138 and 1140 clockwise instead of counterclockwise.
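One way such circular motion could be classified, sketched here under the assumption of standard mathematical coordinates (y increasing upward; screen coordinates would flip the sign), is to accumulate the signed angle swept about the centroid of an object's path:

```python
import math

# Hedged sketch: classify motion as clockwise or counterclockwise by
# accumulating the signed angle swept about the path's centroid.
def signed_rotation(samples):
    """`samples` is a time-ordered list of (x, y) positions of one
    input object; returns the total swept angle (positive = CCW)."""
    cx = sum(x for x, _ in samples) / len(samples)
    cy = sum(y for _, y in samples) / len(samples)
    angles = [math.atan2(y - cy, x - cx) for x, y in samples]
    total = 0.0
    for a0, a1 in zip(angles, angles[1:]):
        d = a1 - a0
        # Unwrap across the -pi/+pi discontinuity.
        if d > math.pi:
            d -= 2 * math.pi
        elif d < -math.pi:
            d += 2 * math.pi
        total += d
    return total
```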
Analogous to what is discussed in association with FIGS. 7a-7c, embodiments may respond in a variety of ways when multiple input objects move in directions not parallel to the reference directions.
For example, in various embodiments, the input device 316 may determine if an input gesture comprises multiple input objects concurrently traveling predominantly along a second (or first) direction, and cause rotation about the first (or second) axis of the display if the gesture is determined to comprise the multiple input objects concurrently traveling predominantly along the second (or first) direction. Determining if the input objects are traveling predominantly along the second direction (or the first direction) may be accomplished in many different ways. Non-limiting examples include comparing the travel of the multiple input objects with the second direction (or the first direction), examining a ratio of the input objects' travel in the first and second directions, or determining that the predominant direction is not the first direction (or the second direction).
As another example, in various embodiments, the input device 316 may determine an amount of rotation about the first axis based on an amount of travel of the multiple input objects along the second direction, and determine an amount of rotation about the second axis based on an amount of travel of the multiple input objects along the first direction. With such an approach, multiple input objects concurrently traveling along both the second and first directions would cause rotation about both the first and second axes.
Again, although the above discusses control of rotational degrees of freedom using object-based reference systems (with Axis 1, Axis 2, and Axis 3), that is done for clarity of explanation. Analogies can be drawn for other reference systems, such that the same or similar input results in rotation about axes of those other reference systems instead.
For example, in many embodiments, if the input objects initiate input and then move into specified region(s), then the system may respond by continuing to control the degree of freedom that was last changed. In some embodiments, that is accomplished by repeating the command last generated before the input objects reached the specified region(s). In other embodiments, that is accomplished by repeating one of the commands that was generated shortly before the input objects reached the specified region(s). The regions may be defined in various ways, including during design or manufacture, by the electronic system or applications running on the electronic system, by user selection, and the like. Some embodiments enable users or applications to define some or all aspects of these regions.
In some embodiments, the extension of the translation along Axis 3 is in response to user input that starts in an inner region and then reaches and remains in the extension regions. To produce the actual extended translation, the system may monitor the trajectories of the input objects, and generate continued translation using a last speed of movement. In some embodiments, the input device 316 is configured to indicate continued translation along the third axis of the display in response to a particular determination. Specifically, that particular determination includes ascertaining that the user input comprises the multiple input objects moving into and staying within extension regions after a change in separation distance of the multiple input objects (which may have resulted in earlier translation along the third axis). In many embodiments, the extension regions comprise opposing corner portions of the sensing region.
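A hedged sketch of such continuation logic, with the extension regions approximated as corner portions of a rectangular sensing region (the fraction and geometry are assumptions), follows:

```python
# Sketch of continuation: once objects pinched and then parked in
# extension regions (approximated here as corner portions), keep
# emitting the last Axis 3 translation each frame.
def in_extension_region(pos, size, frac=0.15):
    """True if `pos` lies in a corner portion of a `size`-sized region."""
    x, y = pos
    w, h = size
    near_x = x < frac * w or x > (1 - frac) * w
    near_y = y < frac * h or y > (1 - frac) * h
    return near_x and near_y

def continue_translation(positions, size, last_delta):
    """Repeat the last Axis 3 translation while all objects stay parked
    in extension regions after an earlier pinch; 0.0 otherwise."""
    if positions and all(in_extension_region(p, size) for p in positions):
        return last_delta
    return 0.0
```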
Referring now to FIG. 13, some embodiments support continued translation along Axis 3 in response to multiple input objects converging within a region 1350 of the sensing region.
The system may calculate a dynamically changing region 1350. Alternatively, the system may monitor for a pinching inward input followed by the input objects coming within a threshold distance of each other. Alternatively, the system may look for input objects that move closer to each other and eventually merge into what appears to be a single larger input object. Thus, the region 1350 may not be specifically implemented with regional boundaries, but may be an abstraction of limitations on separation distances, or of increases in input object size accompanied by decreases in input object count.
Any of the ways discussed above to indicate extended or continued motion can also be used. For example, the system may monitor the trajectories of input objects 1430 and 1432 for this type of input history, and produce continued rotation using a speed of input object movement just before the input objects 1430 and 1432 entered the edge region 1452. As another example, the input device 316 may indicate continued rotation about the second axis in response to a particular determination. Specifically, the input device 316 may determine that the user input comprises multiple input objects moving into and staying in a set of continuation regions after the multiple input objects have moved with a component in the first direction. In many embodiments, the set of continuation regions are opposing portions of the sensing region.
Referring now to FIG. 15, continuation regions may be located at opposing outer portions of the sensing region.
Continuation and extension regions may be used separately or together.
Thus, some embodiments of input device 316 may have a single contiguous sensing region that comprises a first set of continuation regions and a second set of continuation regions. The first set of continuation regions may be located at first opposing outer portions of the single contiguous sensing region and the second set of continuation regions may be located at second opposing outer portions of the single contiguous sensing region. In operation, the input device 316 may cause rotation about the first axis in response to input objects moving into and staying in the first set of continuation regions after multiple input objects concurrently traveled along the second direction. Further, the input device 316 may cause rotation about the second axis in response to input objects moving into and staying in the second set of continuation regions after multiple input objects concurrently traveled along the first direction.
Some embodiments also have extension regions similar to those discussed above for enabling continued translation along the first axis, second axis, or both. For example, the input device 316 may cause continued translation along the first axis in response to an input object moving into and staying in a first set of extension regions after the input object has traveled along the first direction. Further, the input device 316 may cause continued translation along the second axis in response to an input object moving into and staying in a second set of extension regions after the input object has traveled along the second direction.
FIGS. 17a-17c show input devices with change-in-input-object-count continuation control capability, in accordance with embodiments of the invention. For example, changes in the number of input objects in the sensing region can be used to continue rotation. In some embodiments, an increase in the number of input objects that immediately or closely follows an earlier input for causing rotation about Axis 3 (not shown) results in continued rotation about Axis 3. The continued rotation about Axis 3 may continue for the duration in which the additional input object(s) stay in the sensing region. The continuation of rotation can be accomplished using any of the methods described above. For example, to continue rotation about Axis 3, the system may monitor for user input that comprises a first part involving at least one of a plurality of input objects moving in a circular manner and a second part involving at least one additional finger entering the sensing region. As another example, the input device 316 may indicate continued rotation about a first axis in response to a particular determination. Specifically, the system may determine that the user input comprises an increase in a count of input objects in the sensing region. The increase in the count of input objects may be referenced to a count of input objects associated with the contemporaneous motion of the multiple input objects that caused rotation about the first axis (e.g. having a component in the first direction, in some embodiments). The input device 316 may use timers, counters, and the like to impose particular time requirements by which additional input objects may be added to continue rotation. For example, at least one input object may need to be added within a reference amount of time. As another example, at least two input objects may need to be added within a particular reference amount of time.
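The following sketch shows one possible state machine for this count-increase continuation; the time window and the per-frame update interface are illustrative assumptions:

```python
import time

# Sketch of count-increase continuation: extra object(s) added shortly
# after a rotation gesture keep the rotation going while they remain.
class RotationContinuation:
    def __init__(self, window_s=0.5):
        self.window_s = window_s       # assumed time window for adding
        self.base_count = 0
        self.gesture_time = None
        self.continuing = False

    def on_rotation_gesture(self, object_count):
        self.base_count = object_count          # count during the gesture
        self.gesture_time = time.monotonic()
        self.continuing = False

    def update(self, current_count):
        """Call once per sensing frame; returns True while rotation
        should continue."""
        if self.gesture_time is None:
            return False
        if not self.continuing:
            # Continuation starts only if the count rises within the window.
            in_window = time.monotonic() - self.gesture_time <= self.window_s
            self.continuing = in_window and current_count > self.base_count
        elif current_count <= self.base_count:
            self.continuing = False             # extra object(s) left
        return self.continuing
```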
FIG. 17a shows the prior presence of input objects 1730 and 1732, which already performed a gesture that caused rotation, and the addition of input object 1734 to continue the rotation.
In many embodiments, input device 316 supports more than a single multi-degree of freedom control mode. To facilitate this, input device 316 or the electronic system in operative communication with input device 316 may be configured to accept mode-switching input to switch from a multi-degree of freedom control mode to one or more other modes. The other modes may be another multi-degree of freedom control mode with the same or a different number of degrees of freedom (e.g. to a 2-D mode, to another reference system, to manipulate a different object, etc.) or a mode for other functions (e.g. menu navigation, keyboard emulation, etc.). Different mode-switching input may be defined to switch to particular modes, or the same mode-switching input may be used to toggle between modes.
Being able to switch between different control modes may enable users to use the same input device 316 and similar gestures to control environments with more than six degrees of freedom. One example of a 3D environment with more than six degrees of freedom is the control of a wheeled robot with a camera, a vehicle body, and a manipulation arm. A moveable camera view of the robot environment may involve five DOF (e.g. 3D translation, plus rotation about two of the axes). A simple robot vehicle may involve at least three DOF (e.g. 2D translation, plus rotation about one axis) and a simple robot arm may involve two DOF (e.g. rotation about two axes). Thus, control of this robot and camera view of the environment involves at least three different controllable objects (and thus at least three potential reference systems, if reference systems specific to each controlled object are used) and ten degrees of freedom. To facilitate user control of this 3D environment, the system may be configured to have at least a camera view mode, a vehicle mode, and a robot arm mode between which the user can switch.
As a specific example of mode switching, an input device 316 may have a default input mode for emulating a conventional 2D computer mouse. Switching from this 2D mouse emulation mode to a 6 DOF control mode may require a specific gesture input to the input device 316. The specific gesture input may comprise two fingers touching two corners of the sensing region of input device 316 simultaneously. Repeating the specific gesture input may switch back to the conventional 2D mouse emulation mode. After switching away from the 2D mouse emulation mode, the input device 316 may temporarily suppress mouse emulation outputs (e.g. mouse data packets).
Other examples of mode-switching input options include at least one input object tapping more than 3 times, at least three input objects entering the sensing region, and the actuation of a key. The mode-switching input may be qualified by other criteria. For example, the at least one input object may be required to tap more than 3 times within a certain duration of time. As another example, at least three input objects entering the sensing region may mean multiple fingers simultaneously entering the sensing region multiple times, such as exactly 5 input objects entering the sensing region. As another example, the actuation of a key may mean a specific type of actuation of a specific key, such as a double click or a triple click of a key such as the “CONTROL” key on a keyboard.
In operation, the input device 316 may be configured to indicate or enter a particular 3-dimensional degree of freedom control mode in response to a determination that the user input comprises a mode-switching input. The mode-switching input may comprise multiple input objects simultaneously in specified portions of the single contiguous sensing region. As an alternative or an addition, the mode-switching input may comprise at least one input object tapping more than 3 times in the sensing region, at least three input objects substantially simultaneously entering the sensing region, an actuation of a mode-switching key, or any combination thereof.
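As a sketch of one such determination, the check below tests whether exactly two input objects are simultaneously in corner portions of a rectangular sensing region; the corner fraction and geometry are assumptions:

```python
# Hedged sketch of one mode-switching check from the text: two objects
# simultaneously touching specified portions (here, corner portions)
# of the sensing region.
def is_corner_mode_switch(positions, size, frac=0.1):
    w, h = size
    corners = [(0, 0), (w, 0), (0, h), (w, h)]
    def near_corner(pos):
        x, y = pos
        return any(abs(x - cx) < frac * w and abs(y - cy) < frac * h
                   for cx, cy in corners)
    return len(positions) == 2 and all(near_corner(p) for p in positions)
```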
The input device 316 or the electronic system associated with it may provide feedback to indicate the mode change, the active control mode, or both. The feedback may be audio, visual, directed at some other sense of the user, or a combination thereof. For example, if the input device 316 is set up as a touch screen, such that the sensing region is overlapped with a display screen that can display graphical images visible through the sensing region, then visual feedback may be provided relatively readily.
Returning to the robot example described above, the control mode may be switched from a conventional 2D mouse mode to a camera view control mode. The touch screen may display an image of a camera to indicate that the currently selected control mode is the camera view control mode. In the camera view control mode, user input by single or multiple input objects may be used to control the 5 DOF of the camera view. The control mode may then be changed from the camera view control mode to the vehicle control mode by a mode-switching input, such as simultaneous input to two corners of the sensor pad. In response, the system mode changes to the vehicle control mode and the touch screen may display an image of a vehicle to indicate that the currently selected control mode is the vehicle control mode. Depending on the embodiment, the same or a different mode-switching input may be used to change the control mode from the vehicle control mode to the robot arm control mode. The touch screen may display an image of a robot arm to indicate that the currently selected control mode is the robot arm control mode.
Given the capabilities of a touch screen implementation, the image displayed through the sensing region can be made to interact with user input. For example, the image may allow user selection of particular icons or options displayed on the touch screen. As a specific example, if a robot has many arm components, each with its own set of DOF, the image may be rendered interactive so that users can select which arm component is to be controlled by interacting with the touch screen. Where the robot has a top arm component and a bottom arm component, the touch screen may display a picture with the entire arm. The user may select the bottom arm component by inputting to the part of the sensing region corresponding to the bottom arm component. Visual feedback may be provided to indicate the selection to the user. For example, the touch screen may display a color change to the bottom arm component or some other item displayed after user selection of the bottom arm component. After selection of the bottom arm component, the user may rotate the bottom arm component by using rotation input such as the sliding of two fingers in the sensing region of the input device 316.
As shown in FIG. 22, some embodiments cause rotation about different axes in response to input object movement in different regions of the sensing region of the input device 316.
Rotation about Axis 1 can be caused by input object movement (e.g. along arrow 2251) in an edge region 2250 (e.g. along a left edge) of input device 316. In some embodiments, input device 316 may require that the object motion stay in edge region 2250 for rotation about Axis 1 to occur, although that need not be the case. Rotation about Axis 2 can be caused by input object movement (e.g. along arrow 2253) in an edge region 2252 (e.g. along a bottom edge, sometimes referred to as a back edge, as it is often farther from an associated display screen) of input device 316. In some embodiments, input device 316 may require that the object motion stay in edge region 2252 for rotation about Axis 2 to occur, although that need not be the case. Rotation about Axis 3 can be caused by input object movement (e.g. along arrow 2255) in a circular trajectory on the sensor pad. In some embodiments, input device 316 may require that the object motion stay in inner region 1660 (and outside of edge regions 2250, 2252, and 2254) for rotation about Axis 3 to occur, although that need not be the case.
Referring now to FIG. 23, a method 2300 for controlling multiple degrees of freedom of a display in response to user input in a sensing region separate from the display is shown.
As discussed above, different embodiments may perform the steps of method 2300 in a different order, repeat some steps but not others, or have additional steps.
For example, an embodiment may also include a step to indicate a quantity of translation along a second axis of the display in response to a determination. This determination may be that the user input comprises motion of a single input object having a component in the second direction. The second axis may be substantially orthogonal to the first axis, and the quantity of translation along the second axis of the display may be based on an amount of the component of the single input object in the second direction.
An embodiment may also include a step to indicate rotation about the second axis of the display in response to a determination. This determination may be that the user input comprises contemporaneous motion of multiple input objects all having a component in the first direction. The rotation about the second axis of the display may be based on an amount of the component of the multiple input objects in the first direction.
As another example of potential additional steps, embodiments may include a step to indicate translation along a third axis of the display in response to a determination that the user input comprises a change in separation distance of multiple input objects. The third axis may be substantially orthogonal to the display, if the display includes a substantially planar surface. As an alternative or an addition, embodiments may include a step to indicate rotation about the third axis of the display in response to a determination that the user input comprises circular motion of at least one input object of a plurality of input objects in the sensing region.
Embodiments may include a step to indicate continued translation along the third axis of the display in response to a determination of a continuation input. The continuation input may comprise multiple input objects moving into and staying within extension regions after a change in separation distance of the multiple input objects. The extension regions may comprise opposing corner portions of the sensing region.
Embodiments may include a step to indicate continued rotation about the first axis in response to a determination of a continuation input. The continuation input may comprise multiple input objects moving into and staying in one of a set of continuation regions after motion of the multiple input objects having the component in the second direction. The set of continuation regions may comprise opposing portions of the sensing region. As an alternative or an addition, the continuation input may comprise an increase in a count of input objects in the sensing region. The increase in the count of input objects may be referenced to a count of input objects associated with contemporaneous motion of the multiple input objects having the component in the first direction.
Embodiments may include a step to indicate a particular 3-dimensional degree of freedom control mode in response to a determination that the user input comprises a mode-switching input.
Referring now to FIG. 24, a method 2400 for controlling multiple degrees of freedom of a display using a single contiguous sensing region of a sensing device is shown.
As discussed above, different embodiments may perform the steps of method 2400 in a different order, repeat some steps but not others, or have additional steps.
In some embodiments, the first and second axes are substantially orthogonal to each other, and the first and second directions are substantially orthogonal to each other. Also, an amount of rotation about the first axis may be based on a distance of travel of the multiple input objects along the second direction, and an amount of rotation about the second axis may be based on a distance of travel of the multiple input objects along the first direction.
In some embodiments, the display is substantially planar, the first and second axes are substantially orthogonal to each other and define a plane substantially parallel to the display, and the third axis of the display is substantially orthogonal to the display. Also, some embodiments may include the step of causing translation along the first axis of the display if the gesture is determined to comprise a single input object traveling along the first direction. An amount of translation along the first axis may be based on a distance of travel of the single input object along the first direction. As an alternative or an addition, some embodiments may include the step of causing translation along the second axis of the display if the gesture is determined to comprise a single input object traveling along the second direction. Similarly, an amount of translation along the second axis may be based on a distance of travel of the single input object along the second direction. Also, embodiments may include the step of causing translation along the third axis of the display if the gesture is determined to comprise a change in separation distance of multiple input objects with respect to each other, or at least four input objects concurrently moving substantially in a same direction.
Embodiments may determine that a type of gesture that comprises multiple input objects comprises circular motion of at least one of the multiple input objects, such that embodiments may cause rotation about the third axis of the display if the gesture is determined to comprise circular motion of at least one of the multiple input objects.
In response to gestures that include object motion along both first and second directions, some embodiments may cause the result associated with the predominant direction of the object motion. That is, some embodiments may determine if the gesture comprises multiple input objects concurrently traveling predominantly along the second (or first) direction, such that rotation about the first (or second) axis of the display occurs only if the gesture is determined to comprise the multiple input objects concurrently traveling predominantly along the second (or first) direction. Determining object motion as predominantly along the second (or first) direction may comprise determining that object motion is not predominantly along the first (or second) direction. The first and second directions may be pre-defined.
In response to gestures that include object motion along both first and second directions, some embodiments may cause the result that mixes responses associated with object motion in the first direction and object motion in the second direction. That is, some embodiments may determine an amount of rotation about the first axis based on an amount of travel of the multiple input objects along the second direction, and determine an amount of rotation about the second axis based on an amount of travel of the multiple input objects along the first direction. The amount of rotation determined in the first and second axes may be superimposed or combined in some other manner such that multiple input objects concurrently traveling along both the second and first directions causes rotation about both the first and second axes. Some embodiments may filter out or disregard smaller object motion in the second direction if the primary direction of travel is in the first direction (or vice versa), such that mixed rotation responses do not result from input that is substantially in the first direction (or the second direction).
Embodiments may also have continuation regions for continuing rotation. Some embodiments have sensing regions that comprise a first set of continuation regions. The first set of continuation regions may be at first opposing outer portions of the sensing region. Such embodiments may include the step of causing rotation about the first axis in response to input objects moving into and staying in the first set of continuation regions after multiple input objects concurrently traveled along the second direction. Some embodiments also have a second set of continuation regions. The second set of continuation regions may be at second opposing outer portions of the single contiguous sensing region. Such embodiments may include the step of causing rotation about the second axis in response to input objects moving into and staying in the second set of continuation regions after multiple input objects concurrently traveled along the first direction.
Embodiments may also be configured to continue rotation, even if no further object motion occurs, in response to an increase in input object count. For example, embodiments may include the step of causing continued rotation in response to an increase in a count of input objects in the single contiguous sensing region after multiple input objects concurrently traveled in the single contiguous sensing region.
Embodiments may be configured to continue translation along the third axis in response to input in corner regions or multiple input objects converging in the same region. For example, an embodiment may have a sensing region that comprises a set of extension regions at diagonally opposing corners of the sensing region. The embodiment may comprise the additional step of causing continued translation along the third axis of the display in response to input objects moving into and staying in the extension regions after a prior input associated with causing translation along the third axis. Such a prior input may comprise multiple input objects having moved relative to each other in the sensing region such that a separation distance of the multiple input objects with respect to each other changes. As an alternative or an addition, an embodiment may include the step of causing continued translation along the third axis of the display in response to input objects moving into and staying in a same portion of the single contiguous sensing region after a prior input associated with translation along the third axis.
Embodiments may also have mode-switching capability (e.g. switching to a 2D control mode, another 3D control mode, some other multi-degree of freedom control mode, or some other mode), and include the step of entering a particular 3-dimensional degree of freedom control mode in response to a mode-switching input. The mode-switching input may comprise multiple input objects simultaneously in specified portions of the single contiguous sensing region. This mode switching input may be detected by the embodiments watching for multiple input objects substantially simultaneously entering specified portions of the single contiguous sensing region, multiple input objects substantially simultaneously tapping in specified portions of the single contiguous sensing region, multiple input objects substantially simultaneously entering and leaving corners of the single contiguous sensing region, and the like. As an alternative or an addition, the mode-switching input may comprise at least one input selected from the group consisting of: at least one input object tapping more than 3 times in the single contiguous sensing region, at least three input objects substantially simultaneously entering the single contiguous sensing region, and an actuation of a mode-switching key.
The methods described above may be implemented in a proximity sensing device having a single contiguous sensing region. The single contiguous sensing region is usable for controlling multiple degrees of freedom of a display separate from the single contiguous sensing region. The proximity sensing device may comprise a plurality of sensor electrodes configured for detecting input objects in the single contiguous sensing region. The proximity sensing device may also comprise a controller in communicative operation with the plurality of sensor electrodes. The controller is configured to practice any or all of the steps described above in various embodiments of the invention.
This application claims priority of U.S. Provisional Patent Application Ser. No. 61/127,139, which was filed on May 9, 2008, and is incorporated herein by reference.