This relates to multi-touch gestures in general, and more specifically to simulating multi-touch gestures utilizing a single pointing input device.
A multi-point sensor panel is a panel that can sense multiple point events at the same time. Thus, a multi-point sensor panel can, for example, sense two touch events that take place simultaneously at two different positions and are caused by two fingers or other objects being pressed against the panel. Examples of multi-point sensor panels are discussed in U.S. patent application Ser. No. 11/649,998, entitled “PROXIMITY AND MULTI-TOUCH SENSOR DETECTION AND DEMODULATION,” filed on Jan. 3, 2007 and hereby incorporated by reference in its entirety. As discussed in that application, multi-point sensor panels can include multi-touch sensor panels as well as other types of sensor panels (such as multi-proximity sensor panels). Multi-point sensor panels can be used to provide an improved user interface for various electronic devices.
One way to leverage multi-point sensor panels to provide an improved user experience is to allow users to communicate with the device using multi-point gestures. A gesture is a user input that does not merely specify a location (as is the case with an ordinary mouse click, for example), but can also specify a certain movement of an object or objects, optionally with a certain direction and velocity. For example, traditional mouse-based gestures typically require that a user press a mouse button and move the mouse along a predefined path in order to perform a gesture. Multi-touch functionality can allow for more complex gestures to be used. For example, a user can perform a gesture by moving two or more fingers on the surface of the panel simultaneously. Multi-point gestures (and more specifically multi-touch gestures) are discussed in more detail in U.S. patent application Ser. No. 10/903,964, entitled “GESTURES FOR TOUCH SENSITIVE INPUT DEVICES,” filed on Jul. 30, 2004 and hereby incorporated by reference in its entirety.
In order to obtain the full benefit of multi-touch gestures, software that runs on a multi-touch capable device may also need to be multi-touch capable. However, developing such software can be difficult. Existing computing platforms for developing software, such as ordinary personal computers and/or workstation computers, are usually not multi-touch capable. Without such capabilities, existing software development computers are usually unable to test the multi-touch capable software being developed on them.
A developer can load the software being developed on a multi-touch capable device and then test it there. However, in practice a developer may need to perform many repeated tests on different versions of the software, and having to load each version of the software to be tested on a separate device can prove to be very time consuming and can significantly slow down the development process.
This relates to allowing a computer system using a single pointing device to simulate multi-point gesture inputs. Simulating software can receive single pointing inputs (such as, for example, input from a mouse) and convert them to simulated multi-point gesture inputs such as finger pinches, reverse pinches, translations, rotations, and the like. The simulating software can also allow the user to use keyboard keys for additional control when generating the multi-point gesture inputs.
A received single point gesture input can be converted to a multi-point gesture input by various predefined methods. For example, a received single point gesture input can be used as a first gesture input while a second gesture input can be generated by displacing the first gesture input by a predefined vector. Alternatively, or in addition, the second gesture input can be defined as a gesture symmetrical to the first gesture input with respect to a predefined point. In another alternative, multiple single point gesture inputs can be consecutively received from the single pointing device and converted into a multi-point gesture input that defines an at least partially simultaneous performance of the consecutively received multiple single point inputs.
In the following description of preferred embodiments, reference is made to the accompanying drawings which form a part hereof, and in which it is shown by way of illustration specific embodiments in which the invention may be practiced. It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the preferred embodiments of the present invention.
This relates to allowing a computer system using a single pointing device to simulate multi-point gesture inputs. Simulating software can receive single pointing inputs (such as, for example, input from a mouse) and convert them to simulated multi-point gesture inputs such as finger pinches, reverse pinches, translations, rotations, and the like. The simulating software can also allow the user to use keyboard keys for additional control when generating the multi-point gesture inputs.
When a user enters simulated multi-point gesture inputs, the device simulator can cause markers to appear and move across the simulated subject device screen to indicate the type of touch event being performed using the mouse and keyboard (or other input devices). These markers can be, for example, small circles or other shapes representing fingertips detected on or in proximity to a multi-touch panel. Each marker can then be interpreted as an actual point input (for example, the centroid of the circle) when testing multi-point software.
Although embodiments of the present invention may be described herein in terms of simulating the multi-point capabilities of portable devices on personal computers and/or workstations, it should be understood that embodiments of the invention are not limited to such devices, but are generally applicable to simulating the capabilities of any multi-point capable device on any other device. While the detailed description below centers on simulating multi-touch sensor panels, its teachings can apply to multi-point sensor panels in general.
Device 100 can include a monitor 101, a keyboard 102 and a mouse 103 for communicating with a user. Alternatively, the device can include other interface devices for communicating with the user. It should be noted that in the present example, device 100 includes a single pointing device (i.e., mouse 103). The mouse can be considered a single pointing device because it only allows the selection of one spatial point at a time. In contrast, a multi-touch sensor panel can be considered a multi-pointing device because it allows for multiple spatial points to be selected at a single time (e.g., by placement of two or more fingers down at two or more different points on or near the panel). Embodiments of the invention do not require that device 100 include only a single pointing device and can include multi-pointing devices. Device 100 can include a CPU and one or more memories. The one or more memories can store instructions and data, and the CPU can execute instructions stored by the memory. Thus, device 100 may execute various software, including but not limited to Software Development Kit (SDK) software.
As noted above, device 100 can be used for developing or testing software for device 110. Thus, device 100 can be referred to as a tester device and device 110 as a subject device.
In some embodiments of the invention, emulation software 205 can be used to allow UI APIs 201 to run on OS 200 and device 100. In other embodiments, OS 200 and the OS running at subject device (110) may be identical or substantially similar, so that no emulation software is necessary.
Tester device 100 can also run software to be tested 202. This software can be software that is eventually intended to run on device 110, but is presently being developed and tested on device 100. The software to be tested can use UI APIs 201 to communicate with the user. The UI APIs can provide all communications between the software to be tested and the device it is running on. As noted above, the UI APIs 201 running on the tester device can be identical or very similar to the corresponding APIs that run on the subject device 110. Thus, the UI APIs can make it appear to the software to be tested that it is actually executing at device 110. Or, in other words, the UI APIs can allow the software to be tested to use the same methods for communicating with the outside world as it would have used had it been running at the subject device 110.
Ordinarily (i.e., when being executed at subject device 110), UI APIs 201 can communicate with lower level software and/or hardware of device 110 that performs various user interface functions. Thus, the UI APIs can communicate with display/multi-touch panel 111 of device 110 (or with lower level software that controls the display/multi-touch panel) in order to cause information or graphics to be displayed, and/or to receive touch events indicating user input. However, if the UI APIs are being executed at device 100, they may not be able to communicate with a display/multi-touch panel 111, as device 100 may not include such an element. While tester device 100 can include a display 101, it can be of a different type than the display of the subject device 110. Furthermore, device 100 need not include any multi-touch sensor panel.
Thus, device simulator 203 can be used to simulate the display and/or multi-touch sensor panel of device 110 at device 100. The device simulator can provide UI APIs 201 with the same type of interface(s) that these APIs would communicate with in subject device 110 in order to connect to display/multi-touch panel 111. Device simulator 203 can cause a window 104 to be displayed on monitor 101 of tester device 100; this simulation window can represent the display of subject device 110.
Similarly, device simulator 203 can take in user input from a user of device 100 and convert it to the type of input that would have been received from a user of device 110. Thus, the device simulator can take in input provided through the interface devices of device 100 (e.g., keyboard 102 and mouse 103) and convert it to input that would have been produced by a multi-touch sensor panel. More details as to how the device simulator achieves this conversion are provided below.
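By way of illustration only, the following Python sketch shows one way a device simulator could package mouse input from the tester device into the kind of touch data a multi-touch sensor panel might report to UI APIs 201. The `SimTouch` structure and `mouse_to_touches` function are hypothetical names introduced here for illustration and are not part of any particular SDK or data format.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class SimTouch:
    """One simulated contact point, as a multi-touch panel might report it."""
    touch_id: int   # stable identifier for tracking a touch across frames
    x: float        # horizontal position in simulated-screen coordinates
    y: float        # vertical position in simulated-screen coordinates
    phase: str      # "began", "moved", or "ended"


def mouse_to_touches(mouse_x: float, mouse_y: float, button_down: bool,
                     second_point: Optional[Tuple[float, float]]) -> List[SimTouch]:
    """Convert a single mouse sample into one or two simulated touches.

    `second_point` is the position of an additional simulated finger
    (computed, for example, by the placement rules discussed below);
    if it is None, only a single touch is reported.
    """
    phase = "moved" if button_down else "ended"
    touches = [SimTouch(touch_id=0, x=mouse_x, y=mouse_y, phase=phase)]
    if second_point is not None:
        touches.append(SimTouch(touch_id=1, x=second_point[0],
                                y=second_point[1], phase=phase))
    return touches
```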
In some embodiments, the device simulator can also simulate other input/output functionalities of device 110, such as sounds, a microphone, power or other buttons, a light sensor, an acceleration sensor, etc.
In some embodiments, tester device 100 and subject device 110 can use different types of processors with different instruction sets. In such cases, the software to be tested 202 and UI APIs can each include two different versions, one intended for execution at device 100 and the other at device 110. The two versions can be the results of compiling the same or similar high level code into the two different instruction sets associated with devices 100 and 110 (for the purposes of this example, high level code can include any code at a higher level than assembly and machine code). Thus, device 100 can be used to test the high level code of the software to be tested 202. This can be sufficient if the compilers for devices 100 and 110 do not introduce any errors or inconsistencies.
Software development kit (SDK) 204 can also be executed at device 100. The SDK can be used to develop the software to be tested 202. Furthermore, UI APIs (201) and device simulator (203) can be considered a part of the SDK used for the testing of software developed using the SDK. In alternative embodiments, no SDK needs to run on device 100. In these embodiments, device 100 can be used for testing purposes and not necessarily for software development.
In some embodiments, device 100 need not be used for testing or software development at all. Instead, it can be used to simply execute software intended for device 110 and provide a simulation of device 110. For example, an embodiment of the invention can be used to provide a demonstration of the operation of a multi-touch enabled device so that a user can decide whether to purchase that device.
As noted above, the simulating software can take in single pointing inputs, or single pointing gestures issued by the user (such as, for example, gestures entered with a mouse), and convert them to multi-touch gesture inputs. The simulating software can also allow the user to use keyboard keys for additional control over the resulting multi-touch gesture inputs. The conversion from user input to multi-touch gesture inputs can be performed according to predefined rules.
Ordinarily, multi-touch gestures can be performed by placement of fingers, palms, various other parts of the human body, or objects (e.g., styluses or pens) on or near a multi-touch sensor panel. Some embodiments of the present invention can allow a user to enter all of the above types of simulated gestures. One easily performed group of gestures involves placement and movement of two or more fingertips on or near the surface of a touch sensor panel.
While a user is entering simulated multi-touch gesture inputs, the device simulator 203 can cause markers to appear and move across the simulated subject device screen (i.e., window 104) to indicate to the user the type of gesture he/she is entering using the mouse and keyboard (or other interfaces of device 100). These markers can be, for example, small circles representing fingertips pressing against a multi-touch panel. The markers are discussed in more detail below.
In some embodiments, a user can begin a multi-touch gesture simulation by entering a starting position.
Windows 300 and 301 show an initial placement stage of entering a gesture. The initial placement stage can be initialized in various ways, such as by pressing a keyboard key, clicking on a mouse button (not shown) or simply moving a mouse cursor over the simulation window (300 or 301). Circles 302-305 represent the positions of touch inputs. In other words, they represent the positions of virtual fingertips that are touching the simulated screen/multi-touch panel.
In a first alternative, the user can position a first touch (302) using the mouse pointer, while a second touch (303) is positioned at a displacement from the first touch defined by a predefined vector 306. The user can move the cursor around to find a desirable position and then indicate the desired starting position (e.g., by clicking on a mouse button). The vector 306 can be entered by the user, or a default value can be used.
In a second alternative, instead of a predefined vector 306, a predefined middle point 307 can be used. The user can again position a first touch (304) using the mouse pointer (309). In this alternative, the second touch (305) can be positioned in a mirrored, or symmetrical, position relative to the first touch with respect to middle point 307. In other words, if the displacement from the middle point to the first touch defines vector 310, then the position of second touch 305 is such that the displacement between the second touch and the middle point defines the same vector (310). Again, the user can move the cursor around to determine a desirable position and indicate the desired starting position (e.g., by clicking on a mouse button). Again, the middle point 307 can be entered by the user, or a default value (e.g., the middle of the window) can be used.
Various embodiments can utilize either of the above discussed alternatives for entering a starting position. Some embodiments can implement both alternatives and allow the user to choose between them (e.g., by pressing or clicking on a button).
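As an illustrative sketch of the two placement alternatives described above (assuming simple Cartesian screen coordinates), the following Python functions compute the second touch position either by displacing the first touch by a predefined vector or by mirroring it through a middle point. The function names are hypothetical and introduced only for illustration.

```python
from typing import Tuple

Point = Tuple[float, float]


def second_touch_by_vector(first: Point, vector: Point) -> Point:
    """First alternative: place the second touch at a fixed displacement
    (the predefined vector) from the touch tracked by the mouse pointer."""
    return (first[0] + vector[0], first[1] + vector[1])


def second_touch_by_mirror(first: Point, middle: Point) -> Point:
    """Second alternative: place the second touch symmetrically to the first
    touch with respect to the middle point, so that the displacement from the
    middle point to each touch is the same vector."""
    return (2 * middle[0] - first[0], 2 * middle[1] - first[1])


# Example: with the mouse at (120, 80), a default vector of (60, 0) or a
# middle point at a hypothetical window center of (160, 240) would yield:
first = (120.0, 80.0)
print(second_touch_by_vector(first, (60.0, 0.0)))     # (180.0, 80.0)
print(second_touch_by_mirror(first, (160.0, 240.0)))  # (200.0, 400.0)
```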
In some embodiments, a user may switch between the two alternatives while manipulating the touches. For example, the user may start out with the first alternative and then switch to the second alternative.
In addition, the user can start with the second alternative and then switch to the first.
In both alternatives, the device simulator can indicate the positioning of touches 302-305 in the simulation window by, for example, showing small semi-transparent circles indicating the positions of the touches. The position of the middle point can also be indicated in the simulation window.
A person of skill in the art would recognize that the teachings discussed above in connection with positioning two touches can be extended to the positioning of more than two touches.
As noted above, the desired initial position can be indicated by the user by clicking a mouse button. In some embodiments, movement can be defined by keeping the mouse button clicked (or down) while moving the mouse.
Movement can be defined in a manner similar to that of defining the initial position. Thus, as the user moves the mouse cursor (e.g., while holding the mouse button down), the first touch can follow the cursor, while the second touch can be moved correspondingly: it can either maintain its displacement from the first touch or remain mirrored about the middle point, as discussed below.
One difference between the movement schemes and the initial placement alternatives discussed above is that the middle point (or vector) used during movement can be determined from the initial positions of the touches rather than being predefined.
The device simulator can move the touch that starts at position 405 (the second touch) from position 405 to position 405′ in such a manner that the position of the second touch is mirrored from that of the first touch across middle point 407. Thus, the second touch may travel along path 415. Middle point 407 can be defined in accordance with the initial position of the two touches. Thus, it can be the middle point between initial positions 404 and 405 (as shown). Again, the device simulator can track the movement of both touches, convert it into the proper data format and send it to UI APIs 201.
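The following is a minimal Python sketch, under the same hypothetical representation as above, of how a single recorded cursor path could be expanded into two touch tracks that move simultaneously: one track follows the cursor, and the other either keeps a constant offset (parallel dragging) or is reflected through the middle point of the initial positions (mirrored movement).

```python
from typing import List, Tuple

Point = Tuple[float, float]


def drag_to_two_touch_tracks(cursor_path: List[Point], second_start: Point,
                             mirrored: bool) -> Tuple[List[Point], List[Point]]:
    """Expand a recorded cursor path into two simultaneous touch tracks.

    The first track simply follows the cursor. The second track either keeps
    a constant offset from the first touch (parallel drag) or is reflected
    through the middle point of the two initial positions (pinch/expand-style
    movement), as in the mirrored scheme described above.
    """
    first_track = list(cursor_path)
    first_start = cursor_path[0]
    if mirrored:
        middle = ((first_start[0] + second_start[0]) / 2.0,
                  (first_start[1] + second_start[1]) / 2.0)
        second_track = [(2 * middle[0] - x, 2 * middle[1] - y)
                        for (x, y) in cursor_path]
    else:
        dx = second_start[0] - first_start[0]
        dy = second_start[1] - first_start[1]
        second_track = [(x + dx, y + dy) for (x, y) in cursor_path]
    return first_track, second_track
```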
Some embodiments may offer both of the above methods for defining movement and allow the user to choose between them.
In some embodiments, a user can switch between the two movement schemes while a gesture is being entered, in a manner similar to the switching between placement alternatives discussed above.
The above-discussed methods can be useful for easily defining certain types of gestures that are used in certain multi-touch enabled devices. These gestures can include, for example, dragging two fingers in parallel, pinching and expanding two fingers, turning two fingers (as if turning an invisible knob), etc. However, these methods may not be able to define all possible gestures that utilize two or more fingers. This need not be an impediment, because definition of all possible gestures may not be needed. Only gestures considered meaningful by the simulated device (i.e., subject device 110) and/or the software to be tested may need to be simulated.
Nevertheless, some embodiments can provide a more flexible scheme that allows more complex multi-touch gestures to be defined by entering the single touch components of a gesture one at a time.

According to this scheme, a user can define a first gesture component by clicking the mouse at an initial position (e.g., position 505) and moving the cursor along a desired path to an end position (e.g., position 505′).
Thus, one single touch component of a multi-touch gesture can be defined. One or more additional components can be subsequently defined in a similar manner. For example, with reference to screen 502, a second gesture component can be defined after the first one by initially clicking the mouse at position 506 and then moving it along a path 507 to position 506′. In some embodiments, while a second or subsequent gesture component is being defined, one or more previously defined gesture components can be “played back.” This can assist the user in defining the relevant component, as the gesture being defined assumes that all components are performed at least partially simultaneously. Thus, while the user is defining the second component by moving the cursor from position 506 to position 506′, animation 508 of another touch being moved from position 505 to position 505′ can be simultaneously displayed by the device simulator.
After the second gesture component is entered, a third gesture component can be entered. The third gesture component can involve moving a cursor from position 509 to position 509′ along path 510. Similarly, animations 511 and 512 of the two previously entered gesture components can be “played back” while the third gesture component is being entered.
Embodiments of the present invention can allow any number of gesture components to be thus entered. In some embodiments, the number of gesture components that can be entered can be limited in relation to the number of fingers a user of the subject device 110 can be expected to use to enter a gesture. Various embodiments can also allow one or more erroneously entered gesture components to be re-entered or deleted.
Once the user has entered a desired number of gesture components, the user can indicate so (e.g., by clicking on a designated button). At this point the device simulator can compose a single multi-touch gesture by superimposing all gesture components (i.e., performing them simultaneously). Thus, based on the components discussed above, the resulting multi-touch gesture can include three touches moving at least partially simultaneously along their respective paths.
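For illustration, assuming each recorded component is stored as a list of sampled positions, the following Python sketch shows one way consecutively entered components could be superimposed into a single multi-touch gesture in which the touches move at least partially simultaneously. The representation and names are hypothetical.

```python
from typing import Dict, List, Tuple

Point = Tuple[float, float]


def superimpose_components(components: List[List[Point]]) -> List[Dict[int, Point]]:
    """Combine sequentially recorded single-touch components into one gesture.

    Each component is a list of sampled positions. The result is a list of
    frames; each frame maps a touch identifier to that touch's position, so
    that playing the frames back performs all components simultaneously.
    Components shorter than the longest one simply end early (their touches
    are lifted), matching embodiments in which speeds are not adjusted.
    """
    frames: List[Dict[int, Point]] = []
    longest = max(len(c) for c in components)
    for i in range(longest):
        frame = {touch_id: comp[i]
                 for touch_id, comp in enumerate(components)
                 if i < len(comp)}
        frames.append(frame)
    return frames
```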
In some embodiments, the device simulator can normalize the various gesture components. More specifically, the device simulator can adjust the speed of the various components so all gesture components can begin and end simultaneously. In alternative embodiments, the speed may not be adjusted, so that some components can end before others. In still other embodiments, users can be allowed to enter gesture components that begin after other gesture components begin.
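One possible normalization step, sketched below in Python under the same hypothetical representation, is to resample each component path to a common number of frames (here by simple linear interpolation) so that all components begin and end together regardless of how quickly each was entered.

```python
from typing import List, Tuple

Point = Tuple[float, float]


def resample_component(path: List[Point], frame_count: int) -> List[Point]:
    """Resample a component path to `frame_count` evenly spaced samples,
    linearly interpolating between recorded positions. Applying this to each
    component before superimposing them makes all components start and end
    at the same time."""
    if len(path) < 2:
        return [path[0]] * frame_count
    if frame_count < 2:
        return [path[0]]
    resampled = []
    for i in range(frame_count):
        t = i * (len(path) - 1) / (frame_count - 1)  # position along original path
        lo = int(t)
        hi = min(lo + 1, len(path) - 1)
        frac = t - lo
        x = path[lo][0] + frac * (path[hi][0] - path[lo][0])
        y = path[lo][1] + frac * (path[hi][1] - path[lo][1])
        resampled.append((x, y))
    return resampled
```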
A person of skill in the art would recognize that, in addition to the above, other methods for entering multi-touch gestures may be used. For example, the shape of a touch outline can be entered by tracing it with a mouse or by selecting it from predefined choices. The shape can signify a more complex touch event than simply touching the screen with a fingertip. It can, for example, signify touching the screen with a palm, or placing an object on the screen. Once the shape has been entered, it can be moved around by moving a mouse cursor in order to define a multi-touch gesture.
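As an illustrative sketch of this idea, the following Python function translates a traced outline (a list of points approximating, for example, a palm print) so that it follows the mouse cursor; the names and representation are hypothetical.

```python
from typing import List, Tuple

Point = Tuple[float, float]


def move_outline(outline: List[Point], anchor: Point, cursor: Point) -> List[Point]:
    """Translate a traced touch outline so that it follows the mouse cursor.

    `outline` is the shape as originally entered (e.g., traced with the mouse
    or chosen from predefined shapes), `anchor` is the cursor position at
    which the outline was placed, and `cursor` is the current cursor position.
    """
    dx = cursor[0] - anchor[0]
    dy = cursor[1] - anchor[1]
    return [(x + dx, y + dy) for (x, y) in outline]
```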
While the above discussion centers on the case in which the tester device features only a single pointing device (such as a mouse), in some embodiments the tester device can feature a multi-touch panel as well. For example, the tester device can be a laptop featuring a multi-touch enabled trackpad. The subject device can include a multi-touch panel that is combined with a display (thus allowing a user to enter multi-touch inputs by interacting with the surface of the display). The tester device can simulate the subject device by providing a simulation of the subject device's display in the simulation window 104 on the tester device's monitor 101, while allowing a user of the tester device to enter multi-touch inputs using the tester device's trackpad. The tester device can indicate the simulated locations of touches in the simulation window (e.g., by showing small circles in the simulation window) while the user is entering touches through the trackpad.
While some of the above discussed embodiments relate to converting single point gesture inputs into multi-touch gesture inputs, the invention need not be thus limited. More generally, embodiments of the invention can relate to converting single point inputs into multi-point inputs. Multi-point inputs can include multi-touch inputs, but can also include other types of inputs such as, for example, the multi-proximity inputs discussed by U.S. patent application Ser. No. 11/649,998.
Although the present invention has been fully described in connection with embodiments thereof with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of the present invention as defined by the appended claims.