A touch display is a display that serves the dual function of visually presenting information and receiving user input. Touch displays may be utilized with a variety of different devices to provide a user with an intuitive input mechanism that can be directly linked to information visually presented by the touch display. A user may use touch input to push soft buttons, turn soft dials, size objects, orient objects, or perform a variety of other inputs.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
According to some aspects of this disclosure, touch input is constrained based on the hand posture of a hand executing the touch input. The hand posture of a touch gesture may be used to set a mode constraint that specifies a constrained parameter of a virtual object that is to be maintained responsive to subsequent touch input. Once a mode constraint is set, an unconstrained parameter of the virtual object may be modulated responsive to subsequent touch input while the constrained parameter is left unchanged.
Computing system 10 is also capable of recognizing the shapes of the contact patches where each user hand is contacting touch display 12. These shapes may be referred to as contact silhouettes of the user hands. The contact silhouettes can be correlated to different user hand postures so that computing system 10 may recognize different user hand postures based on the contact silhouettes detected by touch display 12. As described below, different hand postures may be linked to different mode constraints, which can be used to provide a user with increased levels of control when manipulating or otherwise controlling a virtual object 14 in a virtual workspace 16.
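By way of nonlimiting illustration, the following sketch shows one way a contact silhouette might be correlated with a hand posture. The ContactSilhouette type, the feature thresholds, and the posture labels are hypothetical and are not drawn from the disclosure; they merely demonstrate that simple geometric features of a contact patch can be mapped to posture categories.

```python
# Minimal sketch: correlating a contact silhouette with a hand posture.
# Thresholds, units, and posture names are hypothetical illustrations.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class ContactSilhouette:
    points: List[Tuple[float, float]]  # touch-display coordinates of the contact patch


def bounding_box(silhouette: ContactSilhouette) -> Tuple[float, float]:
    """Width and height of the axis-aligned box enclosing the contact patch."""
    xs = [p[0] for p in silhouette.points]
    ys = [p[1] for p in silhouette.points]
    return (max(xs) - min(xs), max(ys) - min(ys))


def recognize_posture(silhouette: ContactSilhouette) -> str:
    """Map simple silhouette features to a posture label."""
    width, height = bounding_box(silhouette)
    area = width * height
    elongation = max(width, height) / max(min(width, height), 1e-6)
    if area < 4.0:          # small patch: single fingertip (units arbitrary)
        return "fingertip"
    if elongation > 3.0:    # long, narrow patch: edge of the hand
        return "rail"
    return "flat_hand"      # broad patch: flat hand or palm
```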
At 34, method 30 includes setting a mode constraint based on the hand posture recognized at 32. As introduced above, different hand postures may be linked to different mode constraints.
A mode constraint is set with respect to a virtual object, such as virtual object 14 of virtual workspace 16.
Furthermore, the orientation of the hand posture of a touch gesture and/or the location of the touch gesture relative to a virtual object may be used to control how a mode constraint is applied to the virtual object. As a nonlimiting example, the orientation of a hand posture may be used to select a constrained axis.
At 36, method 30 includes recognizing a subsequent touch gesture.
At 38, method 30 includes modulating an unconstrained parameter of the virtual object responsive to the subsequent touch gesture while maintaining the constrained parameter of the virtual object in accordance with the set mode constraint.
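The following nonlimiting sketch expresses steps 34 through 38 in code form: a set mode constraint names the parameters to hold fixed, and subsequent touch input modulates only the parameters outside that set. The parameter names and the delta representation are hypothetical assumptions made for illustration.

```python
# Minimal sketch of modulating unconstrained parameters while maintaining
# constrained parameters. Parameter names and deltas are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, Set


@dataclass
class VirtualObject:
    parameters: Dict[str, float] = field(
        default_factory=lambda: {"x": 0.0, "y": 0.0, "rotation": 0.0, "scale": 1.0}
    )


@dataclass
class ModeConstraint:
    constrained: Set[str]  # parameters held fixed while the constraint is set


def apply_touch_input(obj: VirtualObject,
                      constraint: ModeConstraint,
                      deltas: Dict[str, float]) -> None:
    """Apply subsequent touch input, ignoring changes to constrained parameters."""
    for name, delta in deltas.items():
        if name in constraint.constrained:
            continue  # constrained parameter is left unchanged
        obj.parameters[name] += delta


# Example: a rail-style constraint fixing y-translation and rotation.
obj = VirtualObject()
rail = ModeConstraint(constrained={"y", "rotation"})
apply_touch_input(obj, rail, {"x": 25.0, "y": 40.0, "rotation": 15.0})
# obj.parameters["x"] == 25.0, while "y" and "rotation" remain 0.0
```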
The rail mode constraint described above is nonlimiting, and method 30 is compatible with virtually any mode constraint. Furthermore, the constrained parameters and unconstrained parameters associated with mode constraint 42 are provided as nonlimiting examples.
These examples are provided to demonstrate that a variety of different combinations of constrained parameters and unconstrained parameters may be associated with a particular hand posture. While a rotation constraint is associated with each of the example rail mode constraints provided above, a different rail mode constraint may treat rotation as an unconstrained parameter. The examples provided herein are not limiting.
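As a hypothetical illustration of how different hand postures might be linked to different combinations of constrained and unconstrained parameters, consider the following lookup table. The posture labels and parameter sets shown are assumptions for illustration and are not the specific combinations enumerated in the disclosure.

```python
# Illustrative only: one way to associate hand postures with mode constraints.
MODE_CONSTRAINTS = {
    # posture           parameters held constant while the posture is maintained
    "rail_horizontal": {"y", "rotation"},       # translate along x only
    "rail_vertical":   {"x", "rotation"},       # translate along y only
    "corner":          {"x", "y", "rotation"},  # e.g., allow resizing only
    "flat_hand":       {"scale", "rotation"},   # e.g., allow translation only
}


def constraint_for_posture(posture: str) -> set:
    """Look up the constrained-parameter set linked to a recognized posture."""
    return MODE_CONSTRAINTS.get(posture, set())
```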
As described above, two touch gestures cooperate to control a virtual object. One touch gesture is described as an initial touch gesture and the other touch gesture is described as a subsequent touch gesture. It should be understood that the initial touch gesture may interrupt another gesture that is in progress, in which case a portion of the interrupted gesture that occurs after the initial touch gesture begins may be considered the subsequent touch gesture. In other words, the initial touch gesture that sets the mode constraint can be executed after a precursor to the subsequent touch gesture has already begun. Furthermore, the initial touch gesture and the subsequent touch gesture may begin at substantially the same time.
The corner mode constraint is set responsive to a corner gesture. A corner gesture may be characterized by a contact silhouette having two segments forming a corner. In the illustrated example, the corner-forming segments of the contact silhouette are the side of the little finger and the side of the palm.
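One way such a corner gesture might be detected is to treat the two silhouette segments as direction vectors and test whether the angle between them approximates a right angle. The vector representation, the right-angle criterion, and the tolerance below are assumptions made for illustration only.

```python
# Minimal sketch: testing whether two contact-silhouette segments form a corner.
import math
from typing import Tuple

Vector = Tuple[float, float]


def angle_between(u: Vector, v: Vector) -> float:
    """Angle in degrees between two direction vectors."""
    dot = u[0] * v[0] + u[1] * v[1]
    norm = math.hypot(*u) * math.hypot(*v)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))


def is_corner_gesture(little_finger_dir: Vector, palm_edge_dir: Vector,
                      tolerance_deg: float = 25.0) -> bool:
    """Two silhouette segments forming a roughly right-angled corner."""
    return abs(angle_between(little_finger_dir, palm_edge_dir) - 90.0) <= tolerance_deg


# Example: little-finger edge running along x, palm edge running roughly along y.
print(is_corner_gesture((1.0, 0.0), (0.1, 1.0)))  # True
```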
In the examples provided above, the mode constraint is set until the initial touch gesture terminates.
A pinned mode constraint may also be implemented in which the mode constraint remains set until a termination gesture is executed.
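The following sketch contrasts the two constraint lifetimes described above: a constraint released when the initial touch gesture terminates, and a pinned constraint that persists until a termination gesture is executed. The event handler names are hypothetical and are used only to illustrate the distinction.

```python
# Minimal sketch of constraint lifetime: ordinary vs. pinned mode constraints.
class ConstraintManager:
    def __init__(self):
        self.active_constraint = None  # set of constrained parameter names
        self.pinned = False

    def set_constraint(self, constrained_params: set, pinned: bool = False) -> None:
        """Set a mode constraint; pinned constraints survive gesture lift-off."""
        self.active_constraint = constrained_params
        self.pinned = pinned

    def on_initial_gesture_end(self) -> None:
        """Initial touch gesture terminated: release a non-pinned constraint."""
        if not self.pinned:
            self.active_constraint = None

    def on_termination_gesture(self) -> None:
        """Explicit termination gesture: release even a pinned constraint."""
        self.active_constraint = None
        self.pinned = False
```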
The examples discussed above are nonlimiting, and the described methods and processes may be tied to a variety of different computing systems. For the sake of simplicity, computing system 10 and touch display 12 are described above in simplified form; more generally, such a computing system may include a logic subsystem 72, a data-holding subsystem 74, a display 76, and a touch-input receptor 78.
Logic subsystem 72 may include one or more physical devices configured to execute one or more instructions. For example, the logic subsystem may be configured to execute one or more instructions that are part of one or more programs, routines, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more devices, or otherwise arrive at a desired result. The logic subsystem may include one or more processors that are configured to execute software instructions. Additionally or alternatively, the logic subsystem may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. The logic subsystem may optionally include individual components that are distributed throughout two or more devices, which may be remotely located in some embodiments.
Data-holding subsystem 74 may include one or more physical devices configured to hold data and/or instructions executable by the logic subsystem to implement the herein described methods and processes. When such methods and processes are implemented, the state of data-holding subsystem 74 may be transformed (e.g., to hold different data). Data-holding subsystem 74 may include removable media and/or built-in devices. Data-holding subsystem 74 may include optical memory devices, semiconductor memory devices, and/or magnetic memory devices, among others. Data-holding subsystem 74 may include devices with one or more of the following characteristics: volatile, nonvolatile, dynamic, static, read/write, read-only, random access, sequential access, location addressable, file addressable, and content addressable. In some embodiments, logic subsystem 72 and data-holding subsystem 74 may be integrated into one or more common devices, such as an application specific integrated circuit or a system on a chip.
Display 76 may be used to present a visual representation of data held by data-holding subsystem 74. As the herein described methods and processes change the data held by the data-holding subsystem, and thus transform the state of the data-holding subsystem, the state of display 76 may likewise be transformed to visually represent changes in the underlying data. As a nonlimiting example, as a parameter of a virtual object is adjusted in accordance with a mode constraint, the display 76 may change the visual appearance of the virtual object in accordance with the adjustments to the parameter. Display 76 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic subsystem 72 and/or data-holding subsystem 74 in a shared enclosure, or such display devices may be peripheral display devices.
Touch-input receptor 78 may be used to recognize multi-touch user input. The touch-input receptor and the display may optionally be integrated into a touch screen 82 that serves as both display 76 and touch-input receptor 78. For example, a surface computing device that includes a rear projection display and an infrared vision-based touch detection system may be used. In other embodiments, the touch-input receptor may be separate from the display. For example, a multi-touch track pad may serve as the touch-input receptor.
It is to be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated may be performed in the sequence illustrated, in other sequences, in parallel, or in some cases omitted. Likewise, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and nonobvious combinations and subcombinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.