The present invention relates in general to the field of information handling system inputs, and more particularly to enhanced gesture management, control and detection for an information handling system.
As the value and use of information continues to increase, individuals and businesses seek additional ways to process and store information. One option available to users is information handling systems. An information handling system generally processes, compiles, stores, and/or communicates information or data for business, personal, or other purposes thereby allowing users to take advantage of the value of the information. Because technology and information handling needs and requirements vary between different users or applications, information handling systems may also vary regarding what information is handled, how the information is handled, how much information is processed, stored, or communicated, and how quickly and efficiently the information may be processed, stored, or communicated. The variations in information handling systems allow for information handling systems to be general or configured for a specific user or specific use such as financial transaction processing, airline reservations, enterprise data storage, or global communications. In addition, information handling systems may include a variety of hardware and software components that may be configured to process, store, and communicate information and may include one or more computer systems, data storage systems, and networking systems.
Information handling systems generally receive inputs from end users through a variety of input/output (I/O) devices. For example, typical information handling systems interface with a keyboard to accept keyed inputs and a mouse to accept point-and-click inputs. In addition to these basic I/O devices, end users often interact with an information handling system through touch devices, such as a touchscreen display or a touchpad. In a touch environment, totems are tools that act as intermediaries between the end user and the touchscreen to aid in making inputs. One example of a totem is a pen that writes on a touchscreen, thus providing a more precise contact location than an end user finger. Another example of a totem is a rotating disc that rests on a touchscreen and rotates to provide a dial input device. A rotating disc may, for instance, change the volume of speakers based on rotational position, which translates as touches to a touchscreen display. Alternatively, a totem may detect rotational positions internally and report the positions by Bluetooth or other communication mediums.
When interacting with touch devices, end users can input information in the form of gestures. Generally, gestures are moving touches with one or more fingers that have a defined pattern associated with a defined input. For example, a common gesture is a pinching of two fingers towards each other to indicate zoom in and a spreading of two fingers away from each other to indicate zoom out. In some instances, applications define gestures, such as a rotational movement in a computer aided design (CAD) application which indicates rotation of the orientation of a displayed model. More recently, end users can make gestures as inputs without relying upon a touch device to detect the gestures. For example, a depth camera and/or ultrasonic sensor detects end user motions in space and applies the motions as gestures to data presented on a display. Thus, a pinching motion of fingers is detected by the depth camera rather than the touchscreen and then applied to an image presented on the touchscreen. In particular, gestures provide an intuitive input for virtual objects presented by a head mounted display. The end user is able to reach out towards and manipulate the virtual object as if the end user is manipulating a real object. The end user gestures are captured and applied as inputs to the model that generated the virtual object.
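As an illustration only, the following Python sketch shows one way a two-finger pinch might be classified as zoom in or zoom out from the change in fingertip separation; the function name, coordinate units and jitter threshold are assumptions rather than details from the disclosure, and the zoom-in/zoom-out convention follows the example above.

```python
import math


def pinch_direction(p1_start, p2_start, p1_end, p2_end, threshold=10.0):
    """Classify a two-finger pinch as zoom in, zoom out, or no gesture.

    Each point is an (x, y) tuple in touch-sensor coordinates; the
    threshold (same units) filters out incidental jitter.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    start = dist(p1_start, p2_start)
    end = dist(p1_end, p2_end)
    if start - end > threshold:
        return "zoom_in"    # fingers moved toward each other
    if end - start > threshold:
        return "zoom_out"   # fingers moved away from each other
    return None


# Example: fingertip separation shrinks from 100 to 60 units -> zoom in
print(pinch_direction((0, 0), (100, 0), (20, 0), (80, 0)))
```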
Although gestures provide an intuitive input interface with an information handling system, gestures generally lack the precision of more concrete input devices, such as a keyboard or mouse. For example, the resolution of gestures that involve action or movements tends to be low and fixed in nature. For instance, a pinch to zoom typically changes scale from 100% to 200% zoom without providing an effective way to vary zoom at finer gradations, such as from 63% to 87%. In addition, gestures tend to have unique patterns identifiable by a sensor, such as a touchscreen or depth camera, and not necessarily to provide precise inputs. For instance, human motor control limitations based upon end user hand size and flexibility limit the scope of gesture actions. In the pinch-to-zoom example, a user generally has a fixed response for a one-handed pinch without an option to change the scale of the input, such as 100% zoom for a detected pinch versus 500% for that pinch. Instead, an end user often has to perform multiple sequential gestures to obtain a desired input or has to turn to more precise input techniques, such as a selection input through a keyboard and/or mouse.
Therefore, a need has arisen for a system and method which provides information handling system enhanced gesture management, control and detection.
In accordance with the present invention, a system and method are provided which substantially reduce the disadvantages and problems associated with previous methods and systems for accepting gestures as inputs at an information handling system. Gesture inputs at an information handling system selectively adapt a physical input device to change modes during the gesture so that inputs to the physical input device support the gesture with more precise granularity. For example, a rotational totem adapts from inputs of a first type to gesture inputs upon detection of a gesture so that rotating the totem adjusts the gesture input, and reverts to inputs of the first type upon removal of the gesture.
More specifically, an information handling system processes information with a processor and memory to present the information as a visual image at a display, such as a touchscreen display or a head mounted display. Sensors interfaced with the information handling system detect end user inputs as gestures that adjust the presentation of the visual image, such as pinching the visual image to zoom in or out, swiping to move the visual image, or rotating to change the orientation of the visual image. Upon detection of a gesture, a gesture engine executing on the information handling system adapts a physical input device, such as a rotational totem, from inputs of a first type to inputs associated with the gesture. While the gesture is maintained, inputs to the physical input device are applied in the same manner as the gesture, such as zooming, swiping or rotating the visual image, to provide the end user with more precision and granularity for the input. For example, a gesture to zoom in by pinching two fingers together zooms the visual image by 100% and adapts a rotating totem to input zoom adjustments at a fractional zoom, such as commanding 100% of zoom with a full 360 degrees of rotation of the totem. Once the end user completes the gesture, such as by removing the fingers from the pinched position, the physical input device reverts to the first type of input. Thus, the gesture engine manages gesture inputs by combining end user gestures with physical device inputs, temporarily changing the physical device mode and/or functionality while the gesture is active.
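As a minimal sketch of the mode switch summarized above, the following Python example temporarily redirects totem rotation to the active gesture's input and reverts it when the gesture ends; the class name, mode strings and the mapping of a full 360-degree turn to 100% of zoom are illustrative assumptions, not recited implementation details.

```python
class TotemModeManager:
    """Route totem rotation to the active gesture's input while a gesture
    is held, then revert to the totem's prior function when it ends."""

    def __init__(self, default_mode="scroll"):
        self.default_mode = default_mode
        self.active_gesture = None

    def on_gesture_start(self, gesture_type):
        # e.g. "zoom", "swipe", "rotate"
        self.active_gesture = gesture_type

    def on_gesture_end(self):
        self.active_gesture = None

    def on_totem_rotation(self, degrees):
        """Map totem rotation to an input, depending on the active mode."""
        if self.active_gesture == "zoom":
            # Assumed mapping: one full turn adjusts zoom by 100%, giving
            # fractional zoom control unavailable from the gesture alone.
            return ("zoom_adjust_percent", degrees * 100.0 / 360.0)
        # No gesture active: the totem keeps its default function.
        return (self.default_mode, degrees)


manager = TotemModeManager()
print(manager.on_totem_rotation(90))   # ('scroll', 90) before any gesture
manager.on_gesture_start("zoom")
print(manager.on_totem_rotation(90))   # ('zoom_adjust_percent', 25.0) while pinching
manager.on_gesture_end()
print(manager.on_totem_rotation(90))   # ('scroll', 90) after the pinch is released
```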
The present invention provides a number of important technical advantages. One example of an important technical advantage is that gesture inputs automatically adapt a rotating totem or other physical input device to apply inputs as gesture inputs while a gesture is active. Adapting a physical input device to apply inputs as gesture inputs allows an end user to maintain continuity of a gesture through the physical input device without acting separately to re-configure inputs of the physical input device. The end user initiates an input through a gesture and then manipulates the physical input device to provide refined gesture inputs at a desired granularity. For example, an end user initiates a low resolution gesture input, such as a swipe on a color bar to select a color, and then performs a high resolution input by rotating a totem to increment between available colors in a narrower range. The end user automatically has different input resolutions available for a gesture input because the mode or functionality of an input device is temporarily switched during an active gesture to support the gesture input at an alternative resolution. In various embodiments, various gestures are supported to adapt visual images on a touchscreen or virtual objects presented by a head mounted display so that convenient physical input devices that offer greater input precision adapt to perform the gesture input, such as in an immersed environment that includes depth camera, ultrasonic, gaze and other types of gesture input tracking.
The present invention may be better understood, and its numerous objects, features and advantages made apparent to those skilled in the art by referencing the accompanying drawings. The use of the same reference number throughout the several figures designates a like or similar element.
Gesture inputs at an information handling system adapt a physical input device to inputs associated with the gesture while the gesture is active, the physical input device providing a different granularity and/or resolution to the end user of the information handling system for the gesture inputs. For purposes of this disclosure, an information handling system may include any instrumentality or aggregate of instrumentalities operable to compute, classify, process, transmit, receive, retrieve, originate, switch, store, display, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, or other purposes. For example, an information handling system may be a personal computer, a network storage device, or any other suitable device and may vary in size, shape, performance, functionality, and price. The information handling system may include random access memory (RAM), one or more processing resources such as a central processing unit (CPU) or hardware or software control logic, ROM, and/or other types of nonvolatile memory. Additional components of the information handling system may include one or more disk drives, one or more network ports for communicating with external devices as well as various input and output (I/O) devices, such as a keyboard, a mouse, and a video display. The information handling system may also include one or more buses operable to transmit communications between the various hardware components.
In the example embodiment, end user 30 performs gesture inputs at touchscreen display 24 with predetermined finger or hand motions, such as a pinching movement with two fingers to command a zoom in or zoom out of visual image 26. The gesture is detected by touch controller 66 and reported through chipset 18 to CPU 12, such as through an operating system application programming interface (API) or driver. In the example embodiment, a gesture engine 34 includes instructions stored in flash memory to identify gestures, such as the pinch, and report the gestures to CPU 12, such as through the operating system. Once the gesture is recognized and evaluated, visual image 26 is adjusted by GPU 18 in response to the gesture. For instance, the amount of finger movement of the pinch detected by touch controller 66 is applied to determine an amount of zoom to change visual image 26, such as a change of 100% to double or halve the zoom.
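One possible mapping of pinch movement to an amount of zoom, consistent with the doubling and halving example above, is sketched below in Python; the proportional formula and the function name are assumptions rather than details recited in the disclosure.

```python
def zoom_from_pinch(start_separation, end_separation, current_zoom=1.0):
    """Scale the current zoom by the ratio of fingertip separations.

    Assumed proportional mapping: the text only states that the amount of
    finger movement determines the amount of zoom, e.g. a change of 100%
    to double or halve the zoom. Separations are in touch-sensor units.
    """
    if start_separation <= 0 or end_separation <= 0:
        raise ValueError("separations must be positive")
    # Per the convention above, fingers moving together zoom in, so the
    # zoom scales by start/end rather than end/start.
    return current_zoom * (start_separation / end_separation)


print(zoom_from_pinch(120.0, 60.0))   # 2.0 -> zoom doubled
print(zoom_from_pinch(60.0, 120.0))   # 0.5 -> zoom halved
```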
In order to support more precise and granular inputs related to a gesture, when gesture engine 34 identifies a gesture, the gesture type is communicated to a totem engine 36 to switch the functionality of totem 28 to support the ongoing gesture until the gesture ends. For example, if totem 28 is configured to scroll visual image 26 in response to a rotational movement, detection of a pinch gesture re-configures inputs to totem 28 so that rotation of totem 28 instead commands zoom in or out, the same input as that of the pinch gesture. In one embodiment, totem 28 remains associated with the gesture input while end user 30 remains active in the gesture. For example, end user 30 maintains pinch gesture 32 on touchscreen display 24 for as long as end user 30 desires to fine tune the zoom level with inputs made through totem 28. Once end user 30 has completed inputs through totem 28, end user 30 removes pinch gesture 32 to revert totem 28 to the previous input type, such as scrolling. In one embodiment, while totem 28 is associated with gesture inputs, totem engine 36 coordinates presentation of a gesture input user interface at display 24 proximate totem 28 and haptic feedback at totem 28. For instance, while pinch gesture 32 is maintained by an end user, totem engine 36 presents a user interface at totem 28 indicating change in zoom at rotational orientations of totem 28 and initiates a click of haptic feedback for each percent change in zoom performed by an end user through rotation of totem 28. As another example, end user 30 swipes a color pallet to locate a color selection and then rotates totem 28 to narrow the color selection by incrementing through each color to find a desired color shade. The swipe gesture provides a large scale color selection and, then, while the end user holds the swipe gesture in place, totem 28 rotation provides precise color shade selection with an increment in color shade at each degree of rotation. Once the end user removes the swipe gesture, totem 28 returns to supporting its previous function, such as scrolling.
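The fine-tuning and haptic behavior described above might be modeled as in the following Python sketch, in which totem rotation nudges the zoom while the pinch is held and a haptic click issues for each whole percent of change; the class name, the callback, and the mapping of one full turn to 100% of zoom are illustrative assumptions.

```python
class FineZoomAdjuster:
    """Fine zoom control from totem rotation while a pinch gesture is held."""

    def __init__(self, zoom_percent, haptic_click, full_turn_percent=100.0):
        self.zoom_percent = zoom_percent
        self.haptic_click = haptic_click          # callable invoked per 1% of change
        self.full_turn_percent = full_turn_percent
        self._accumulated = 0.0                   # change not yet clicked, in percent

    def on_rotation(self, degrees):
        delta = degrees * self.full_turn_percent / 360.0
        self.zoom_percent += delta
        self._accumulated += abs(delta)
        # Issue one haptic click for every whole percent traversed.
        while self._accumulated >= 1.0:
            self.haptic_click()
            self._accumulated -= 1.0
        return self.zoom_percent


clicks = []
adjuster = FineZoomAdjuster(zoom_percent=200.0,
                            haptic_click=lambda: clicks.append("click"))
adjuster.on_rotation(18.0)    # 18 degrees of rotation -> +5% zoom
print(adjuster.zoom_percent)  # 205.0
print(len(clicks))            # 5 haptic clicks
```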
Advantageously, selective configuration of a physical input device, such as totem 28, to associate with a gesture input while the gesture input is active helps to match end user input precision to end user needs in a non-intrusive and intuitive manner. In an immersed environment, an end user may select gestures as an input method where the perceived cost or burden of a gesture is more efficient than other available inputs. In selecting an available input method, the end user also considers accuracy of the input and control for the desired task. For example, writing with a fingertip on a touchscreen is quick and simple but inaccurate compared to writing on the touchscreen with a pen. An end user may select to make short inputs with a finger while performing more complex inputs with a pen. As another example, a touchpad provides cursor movement control with reasonable precision and lag; however, a touchpad tends to have difficulty distinguishing pinch gestures for zoom control in a precise manner. Automated detection at a touchpad or touchscreen of a gesture to adapt a physical input device enhances end user interactions by enabling the end user to shift to a bi-manual input mode as needed, such as during fast-paced or intensive interactions or during precise input events. With input tasks that have lower cognitive loads and where precision is more important than response time, automatically and temporarily changing the control input of one hand to the same control as the other hand offers improved performance by eliminating the real or perceived burden of manually switching input modes. For instance, a precise optical sensor in a rotating totem and the fine motor control available when rotating a knob between an index finger and thumb provide an opportunity for fine precision unavailable from gesture detection while not burdening the end user with additional tasks associated with selecting an input mode.
In the example embodiments above, detection of a gesture re-configures a physical input device, such as a rotational totem 28, to have the same type of input as the gesture. In alternative embodiments, alternative types of configurations may be applied to physical device inputs. For example, in one alternative embodiment, totem 28 is re-configured to change the scale of a gesture input. For instance, if an end user pinches to zoom and sees a 100% zoom in, the end user may then turn totem 28 while the gesture is maintained to increase or decrease the amount of zoom applied for the gesture. For example, the end user may rotate totem 28 to reduce the scale of zoom so that on subsequent zoom gestures a 10% zoom is performed. As another example, the end user may rotate totem 28 to the right to increase the scale of zoom so that on subsequent zoom gestures a 1000% zoom is performed. After the gesture completes, the zoom gesture setting may return to the original configuration or remain set for application to subsequent gestures. As another example, totem 28 may provide rotational input as fine control for the gesture while translation of totem 28 may provide a related input, such as changing the scale of the object involved with the gesture. For instance, rotation may adjust the shade of a color as a fine input and translation up and down may provide a jump to a different color in the palette. As another example, an end user rotation of a virtual object about an axis re-configures totem 28 to change the rotational axis instead of the amount of rotation. After completion of the gesture, the physical input device reverts to its original functionality. In addition to a rotating totem, other types of input devices that change functionality may include a pen, a 3Dconnexion SpaceMouse used for CAD applications that includes rotational inputs for fine control and joystick sensors for directional input, and so on.
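A sketch of the alternative configuration in which totem rotation re-scales subsequent zoom gestures, rather than adjusting the zoom itself, might look like the following Python example; the per-degree step and the clamping bounds are assumptions chosen only to reproduce the 10% and 1000% examples above.

```python
class GestureScaleSetting:
    """While a zoom gesture is held, totem rotation changes the scale that
    later zoom gestures will apply, instead of changing the zoom itself."""

    def __init__(self, scale_percent=100.0, step_per_degree=2.5,
                 minimum=10.0, maximum=1000.0):
        self.scale_percent = scale_percent
        self.step_per_degree = step_per_degree
        self.minimum = minimum
        self.maximum = maximum

    def on_rotation_during_gesture(self, degrees):
        # Rotating right (positive degrees) increases the scale applied to
        # later zoom gestures; rotating left decreases it.
        self.scale_percent += degrees * self.step_per_degree
        self.scale_percent = max(self.minimum, min(self.maximum, self.scale_percent))
        return self.scale_percent


setting = GestureScaleSetting()
print(setting.on_rotation_during_gesture(-36))   # 10.0 -> later pinches zoom by 10%
print(setting.on_rotation_during_gesture(396))   # 1000.0 -> later pinches zoom by 1000%
```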
Although the present invention has been described in detail, it should be understood that various changes, substitutions and alterations can be made hereto without departing from the spirit and scope of the invention as defined by the appended claims.