A wide variety of user interfaces are used in computing systems to enable a user to move a cursor to a desired position in a list in order to highlight a particular item for selection. Some lists, however, can be quite large, and may include hundreds, thousands, or more items. Additionally, many lists, regardless of size, have a sequential arrangement of items that a user may wish to browse through sequentially. In some cases, the user will want to pass quickly through large numbers of items, for example to move to a location that is distant from the current cursor position. In other cases, it will be desirable to make fine and/or slow adjustments and move the cursor only slightly (e.g., sequentially browsing through a relatively small number of items once a general area has been reached, in order to select the particular item of interest).
Existing user interfaces are often very slow when called upon to cycle through many items in order to reach a distant item in a long list. This can lead to user impatience and dissatisfaction with the user interface. Alternatively, a different navigation operation can be performed, such as navigating up to a higher-level category associated with the items (e.g., navigating up from a visual display of musical artists to a visual display of associated musical genres). The user could then select the appropriate category and navigate back “down” in order to reach a local area containing the desired item. However, this hierarchical approach entails different and extra steps, which may make the browsing and selection process more cumbersome. The hierarchical approach also prevents the user from directly paging through the individual items between the current and target locations, which may in some instances be desirable to the user. The above challenges of providing effective navigation of items can be even more pronounced in a natural user interface environment, such as a computing setting without a keyboard or mouse, in which a host of issues can arise with respect to interpreting user gestures.
Accordingly, the disclosure provides a system and method of using motion-capture data to control a computing system. The method includes obtaining a plurality of positions for an object from motion-capture model data, with the positions being representative of a user's movement of the object in a three-dimensional motion-capture space. The method determines a curved-gesture center point based on at least some of the plurality of positions for the object. Using the curved-gesture center point as an origin, an angular property is determined for one of the plurality of positions for the object. The method further includes navigating a cursor in a sequential arrangement of selectable items of a user interface based on the angular property.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
A capture device and supporting hardware/software may be used to recognize, analyze, and/or track one or more objects, such as user 18. Object movements may be interpreted as operating system and/or application controls. Virtually any controllable aspect of an operating system and/or application may be controlled by movements of a target, such as user 18.
In the example scenario of
It will be appreciated that “cursor” refers herein to the selectable position in a list or other arrangement of selectable items. Accordingly, references to moving or navigating the cursor in or through items can also mean having a selectable position that is stationary on a display screen, while the items move past the stationary selectable position. For example, in
Continuing with
For example, the hand positions in the motion-capture data may be interpreted to infer a circular motion for hand 18a, even though the actual motion from moment to moment may be imperfect and vary in character (e.g., imperfect curves, arcs, circles and the like). In the interpreted data, the hand position(s) and inferred circular motion may be associated with various determined parameters, including a center point of rotation; a radius; a direction of rotation; an angular position for a hand position; a change in angular position; and/or an angular velocity, to name but a few non-limiting examples. One or more of these parameters may be used to control the way in which cursor 30 moves through items 28.
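Purely as a non-limiting illustration (this record and its field names are assumptions added for clarity, not part of the original disclosure), the parameters listed above might be collected into a simple per-sample structure such as the following:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class CurvedGestureSample:
    """Hypothetical per-sample record of parameters that may be inferred from
    motion-capture positions when a circular motion is detected."""
    position: Tuple[float, float, float]      # hand position from the motion-capture model
    center_point: Tuple[float, float, float]  # inferred curved-gesture center point
    radius: float                             # distance from center point to position
    angular_position: float                   # angle relative to a reference line (radians)
    delta_angle: float                        # change in angle since the prior sample
    angular_velocity: float                   # delta_angle divided by elapsed time
    direction: int                            # +1 or -1, inferred direction of rotation
```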
For example, the user may make a number of rapid circles in space with their hand in order to quickly navigate to a general area of interest in the list of items (e.g., a particular alphabetical location). Then, as the speed of hand rotation slows (e.g., decreased angular velocity), cursor movement may slow down correspondingly, allowing for finer adjustment of the cursor position in the list. In this example, the user is enabled to perform both relatively large-scale and relatively small-scale navigations with a single type of gesture. The user does not have to navigate upward hierarchically or bring up a different type of interface in order to reach the desired item.
Assigning circular motion attributes to historical object positions (e.g., hand positions) may present various challenges.
Referring now to
A center point CP may be calculated for some or all of the positions P in history 100. For a given object position, the calculated center point is an approximation of the location about which the hand or other object is rotating to make the curved gesture. Because it is associated with such a gesture, the calculated center point will therefore at times be referred to as a “curved-gesture center point.” The generation of the center point CP may provide the basis for determining several of the other parameters, and a method of determining the center point will be set forth in more detail below. Once a center point CP is established for a position P, it may be desirable to employ vector and radius descriptions to characterize the relationship between the position and its associated center point. In the example of
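As a minimal sketch of such a vector-and-radius description (assuming ordinary three-dimensional vectors; the function name is hypothetical and not from the disclosure):

```python
import numpy as np

def vector_and_radius(position, center_point):
    """Return vector V from the curved-gesture center point CP to position P,
    together with its length, which serves as the radius."""
    v = np.asarray(position, dtype=float) - np.asarray(center_point, dtype=float)
    return v, float(np.linalg.norm(v))
```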
Angle θ is a description of the angular position of position P, defined as the angle between vector V and a reference line or vector 102. Any appropriate reference may be employed to define angular position. In some cases, it will be convenient to select a horizontal reference (i.e., a line in the xz plane of
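One hedged way to compute such an angular position (assuming, for illustration only, that the curved gesture lies roughly in a vertical plane facing the capture device and that the reference is taken along the horizontal axis):

```python
import math

def angular_position(position, center_point):
    """Angle, in radians, of a position about its center point, measured from a
    horizontal reference line; assumes the gesture lies roughly in the x-y plane."""
    dx = position[0] - center_point[0]  # horizontal component of vector V
    dy = position[1] - center_point[1]  # vertical component of vector V
    return math.atan2(dy, dx)           # zero along the horizontal reference
```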
The change in angle Δθ may be calculated based on one or more of the prior entries in history 100 (
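A hedged sketch of computing the change in angle between two entries in the history follows; the wrap-around handling is an illustrative assumption rather than the disclosed method:

```python
import math

def delta_angle(theta_current, theta_previous):
    """Smallest signed change in angular position, in radians, so that a gesture
    crossing the +/- pi boundary does not produce a spurious large jump."""
    d = theta_current - theta_previous
    return math.atan2(math.sin(d), math.cos(d))  # wrapped into (-pi, pi]
```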
When average positions are used to determine a center point, weighting may be employed. For example, more recent object positions obtained from the motion-capture model may have a greater effect on the location of the calculated center point than older object positions in the history. It should be understood that the preceding description is but one non-limiting example of calculating a center point and many other alternatives are possible without departing from the scope of this disclosure.
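As a non-limiting illustration of such recency-weighted averaging (the exponential weighting scheme and decay value below are assumptions made for the sketch):

```python
import numpy as np

def estimate_center_point(position_history, decay=0.8):
    """Estimate a curved-gesture center point as a weighted average of the
    object positions in the history (oldest first), with more recent positions
    weighted more heavily."""
    positions = np.asarray(position_history, dtype=float)      # shape (n, 3)
    weights = decay ** np.arange(len(positions) - 1, -1, -1)   # newest weight = 1.0
    return np.average(positions, axis=0, weights=weights)
```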
Continuing with
The ability to update and change the center point can provide various benefits. With a rigid or fixed center point, it may be difficult for the user to see or feel how well they are rotating around the center point as they make the gesture. For example, the user may loop too close to the center, or cut inside the center, which would yield errors or unexpected results in interpreting the gesture and producing the corresponding cursor control if the center point were not adjustable. The variable center point also allows for effective gesture interpretation while allowing the user to make curved gestures that vary in character and that are comfortable and appropriate for their body type, range of motion, etc.
Referring again to
In many examples, the curved-gesture center point may be used as an origin for determining an angular property. The angular property, in turn, may be used to control navigation of the cursor, such as its placement in a list, the rate at which it moves through a list, the number of items that are traversed, etc.
For example, the angular property may be an angle or angular position determined with respect to a reference, such as angle θ in
In another example, changes in angular position may be used to determine the angular property used to control cursor movement. A scaling may be employed, in which the cursor traverses a number of items in proportion to the size of the change in angular position. More concretely, a full circle of the hand (360 degrees) might correspond to navigating through 200 or any other suitable number of selectable items. Any appropriate scaling value may be employed, and the value may depend on the number of items in the list to be navigated, among other factors. Such scaling may depend additionally on other parameters. For example, finer or coarser adjustments to cursor position may be made in response to the radius values in history 100 (i.e., as derived from the current positions and associated center points). For instance, it might be desirable that rotation through a given angle causes cursor movement through a larger number of items for a larger radius. In addition, radius measurements may be used in connection with measurement of the user's arm length to derive a ratio that can be used in conjunction with other parameters to control cursor speed.
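To make the scaling concrete, the following hedged sketch maps a change in angular position to a signed number of items; the 200-items-per-circle figure comes from the example above, while the radius scaling and reference radius are illustrative assumptions:

```python
import math

ITEMS_PER_FULL_CIRCLE = 200  # one 360-degree hand circle traverses 200 items (example value)

def items_to_traverse(delta_angle_radians, radius, reference_radius=0.25):
    """Map a change in angular position to a signed number of list items,
    scaled so that larger-radius gestures traverse more items per degree."""
    base = delta_angle_radians / (2.0 * math.pi) * ITEMS_PER_FULL_CIRCLE
    return int(round(base * (radius / reference_radius)))
```

Under these assumptions, a quarter turn (90 degrees) at the reference radius would move the cursor by roughly 50 items, while the same turn at half the reference radius would move it by roughly 25.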
In still another example, the angular velocities of
It should be understood that the present disclosure also encompasses a method of controlling a computing system using motion-capture data.
At 122, the method may first include determining whether curved-gesture UI control is to be activated. Motion capture may be used for a variety of other purposes in connection with controlling a computer, and thus it may be desirable to employ a procedure to specifically initiate the functionality for controlling cursor movement based on curved gestures, such as the dialing hand gesture described above. Having a delineated mode and operational context for curved-gesture control may in some cases simplify interpretation of motion-capture data. In one example, curved-gesture UI control is initiated via audio, such as detection of a vocal command or other sound produced by the user. In another example, visual cues may be provided on a display screen (e.g., display 14) to prompt the user to move their hand to a particular location or in a particular way to activate the curved-gesture control. In still another example, a specific gesture may be used to enter and engage the curved-gesture UI control.
At 124, the method includes obtaining a plurality of positions for an object. As discussed above, the positions of the object may be obtained from motion-capture data and are representative of the object moving in three-dimensional space, such as a user's hand making a curved gesture.
At 126, the method includes determining a curved-gesture center point based on at least some of the object positions obtained at 124. In some examples, as discussed with reference to
At 128, the method includes determining an angular property using the curved-gesture center point as an origin. As in the example of
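Tying the steps together, one purely illustrative realization (the list, scaling constant, weighting scheme, and helper behavior are all assumptions rather than the disclosed implementation) might resemble the following:

```python
import math
import numpy as np

ITEMS_PER_FULL_CIRCLE = 200  # hypothetical scaling: one full hand circle = 200 items

def navigate_cursor(position_history, items, cursor_index):
    """Estimate a curved-gesture center point from a history of hand positions
    (oldest first), derive the change in angular position between the two most
    recent positions, and move the cursor through the items accordingly."""
    positions = np.asarray(position_history, dtype=float)
    # Determine the curved-gesture center point (step 126) as a recency-weighted average.
    weights = 0.8 ** np.arange(len(positions) - 1, -1, -1)
    center = np.average(positions, axis=0, weights=weights)
    # Determine an angular property (step 128) using the center point as an origin.
    def angle(p):
        return math.atan2(p[1] - center[1], p[0] - center[0])
    d = angle(positions[-1]) - angle(positions[-2])
    d = math.atan2(math.sin(d), math.cos(d))  # wrap into (-pi, pi]
    # Navigate the cursor based on the angular property, clamped to the list bounds.
    step = int(round(d / (2.0 * math.pi) * ITEMS_PER_FULL_CIRCLE))
    return max(0, min(len(items) - 1, cursor_index + step))
```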
Disengagement from the curved-gesture control described herein may occur in various ways. In one approach, once a desired item has been reached via cursor positioning, maintaining the cursor position for a period of time causes the item to be selected and thus disengages the curved-gesture control. In another example, a specific arm gesture may be used, such as thrusting the user's arm toward the display screen 14 in
As described with reference to
Logic subsystem 202 may include one or more physical devices configured to execute one or more instructions. In particular, logic subsystem 202 is shown executing a copy of user interface instructions 208, which are contained on data-holding subsystem 204. As indicated in the figure and described in connection with the previous examples, object positions may be obtained from motion-capture model data 210 stored on data-holding subsystem 204 and provided to the user interface instructions for processing. Processing may occur as previously described, in order to exert control over a user interface displayed on display subsystem 206. In particular, the object positions obtained from the model may be interpreted to control navigation of a cursor on a screen of display subsystem 206.
Continuing more generally with logic subsystem 202, it may be configured to execute one or more instructions that are part of one or more programs, routines, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more devices, or otherwise arrive at a desired result. The logic subsystem may include one or more processors that are configured to execute software instructions. Additionally or alternatively, the logic subsystem may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. The logic subsystem may optionally include individual components that are distributed throughout two or more devices, which may be remotely located in some embodiments.
Data-holding subsystem 204 may include one or more devices configured to hold data and/or instructions executable by the logic subsystem to implement the herein described methods and processes. When such methods and processes are implemented, the state of data-holding subsystem 204 may be transformed (e.g., to hold different data). Data-holding subsystem 204 may include removable media and/or built-in devices. Data-holding subsystem 204 may include optical memory devices, semiconductor memory devices, and/or magnetic memory devices, among others. Data-holding subsystem 204 may include devices with one or more of the following characteristics: volatile, nonvolatile, dynamic, static, read/write, read-only, random access, sequential access, location addressable, file addressable, and content addressable. In some embodiments, logic subsystem 202 and data-holding subsystem 204 may be integrated into one or more common devices, such as an application specific integrated circuit or a system on a chip.
Display subsystem 206 may be used to present a visual representation of data held by data-holding subsystem 204. As the herein described methods and processes change the data held by the data-holding subsystem, and thus transform the state of the data-holding subsystem, the state of display subsystem 206 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 206 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic subsystem 202 and/or data-holding subsystem 204 in a shared enclosure, or such display devices may be peripheral display devices.
It is to be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated may be performed in the sequence illustrated, in other sequences, in parallel, or in some cases omitted. Likewise, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and nonobvious combinations and subcombinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.