This invention relates generally to user interfaces for computerized systems, and specifically to user interfaces that are based on three-dimensional sensing.
Many different types of user interface devices and methods are currently available. Common tactile interface devices include the computer keyboard, mouse and joystick. Touch screens detect the presence and location of a touch by a finger or other object within the display area. Infrared remote controls are widely used, and “wearable” hardware devices have been developed, as well, for purposes of remote control.
Computer interfaces based on three-dimensional (3D) sensing of parts of the user's body have also been proposed. For example, PCT International Publication WO 03/071410, whose disclosure is incorporated herein by reference, describes a gesture recognition system using depth-perceptive sensors. A 3D sensor provides position information, which is used to identify gestures created by a body part of interest. The gestures are recognized based on a shape of a body part and its position and orientation over an interval. The gesture is classified for determining an input into a related electronic device.
As another example, U.S. Pat. No. 7,348,963, whose disclosure is incorporated herein by reference, describes an interactive video display system, in which a display screen displays a visual image, and a camera captures 3D information regarding an object in an interactive area located in front of the display screen. A computer system directs the display screen to change the visual image in response to changes in the object.
Documents incorporated by reference in the present patent application are to be considered an integral part of the application except that to the extent any terms are defined in these incorporated documents in a manner that conflicts with the definitions made explicitly or implicitly in the present specification, only the definitions in the present specification should be considered.
The description above is presented as a general overview of related art in this field and should not be construed as an admission that any of the information it contains constitutes prior art against the present patent application.
There is provided, in accordance with an embodiment of the present invention, a method, including receiving, by a computer executing a non-tactile three-dimensional (3D) user interface, a set of multiple 3D coordinates representing a gesture by a hand positioned within a field of view of a sensing device coupled to the computer, the gesture including a first motion in a first direction along a selected axis in space, followed by a second motion in a second direction, opposite to the first direction, along the selected axis, and transitioning the non-tactile 3D user interface from a first state to a second state upon detecting completion of the gesture.
There is also provided, in accordance with an embodiment of the present invention, a method, including receiving, by a computer executing a non-tactile three-dimensional (3D) user interface, a set of multiple 3D coordinates representing a gesture by a hand positioned within a field of view of a sensing device coupled to the computer, the gesture including a rising motion along a vertical axis in space, and transitioning the non-tactile 3D user interface from a locked state to an unlocked state upon detecting completion of the gesture.
There is additionally provided, in accordance with an embodiment of the present invention, a method, including associating, in a computer executing a non-tactile three-dimensional (3D) user interface, multiple regions, including at least first and second regions, within a field of view of a sensing device coupled to the computer with respective states of the non-tactile 3D user interface, including at least first and second states associated respectively with the first and second regions, receiving a set of multiple 3D coordinates representing a hand movement from the first region to the second region, and responsively to the movement, transitioning the non-tactile 3D user interface from the first state to the second state.
There is further provided, in accordance with an embodiment of the present invention, an apparatus, including a three-dimensional (3D) optical sensor having a field of view and coupled to a computer executing a non-tactile three-dimensional (3D) user interface, and an illumination element that, when illuminated, is configured to be visible to a user when the user is positioned within the field of view.
There is additionally provided, in accordance with an embodiment of the present invention, an apparatus, including a sensing device, and a computer executing a non-tactile three-dimensional (3D) user interface and configured to receive, from the sensing device, a set of multiple 3D coordinates representing a gesture by a hand positioned within a field of view of the sensing device, the gesture including a first motion in a first direction along a selected axis in space, followed by a second motion in a second direction, opposite to the first direction, along the selected axis, and to transition the non-tactile 3D user interface from a first state to a second state upon detecting completion of the gesture.
There is also provided, in accordance with an embodiment of the present invention, an apparatus, including a sensing device, and a computer executing a non-tactile three-dimensional (3D) user interface and configured to receive, from the sensing device, a set of multiple 3D coordinates representing a gesture by a hand positioned within a field of view of the sensing device, the gesture including a rising motion along a vertical axis in space, and to transition the non-tactile 3D user interface from a locked state to an unlocked state upon detecting completion of the gesture.
There is alternatively provided, in accordance with an embodiment of the present invention, an apparatus, including a sensing device, and a computer executing a non-tactile three-dimensional (3D) user interface and configured to associate multiple regions, including at least first and second regions, within a field of view of the sensing device with respective states of the non-tactile 3D user interface, including at least first and second states associated respectively with the first and second regions, to receive a set of multiple 3D coordinates representing a hand movement from the first region to the second region, and responsively to the movement, to transition the non-tactile 3D user interface from the first state to the second state.
There is also provided, in accordance with an embodiment of the present invention, a computer software product including a non-transitory computer-readable medium, in which program instructions are stored, which instructions, when read by a computer executing a non-tactile three-dimensional (3D) user interface, cause the computer to receive, from a sensing device, a set of multiple 3D coordinates representing a gesture by a hand positioned within a field of view of the sensing device, the gesture including a first motion in a first direction along a selected axis in space, followed by a second motion in a second direction, opposite to the first direction, along the selected axis, and to transition the non-tactile 3D user interface from a first state to a second state upon detecting completion of the gesture.
There is additionally provided, in accordance with an embodiment of the present invention, a computer software product including a non-transitory computer-readable medium, in which program instructions are stored, which instructions, when read by a computer executing a non-tactile three-dimensional (3D) user interface, cause the computer to receive, from a sensing device, a set of multiple 3D coordinates representing a gesture by a hand positioned within a field of view of the sensing device, the gesture including a rising motion along a vertical axis in space, and to transition the non-tactile 3D user interface from a locked state to an unlocked state upon detecting completion of the gesture.
There is further provided, in accordance with an embodiment of the present invention, a computer software product including a non-transitory computer-readable medium, in which program instructions are stored, which instructions, when read by a computer executing a non-tactile three-dimensional (3D) user interface, cause the computer to associate multiple regions, including at least first and second regions, within a field of view of a sensing device with respective states of the non-tactile 3D user interface, including at least first and second states associated respectively with the first and second regions, to receive a set of multiple 3D coordinates representing a hand movement from the first region to the second region, and responsively to the movement, to transition the non-tactile 3D user interface from the first state to the second state.
The disclosure is herein described, by way of example only, with reference to the accompanying drawings, wherein:
When using physical tactile input devices such as buttons, rollers or touch screens, a user typically engages and disengages control of a user interface by touching and/or manipulating the physical device. Embodiments of the present invention describe gestures for engaging and disengaging control of a user interface that is based on three-dimensional (3D) sensing, by a 3D sensor, of motion or change of position of one or more body parts, typically a hand, of the user (such an interface is referred to herein as a non-tactile 3D user interface). Gestures described herein include focus gestures and unlock gestures. A focus gesture enables the user to engage (i.e., take control of) an inactive non-tactile 3D user interface. An unlock gesture enables the user to engage a locked non-tactile 3D user interface, much as pressing a specific sequence of keys unlocks a locked cellular phone. In some embodiments, the non-tactile 3D user interface conveys visual feedback to the user performing the focus and the unlock gestures.
Embodiments of the present invention also describe methods for conveying visual feedback to the user, when the user's hand disengages from the non-tactile 3D user interface. The visual feedback typically alerts the user in an unobtrusive manner, thereby enhancing the user's experience.
As described supra, a 3D sensor captures 3D information regarding an object, typically a body part such as a hand, in an interactive area located in front of a display screen. Since the 3D sensor typically has a fixed field of view, a computer can track and accept inputs from the user when the body part is positioned within the field of view. Embodiments of the present invention describe methods and systems for conveying visual feedback to the user when the body part is within the field of view, outside the field of view, and when the user is at the periphery of the field of view.
Computer 26, executing 3D user interface 20, processes data generated by device 24 in order to reconstruct a 3D map of user 22. The term “3D map” refers to a set of 3D coordinates measured, by way of example, with reference to a generally horizontal X-axis 32 in space, a generally vertical Y-axis 34 in space and a depth Z-axis 36 in space, based on device 24. The 3D coordinates represent the surface of a given object, in this case the user's body. In one embodiment, device 24 projects a pattern of spots onto the object and captures an image of the projected pattern. Computer 26 then computes the 3D coordinates of points on the surface of the user's body by triangulation, based on transverse shifts of the spots in the pattern. Methods and devices for this sort of triangulation-based 3D mapping using a projected pattern are described, for example, in PCT International Publications WO 2007/043036, WO 2007/105205 and WO 2008/120217, whose disclosures are incorporated herein by reference. Alternatively, interface 20 may use other methods of 3D mapping, using single or multiple cameras or other types of sensors, as are known in the art.
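By way of illustration only, the triangulation relation underlying such a depth computation might be sketched as follows. The focal length, baseline and reference distance used here are hypothetical calibration values and do not describe the actual parameters of device 24 or of the incorporated publications.

```python
# Illustrative sketch (not the actual implementation of computer 26): recovering
# the depth of a projected spot from its transverse shift relative to a known
# reference plane, using a standard triangulation relation. The focal length,
# baseline and reference depth below are hypothetical calibration values.

def spot_depth(pixel_shift, focal_length_px=570.0, baseline_mm=75.0,
               reference_depth_mm=1000.0):
    """Estimate the depth of a spot from its shift against the reference pattern."""
    # Disparity that a spot on the reference plane would exhibit.
    reference_disparity = focal_length_px * baseline_mm / reference_depth_mm
    # The observed transverse shift moves the disparity away from that value.
    disparity = reference_disparity + pixel_shift
    if disparity <= 0:
        raise ValueError("shift places the spot beyond the working range")
    return focal_length_px * baseline_mm / disparity

print(round(spot_depth(+5.0)))  # ~895 mm: closer than the reference plane
print(round(spot_depth(-5.0)))  # ~1132 mm: farther than the reference plane
```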
Computer 26 typically comprises a general-purpose computer processor, which is programmed in software to carry out the functions described hereinbelow. The software may be downloaded to the processor in electronic form, over a network, for example, or it may alternatively be provided on non-transitory tangible media, such as optical, magnetic, or electronic memory media. Alternatively or additionally, some or all of the functions of the image processor may be implemented in dedicated hardware, such as a custom or semi-custom integrated circuit or a programmable digital signal processor (DSP). Although computer 26 is shown in
As another alternative, these processing functions may be carried out by a suitable processor that is integrated with display 28 (in a television set, for example) or with any other suitable sort of computerized device, such as a game console or media player. The sensing functions of device 24 may likewise be integrated into the computer or other computerized apparatus that is to be controlled by the sensor output.
In the embodiments described herein, user interface 20 comprises the following individual states:
In embodiments of the present invention, the state of user interface 20 typically comprises a combination of the states described supra. The states of user interface 20 may include:
To engage 3D user interface 20 while positioned in a field of view of sensing device 24, user 22 may perform a focus gesture. A well-designed focus gesture typically strikes a balance between ease of use and a low incidence of false positives (i.e., a physical gesture that the computer incorrectly identifies as a focus gesture). On the one hand, a simple focus gesture (for example, pointing an index finger) may be easy to learn, but may be prone to generating excessive false positives. On the other hand, a complex focus gesture may generate few false positives, but may also be difficult for the user to learn. Typically, a well-designed focus gesture has a false positive rate of less than 2%.
A focus gesture comprising multiple physical motions can be broken down into a series of steps performed in a specific sequence. In some embodiments, computer 26 conveys feedback to user 22 during and/or upon completion of each of the steps. The focus gesture steps should typically be distinct enough so as not to interfere with the operation of user interface 20 (i.e., by generating false positives). For example, if user interface 20 is configured to show movies from a movie library stored on the computer, the focus gesture steps should be sufficiently different from the gestures used to control the movie library (e.g., gestures that select and control playback of a movie).
A focus gesture, used to engage user interface 20, may include a “push” gesture or a “wave” gesture. As described in detail hereinbelow, the focus gesture may comprise user 22 performing, with hand 30, a first motion in a first direction along a selected axis (in space), followed by a second motion in a second direction, opposite to the first direction, along the selected axis. In some embodiments, computer 26 conveys visual feedback to user 22 as the user performs and/or completes each step of the focus gesture. The feedback can help train user 22 to perform the focus gesture correctly.
For example, the minimum focus gesture speed and the focus gesture distance may comprise 10 centimeters per second and 10 centimeters, respectively. The forward and backward motions of the push gesture are indicated by arrows 40. As user 22 moves hand 30 along Z-axis 36, computer 26 receives, from sensing device 24, a set of multiple 3D coordinates representing the forward and backward motion of the hand (i.e., the push gesture). Upon detecting completion of the push gesture, computer 26 can transition user interface 20 from a first state (e.g., not tracked) to a second state (e.g., tracked).
The side-to-side swiping motions of the wave gesture are indicated by arrows 50. As user 22 moves hand 30 along X-axis 32, computer 26 receives, from sensing device 24, a set of multiple 3D coordinates representing the side-to-side motion of the hand (i.e., the wave gesture). Upon detecting completion of the wave gesture, computer 26 can transition user interface 20 from a first state (e.g., not tracked) to a second state (e.g., tracked).
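Purely as an illustrative sketch, a single detector parameterized by the selected axis could cover both the push gesture (along Z-axis 36) and the wave gesture (along X-axis 32). The distance and speed thresholds below echo the 10 centimeter and 10 centimeter-per-second example values given above; the function names and the sample format are hypothetical and not part of the described embodiments.

```python
# Illustrative sketch only: detecting a two-phase focus gesture (motion in one
# direction along a selected axis, followed by the opposite motion) from a
# buffer of timestamped 3D hand coordinates. Thresholds mirror the example
# values above; names and the data format are hypothetical.

MIN_DISTANCE_CM = 10.0
MIN_SPEED_CM_S = 10.0

def _phase_ok(samples, axis, sign):
    """One phase: at least MIN_DISTANCE_CM of travel in the given direction,
    covered at no less than MIN_SPEED_CM_S."""
    if len(samples) < 2:
        return False
    travel = sign * (samples[-1][1][axis] - samples[0][1][axis])
    elapsed = samples[-1][0] - samples[0][0]
    return travel >= MIN_DISTANCE_CM and elapsed > 0 and travel / elapsed >= MIN_SPEED_CM_S

def detect_focus_gesture(samples, axis):
    """samples: list of (time_s, (x, y, z)) tuples; axis: 0 for a wave, 2 for a push.
    Returns True if a motion and its opposite motion are both present."""
    for turn in range(1, len(samples) - 1):
        first, second = samples[:turn + 1], samples[turn:]
        # Try both orderings, since the first motion may be in either direction.
        if (_phase_ok(first, axis, +1) and _phase_ok(second, axis, -1)) or \
           (_phase_ok(first, axis, -1) and _phase_ok(second, axis, +1)):
            return True
    return False
```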
The visual feedback may comprise a first visual feedback prior to the first gesture of the focus gesture, a second visual feedback subsequent to the first gesture, and a third visual feedback subsequent to the second gesture of the focus gesture. For example, prior to performing the focus gesture, user interface 20 can illuminate LED 60 in a first color, e.g., red. After user 22 performs the first gesture of the focus gesture (e.g., by pushing hand 30 towards sensing device 24 to initiate the push gesture or by swiping the hand from a first side to a second side to initiate the wave gesture), computer 26 can illuminate LED 60 in a second color, e.g., orange. Finally, after user 22 completes the second gesture of the focus gesture (e.g., by pulling hand 30 back from sensing device 24 to complete the push gesture or by swiping the hand back from the second side to the first side to complete the wave gesture), the computer can illuminate LED 60 in a third color, e.g., green, and engage user 22 with user interface 20.
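A minimal sketch of such staged feedback appears below. The set_led_color callable stands in for whatever mechanism actually drives visual feedback device 60, and the stage names are hypothetical.

```python
# Illustrative mapping of focus-gesture progress to the red/orange/green
# feedback described above. set_led_color() is a stand-in for the interface
# that actually drives visual feedback device 60.

FEEDBACK_COLORS = {
    "idle": "red",             # before the focus gesture begins
    "first_motion": "orange",  # after the first motion (push in, or swipe across)
    "complete": "green",       # after the second, returning motion
}

def update_focus_feedback(stage, set_led_color):
    set_led_color(FEEDBACK_COLORS.get(stage, "red"))

# Example usage with a dummy LED driver:
update_focus_feedback("first_motion", lambda color: print("LED:", color))
```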
In an additional embodiment, visual feedback device 60 may comprise a single-color LED that blinks (i.e., illuminates and darkens) as user 22 performs a focus gesture. During periods between focus gestures, the single LED may be either constantly illuminated or darkened. In an alternative embodiment, visual feedback device 60 may comprise multiple LEDs that convey visual feedback to user 22 before, during and after performing the focus gesture (e.g., separate red, yellow and green LEDs, as in a traffic light).
In a further embodiment, visual feedback device 60 may comprise a vertical or a circular array of LEDs. When user interface 20 is inactive, computer 26 darkens the LEDs. As user 22 performs the focus gesture, computer 26 can illuminate an additional LED with each individual gesture (e.g., the side-to-side swipe of hand 30 for the wave gesture or the forward and backward motion of hand 30 for the push gesture). After user 22 completes the focus gesture, computer 26 can illuminate all the LEDs.
In still yet another embodiment, visual feedback device 60 may comprise a horizontal array of LEDs. When user interface 20 is disengaged, computer 26 can illuminate a single LED in the horizontal array. As user 22 performs the focus gesture, computer 26 can toggle the LEDs in the horizontal array to mimic the motion of hand 30.
Additionally or alternatively, computer 26 may alter a feedback item presented on display 28 while user 22 performs the focus gesture. For example, the feedback item may comprise a status icon 62 that either changes its appearance or displays an animation (e.g., a triangular shape within the icon that alters shape) during the focus gesture.
In alternative embodiments, the feedback item may comprise a circle 64 on display 28, and computer 26 can change the size of the feedback item depending on the location of hand 30 during the focus gesture. For example, as user 22 moves hand 30 closer to sensing device 24 to initiate a push gesture, computer 26 may increase the diameter of circle 64, or vice versa. Visual feedback conveyed by computer 26 may also include an indication as to the speed of the gesture (i.e. whether user 22 is moving hand 30 at an appropriate speed or not), and/or an indication when the hand has moved a sufficient distance to complete one of the focus gesture steps.
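One possible way to derive such size feedback is sketched below; the distance range and pixel sizes are hypothetical and merely illustrate the closer-hand, larger-circle mapping.

```python
# Illustrative sketch: scaling the on-screen feedback circle with the hand's
# distance from sensing device 24 during a push gesture. The distance range
# and pixel sizes are hypothetical.

def circle_diameter_px(hand_z_cm, near_cm=30.0, far_cm=80.0,
                       min_px=40, max_px=200):
    """Closer hand -> larger circle; values outside the range are clamped."""
    z = min(max(hand_z_cm, near_cm), far_cm)
    fraction = (far_cm - z) / (far_cm - near_cm)  # 1.0 at near_cm, 0.0 at far_cm
    return int(min_px + fraction * (max_px - min_px))

print(circle_diameter_px(35.0))  # large circle: hand near the device
print(circle_diameter_px(75.0))  # small circle: hand far from the device
```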
In further embodiments, the feedback may comprise a text message presented on display 28. For example, after user 22 performs the first gesture of the push gesture (i.e., moving hand 30 forward), computer 26 can present a text message such as “Pull hand back to gain control”.
In embodiments of the invention, states of 3D user interface 20 may include the locked and the unlocked states. The user interface may transition to the locked state either automatically after a defined period of inactivity, or after user 22 explicitly performs a lock gesture. While in the locked state, user 22 is disengaged from user interface 20. In some embodiments, user 22 performs the focus gesture followed by an unlock gesture, thereby unlocking and engaging user interface 20.
Alternatively, user interface 20 may implement a spatially aware gesture lock, where the state of the user interface may be unlocked for a specific region including user 22, but locked for other regions in proximity to the specific region (and therefore locked for any individuals in the other regions).
Examples of unlock gestures include an “up” gesture (e.g., raising hand 30 a specified distance), a sequence of two wave gestures, and a sequence of two push gestures, as described in detail hereinbelow.
As user 22 elevates hand 30 along Y-axis 34, computer 26 receives, from sensing device 24, a set of multiple 3D coordinates representing the rising motion of the hand (i.e., the up gesture). Upon detecting completion of the up gesture, computer 26 can transition user interface 20 from a locked state to an unlocked state.
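As an illustrative sketch, such an up gesture might be recognized as a net rise of the hand along Y-axis 34; the 20 centimeter threshold below is a hypothetical stand-in for the specified distance mentioned above.

```python
# Illustrative sketch only: recognizing the "up" unlock gesture as a net rise
# of the hand along the vertical axis. The 20 cm threshold is hypothetical.

def detect_up_gesture(samples, min_rise_cm=20.0):
    """samples: list of (time_s, (x, y, z)) tuples; True if the hand rose enough."""
    if len(samples) < 2:
        return False
    y_values = [pos[1] for _, pos in samples]
    # Net rise from the lowest recent position to the current position.
    return y_values[-1] - min(y_values) >= min_rise_cm
```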
While locked, the state of user interface 20 is typically not-tracked, locked and inactive. To unlock user interface 20, user 22 typically first performs a focus gesture, which transitions user interface 20 to the tracked, locked and inactive state. Upon detecting the focus gesture, computer 26 may convey feedback (either on display 28 or on device 60) prompting user 22 to elevate hand 30 to unlock the user interface (i.e., to perform the unlock gesture). Performing the unlock gesture engages the user interface, and transitions user interface 20 to the tracked, unlocked and active state.
As described supra, user 22 can unlock user interface 20 by performing two focus gestures sequentially. After detecting the first focus gesture, computer 26 transitions user interface 20 from the not-tracked, locked and inactive state to the tracked, locked and inactive state, and after detecting the second focus gesture, the computer transitions the non-tactile 3D user interface to the tracked, unlocked and active state. Thus, for example, unlocking user interface 20 may comprise user 22 performing either two wave gestures, two push gestures, or a combination of the two.
Computer 26 may also convey a first visual feedback to the user performing the unlock gesture, and a second visual feedback subsequent to the user performing the unlock gesture. For example, visual feedback device 60 may comprise a red LED that illuminates when user interface 20 is in the locked state, and a green LED that illuminates when the user interface is in the unlocked state. In an alternative embodiment, visual feedback device 60 may comprise a multi-colored LED that changes color upon computer 26 transitioning user interface 20 to either the locked or the unlocked state.
In an additional embodiment, computer 26 may convey visual feedback via a feedback item presented on display 28. For example, the feedback item may comprise an icon 34 that is configured to show either a closed padlock or a closed eye when user interface 20 is in the locked state, and either an open padlock or an open eye when the user interface is in the unlocked state.
As hand 30 interacts with 3D user interface 20, the position of the hand may influence the state of the non-tactile 3D user interface. For example, if user 22 drops hand 30 to the user's lap, then the user may disengage from the non-tactile 3D user interface, with computer 26 transitioning user interface 20 from the tracked, active and unlocked state to the not-tracked, inactive and unlocked state. Upon detecting user 22 performing a focus gesture, computer 26 can transition user interface 20 back to the tracked, active and unlocked state, thereby reengaging the user interface.
In operation, computer 26 defines multiple regions comprising at least a first region and a second region within a field of view of sensing device 24, and associates each of the defined regions with a state of user interface 20. As user 22 moves hand 30 from the first region (e.g., region 80) to the second region (e.g., region 82), computer 26 receives a set of multiple 3D coordinates representing the hand moving from the first region to the second region. Upon detecting hand 30 moving from the first region to the second region, computer 26 responsively transitions 3D user interface 20 from the state associated with the first region to the state associated with the second region.
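The region-to-state association can be sketched, purely as an illustration, as follows; the boundary values and state labels are hypothetical examples patterned on the active, pre-drop and dropped regions described below, and the apply_state callable stands in for however user interface 20 actually changes state.

```python
# Illustrative sketch: associating vertical regions of the field of view with
# user-interface states and transitioning when the tracked hand crosses a
# boundary. Boundaries and state labels are hypothetical examples.

REGION_STATES = {
    "active":   ("tracked", "unlocked", "active"),        # e.g. region 80
    "pre_drop": ("tracked", "unlocked", "inactive"),       # e.g. region 84
    "dropped":  ("not_tracked", "unlocked", "inactive"),   # e.g. region 86
}
BOUNDARIES_CM = (110.0, 90.0)  # hypothetical Y thresholds between the regions

def region_for(hand_y_cm):
    if hand_y_cm >= BOUNDARIES_CM[0]:
        return "active"
    if hand_y_cm >= BOUNDARIES_CM[1]:
        return "pre_drop"
    return "dropped"

def transition(current_region, hand_y_cm, apply_state):
    """apply_state is a stand-in for updating the state of user interface 20."""
    new_region = region_for(hand_y_cm)
    if new_region != current_region:
        apply_state(REGION_STATES[new_region])
    return new_region
```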
While hand 30 is within active region 80, user interface 20 may respond to gestures performed by the hand, as the state of the 3D user interface is tracked, active and unlocked. In some embodiments, computer 26 may convey visual feedback to user 22 indicating a current state of 3D user interface 20. For example, while positioned within region 80, hand 30 may interact with user interface 20 via a softbar 82, as shown in
If user 22 lowers hand 30 from region 80 to pre-drop region 84, computer 26 transitions the state of user interface 20 to the tracked, inactive and unlocked state. While hand 30 is in region 84, the hand is disengaged from user interface 20 (i.e., the non-tactile 3D user interface may ignore gestures from the hand), but the non-tactile 3D user interface is still tracking the hand.
In some embodiments, while hand 30 is within region 84, computer 26 moves the vertical position of softbar 82 in synchronization with the hand, as indicated by arrows 88 in
To reengage user interface 20 while hand 30 is within region 84, user 22 can elevate the hand back to region 80, and computer 26 transitions the non-tactile 3D user interface back to the tracked, active and unlocked state. However, since the state of user interface 20 is not-tracked, inactive and unlocked while hand 30 is within region 86, the user may be required to perform a focus gesture in order to reengage the 3D user interface.
In some embodiments, active region 80 comprises a static region whose mid-point is at the vertical coordinate where user 22 performed the focus gesture, thereby engaging user interface 20. In alternative embodiments, computer 26 may adjust boundaries of the regions responsively to recent movements of hand 30. For example, computer 26 may employ temporal filtering (or another similar algorithm) to update the mid-point, by periodically averaging the vertical coordinates of hand 30 when the hand performed recent gestures. By updating the mid-point, computer 26 may also update the upper and lower boundaries of active region 80. Computer 26 can also use temporal filtering to assist in defining a horizontal (i.e., a side-to-side) active zone (not shown).
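One way to realize such temporal filtering is sketched below; the window size and region height are hypothetical, and the simple averaging scheme merely illustrates the idea of re-centering the active region on the hand's recent positions.

```python
# Illustrative sketch of the temporal-filtering idea: periodically re-centering
# active region 80 on an average of the hand's recent vertical positions.
# Window size and region height are hypothetical.

from collections import deque

class ActiveRegion:
    def __init__(self, midpoint_cm, height_cm=30.0, window=50):
        self.midpoint_cm = midpoint_cm   # initially where the focus gesture occurred
        self.height_cm = height_cm
        self._recent_y = deque(maxlen=window)

    def observe(self, hand_y_cm):
        """Record the hand's vertical coordinate during recent gestures."""
        self._recent_y.append(hand_y_cm)

    def update_boundaries(self):
        """Re-center the region and return its new lower and upper limits."""
        if self._recent_y:
            self.midpoint_cm = sum(self._recent_y) / len(self._recent_y)
        return (self.midpoint_cm - self.height_cm / 2,
                self.midpoint_cm + self.height_cm / 2)
```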
In some instances, hand 30 may engage user interface 20, but user 22 may be physically unable to lower the hand to pre-drop region 84. For example, user 22 may be sitting on a couch with hand 30 resting on an armrest. In response, computer 26 may “compress” regions 80, 84 and 86, thereby repositioning pre-drop region 84 to an appropriate (i.e., a reachable) level. Alternatively, computer 26 may present feedback, prompting user 22 to elevate hand 30 in order to engage the non-tactile 3D user interface. For example, computer 26 may only present the top half of softbar 82 at the bottom of display 28, thereby prompting the user to elevate hand 30 to a higher vertical position (at which point the softbar may be displayed in its entirety).
If user 22 lowers hand 30 from pre-drop region 84 to dropped region 86, computer 26 transitions user interface 20 from state 94 to a not-tracked, unlocked and inactive state 96. In some embodiments, computer 26 may activate a first time-out timer upon transitioning user interface 20 to state 94. If user 22 does not elevate hand 30 back to region 80 within a first specified time period, computer 26 transitions user interface 20 to state 96.
Computer 26 transitions user interface 20 from state 96 back to state 92 responsively to detecting user 22 performing a focus gesture, as described supra. Upon transitioning to state 96, computer 26 activates a second time-out timer. If computer 26 does not detect a focus gesture within a second specified period (e.g., ten seconds), then the computer transitions user interface 20 from state 96 to a not-tracked, locked and inactive state 98.
Computer 26 transitions user interface 20 from state 98 to state 92 (i.e., unlocking and reengaging the user interface) upon detecting user 22 performing a focus gesture, followed by an unlock gesture. Upon detecting user 22 performing the focus gesture, computer 26 transitions user interface 20 from state 98 to a tracked, locked and inactive state 100. When computer 26 transitions user interface 20 to state 100, the computer activates a third time-out timer. If computer 26 detects user 22 either moving hand 30 out of active region 80 (the hand is within region 80 when performing the focus gesture) or not performing an unlock gesture within a third specified period, then the computer transitions user interface 20 from state 100 back to state 98. Finally, if user 22 performs an unlock gesture within the third specified period, then computer 26 transitions user interface 20 from state 100 to state 92.
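The session flow just described can be summarized, purely as an illustrative sketch, by the small state machine below. The numeric state labels follow the description above; the event names are hypothetical, and apart from the ten-second example the time-out durations are arbitrary placeholders.

```python
# Illustrative state machine for the session flow described above (states 92,
# 94, 96, 98 and 100). Event names and time-out durations (other than the
# ten-second example) are hypothetical.

import time

STATES = {
    92:  ("tracked", "unlocked", "active"),
    94:  ("tracked", "unlocked", "inactive"),
    96:  ("not_tracked", "unlocked", "inactive"),
    98:  ("not_tracked", "locked", "inactive"),
    100: ("tracked", "locked", "inactive"),
}

class Session:
    def __init__(self):
        self.state = 92
        self.deadline = None  # an external loop would emit "timeout" when this passes

    def _arm(self, seconds):
        self.deadline = time.monotonic() + seconds

    def on_event(self, event):
        if self.state == 92 and event == "hand_to_pre_drop":
            self.state = 94
            self._arm(5.0)    # first time-out timer (hypothetical five seconds)
        elif self.state == 94 and event == "hand_to_active":
            self.state = 92
        elif self.state == 94 and event in ("timeout", "hand_to_dropped"):
            self.state = 96
            self._arm(10.0)   # second time-out timer (ten-second example above)
        elif self.state == 96 and event == "focus_gesture":
            self.state = 92
        elif self.state == 96 and event == "timeout":
            self.state = 98
        elif self.state == 98 and event == "focus_gesture":
            self.state = 100
            self._arm(5.0)    # third time-out timer (hypothetical five seconds)
        elif self.state == 100 and event in ("timeout", "hand_left_region"):
            self.state = 98
        elif self.state == 100 and event == "unlock_gesture":
            self.state = 92
        return STATES[self.state]
```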
In the example shown in
Field of view 110 comprises a central field of view 116 bounded by peripheral fields of view 118 and 120. In some embodiments, user 22 sees the entire illumination element (e.g., a circle) when the user is within central field of view 116. As user 22 moves to peripheral fields of view 118 or 120, the user may only see part of the illumination element (e.g., a semicircle). In other words, if user 22 can see any part of the illumination element, then optical sensor 111 can see the user.
In some embodiments, conical shaft 114 may include a customized slit (not shown), thereby enabling 3D sensing device 24 to present the illumination emanating from the illumination element as a specific shape (e.g., a company logo). In alternative embodiments, illumination element 112 may comprise multiple (e.g., three) LEDs positioned on 3D sensing device 24, where each of the multiple LEDs has a different field of view. When user 22 sees all the LEDs, the user is within field of view 110.
In an additional embodiment, illumination element 112 may be configured to convey visual feedback to user 22 indicating a current state of 3D user interface 20. In some embodiments, illumination element 112 may comprise multiple LEDs that are configured to present session indications (e.g., the state of user interface 20) to different individuals within field of view 110. For example, each of the multiple LEDs may comprise mechanical and/or optical elements that restrict each of the LEDs to a different field of view. Embodiments comprising multiple LEDs with different fields of view can also be used to convey feedback to multiple individuals within field of view 110.
In further embodiments, computer 26 may associate each state of user interface 20 with a specific color, and illumination element 112 may be configured to illuminate in different colors, based on the current state of the non-tactile 3D user interface. For example, while user interface 20 is in the tracked, unlocked and active state 92 with respect to user 22, computer 26 can illuminate illumination element 112 in green. Likewise, while user interface 20 is in the tracked, unlocked and inactive state 94 with respect to user 22, computer 26 can illuminate illumination element 112 in yellow, thereby conveying an indication to the user to raise hand 30 to region 80.
In still yet another embodiment, field of view 110 may comprise multiple regions (not shown), where additional users (not shown) in each region have a different state with user interface 20. For example, a first given user positioned in a first given region can be in the locked state with 3D user interface 20, and a second given user in a second given region can be in the active state with the non-tactile 3D user interface. Additionally, illumination element 112 can be configured to convey different visual feedback (e.g., different colors) to each of the regions, depending on their state with user interface 20. For example, the visual feedback conveyed to the first given user may comprise a red illumination, indicating that the first given user is positioned in a region that is in the not-tracked, unlocked and inactive state 94. Therefore, to engage user interface 20, the first given user may be required to perform an unlock gesture.
It will be appreciated that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and subcombinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art.
This application claims the benefit of U.S. Provisional Patent Application 61/422,239, filed Dec. 13, 2010, which is incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
4550250 | Mueller et al. | Oct 1985 | A |
4789921 | Aho | Dec 1988 | A |
4988981 | Zimmerman et al. | Jan 1991 | A |
5264836 | Rubin | Nov 1993 | A |
5495576 | Ritchey | Feb 1996 | A |
5588139 | Lanier et al. | Dec 1996 | A |
5594469 | Freeman et al. | Jan 1997 | A |
5846134 | Latypov | Dec 1998 | A |
5852672 | Lu | Dec 1998 | A |
5862256 | Zetts et al. | Jan 1999 | A |
5864635 | Zetts et al. | Jan 1999 | A |
5870196 | Lulli et al. | Feb 1999 | A |
5917937 | Szeliski et al. | Jun 1999 | A |
5973700 | Taylor et al. | Oct 1999 | A |
6002808 | Freeman | Dec 1999 | A |
6005548 | Latypov et al. | Dec 1999 | A |
6064387 | Canaday et al. | May 2000 | A |
6072494 | Nguyen | Jun 2000 | A |
6084979 | Kanade et al. | Jul 2000 | A |
6111580 | Kazama et al. | Aug 2000 | A |
6191773 | Maruno et al. | Feb 2001 | B1 |
6215890 | Matsuo et al. | Apr 2001 | B1 |
6243054 | DeLuca | Jun 2001 | B1 |
6252988 | Ho | Jun 2001 | B1 |
6256033 | Nguyen | Jul 2001 | B1 |
6262740 | Lauer et al. | Jul 2001 | B1 |
6345111 | Yamaguchi et al. | Feb 2002 | B1 |
6345893 | Fateh et al. | Feb 2002 | B2 |
6452584 | Walker et al. | Sep 2002 | B1 |
6456262 | Bell | Sep 2002 | B1 |
6507353 | Huard et al. | Jan 2003 | B1 |
6512838 | Rafii et al. | Jan 2003 | B1 |
6519363 | Su et al. | Feb 2003 | B1 |
6559813 | DeLuca et al. | May 2003 | B1 |
6681031 | Cohen et al. | Jan 2004 | B2 |
6686921 | Rushmeier et al. | Feb 2004 | B1 |
6690370 | Ellenby et al. | Feb 2004 | B2 |
6741251 | Malzbender | May 2004 | B2 |
6791540 | Baumberg | Sep 2004 | B1 |
6803928 | Bimber et al. | Oct 2004 | B2 |
6853935 | Satoh et al. | Feb 2005 | B2 |
6857746 | Dyner | Feb 2005 | B2 |
6951515 | Ohshima et al. | Oct 2005 | B2 |
6977654 | Malik et al. | Dec 2005 | B2 |
7003134 | Covell et al. | Feb 2006 | B1 |
7013046 | Kawamura et al. | Mar 2006 | B2 |
7023436 | Segawa et al. | Apr 2006 | B2 |
7042440 | Pryor et al. | May 2006 | B2 |
7042442 | Kanevsky et al. | May 2006 | B1 |
7151530 | Roeber et al. | Dec 2006 | B2 |
7170492 | Bell | Jan 2007 | B2 |
7215815 | Honda | May 2007 | B2 |
7227526 | Hildreth et al. | Jun 2007 | B2 |
7257237 | Luck et al. | Aug 2007 | B1 |
7259747 | Bell | Aug 2007 | B2 |
7264554 | Bentley | Sep 2007 | B2 |
7289227 | Smetak et al. | Oct 2007 | B2 |
7289645 | Yamamoto et al. | Oct 2007 | B2 |
7295697 | Satoh | Nov 2007 | B1 |
7301648 | Foxlin | Nov 2007 | B2 |
7302099 | Zhang et al. | Nov 2007 | B2 |
7333113 | Gordon | Feb 2008 | B2 |
7340077 | Gokturk | Mar 2008 | B2 |
7340399 | Friedrich et al. | Mar 2008 | B2 |
7348963 | Bell | Mar 2008 | B2 |
7358972 | Gordon et al. | Apr 2008 | B2 |
7370883 | Basir et al. | May 2008 | B2 |
7427996 | Yonezawa et al. | Sep 2008 | B2 |
7428542 | Fink et al. | Sep 2008 | B1 |
7474256 | Ohta et al. | Jan 2009 | B2 |
7508377 | Pihlaja et al. | Mar 2009 | B2 |
7526120 | Gokturk et al. | Apr 2009 | B2 |
7536032 | Bell | May 2009 | B2 |
7573480 | Gordon | Aug 2009 | B2 |
7576727 | Bell | Aug 2009 | B2 |
7580572 | Bang et al. | Aug 2009 | B2 |
7590941 | Wee et al. | Sep 2009 | B2 |
7688998 | Tuma et al. | Mar 2010 | B2 |
7696876 | Dimmer et al. | Apr 2010 | B2 |
7724250 | Ishii et al. | May 2010 | B2 |
7762665 | Vertegaal et al. | Jul 2010 | B2 |
7774155 | Sato et al. | Aug 2010 | B2 |
7812842 | Gordon | Oct 2010 | B2 |
7821541 | Delean | Oct 2010 | B2 |
7840031 | Albertson et al. | Nov 2010 | B2 |
7844914 | Andre et al. | Nov 2010 | B2 |
7925549 | Looney et al. | Apr 2011 | B2 |
7971156 | Albertson et al. | Jun 2011 | B2 |
8154781 | Kroll et al. | Apr 2012 | B2 |
8166421 | Magal et al. | Apr 2012 | B2 |
8183977 | Matsumoto | May 2012 | B2 |
8194921 | Kongqiao et al. | Jun 2012 | B2 |
8214098 | Murray et al. | Jul 2012 | B2 |
8218211 | Kroll et al. | Jul 2012 | B2 |
8368647 | Lin | Feb 2013 | B2 |
8405604 | Pryor et al. | Mar 2013 | B2 |
8416276 | Kroll et al. | Apr 2013 | B2 |
8446459 | Fang et al. | May 2013 | B2 |
8448083 | Migos et al. | May 2013 | B1 |
8462199 | Givon | Jun 2013 | B2 |
8514221 | King et al. | Aug 2013 | B2 |
8514251 | Hildreth et al. | Aug 2013 | B2 |
8625882 | Backlund et al. | Jan 2014 | B2 |
20020057383 | Iwamura | May 2002 | A1 |
20020071607 | Kawamura et al. | Jun 2002 | A1 |
20020158873 | Williamson | Oct 2002 | A1 |
20030057972 | Pfaff et al. | Mar 2003 | A1 |
20030063775 | Rafii et al. | Apr 2003 | A1 |
20030088463 | Kanevsky | May 2003 | A1 |
20030156756 | Gokturk et al. | Aug 2003 | A1 |
20030185444 | Honda | Oct 2003 | A1 |
20030227453 | Beier et al. | Dec 2003 | A1 |
20030235341 | Gokturk et al. | Dec 2003 | A1 |
20040046744 | Rafii et al. | Mar 2004 | A1 |
20040104935 | Williamson | Jun 2004 | A1 |
20040135744 | Bimber et al. | Jul 2004 | A1 |
20040155962 | Marks | Aug 2004 | A1 |
20040174770 | Rees | Sep 2004 | A1 |
20040183775 | Bell | Sep 2004 | A1 |
20040184640 | Bang et al. | Sep 2004 | A1 |
20040184659 | Bang et al. | Sep 2004 | A1 |
20040193413 | Wilson et al. | Sep 2004 | A1 |
20040222977 | Bear et al. | Nov 2004 | A1 |
20040258314 | Hashimoto | Dec 2004 | A1 |
20050031166 | Fujimura et al. | Feb 2005 | A1 |
20050088407 | Bell et al. | Apr 2005 | A1 |
20050089194 | Bell | Apr 2005 | A1 |
20050110964 | Bell et al. | May 2005 | A1 |
20050122308 | Bell et al. | Jun 2005 | A1 |
20050162381 | Bell et al. | Jul 2005 | A1 |
20050190972 | Thomas et al. | Sep 2005 | A1 |
20050254726 | Fuchs et al. | Nov 2005 | A1 |
20050265583 | Covell et al. | Dec 2005 | A1 |
20060010400 | Dehlin et al. | Jan 2006 | A1 |
20060092138 | Kim et al. | May 2006 | A1 |
20060110008 | Vertegaal et al. | May 2006 | A1 |
20060115155 | Lui et al. | Jun 2006 | A1 |
20060139314 | Bell | Jun 2006 | A1 |
20060149737 | Du et al. | Jul 2006 | A1 |
20060159344 | Shao et al. | Jul 2006 | A1 |
20060187196 | Underkoffler et al. | Aug 2006 | A1 |
20060239670 | Cleveland | Oct 2006 | A1 |
20060248475 | Abrahamsson | Nov 2006 | A1 |
20070078552 | Rosenberg | Apr 2007 | A1 |
20070130547 | Boillot | Jun 2007 | A1 |
20070154116 | Shieh | Jul 2007 | A1 |
20070230789 | Chang et al. | Oct 2007 | A1 |
20070285554 | Givon | Dec 2007 | A1 |
20080030460 | Hildreth et al. | Feb 2008 | A1 |
20080062123 | Bell | Mar 2008 | A1 |
20080094371 | Forstall et al. | Apr 2008 | A1 |
20080123940 | Kundu et al. | May 2008 | A1 |
20080150890 | Bell et al. | Jun 2008 | A1 |
20080150913 | Bell et al. | Jun 2008 | A1 |
20080170776 | Albertson et al. | Jul 2008 | A1 |
20080236902 | Imaizumi | Oct 2008 | A1 |
20080252596 | Bell et al. | Oct 2008 | A1 |
20080256494 | Greenfield | Oct 2008 | A1 |
20080260250 | Vardi | Oct 2008 | A1 |
20080281583 | Slothouber et al. | Nov 2008 | A1 |
20080287189 | Rabin | Nov 2008 | A1 |
20090009593 | Cameron et al. | Jan 2009 | A1 |
20090027335 | Ye | Jan 2009 | A1 |
20090027337 | Hildreth | Jan 2009 | A1 |
20090031240 | Hildreth | Jan 2009 | A1 |
20090040215 | Afzulpurkar et al. | Feb 2009 | A1 |
20090073117 | Tsurumi et al. | Mar 2009 | A1 |
20090077504 | Bell | Mar 2009 | A1 |
20090078473 | Overgard et al. | Mar 2009 | A1 |
20090083122 | Angell et al. | Mar 2009 | A1 |
20090083622 | Chien et al. | Mar 2009 | A1 |
20090096783 | Shpunt et al. | Apr 2009 | A1 |
20090183125 | Magal et al. | Jul 2009 | A1 |
20090195392 | Zalewski | Aug 2009 | A1 |
20090228841 | Hildreth | Sep 2009 | A1 |
20090256817 | Perlin et al. | Oct 2009 | A1 |
20090284542 | Baar et al. | Nov 2009 | A1 |
20090297028 | De Haan | Dec 2009 | A1 |
20100002936 | Khomo et al. | Jan 2010 | A1 |
20100007717 | Spektor et al. | Jan 2010 | A1 |
20100034457 | Berliner et al. | Feb 2010 | A1 |
20100036717 | Trest | Feb 2010 | A1 |
20100053151 | Marti et al. | Mar 2010 | A1 |
20100071965 | Hu et al. | Mar 2010 | A1 |
20100083189 | Arlein et al. | Apr 2010 | A1 |
20100103106 | Chui | Apr 2010 | A1 |
20100149096 | Migos et al. | Jun 2010 | A1 |
20100164897 | Morin et al. | Jul 2010 | A1 |
20100177933 | Willmann et al. | Jul 2010 | A1 |
20100199228 | Latta et al. | Aug 2010 | A1 |
20100199231 | Markovic et al. | Aug 2010 | A1 |
20100234094 | Gagner et al. | Sep 2010 | A1 |
20100235786 | Meizels et al. | Sep 2010 | A1 |
20100295781 | Alameh et al. | Nov 2010 | A1 |
20110006978 | Yuan | Jan 2011 | A1 |
20110007035 | Shai | Jan 2011 | A1 |
20110018795 | Jang | Jan 2011 | A1 |
20110029918 | Yoo et al. | Feb 2011 | A1 |
20110052006 | Gurman et al. | Mar 2011 | A1 |
20110081072 | Kawasaki et al. | Apr 2011 | A1 |
20110087970 | Swink et al. | Apr 2011 | A1 |
20110144543 | Tsuzuki et al. | Jun 2011 | A1 |
20110164032 | Shadmi | Jul 2011 | A1 |
20110164141 | Tico et al. | Jul 2011 | A1 |
20110193939 | Vassigh et al. | Aug 2011 | A1 |
20110211754 | Litvak et al. | Sep 2011 | A1 |
20110225536 | Shams et al. | Sep 2011 | A1 |
20110227820 | Haddick et al. | Sep 2011 | A1 |
20110231757 | Haddick et al. | Sep 2011 | A1 |
20110242102 | Hess | Oct 2011 | A1 |
20110248914 | Sherr | Oct 2011 | A1 |
20110254765 | Brand | Oct 2011 | A1 |
20110254798 | Adamson et al. | Oct 2011 | A1 |
20110260965 | Kim et al. | Oct 2011 | A1 |
20110261058 | Luo | Oct 2011 | A1 |
20110279397 | Rimon et al. | Nov 2011 | A1 |
20110291926 | Gokturk et al. | Dec 2011 | A1 |
20110292036 | Sali et al. | Dec 2011 | A1 |
20110293137 | Gurman et al. | Dec 2011 | A1 |
20110296353 | Ahmed et al. | Dec 2011 | A1 |
20110310010 | Hoffnung et al. | Dec 2011 | A1 |
20120001875 | Li et al. | Jan 2012 | A1 |
20120038550 | Lemmey et al. | Feb 2012 | A1 |
20120078614 | Galor et al. | Mar 2012 | A1 |
20120117514 | Kim et al. | May 2012 | A1 |
20120169583 | Rippel et al. | Jul 2012 | A1 |
20120202569 | Maizels et al. | Aug 2012 | A1 |
20120204133 | Guendelman et al. | Aug 2012 | A1 |
20120223882 | Galor et al. | Sep 2012 | A1 |
20120249416 | Maciocci et al. | Oct 2012 | A1 |
20120268369 | Kikkeri | Oct 2012 | A1 |
20120275680 | Omi | Nov 2012 | A1 |
20120313848 | Galor et al. | Dec 2012 | A1 |
20120320080 | Giese et al. | Dec 2012 | A1 |
20130002801 | Mock | Jan 2013 | A1 |
20130014052 | Frey et al. | Jan 2013 | A1 |
20130044053 | Galor et al. | Feb 2013 | A1 |
20130055120 | Galor et al. | Feb 2013 | A1 |
20130055150 | Galor | Feb 2013 | A1 |
20130058565 | Rafii et al. | Mar 2013 | A1 |
20130106692 | Maizels et al. | May 2013 | A1 |
20130107021 | Maizels et al. | May 2013 | A1 |
20130155070 | Luo | Jun 2013 | A1 |
20130207920 | McCann et al. | Aug 2013 | A1 |
20140108930 | Asnis | Apr 2014 | A1 |
Number | Date | Country |
---|---|---|
9935633 | Jul 1999 | WO |
03071410 | Aug 2003 | WO |
2004107272 | Dec 2004 | WO |
2005003948 | Jan 2005 | WO |
2005094958 | Oct 2005 | WO |
2007043036 | Apr 2007 | WO |
2007078639 | Jul 2007 | WO |
2007105205 | Sep 2007 | WO |
2007132451 | Nov 2007 | WO |
2007135376 | Nov 2007 | WO |
2008120217 | Oct 2008 | WO |
2012011044 | Jan 2012 | WO |
2012020380 | Feb 2012 | WO |
2012107892 | Aug 2012 | WO |
Entry |
---|
Ross Miller, “Kinect for Xbox 360 review”, Nov. 4, 2010, Engadget. |
International Application PCT/IB2012/050577 Search Report dated Aug. 6, 2012. |
U.S. Appl. No. 12/683,452 Official Action dated Sep. 7, 2012. |
Koutek, M., “Scientific Visualization in Virtual Reality: Interaction Techniques and Application Development”, PhD Thesis, Delft University of Technology, 264 pages, Jan. 2003. |
Azuma et al., “Recent Advances in Augmented Reality”, IEEE Computer Graphics and Applications, vol. 21, issue 6, pp. 34-47, Nov. 2001. |
Breen et al., “Interactive Occlusion and Collision of Real and Virtual Objects in Augmented Reality”, Technical Report ECRC-95-02, ECRC, Munich, Germany, 22 pages, year 1995. |
Burdea et al., “A Distributed Virtual Environment with Dextrous Force Feedback”, Proceedings of Interface to Real and Virtual Worlds Conference, pp. 255-265, Mar. 1992. |
Gargallo et al., “Bayesian 3D Modeling from Images Using Multiple Depth Maps”, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 2, pp. 885-891, Jun. 20-25, 2005. |
Gobbetti et al., “VB2: an Architecture for Interaction in Synthetic Worlds”, Proceedings of the 6th Annual ACM Symposium on User Interface Software and Technology (UIST'93), pp. 167-178, Nov. 3-5, 1993. |
Ohta et al., “Share-Z: Client/Server Depth Sensing for See-Through Head-Mounted Displays”, Presence: Teleoperators and Virtual Environments, vol. 11, No. 2, pp. 176-188, Apr. 2002. |
Schmalstieg et al., “The Studierstube Augmented Reality Project”, Presence: Teleoperators and Virtual Environments, vol. 11, No. 1, pp. 33-54, Feb. 2002. |
Sun et al., “SRP Based Natural Interaction Between Real and Virtual Worlds in Augmented Reality”, Proceedings of the International Conference on Cyberworlds (CW'08), pp. 117-124, Sep. 22-24, 2008. |
U.S. Appl. No. 13/541,786, filed Jul. 5, 2012. |
U.S. Appl. No. 13/592,352, filed Aug. 23, 2012. |
U.S. Appl. No. 13/584,831, filed Aug. 14, 2012. |
U.S. Appl. No. 13/592,369, filed Aug. 23, 2012. |
Hart, D., U.S. Appl. No. 09/616,606 “Method and System for High Resolution , Ultra Fast 3-D Imaging” filed Jul. 14, 2000. |
International Application PCT/IL2007/000306 Search Report dated Oct. 2, 2008. |
International Application PCT/IL2007/000574 Search Report dated Sep. 10, 2008. |
International Application PCT/IL2006/000335 Preliminary Report on Patentability dated Apr. 24, 2008. |
Avidan et al., “Trajectory triangulation: 3D reconstruction of moving points from amonocular image sequence”, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 22, No. 4, pp. 348-3537, Apr. 2000. |
Leclerc et al., “The direct computation of height from shading”, The Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pp. 552-558, USA, Jun. 1991. |
Zhang et al., “Shape from intensity gradient”, IEEE Transactions on Systems, Man and Cybernetics—Part A: Systems and Humans, vol. 29, No. 3, pp. 318-325, May 1999. |
Zhang et al., “Height recovery from intensity gradients”, IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pp. 508-513, Jun. 21-23, 1994. |
Horn, B., “Height and gradient from shading”, International Journal of Computer Vision, vol. 5, No. 1, pp. 37-76, Aug. 1990. |
Bruckstein, A., “On shape from shading”, Computer Vision, Graphics & Image Processing, vol. 44, pp. 139-154, year 1988. |
Zhang et al., “Rapid Shape Acquisition Using Color Structured Light and Multi-Pass Dynamic Programming”, 1st International Symposium on 3D Data Processing Visualization and Transmission (3DPVT), Italy, Jul. 2002. |
Besl, P., “Active, Optical Range Imaging Sensors”, Machine vision and applications, vol. 1, pp. 127-152, year 1988. |
Horn et al., “Toward optimal structured light patterns”, Proceedings of International Conference on Recent Advances in 3D Digital Imaging and Modeling, pp. 28-37, Ottawa, Canada, May 1997. |
Goodman, J.W., “Statistical Properties of Laser Speckle Patterns”, Laser Speckle and Related Phenomena, pp. 9-75, Springer-Verlag, Berlin Heidelberg, 1975. |
Asada et al., “Determining Surface Orientation by Projecting a Stripe Pattern”, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 10, No. 5, pp. 749-754, Sep. 1988. |
Winkelbach et al., “Shape from Single Stripe Pattern Illumination”, Luc Van Gool (Editor), (DAGM 2002), Lecture Notes in Computer Science 2449, p. 240-247, Springer 2002. |
Koninckx et al., “Efficient, Active 3D Acquisition, based on a Pattern-Specific Snake”, Luc Van Gool (Editor), (DAGM 2002), Lecture Notes in Computer Science 2449, pp. 557-565, Springer 2002. |
Kimmel et al., “Analyzing and synthesizing images by evolving curves with the Osher-Sethian method”, International Journal of Computer Vision, vol. 24, No. 1, pp. 37-56, year 1997. |
Zigelman et al., “Texture mapping using surface flattening via multi-dimensional scaling”, IEEE Transactions on Visualization and Computer Graphics, vol. 8, No. 2, pp. 198-207, Apr. 2002. |
Dainty, J.C., “Introduction”, Laser Speckle and Related Phenomena, pp. 1-7, Springer-Verlag, Berlin Heidelberg, 1975. |
Mendlovic et al., “Composite harmonic filters for scale, projection and shift invariant pattern recognition”, Applied Optics Journal, vol. 34, No. 2, Jan. 10, 1995. |
Fua et al., “Human Shape and Motion Recovery Using Animation Models”, 19th Congress, International Society for Photogrammetry and Remote Sensing, Amsterdam, The Netherlands, Jul. 2000. |
Allard et al., “Marker-less Real Time 3D modeling for Virtual Reality”, Immersive Projection Technology, Iowa State University, year 2004. |
Howe et al., “Bayesian Reconstruction of 3D Human Motion from Single-Camera Video”, Advanced in Neural Information Processing Systems, vol. 12, pp. 820-826, USA 1999. |
Li et al., “Real-Time 3D Motion Tracking with Known Geometric Models”, Real-Time Imaging Journal, vol. 5, pp. 167-187, Academic Press 1999. |
Grammalidis et al., “3-D Human Body Tracking from Depth Images Using Analysis by Synthesis”, Proceedings of the IEEE International Conference on Image Processing (ICIP2001), pp. 185-188, Greece, Oct. 7-10, 2001. |
Segen et al., “Shadow gestures: 3D hand pose estimation using a single camera”, Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition, pp. 479-485, Fort Collins, USA, 1999. |
Vogler et al., “ASL recognition based on a coupling between HMMs and 3D motion analysis”, Proceedings of IEEE International Conference on Computer Vision, pp. 363-369, Mumbai, India, 1998. |
Nam et al., “Recognition of Hand Gestures with 3D, Nonlinear Arm Movements”, Pattern Recognition Letters, vol. 18, No. 1, pp. 105-113, Elsevier Science B.V. 1997. |
Nesbat, S., “A System for Fast, Full-Text Entry for Small Electronic Devices”, Proceedings of the 5th International Conference on Multimodal Interfaces, ICMI 2003, Vancouver, Nov. 5-7, 2003. |
Ascension Technology Corporation, “Flock of Birds: Real-Time Motion Tracking”, 2008. |
Segen et al., “Human-computer interaction using gesture recognition and 3D hand tracking”, ICIP 98, Proceedings of the IEEE International Conference on Image Processing, vol. 3, pp. 188-192, Oct. 4-7, 1998. |
Dekker, L., “Building Symbolic Information for 3D Human Body Modeling from Range Data”, Proceedings of the Second International Conference on 3D Digital Imaging and Modeling, IEEE computer Society, pp. 388-397, 1999. |
Holte et al., “Gesture Recognition using a Range Camera”, Technical Report CVMT-07-01 ISSN 1601-3646, Feb. 2007. |
Cheng et al., “Articulated Human Body Pose Inference from Voxel Data Using a Kinematically Constrained Gaussian Mixture Model”, CVPR EHuM2: 2nd Workshop on Evaluation of Articulated Human Motion and Pose Estimation, 2007. |
U.S. Appl. No. 61/523,404, filed Aug. 15, 2011. |
U.S. Appl. No. 61/504,339, filed Jul. 5, 2011. |
U.S. Appl. No. 61/521,448, filed Aug. 9, 2011. |
U.S. Appl. No. 61/523,349, filed Aug. 14, 2011. |
Primesense, “Natural Interaction”, YouTube Presentation, Jun. 9, 2010 http://www.youtube.com/watch?v=TzLKsex43z1˜. |
U.S. Appl. No. 13/423,322, filed Mar. 19, 2012. |
U.S. Appl. No. 13/423,314, filed Mar. 19, 2012. |
Tobii Technology, “The World Leader in Eye Tracking and Gaze Interaction”, Mar. 2012. |
Noveron, “Madison video eyewear”, year 2012. |
U.S. Appl. No. 12/762,336 Official Action dated May 15, 2012. |
Manning et al., “Foundations of Statistical Natural Language Processing”, chapters 6,7,9 and 12, MIT Press 1999. |
Commission Regulation (EC) No. 1275/2008, Official Journal of the European Union, Dec. 17, 2008. |
Arm Ltd., “AMBA Specification: AHB”, Version 2, pp. 35-92, year 1999. |
Primesense Corporation, “PrimeSensor NITE 1.1”, USA, year 2010. |
Microvision Inc., “PicoP® Display Engine—How it Works”, 1996-2012. |
Bleiwess et al., “Fusing Time-of-Flight Depth and Color for Real-Time Segmentation and Tracking”, Dyn3D 2009, Lecture Notes in Computer Science 5742, pp. 58-69, Jena, Germany, Sep. 9, 2009. |
Bleiwess et al., “Markerless Motion Capture Using a Single Depth Sensor”, SIGGRAPH Asia 2009, Yokohama, Japan, Dec. 16-19, 2009. |
Bevilacqua et al., “People Tracking Using a Time-Of-Flight Depth Sensor”, Proceedings of the IEEE International Conference on Video and Signal Based Surveillance, Sydney, Australia, Nov. 22-24, 2006. |
Bradski, G., “Computer Vision Face Tracking for Use in a Perceptual User Interface”, Intel Technology Journal, vol. 2, issue 2 (2nd Quarter 2008). |
Comaniciu et al., "Kernel-Based Object Tracking", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, No. 5, pp. 564-577, May 2003.
GestureTek Inc., "Gesture Control Solutions for Consumer Devices", Canada, 2009.
Gokturk et al., "A Time-Of-Flight Depth Sensor—System Description, Issues and Solutions", Proceedings of the 2004 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW'04), vol. 3, pp. 35, Jun. 27-Jul. 2, 2004.
Grest et al., "Single View Motion Tracking by Depth and Silhouette Information", SCIA 2007—Scandinavian Conference on Image Analysis, Lecture Notes in Computer Science 4522, pp. 719-729, Aalborg, Denmark, Jun. 10-14, 2007.
Haritaoglu et al., "Ghost 3d: Detecting Body Posture and Parts Using Stereo", Proceedings of the IEEE Workshop on Motion and Video Computing (Motion'02), pp. 175-180, Orlando, USA, Dec. 5-6, 2002.
Haritaoglu et al., "W4S: A Real-Time System for Detecting and Tracking People in 2½D", ECCV '98—5th European Conference on Computer Vision, vol. 1407, pp. 877-892, Freiburg, Germany, Jun. 2-6, 1998.
Harville, M., "Stereo Person Tracking with Short and Long Term Plan-View Appearance Models of Shape and Color", Proceedings of the IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS-2005), pp. 522-527, Como, Italy, Sep. 15-16, 2005.
Holte, M., "Fusion of Range and Intensity Information for View Invariant Gesture Recognition", IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW '08), pp. 1-7, Anchorage, USA, Jun. 23-28, 2008.
Kaewtrakulpong et al., "An Improved Adaptive Background Mixture Model for Real-Time Tracking with Shadow Detection", Proceedings of the 2nd European Workshop on Advanced Video Based Surveillance Systems (AVBS'01), Kingston, UK, Sep. 2001.
Kolb et al., "ToF-Sensors: New Dimensions for Realism and Interactivity", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 1-6, Anchorage, USA, Jun. 23-28, 2008.
Kolsch et al., "Fast 2D Hand Tracking with Flocks of Features and Multi-Cue Integration", IEEE Workshop on Real-Time Vision for Human Computer Interaction (at CVPR'04), Washington, USA, Jun. 27-Jul. 2, 2004.
Krumm et al., "Multi-Camera Multi-Person Tracking for EasyLiving", 3rd IEEE International Workshop on Visual Surveillance, Dublin, Ireland, Jul. 1, 2000.
Leens et al., "Combining Color, Depth, and Motion for Video Segmentation", ICVS 2009—7th International Conference on Computer Vision Systems, Liege, Belgium, Oct. 13-15, 2009.
MacCormick et al., "Partitioned Sampling, Articulated Objects, and Interface-Quality Hand Tracking", ECCV '00—Proceedings of the 6th European Conference on Computer Vision, Part II, pp. 3-19, Dublin, Ireland, Jun. 26-Jul. 1, 2000.
Malassiotis et al., "Real-Time Hand Posture Recognition Using Range Data", Image and Vision Computing, vol. 26, No. 7, pp. 1027-1037, Jul. 2, 2008.
Morano et al., "Structured Light Using Pseudorandom Codes", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, issue 3, pp. 322-327, Mar. 1998.
Munoz-Salinas et al., "People Detection and Tracking Using Stereo Vision and Color", Image and Vision Computing, vol. 25, No. 6, pp. 995-1007, Jun. 1, 2007.
Nanda et al., "Visual Tracking Using Depth Data", Proceedings of the 2004 Conference on Computer Vision and Pattern Recognition Workshop, vol. 3, Washington, USA, Jun. 27-Jul. 2, 2004.
Scharstein et al., "High-Accuracy Stereo Depth Maps Using Structured Light", IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 195-202, Madison, USA, Jun. 2003.
Shi et al., "Good Features to Track", IEEE Conference on Computer Vision and Pattern Recognition, pp. 593-600, Seattle, USA, Jun. 21-23, 1994.
Siddiqui et al., "Robust Real-Time Upper Body Limb Detection and Tracking", Proceedings of the 4th ACM International Workshop on Video Surveillance and Sensor Networks, Santa Barbara, USA, Oct. 27, 2006.
Softkinetic S.A., IISU™—3D Gesture Recognition Platform for Developers of 3D Applications, Brussels, Belgium, 2007-2010.
Sudderth et al., "Visual Hand Tracking Using Nonparametric Belief Propagation", IEEE Workshop on Generative Model Based Vision at CVPR'04, Washington, USA, Jun. 27-Jul. 2, 2004.
Tsap, L., "Gesture-Tracking in Real Time with Dynamic Regional Range Computation", Real-Time Imaging, vol. 8, issue 2, pp. 115-126, Apr. 2002.
Xu et al., "A Multi-Cue-Based Human Body Tracking System", Proceedings of the 5th International Conference on Computer Vision Systems (ICVS 2007), Germany, Mar. 21-24, 2007.
Xu et al., "Human Detection Using Depth and Gray Images", Proceedings of the IEEE Conference on Advanced Video and Signal Based Surveillance (AVSS'03), Miami, USA, Jul. 21-22, 2003.
Yilmaz et al., "Object Tracking: A Survey", ACM Computing Surveys, vol. 38, No. 4, article 13, Dec. 2006.
Zhu et al., "Controlled Human Pose Estimation From Depth Image Streams", IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 1-8, Anchorage, USA, Jun. 23-27, 2008.
International Application PCT/IB2010/051055 Search Report dated Sep. 1, 2010.
La Viola, J. Jr., "Whole-Hand and Speech Input in Virtual Environments", Computer Science Department, Florida Atlantic University, USA, 1996.
Martell, C., "Form: An Experiment in the Annotation of the Kinematics of Gesture", Dissertation, Computer and Information Science, University of Pennsylvania, 2005.
U.S. Appl. No. 12/352,622 Official Action dated Mar. 31, 2011.
Prime Sense Inc., "Prime Sensor™ NITE 1.1 Framework Programmer's Guide", Version 1.2, year 2009.
PrimeSense Corporation, "PrimeSensor Reference Design 1.08", USA, year 2010.
International Application PCT/IB2012/050577 filed on Feb. 9, 2012.
U.S. Appl. No. 61/615,403, filed Mar. 26, 2012.
U.S. Appl. No. 61/603,949, filed Feb. 28, 2012.
U.S. Appl. No. 61/525,771, filed Aug. 21, 2011.
U.S. Appl. No. 13/295,106, filed Nov. 14, 2011.
U.S. Appl. No. 61/538,970, filed Sep. 26, 2011.
U.S. Appl. No. 61/526,696, filed Aug. 24, 2011.
U.S. Appl. No. 61/526,692, filed Aug. 24, 2011.
U.S. Appl. No. 13/314,207, filed Dec. 8, 2011.
U.S. Appl. No. 12/352,622 Official Action dated Sep. 30, 2011.
International Application PCT/IB2011/053192 Search Report dated Dec. 6, 2011.
Gordon et al., "The Use of Dense Stereo Range Data in Augmented Reality", Proceedings of the 1st International Symposium on Mixed and Augmented Reality (ISMAR), Darmstadt, Germany, pp. 1-10, Sep. 30-Oct. 1, 2002.
Agrawala et al., "The Two-User Responsive Workbench: Support for Collaboration Through Individual Views of a Shared Space", Proceedings of the 24th Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 97), Los Angeles, USA, pp. 327-332, Aug. 3-8, 1997.
Harman et al., "Rapid 2D-to-3D Conversion", Proceedings of SPIE Conference on Stereoscopic Displays and Virtual Reality Systems, vol. 4660, pp. 78-86, Jan. 21-23, 2002.
Hoff et al., "Analysis of Head Pose Accuracy in Augmented Reality", IEEE Transactions on Visualization and Computer Graphics, vol. 6, No. 4, pp. 319-334, Oct.-Dec. 2000.
Poupyrev et al., "The Go-Go Interaction Technique: Non-Linear Mapping for Direct Manipulation in VR", Proceedings of the 9th Annual ACM Symposium on User Interface Software and Technology (UIST '96), Washington, USA, pp. 79-80, Nov. 6-8, 1996.
Wexelblat et al., "Virtual Reality Applications and Explorations", Academic Press Inc., San Diego, USA, 262 pages, year 1993.
U.S. Appl. No. 13/161,508 Office Action dated Apr. 10, 2013.
U.S. Appl. No. 12/683,452 Office Action dated Jun. 7, 2013.
Galor, M., U.S. Appl. No. 13/778,172 "Asymmetric Mapping in Tactile and Non-Tactile User Interfaces" filed Feb. 27, 2013.
Berenson et al., U.S. Appl. No. 13/904,050 "Zoom-Based Gesture User Interface" filed May 29, 2013.
Berenson et al., U.S. Appl. No. 13/904,052 "Gesture-Based Interface with Enhanced Features" filed May 29, 2013.
Bychkov et al., U.S. Appl. No. 13/849,514 "Gaze-Enhanced Virtual Touchscreen" filed Mar. 24, 2013.
Guendelman et al., U.S. Appl. No. 13/849,514 "Enhanced Virtual Touchpad" filed Mar. 24, 2013.
U.S. Appl. No. 13/244,490 Office Action dated Dec. 6, 2013.
U.S. Appl. No. 13/423,314 Office Action dated Dec. 4, 2013.
U.S. Appl. No. 13/423,322 Office Action dated Nov. 1, 2013.
U.S. Appl. No. 13/314,207 Office Action dated Aug. 5, 2013.
U.S. Appl. No. 13/161,508 Office Action dated Sep. 9, 2013.
International Application PCT/IB2013/052332 Search Report dated Aug. 26, 2013.
U.S. Appl. No. 13/541,786 Office Action dated Feb. 13, 2014.
U.S. Appl. No. 13/584,831 Office Action dated Mar. 20, 2014.
U.S. Appl. No. 13/314,207 Office Action dated Apr. 3, 2014.
U.S. Appl. No. 12/683,452 Office Action dated Jan. 22, 2014.
U.S. Appl. No. 13/423,322 Office Action dated Apr. 7, 2014.
U.S. Appl. No. 13/592,352 Office Action dated Feb. 13, 2014.
Nakamura et al., "Occlusion Detectable Stereo-Occlusion Patterns in Camera Matrix", Proceedings of the 1996 Conference on Computer Vision and Pattern Recognition (CVPR '96), pp. 371-378, Jun. 1996.
U.S. Appl. No. 13/592,352 Office Action dated May 7, 2014.
U.S. Appl. No. 12/721,582 Office Action dated Apr. 17, 2014.
U.S. Appl. No. 14/055,997 Office Action dated May 28, 2014.
U.S. Appl. No. 13/584,831 Office Action dated Jul. 8, 2014.
U.S. Appl. No. 13/423,314 Office Action dated Jul. 31, 2014.
U.S. Appl. No. 12/683,452 Office Action dated Jul. 16, 2014.
U.S. Appl. No. 13/423,314 Advisory Action dated Jun. 26, 2014.
Slinger et al., "Computer-Generated Holography as a Generic Display Technology", IEEE Computer, vol. 28, Issue 8, pp. 46-53, Aug. 2005.
Hilliges et al., "Interactions in the Air: Adding Further Depth to Interactive Tabletops", Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology, ACM, pp. 139-148, Oct. 2009.
U.S. Appl. No. 12/683,452 Office Action dated Nov. 21, 2014.
U.S. Appl. No. 14/055,997 Office Action dated Nov. 21, 2014.
U.S. Appl. No. 13/592,352 Office Action dated Oct. 2, 2014.
Scharstein, D., "Stereo Vision for View Synthesis", Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 852-858, year 1996.
Zhu et al., "Generation and Error Characterization of Parallel-Perspective Stereo Mosaics from Real Video", in Video Registration, Springer, US, chapter 4, pp. 72-105, year 2003.
Chai et al., "Parallel Projections for Stereo Reconstruction", Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 493-500, year 2000.
Evers et al., "Image-Based Rendering of Complex Scenes from a Multi-Camera Rig", IEE Proceedings on Vision, Image and Signal Processing, vol. 152, No. 4, pp. 470-480, Aug. 5, 2005.
Evers et al., "Image-Based Interactive Rendering with View Dependent Geometry", Computer Graphics Forum (Eurographics '03), vol. 22, No. 3, pp. 573-582, year 2003.
Kauff et al., "Depth Map Creation and Image-Based Rendering for Advanced 3DTV Services Providing Interoperability and Scalability", Signal Processing: Image Communication, vol. 22, No. 2, pp. 217-234, year 2007.
Related Publication: US 2012/0313848 A1, Dec. 2012, US.
Provisional Application: U.S. Appl. No. 61/422,239, filed Dec. 2010, US.