This invention relates generally to user interfaces for computerized systems, and specifically to user interfaces that are based on three-dimensional sensing.
Many different types of user interface devices and methods are currently available. Common tactile interface devices include a computer keyboard, a mouse and a joystick. Touch screens detect the presence and location of a touch by a finger or other object within the display area. Infrared remote controls are widely used, and “wearable” hardware devices have been developed, as well, for purposes of remote control.
Computer interfaces based on three-dimensional (3D) sensing of parts of a user's body have also been proposed. For example, PCT International Publication WO 03/071410, whose disclosure is incorporated herein by reference, describes a gesture recognition system using depth-perceptive sensors. A 3D sensor, typically positioned in a room in proximity to the user, provides position information, which is used to identify gestures created by a body part of interest. The gestures are recognized based on the shape of the body part and its position and orientation over an interval. The gesture is classified for determining an input into a related electronic device.
Documents incorporated by reference in the present patent application are to be considered an integral part of the application except that to the extent any terms are defined in these incorporated documents in a manner that conflicts with the definitions made explicitly or implicitly in the present specification, only the definitions in the present specification should be considered.
As another example, U.S. Pat. No. 7,348,963, whose disclosure is incorporated herein by reference, describes an interactive video display system, in which a display screen displays a visual image, and a camera captures 3D information regarding an object in an interactive area located in front of the display screen. A computer system directs the display screen to change the visual image in response to changes in the object.
Three-dimensional human interface systems may identify not only the user's hands, but also other parts of the body, including the head, torso and limbs. For example, U.S. Patent Application Publication 2010/0034457, whose disclosure is incorporated herein by reference, describes a method for modeling humanoid forms from depth maps. The depth map is segmented so as to find a contour of the body. The contour is processed in order to identify a torso and one or more limbs of the subject. An input is generated to control an application program running on a computer by analyzing a disposition of at least one of the identified limbs in the depth map.
Some user interface systems track the direction of the user's gaze. For example, U.S. Pat. No. 7,762,665, whose disclosure is incorporated herein by reference, describes a method of modulating operation of a device, comprising: providing an attentive user interface for obtaining information about an attentive state of a user; and modulating operation of a device on the basis of the obtained information, wherein the operation that is modulated is initiated by the device. Preferably, the information about the user's attentive state is eye contact of the user with the device that is sensed by the attentive user interface.
There is provided, in accordance with an embodiment of the present invention, a method, including presenting, by a computer, multiple interactive items on a display coupled to the computer, receiving an input indicating a direction of a gaze of a user of the computer, selecting, in response to the gaze direction, one of the multiple interactive items, subsequent to selecting the one of the interactive items, receiving a sequence of three-dimensional (3D) maps containing at least a hand of the user, analyzing the 3D maps to detect a gesture performed by the user, and performing an operation on the selected interactive item in response to the gesture.
There is also provided, in accordance with an embodiment of the present invention, an apparatus, including a sensing device configured to receive three-dimensional (3D) maps containing at least a head and a hand of a user, and to receive a two-dimensional (2D) image containing at least an eye of the user, a display, and a computer coupled to the sensing device and the display, and configured to present, on the display, multiple interactive items, to receive an input indicating a direction of a gaze performed by a user of the computer, to select, in response to the gaze direction, one of the multiple interactive items, to receive, subsequent to selecting the one of the interactive items, a sequence of the 3D maps containing at least the hand of the user, to analyze the 3D maps to detect a gesture performed by the user, and to perform an operation on the selected one of the interactive items in response to the gesture.
There is further provided, in accordance with an embodiment of the present invention, a computer software product, including a non-transitory computer-readable medium, in which program instructions are stored, which instructions, when read by a computer, cause the computer to present multiple interactive items on a display coupled to the computer, to receive an input indicating a direction of a gaze performed by a user of the computer, to select, in response to the gaze direction, one of the multiple interactive items, to receive, subsequent to selecting the one of the interactive items, a sequence of three-dimensional (3D) maps containing at least a hand of the user, to analyze the 3D maps to detect a gesture performed by the user, and to perform an operation on the selected one of the interactive items in response to the gesture.
The disclosure is herein described, by way of example only, with reference to the accompanying drawings, wherein:
When using physical tactile input devices such as buttons, rollers or touch screens, a user typically engages and disengages control of a user interface by touching and/or manipulating the physical device. Embodiments of the present invention describe gestures that can be performed by a user in order to engage interactive items presented on a display coupled to a computer executing a user interface that includes three-dimensional (3D) sensing.
As explained hereinbelow, a user can select a given one of the interactive items by gazing at the given interactive item, and manipulate the given interactive item by performing two-dimensional (2D) gestures on a tactile input device, such as a touchscreen or a touchpad. In some embodiments, the computer can define a virtual surface that emulates a touchpad or a touchscreen. The virtual surface can be implemented on a physical surface such as a book or a desktop, and the user can interact with the user interface by performing 2D gestures on the physical surface. In alternative embodiments, the virtual surface can be implemented in space in proximity to the user, and the user can interact with the computer by performing 3D gestures, as described hereinbelow.
In further embodiments, when configuring the physical surface as a virtual surface, the physical surface can be configured as a single input device, such as a touchpad. Alternatively, the physical surface can be divided into physical regions, and a respective functionality can be assigned to each of the physical regions. For example, a first physical region can be configured as a keyboard, a second physical region can be configured as a mouse, and a third physical region can be configured as a touchpad.
In additional embodiments, as described hereinbelow, a projector can be configured to project graphical images onto the physical surface, thereby enabling the physical surface to function as an interactive touchscreen on which visual elements can be drawn and manipulated in response to gestures performed by the user.
While
Computer 26 processes data generated by device 24 in order to reconstruct a 3D map of user 22. The term “3D map” (or equivalently, “depth map”) refers to a set of 3D coordinates representing a surface of a given object, in this case the user's body. In one embodiment, device 24 projects a pattern of spots onto the object and captures an image of the projected pattern. Computer 26 then computes the 3D coordinates of points on the surface of the user's body by triangulation, based on transverse shifts of the spots in the imaged pattern. The 3D coordinates are measured, by way of example, with reference to a generally horizontal X-axis 40, a generally vertical Y-axis 42 and a depth Z-axis 44, based on device 24. Methods and devices for this sort of triangulation-based 3D mapping using a projected pattern are described, for example, in PCT International Publications WO 2007/043036, WO 2007/105205 and WO 2008/120217, whose disclosures are incorporated herein by reference. Alternatively, system 20 may use other methods of 3D mapping, using single or multiple cameras or other types of sensors, as are known in the art.
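By way of illustration only, the following sketch shows how a depth coordinate might be recovered from the transverse shift of a projected spot by triangulation, and how a pixel at that depth might be back-projected onto X-axis 40, Y-axis 42 and Z-axis 44. The focal length, principal point and baseline values are assumed for the example and are not taken from the present disclosure.

```python
# Minimal triangulation sketch for a rectified projector-camera pair.
# All numeric parameters below are illustrative assumptions.

def depth_from_spot_shift(shift_px: float,
                          focal_px: float = 580.0,
                          baseline_m: float = 0.075) -> float:
    """Return depth Z (meters) for one projected spot.

    shift_px   -- transverse shift of the spot relative to the reference image
    focal_px   -- camera focal length in pixels (assumed value)
    baseline_m -- projector-to-camera baseline in meters (assumed value)
    """
    if shift_px <= 0:
        raise ValueError("shift must be positive for a point in front of the sensor")
    return focal_px * baseline_m / shift_px

def point_from_pixel(u: float, v: float, z: float,
                     cx: float = 320.0, cy: float = 240.0,
                     focal_px: float = 580.0) -> tuple:
    """Back-project pixel (u, v) at depth z onto the X/Y/Z axes of device 24."""
    x = (u - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return (x, y, z)
```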
In some embodiments, device 24 detects the location and direction of eyes 34 of user 22, typically by processing and analyzing an image comprising light (typically infrared and/or a color produced by the red-green-blue additive color model) reflecting from one or both eyes 34, in order to find a direction of the user's gaze. In alternative embodiments, computer 26 (either by itself or in combination with device 24) detects the location and direction of the eyes 34 of the user. The reflected light may originate from a light projecting source of device 24, or any other natural (e.g., sunlight) or artificial (e.g., a lamp) source. Using techniques that are known in the art such as detecting pupil center and corneal reflections (PCCR), device 24 may process and analyze an image comprising light reflecting from an element of eye 34, such as a pupil 38, an iris 39 or a cornea 41, in order to find the direction of the user's gaze. Additionally, device 24 may convey (to computer 26) the light reflecting from the cornea as a glint effect.
The location and features of the user's head (e.g., an edge of the eye, a nose or a nostril) that are extracted by computer 26 from the 3D map may be used in finding coarse location coordinates of the user's eyes, thus simplifying the determination of precise eye position and gaze direction, and making the gaze measurement more reliable and robust. Furthermore, computer 26 can readily combine the 3D location of parts of head 32 (e.g., eye 34) that are provided by the 3D map with gaze angle information obtained via eye part image analysis in order to identify a given on-screen object 36 at which the user is looking at any given time. This use of 3D mapping in conjunction with gaze tracking allows user 22 to move head 32 freely while alleviating the need to actively track the head using sensors or emitters on the head, as in some eye tracking systems that are known in the art.
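The sketch below illustrates, under simplifying assumptions, how an eye location taken from the 3D map might be combined with a gaze direction taken from the eye image to identify the on-screen item being viewed. The placement of the display in the Z = 0 plane of the device coordinate frame and the item list are hypothetical.

```python
# Hedged sketch: intersect the gaze ray with an assumed display plane (Z = 0)
# and pick the nearest interactive item. Not the method of the disclosure,
# only an illustration of combining 3D head data with a gaze direction.
import numpy as np

def gaze_target_on_display(eye_xyz, gaze_dir):
    """eye_xyz  -- 3D eye position (meters) from the depth map
    gaze_dir -- unit vector of the gaze direction from eye-image analysis
    Returns the (x, y) intersection with the display plane, or None."""
    eye = np.asarray(eye_xyz, dtype=float)
    d = np.asarray(gaze_dir, dtype=float)
    if abs(d[2]) < 1e-6:          # gaze parallel to the display plane
        return None
    t = -eye[2] / d[2]            # ray parameter at Z = 0
    if t <= 0:                    # display is behind the gaze direction
        return None
    hit = eye + t * d
    return hit[0], hit[1]

def select_item(hit_xy, items):
    """Pick the item whose center is closest to the gaze point.
    `items` maps item names to hypothetical (x, y) centers on the display."""
    if hit_xy is None or not items:
        return None
    return min(items, key=lambda k: np.hypot(items[k][0] - hit_xy[0],
                                             items[k][1] - hit_xy[1]))
```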
By tracking eye 34, embodiments of the present invention may reduce the need to re-calibrate user 22 after the user moves head 32. In some embodiments, computer 26 may use depth information for head 32, eye 34 and pupil 38, in order to track the head's movement, thereby enabling a reliable gaze angle to be calculated based on a single calibration of user 22. Utilizing techniques that are known in the art such as PCCR, pupil tracking, and pupil shape, computer 26 may calculate a gaze angle of eye 34 from a fixed point of head 32, and use the head's location information in order to re-calculate the gaze angle and enhance the accuracy of the aforementioned techniques. In addition to reduced recalibrations, further benefits of tracking the head may include reducing the number of light projecting sources and reducing the number of cameras used to track eye 34.
In addition to processing data generated by device 24, computer 26 can process signals from tactile input devices such as a keyboard 45 and a touchpad 46 that rest on a physical surface 47 (e.g., a desktop). Touchpad 46 (also referred to as a gesture pad) comprises a specialized surface that can translate the motion and position of fingers 30 to a relative position on display 28. In some embodiments, as user 22 moves a given finger 30 along the touchpad, the computer can responsively present a cursor (not shown) at locations corresponding to the finger's motion. For example, as user 22 moves a given finger 30 from right to left along touchpad 46, computer 26 can move a cursor from right to left on display 28.
In some embodiments, display 28 may be configured as a touchscreen comprising an electronic visual display that can detect the presence and location of a touch, typically by one or more fingers 30 or a stylus (not shown) within the display area. When interacting with the touchscreen, user 22 can interact directly with interactive items 36 presented on the touchscreen, rather than indirectly via a cursor controlled by touchpad 46.
In additional embodiments, a projector 48 may be coupled to computer 26 and positioned above physical surface 47. As explained hereinbelow, projector 48 can be configured to project an image on physical surface 47.
Computer 26 typically comprises a general-purpose computer processor, which is programmed in software to carry out the functions described hereinbelow. The software may be downloaded to the processor in electronic form, over a network, for example, or it may alternatively be provided on non-transitory tangible computer-readable media, such as optical, magnetic, or electronic memory media. Alternatively or additionally, some or all of the functions of the computer processor may be implemented in dedicated hardware, such as a custom or semi-custom integrated circuit or a programmable digital signal processor (DSP). Although computer 26 is shown in
As another alternative, these processing functions may be carried out by a suitable processor that is integrated with display 28 (in a television set, for example) or with any other suitable sort of computerized device, such as a game console or a media player. The sensing functions of device 24 may likewise be integrated into the computer or other computerized apparatus that is to be controlled by the sensor output.
Various techniques may be used to reconstruct the 3D map of the body of user 22. In one embodiment, computer 26 extracts 3D connected components corresponding to the parts of the body from the depth data generated by device 24. Techniques that may be used for this purpose are described, for example, in U.S. patent application Ser. No. 12/854,187, filed Aug. 11, 2010, whose disclosure is incorporated herein by reference. The computer analyzes these extracted components in order to reconstruct a “skeleton” of the user's body, as described in the above-mentioned U.S. Patent Application Publication 2010/0034457, or in U.S. patent application Ser. No. 12/854,188, filed Aug. 11, 2010, whose disclosure is also incorporated herein by reference. In alternative embodiments, other techniques may be used to identify certain parts of the user's body, and there is no need for the entire body to be visible to device 24 or for the skeleton to be reconstructed, in whole or even in part.
Using the reconstructed skeleton, computer 26 can assume a position of a body part such as a tip of finger 30, even though the body part (e.g., the fingertip) may not be detected by the depth map due to issues such as minimal object size and reduced resolution at greater distances from device 24. In some embodiments, computer 26 can auto-complete a body part based on an expected shape of the human part from an earlier detection of the body part, or from tracking the body part along several (previously) received depth maps. In some embodiments, computer 26 can use a 2D color image captured by an optional color video camera (not shown) to locate a body part not detected by the depth map.
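A minimal sketch of such auto-completion is shown below, assuming a constant-velocity model over the previously received depth maps; the model and the gap limit are illustrative and not part of the disclosure.

```python
# Hedged sketch: extrapolate a missing body-part position (e.g., a fingertip)
# from its detections in earlier depth maps, under a constant-velocity assumption.
from itertools import takewhile
import numpy as np

def autocomplete_position(history, max_gap=5):
    """history -- recent (x, y, z) detections of the body part, oldest first,
    with None entries where the part was not detected in a depth map.
    Returns an extrapolated position for the latest frame, or None if the
    track is too short or has been lost for too long."""
    missing = sum(1 for _ in takewhile(lambda p: p is None, reversed(history)))
    known = [np.asarray(p, float) for p in history if p is not None]
    if len(known) < 2 or missing == 0 or missing > max_gap:
        return None
    velocity = known[-1] - known[-2]   # displacement per frame (assumed constant)
    return known[-1] + velocity * missing
```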
In some embodiments, the information generated by computer 26 as a result of this skeleton reconstruction includes the location and direction of the user's head, as well as of the arms, torso, and possibly the legs, hands and other features. Changes in these features from frame to frame (i.e., depth maps) or in postures of the user can provide an indication of gestures and other motions made by the user. User posture, gestures and other motions may provide a control input for user interaction with interface 20. These body motions may be combined with other interaction modalities that are sensed by device 24, including user eye movements, as described above, as well as voice commands and other sounds. Interface 20 thus enables user 22 to perform various remote control functions and to interact with applications, interfaces, video programs, images, games and other multimedia content appearing on display 28.
A processor 56 receives the images from subassembly 52 and compares the pattern in each image to a reference pattern stored in a memory 58. The reference pattern is typically captured in advance by projecting the pattern onto a reference plane at a known distance from device 24. Processor 56 computes local shifts of parts of the pattern over the area of the 3D map and translates these shifts into depth coordinates. Details of this process are described, for example, in PCT International Publication WO 2010/004542, whose disclosure is incorporated herein by reference. Alternatively, as noted earlier, device 24 may be configured to generate 3D maps by other means that are known in the art, such as stereoscopic imaging, sonar-like devices (sound based/acoustic), wearable implements, lasers, or time-of-flight measurements.
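By way of example only, the sketch below shows one way the local shift of a small patch of the captured pattern might be found by normalized cross-correlation against the stored reference pattern; the patch size and search range are assumptions, and the disclosure's own shift computation may differ.

```python
# Illustrative block-matching sketch: find the horizontal shift of a patch of
# the captured pattern image relative to the reference pattern stored in memory.
import numpy as np

def local_shift(captured, reference, y, x, patch=9, search=32):
    """Return the horizontal shift (in pixels) of the captured patch centered
    at (y, x) relative to the reference pattern, or None near the image edge."""
    h = patch // 2
    if (y - h < 0 or x - h < 0 or
            y + h + 1 > captured.shape[0] or x + h + 1 > captured.shape[1]):
        return None
    win = captured[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    win = (win - win.mean()) / (win.std() + 1e-9)
    best_shift, best_score = 0, -np.inf
    for s in range(-search, search + 1):
        xs = x + s
        if xs - h < 0 or xs + h + 1 > reference.shape[1]:
            continue
        ref = reference[y - h:y + h + 1, xs - h:xs + h + 1].astype(float)
        ref = (ref - ref.mean()) / (ref.std() + 1e-9)
        score = float((win * ref).sum())   # normalized cross-correlation
        if score > best_score:
            best_score, best_shift = score, s
    return best_shift   # this shift is what gets translated into a depth coordinate
```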
Processor 56 typically comprises an embedded microprocessor, which is programmed in software (or firmware) to carry out the processing functions that are described hereinbelow. The software may be provided to the processor in electronic form, over a network, for example; alternatively or additionally, the software may be stored on non-transitory tangible computer-readable media, such as optical, magnetic, or electronic memory media. Processor 56 also comprises suitable input and output interfaces and may comprise dedicated and/or programmable hardware logic circuits for carrying out some or all of its functions. Details of some of these processing functions and circuits that may be used to carry them out are presented in the above-mentioned Publication WO 2010/004542.
In some embodiments, a gaze sensor 60 detects the gaze direction of eyes 34 of user 22 by capturing and processing two dimensional images of user 22. In alternative embodiments, computer 26 detects the gaze direction by processing a sequence of 3D maps conveyed by device 24. Sensor 60 may use any suitable method of eye tracking that is known in the art, such as the method described in the above-mentioned U.S. Pat. No. 7,762,665 or in U.S. Pat. No. 7,809,160, whose disclosure is incorporated herein by reference, or the alternative methods described in references cited in these patents. For example, sensor 60 may capture an image of light (typically infrared light) that is reflected from the fundus and/or the cornea of the user's eye or eyes. This light may be projected toward the eyes by illumination subassembly 50 or by another projection element (not shown) that is associated with sensor 60. Sensor 60 may capture its image with high resolution over the entire region of interest of user interface 20 and may then locate the reflections from the eye within this region of interest. Alternatively, imaging subassembly 52 may capture the reflections from the user's eyes (ambient light, reflection from monitor) in addition to capturing the pattern images for 3D mapping.
As another alternative, processor 56 may drive a scan control 62 to direct the field of view of gaze sensor 60 toward the location of the user's face or eye 34. This location may be determined by processor 56 or by computer 26 on the basis of a depth map or on the basis of the skeleton reconstructed from the 3D map, as described above, or using methods of image-based face recognition that are known in the art. Scan control 62 may comprise, for example, an electromechanical gimbal, or a scanning optical or optoelectronic element, or any other suitable type of scanner that is known in the art, such as a microelectromechanical system (MEMS) based mirror that is configured to reflect the scene to gaze sensor 60.
In some embodiments, scan control 62 may also comprise an optical or electronic zoom, which adjusts the magnification of sensor 60 depending on the distance from device 24 to the user's head, as provided by the 3D map. The above techniques, implemented by scan control 62, enable a gaze sensor 60 of only moderate resolution to capture images of the user's eyes with high precision, and thus give precise gaze direction information.
In alternative embodiments, computer 26 may calculate the gaze angle using an angle (i.e., relative to Z-axis 44) of scan control 62. In additional embodiments, computer 26 may compare scenery captured by gaze sensor 60 with scenery identified in the 3D depth maps. In further embodiments, computer 26 may compare scenery captured by gaze sensor 60 with scenery captured by a 2D camera having a wide field of view that includes the entire scene of interest. Additionally or alternatively, scan control 62 may comprise sensors (typically either optical or electrical) configured to verify an angle of the eye movement.
Processor 56 processes the images captured by gaze sensor 60 in order to extract the user's gaze angle. By combining the angular measurements made by sensor 60 with the 3D location of the user's head provided by depth imaging subassembly 52, the processor is able to derive accurately the user's true line of sight in 3D space. The combination of 3D mapping with gaze direction sensing reduces or eliminates the need for precise calibration and comparing multiple reflection signals in order to extract the true gaze direction. The line-of-sight information extracted by processor 56 enables computer 26 to identify reliably the interactive item at which the user is looking.
The combination of the two modalities can allow gaze detection without using an active projecting device (i.e., illumination subassembly 50) since there is no need for detecting a glint point (as used, for example, in the PCCR method). Using the combination can solve the glasses reflection problem of other gaze methods that are known in the art. Using information derived from natural light reflection, the 2D image (i.e. to detect the pupil position), and the 3D depth map (i.e., to identify the head's position by detecting the head's features), computer 26 can calculate the gaze angle and identify a given interactive item 36 at which the user is looking.
As noted earlier, gaze sensor 60 and processor 56 may track either one or both of the user's eyes. If both eyes 34 are tracked with sufficient accuracy, the processor may be able to provide an individual gaze angle measurement for each of the eyes. When the eyes are looking at a distant object, the gaze angles of both eyes will be parallel; but for nearby objects, the gaze angles will typically converge on a point in proximity to an object of interest. This point may be used, together with depth information, in extracting 3D coordinates of the point on which the user's gaze is fixed at any given moment.
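The following sketch illustrates how such a convergence point might be computed as the point of closest approach of the two gaze rays; the inputs (eye positions from the depth map, per-eye gaze directions) are assumed to be available, and this is only one possible way of extracting the fixation coordinates.

```python
# Hedged sketch: estimate the 3D fixation point as the midpoint of the closest
# approach between the left and right gaze rays.
import numpy as np

def fixation_point(eye_l, dir_l, eye_r, dir_r):
    """Return the 3D point on which the two gaze rays (approximately) converge,
    or None for parallel rays (i.e., the user is looking at a distant object)."""
    p1, d1 = np.asarray(eye_l, float), np.asarray(dir_l, float)
    p2, d2 = np.asarray(eye_r, float), np.asarray(dir_r, float)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:            # parallel gaze directions
        return None
    t = (b * e - c * d) / denom      # parameter along the left-eye ray
    s = (a * e - b * d) / denom      # parameter along the right-eye ray
    closest_l = p1 + t * d1
    closest_r = p2 + s * d2
    return (closest_l + closest_r) / 2.0
```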
As mentioned above, device 24 may create 3D maps of multiple users who are in its field of view at the same time. Gaze sensor 60 may similarly find the gaze direction of each of these users, either by providing a single high-resolution image of the entire field of view, or by scanning of scan control 62 to the location of the head of each user.
Processor 56 outputs the 3D maps and gaze information via a communication link 64, such as a Universal Serial Bus (USB) connection, to a suitable interface 66 of computer 26. The computer comprises a central processing unit (CPU) 68 with a memory 70 and a user interface 72, which drives display 28 and may include other components, as well. As noted above, device 24 may alternatively output only raw images, and the 3D map and gaze computations described above may be performed in software by CPU 68. Middleware for extracting higher-level information from the 3D maps and gaze information may run on processor 56, CPU 68, or both. CPU 68 runs one or more application programs, which drive user interface 72 based on information provided by the middleware, typically via an application program interface (API). Such applications may include, for example, games, entertainment, Web surfing, and/or office applications.
Although processor 56 and CPU 68 are shown in
In some embodiments, receiving the input may comprise receiving, from depth imaging subassembly 52, a 3D map containing at least head 32, and receiving, from gaze sensor 60, a 2D image containing at least eye 34. Computer 26 can then analyze the received 3D depth map and the 2D image in order to identify a gaze direction of user 22. Gaze detection is described in PCT Patent Application PCT/IB2012/050577, filed Feb. 9, 2012, whose disclosure is incorporated herein by reference.
As described supra, illumination subassembly 50 may project light toward user 22, and the received 2D image may comprise light reflected off the fundus and/or the cornea of eye(s) 34. In some embodiments, computer 26 can extract 3D coordinates of head 32 by identifying, from the 3D map, a position of the head along X-axis 40, Y-axis 42 and Z-axis 44. In alternative embodiments, computer 26 extracts the 3D coordinates of head 32 by identifying, from the 2D image, a first position of the head along X-axis 40 and Y-axis 42, and identifying, from the 3D map, a second position of the head along Z-axis 44.
In a selection step 84, computer 26 identifies and selects a given interactive item 36 that the computer is presenting, on display 28, in the gaze direction. Subsequent to selecting the given interactive item, in a second receive step 86, computer 26 receives, from depth imaging subassembly 52, a sequence of 3D maps containing at least hand 31.
In an analysis step 88, computer 26 analyzes the 3D maps to identify a gesture performed by user 22. As described hereinbelow, examples of gestures include, but are not limited to, a Press and Hold gesture, a Tap gesture, a Slide to Hold gesture, a Swipe gesture, a Select gesture, a Pinch gesture, a Swipe From Edge gesture, a Grab gesture and a Rotate gesture. To identify the gesture, computer 26 can analyze the sequence of 3D maps to identify initial and subsequent positions of hand 31 (and/or fingers 30) while performing the gesture.
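For illustration, a simplified sketch of distinguishing a few of these gestures from the fingertip positions extracted from the sequence of 3D maps is shown below. The thresholds, the frame count, and the reduced set of gestures are assumptions, not the classification actually used by the disclosed system.

```python
# Hedged sketch: classify a gesture from a fingertip trajectory, assuming Z is
# the distance from sensing device 24 (so pushing toward the display reduces Z).
import numpy as np

def classify_gesture(fingertips, push_thresh=0.05, lateral_thresh=0.05,
                     hold_frames=30):
    """fingertips -- list of (x, y, z) fingertip positions, one per 3D map."""
    pts = np.asarray(fingertips, float)
    push = pts[0, 2] - pts[:, 2].min()          # how far the finger was pushed in
    pulled_back = pts[-1, 2] - pts[:, 2].min()  # how far it was pulled back out
    lateral = np.linalg.norm(pts[-1, :2] - pts[0, :2])
    if lateral >= lateral_thresh:
        return "swipe"
    if push >= push_thresh and pulled_back >= push_thresh:
        return "tap"                            # Press followed by Release
    if push >= push_thresh and len(pts) >= hold_frames:
        return "press_and_hold"                 # Press held relatively steady
    return "none"
```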
In a perform step 90, the computer performs an operation on the selected interactive item in response to the gesture, and the method ends. Examples of operations performed in response to a given gesture when a single item is selected include, but are not limited to:
In some embodiments user 22 can select the given interactive item using a gaze related pointing gesture. A gaze related pointing gesture typically comprises user 22 pointing finger 30 toward display 28 to select a given interactive item 36. As the user points finger 30 toward display 28, computer 26 can define a line segment between one of the user's eyes 34 (or a point between eyes 34) and the finger, and identify a target point where the line segment intersects the display. Computer 26 can then select a given interactive item 36 that is presented in proximity to the target point. Gaze related pointing gestures are described in PCT Patent Application PCT/IB2012/050577, filed Feb. 9, 2012, whose disclosure is incorporated herein by reference.
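A minimal sketch of the target-point computation described above follows, assuming the display lies in the plane Z = display_z of the device coordinate frame; the plane placement is an assumption for illustration.

```python
# Hedged sketch of the gaze related pointing gesture: extend the line from a
# point between the eyes through the fingertip until it crosses the display plane.
import numpy as np

def pointing_target(eye_point, fingertip, display_z=0.0):
    """Return the 3D intersection of the eye-to-fingertip line with the plane
    Z = display_z, or None if the line never crosses it in front of the user."""
    e = np.asarray(eye_point, float)
    f = np.asarray(fingertip, float)
    direction = f - e
    if abs(direction[2]) < 1e-6:
        return None
    t = (display_z - e[2]) / direction[2]
    if t <= 0:                      # display not in front of the pointing line
        return None
    return e + t * direction        # target point on the display plane
```

Computer 26 would then select the interactive item 36 presented nearest to this target point, as described above.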
In additional embodiments, computer 26 can select the given interactive item 36 using gaze detection in response to a first input (as described supra in step 82), receive a second input, from touchpad 46, indicating a (tactile) gesture performed on the touchpad, and perform an operation in response to the second input received from the touchpad.
In further embodiments, user 22 can perform a given gesture while finger 30 is in contact with physical surface 47 (e.g., the desktop shown in
As described supra, embodiments of the present invention enable computer 26 to emulate touchpads and touchscreens by presenting interactive items 36 on display 28 and identifying three-dimensional non-tactile gestures performed by user 22. For example, computer 26 can configure the Windows 8™ operating system produced by Microsoft Corporation (Redmond, Wash.) to respond to three-dimensional gestures performed by user 22.
As described supra, user 22 can select a given interactive item 36 using a gaze related pointing gesture, or perform a tactile gesture on gesture pad 46. To interact with computer 26 using a gaze related pointing gesture and the Press and Hold gesture, user 22 can push finger 30 toward a given interactive item 36 (“Press”), and hold the finger relatively steady for at least the specified time period (“Hold”). To interact with computer 26 using a gaze and gesture pad 46, user 22 can gaze toward a given interactive item 36, touch gesture pad 46 with finger 30, and keep the finger on the gesture pad for at least the specified time period.
To interact with computer 26 using a gaze related pointing gesture and the Tap gesture, user 22 can push finger 30 toward a given interactive item 36 (“Press”), and pull the finger back (“Release”). To interact with computer 26 using a gaze and gesture pad 46, user 22 can gaze toward a given interactive item 36, touch gesture pad 46 with finger 30, and lift the finger off the gesture pad.
In some embodiments, user 22 can control the direction of the scrolling by gazing left or right, wherein the gesture performed by finger 30 only indicates the scrolling action and not the scrolling direction. In additional embodiments, computer 26 can control the scrolling using real-world coordinates, where the computer measures the finger's motion in distance units such as centimeters and not in pixels. When using real-world coordinates, the computer can apply a constant or a variant factor to the detected movement. For example, the computer can translate one centimeter of finger motion to 10 pixels of scrolling on the display.
Alternatively, the computer may apply a formula with a constant or a variable factor that compensates for the distance between the user and the display. For example, to compensate for the distance, computer 26 can apply the formula P=D*F, where P is the number of pixels to scroll on display 28, D is the distance of user 22 from display 28 (in centimeters), and F is a factor.
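The sketch below illustrates both the constant mapping (e.g., one centimeter of finger motion translated to 10 pixels of scrolling) and one possible reading of the distance-compensated relation P = D*F; the numeric gains are examples only.

```python
# Illustrative sketch of scrolling in real-world coordinates. All gains are
# assumed example values, not values specified in the disclosure.

def scroll_pixels_fixed(finger_motion_cm: float,
                        pixels_per_cm: float = 10.0) -> float:
    """Constant mapping: e.g. 1 cm of finger motion -> 10 pixels of scrolling."""
    return finger_motion_cm * pixels_per_cm

def scroll_pixels_distance_compensated(finger_motion_cm: float,
                                       user_distance_cm: float,
                                       gain: float = 0.1) -> float:
    """One possible reading of P = D * F: take the factor F proportional to the
    measured finger motion, so users farther from display 28 get proportionally
    more scrolling for the same physical motion."""
    factor = gain * finger_motion_cm
    return user_distance_cm * factor      # P = D * F
```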
There may be instances in which computer 26 identifies that user 22 is gazing in a first direction and moving finger 30 in a second direction. For example, user 22 may be directing his gaze from left to right, but moving finger 30 from right to left. In these instances, computer 26 can stop any scrolling due to the conflicting gestures. However, if the gaze and the Slide to Drag gesture performed by the finger indicate the same direction but different scrolling speeds (e.g., the user moves his eyes quickly to the side while moving finger 30 more slowly), the computer can apply an interpolation to the indicated scrolling speeds while scrolling the interactive items.
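A short sketch of this conflict handling is given below, using signed scroll speeds and an assumed blending weight for the interpolation.

```python
# Hedged sketch: combine gaze-indicated and finger-indicated scrolling.
# The 50/50 blend weight is an assumption for illustration.

def resolve_scroll(gaze_speed: float, finger_speed: float,
                   weight: float = 0.5) -> float:
    """Speeds are signed (e.g., positive = scroll right, negative = scroll left).
    Conflicting directions stop the scroll; matching directions are blended."""
    if gaze_speed * finger_speed < 0:     # conflicting gestures: stop scrolling
        return 0.0
    return weight * gaze_speed + (1.0 - weight) * finger_speed
```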
To interact with computer 26 using a gaze related pointing gesture and the Slide to Drag gesture, user 22 can push finger 30 toward display 28 (“Press”), move the finger from side to side (“Drag”), and pull the finger back (“Release”). To interact with computer 26 using a gaze and gesture pad 46, user 22 can gaze toward display 28, touch gesture pad 46 with finger 30, move the finger from side to side, and lift the finger off the gesture pad.
To interact with computer 26 using a gaze related pointing gesture and the Swipe gesture, user 22 can push finger 30 toward a given interactive item 36 (“Press”), move the finger at a 90° angle to the direction in which the given interactive item is sliding (“Drag”), and pull the finger back (“Release”). To interact with computer 26 using a gaze and gesture pad 46, user 22 can gaze toward a given interactive item 36, touch gesture pad 46 with finger 30, move the finger at a 90° angle to the direction in which the given interactive item is sliding (e.g., up or down if the interactive items are sliding left or right), and lift the finger off the gesture pad.
In an alternative embodiment, user 22 can select an interactive item sliding on display 28 by performing the Select gesture. To perform the Select gesture, user 22 gazes toward an interactive item 36 sliding on display 28 and swipes finger 30 in a downward motion (i.e., on the virtual surface). To interact with computer 26 using a gaze related pointing gesture and the Select gesture, user 22 can push finger 30 toward a given interactive item 36 sliding on display 28, and swipe the finger in a downward motion.
To interact with computer 26 using a gaze related pointing gesture and the Pinch gesture, user 22 can push two fingers 30 toward a given interactive item 36 (“Press”), move the fingers toward each other (“Pinch”), and pull the fingers back (“Release”). To interact with computer 26 using a gaze and gesture pad 46, user 22 can gaze toward a given interactive item 36, touch gesture pad 46 with two or more fingers 30, move the fingers toward or away from each other, and lift the fingers off the gesture pad.
The Grab gesture has the same functionality as the Swipe gesture. To perform the Grab gesture, user 22 gazes toward a given interactive item 36, folds one or more fingers 30 toward the palm, either pushes hand 31 toward display 28 or pulls the hand back away from the display, and performs a Release gesture. To interact with computer 26 using a gaze related pointing gesture and the Grab gesture, user 22 can perform the Grab gesture toward a given interactive item 36, either push hand 31 toward display 28 or pull the hand back away from the display, and then perform a Release gesture. The Release gesture is described in U.S. patent application Ser. No. 13/423,314, referenced above.
To interact with computer 26 using a gaze related pointing gesture and the Swipe From Edge gesture, user 22 can push finger 30 toward an edge of display 28, and move the finger into the display. Alternatively, user 22 can perform the Swipe gesture away from an edge of display 28. To interact with computer 26 using a gaze and gesture pad 46, user 22 can gaze toward an edge of display 28, touch gesture pad 46 with finger 30, move the finger in a direction corresponding to moving into the display, and lift the finger off the gesture pad.
Upon identifying a Swipe From Edge gesture, computer 26 can perform an operation such as presenting a “hidden” menu on the “touched” edge.
In additional embodiments, computer 26 can present the hidden menu solely on identifying the user's gaze directed at the specific edge (the right edge in the example shown in
To perform the Rotate gesture, user 22 gazes toward a given interactive item 36 presented on display 28, pushes two or more fingers 30 toward the display (“Press”), rotates the fingers in a circular (i.e., clockwise/counterclockwise) motion (“Rotate”), and pulls the fingers back (“Release”). In some embodiments, computer 26 may allow the user to pinch together two or more fingers 30 from different hands 31 while performing the Rotate gesture.
To interact with computer 26 using a gaze related pointing gesture and the Rotate gesture, user 22 can push two or more fingers 30 toward a given interactive item 36 (“Press”), rotate the fingers (“Rotate”), and pull the fingers back (“Release”). To interact with computer 26 using a gaze and gesture pad 46, user 22 can gaze toward a given interactive item 36, touch gesture pad 46 with two or more fingers 30, move the fingers in a circular motion on the gesture pad, and lift the fingers off the gesture pad.
In addition to manipulating interactive items 36 via the virtual surface, user 22 may also interact with other types of items presented on display 28, such as an on-screen virtual keyboard as described hereinbelow.
In some embodiments, computer 26 may present interactive items 36 (i.e., the virtual surface) and keyboard 120 simultaneously on display 28. Computer 26 can differentiate between gestures directed toward the virtual surface and the keyboard as follows:
In addition to pressing single keys with a single finger, the computer can identify, using a language model, words that the user can input by swiping a single finger over the appropriate keys on the virtual keyboard.
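By way of illustration, the sketch below ranks candidate words whose letters appear, in order, along the swiped key trace, using a simple unigram frequency table standing in for the language model; the vocabulary and the matching rule are hypothetical simplifications.

```python
# Hedged sketch of single-finger swipe-typing recognition over the virtual keyboard.

def is_subsequence(word: str, key_trace: str) -> bool:
    """True if the word's letters occur in order within the swiped key trace."""
    it = iter(key_trace)
    return all(ch in it for ch in word)

def decode_swipe(key_trace: str, vocabulary: dict):
    """key_trace  -- letters of the keys the finger passed over, in order
    vocabulary -- word -> relative frequency (stand-in for the language model)."""
    if not key_trace:
        return None
    candidates = [w for w in vocabulary
                  if w and w[0] == key_trace[0] and w[-1] == key_trace[-1]
                  and is_subsequence(w, key_trace)]
    if not candidates:
        return None
    return max(candidates, key=lambda w: vocabulary[w])

# Example (hypothetical vocabulary):
#   decode_swipe("thane", {"the": 0.9, "thane": 0.05}) returns "the",
#   since both words fit the trace and the language model prefers the
#   more frequent one.
```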
Additional features that can be included in the virtual surface, using the depth maps and/or color images provided by device 24, for example, include:
While the embodiments described herein have computer 26 processing a series of 3D maps that indicate gestures performed by a limb of user 22 (e.g., finger 30 or hand 31), other methods of gesture recognition are considered to be within the spirit and scope of the present invention. For example, user 22 may use input devices such as lasers that include motion sensors, such as a glove controller or a game controller such as Nintendo's Wii Remote™ (also known as a Wiimote), produced by Nintendo Co., Ltd (KYOTO-SHI, KYT 601-8501, Japan). Additionally or alternatively, computer 26 may receive and process signals indicating a gesture performed by the user from other types of sensing devices such as ultrasonic sensors and/or lasers.
As described supra, embodiments of the present invention can be used to implement a virtual touchscreen on computer 26 executing user interface 20. In some embodiments, the touchpad gestures described hereinabove (as well as the pointing gesture and gaze detection) can be implemented on the virtual touchscreen as well. In operation, the user's hand “hovers above” the virtual touchscreen until the user performs one of the gestures described herein.
For example, the user can perform the Swipe From Edge gesture in order to view hidden menus (also referred to as “Charms Menus”) or the Pinch gesture can be used to “grab” a given interactive item 36 presented on the virtual touchscreen.
In addition to detecting three-dimensional gestures performed by user 22 in space, computer 26 can be configured to detect user 22 performing two-dimensional gestures on physical surface 47, thereby transforming the physical surface into a virtual tactile input device such as a virtual keyboard, a virtual mouse, a virtual touchpad or a virtual touchscreen.
In some embodiments, the 2D image received from sensing device 24 contains at least physical surface 47, and the computer 26 can be configured to segment the physical surface into one or more physical regions. In operation, computer 26 can assign a functionality to each of the one or more physical regions, each of the functionalities corresponding to a tactile input device, and upon receiving a sequence of three-dimensional maps containing at least hand 31 positioned on one of the physical regions, the computer can analyze the 3D maps to detect a gesture performed by the user, and simulate, based on the gesture, an input for the tactile input device corresponding to the one of the physical regions.
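A minimal sketch of this region-based dispatch follows, with a hypothetical layout of keyboard, mouse and touchpad regions expressed in surface coordinates (e.g., centimeters from one corner of physical surface 47); the layout and the event strings are assumptions for illustration.

```python
# Hedged sketch: map a detected touch on the physical surface to the simulated
# tactile input device assigned to that region.

REGIONS = {
    "keyboard": (0.0, 0.0, 45.0, 15.0),   # (x_min, y_min, x_max, y_max), assumed
    "mouse":    (46.0, 0.0, 60.0, 15.0),
    "touchpad": (0.0, 16.0, 30.0, 30.0),
}

def region_at(x: float, y: float):
    """Return the name of the physical region containing the touch point."""
    for name, (x0, y0, x1, y1) in REGIONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def simulate_input(touch_x: float, touch_y: float, gesture: str) -> str:
    """Translate a detected touch plus gesture into a simulated device event."""
    region = region_at(touch_x, touch_y)
    if region == "keyboard":
        return f"key press at ({touch_x:.1f}, {touch_y:.1f})"
    if region in ("mouse", "touchpad"):
        return f"{region} {gesture}"
    return "ignored"
```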
In
In the example shown in
Therefore, user 22 can pick up an object (e.g., a colored pen, as described supra), and perform a gesture while holding the object. In some embodiments, the received sequence of 3D maps contains at least the object, since hand 31 may not be within the field of view of sensing device 24. Alternatively, hand 31 may be within the field of view of sensing device 24, but the hand may be occluded so that the sequence of 3D maps does not include the hand. In other words, the sequence of 3D maps can indicate a gesture performed by the object held by hand 31. All the features of the embodiments described above may likewise be implemented, mutatis mutandis, on the basis of sensing movements of a handheld object of this sort, rather than of the hand itself.
In order to enrich the set of interactions available to user 22 in this paint application, it is also possible to add menus and other user interface elements as part of the application's usage.
In some embodiments, one or more physical objects can be positioned on physical surface 47, and upon computer 26 receiving, from sensing device 24, a sequence of three-dimensional maps containing at least the physical surface, the one or more physical objects, and hand 31 positioned in proximity to (or on) physical surface 47, the computer can analyze the 3D maps to detect a gesture performed by the user, project an animation onto the physical surface in response to the gesture, and incorporate the one or more physical objects into the animation.
In operation, 3D maps captured from depth imaging subassembly 52 can be used to identify each physical object's location and shape, while 2D images captured from sensor 60 can contain additional appearance data for each of the physical objects. The captured 3D maps and 2D images can be used to identify each of the physical objects from a pre-trained set of physical objects. An example described in
In
In
It will be appreciated that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and subcombinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art.
This application claims the benefit of U.S. Provisional Patent Application 61/615,403, filed Mar. 26, 2012, and U.S. Provisional Patent Application 61/663,638, filed Jun. 25, 2012, which are incorporated herein by reference. This application is related to another U.S. patent application, filed on even date, entitled, “Enhanced Virtual Touchpad.”
Number | Name | Date | Kind |
---|---|---|---|
4550250 | Mueller et al. | Oct 1985 | A |
4789921 | Aho | Dec 1988 | A |
4836670 | Hutchinson | Jun 1989 | A |
4988981 | Zimmerman et al. | Jan 1991 | A |
5434370 | Wilson et al. | Jul 1995 | A |
5454043 | Freeman | Sep 1995 | A |
5495576 | Ritchey | Feb 1996 | A |
5588139 | Lanier et al. | Dec 1996 | A |
5594469 | Freeman et al. | Jan 1997 | A |
5805167 | Van Cruyningen | Sep 1998 | A |
5846134 | Latypov | Dec 1998 | A |
5852672 | Lu | Dec 1998 | A |
5862256 | Zetts et al. | Jan 1999 | A |
5864635 | Zetts et al. | Jan 1999 | A |
5870196 | Lulli et al. | Feb 1999 | A |
5917937 | Szeliski et al. | Jun 1999 | A |
5973700 | Taylor et al. | Oct 1999 | A |
6002808 | Freeman | Dec 1999 | A |
6084979 | Kanade et al. | Jul 2000 | A |
6243054 | DeLuca | Jun 2001 | B1 |
6246779 | Fukui et al. | Jun 2001 | B1 |
6252988 | Ho | Jun 2001 | B1 |
6256033 | Nguyen | Jul 2001 | B1 |
6262740 | Lauer et al. | Jul 2001 | B1 |
6345111 | Yamaguchi et al. | Feb 2002 | B1 |
6345893 | Fateh et al. | Feb 2002 | B2 |
6452584 | Walker et al. | Sep 2002 | B1 |
6456262 | Bell | Sep 2002 | B1 |
6483499 | Li et al. | Nov 2002 | B1 |
6507353 | Huard et al. | Jan 2003 | B1 |
6512838 | Rafii et al. | Jan 2003 | B1 |
6519363 | Su et al. | Feb 2003 | B1 |
6559813 | DeLuca et al. | May 2003 | B1 |
6681031 | Cohen et al. | Jan 2004 | B2 |
6686921 | Rushmeier | Feb 2004 | B1 |
6690370 | Ellenby et al. | Feb 2004 | B2 |
6741251 | Malzbender | May 2004 | B2 |
6791540 | Baumberg | Sep 2004 | B1 |
6803928 | Bimber et al. | Oct 2004 | B2 |
6853935 | Satoh et al. | Feb 2005 | B2 |
6857746 | Dyner | Feb 2005 | B2 |
6977654 | Malik et al. | Dec 2005 | B2 |
7003134 | Covell et al. | Feb 2006 | B1 |
7013046 | Kawamura et al. | Mar 2006 | B2 |
7023436 | Segawa et al. | Apr 2006 | B2 |
7042440 | Pryor et al. | May 2006 | B2 |
7042442 | Kanevsky et al. | May 2006 | B1 |
7151530 | Roeber et al. | Dec 2006 | B2 |
7170492 | Bell | Jan 2007 | B2 |
7215815 | Honda | May 2007 | B2 |
7227526 | Hildreth et al. | Jun 2007 | B2 |
7259747 | Bell | Aug 2007 | B2 |
7264554 | Bentley | Sep 2007 | B2 |
7289227 | Smetak et al. | Oct 2007 | B2 |
7301648 | Foxlin | Nov 2007 | B2 |
7302099 | Zhang et al. | Nov 2007 | B2 |
7333113 | Gordon | Feb 2008 | B2 |
7340077 | Gokturk | Mar 2008 | B2 |
7348963 | Bell | Mar 2008 | B2 |
7358972 | Gordon et al. | Apr 2008 | B2 |
7370883 | Basir et al. | May 2008 | B2 |
7427996 | Yonezawa et al. | Sep 2008 | B2 |
7428542 | Fink et al. | Sep 2008 | B1 |
7474256 | Ohta et al. | Jan 2009 | B2 |
7526120 | Gokturk et al. | Apr 2009 | B2 |
7536032 | Bell | May 2009 | B2 |
7573480 | Gordon | Aug 2009 | B2 |
7576727 | Bell | Aug 2009 | B2 |
7580572 | Bang et al. | Aug 2009 | B2 |
7590941 | Wee et al. | Sep 2009 | B2 |
7688998 | Tuma et al. | Mar 2010 | B2 |
7696876 | Dimmer et al. | Apr 2010 | B2 |
7724250 | Ishii et al. | May 2010 | B2 |
7762665 | Vertegaal et al. | Jul 2010 | B2 |
7774155 | Sato et al. | Aug 2010 | B2 |
7809160 | Vertegaal et al. | Oct 2010 | B2 |
7812842 | Gordon | Oct 2010 | B2 |
7821541 | Delean | Oct 2010 | B2 |
7840031 | Albertson et al. | Nov 2010 | B2 |
7844914 | Andre et al. | Nov 2010 | B2 |
7925549 | Looney et al. | Apr 2011 | B2 |
7971156 | Albertson et al. | Jun 2011 | B2 |
8150142 | Freedman et al. | Apr 2012 | B2 |
8166421 | Magal et al. | Apr 2012 | B2 |
8249334 | Berliner et al. | Aug 2012 | B2 |
8368647 | Lin | Feb 2013 | B2 |
8390821 | Shpunt et al. | Mar 2013 | B2 |
8400494 | Zalevsky et al. | Mar 2013 | B2 |
8493496 | Freedman et al. | Jul 2013 | B2 |
8514221 | King et al. | Aug 2013 | B2 |
8565479 | Gurman et al. | Oct 2013 | B2 |
20020057383 | Iwamura | May 2002 | A1 |
20020071607 | Kawamura et al. | Jun 2002 | A1 |
20020158873 | Williamson | Oct 2002 | A1 |
20030057972 | Pfaff et al. | Mar 2003 | A1 |
20030063775 | Rafii et al. | Apr 2003 | A1 |
20030088463 | Kanevsky | May 2003 | A1 |
20030146901 | Ryan | Aug 2003 | A1 |
20030156756 | Gokturk et al. | Aug 2003 | A1 |
20030185444 | Honda | Oct 2003 | A1 |
20030234346 | Kao | Dec 2003 | A1 |
20030235341 | Gokturk et al. | Dec 2003 | A1 |
20040046744 | Rafii et al. | Mar 2004 | A1 |
20040104935 | Williamson | Jun 2004 | A1 |
20040135744 | Bimber et al. | Jul 2004 | A1 |
20040165099 | Stavely et al. | Aug 2004 | A1 |
20040174770 | Rees | Sep 2004 | A1 |
20040183775 | Bell | Sep 2004 | A1 |
20040184640 | Bang et al. | Sep 2004 | A1 |
20040184659 | Bang et al. | Sep 2004 | A1 |
20040258314 | Hashimoto | Dec 2004 | A1 |
20050031166 | Fujimura et al. | Feb 2005 | A1 |
20050062684 | Geng | Mar 2005 | A1 |
20050088407 | Bell et al. | Apr 2005 | A1 |
20050089194 | Bell | Apr 2005 | A1 |
20050110964 | Bell et al. | May 2005 | A1 |
20050122308 | Bell et al. | Jun 2005 | A1 |
20050162381 | Bell et al. | Jul 2005 | A1 |
20050190972 | Thomas et al. | Sep 2005 | A1 |
20050254726 | Fuchs et al. | Nov 2005 | A1 |
20050265583 | Covell et al. | Dec 2005 | A1 |
20060010400 | Dehlin et al. | Jan 2006 | A1 |
20060092138 | Kim et al. | May 2006 | A1 |
20060110008 | Vertegaal et al. | May 2006 | A1 |
20060115155 | Lui et al. | Jun 2006 | A1 |
20060139314 | Bell | Jun 2006 | A1 |
20060149737 | Du et al. | Jul 2006 | A1 |
20060159344 | Shao et al. | Jul 2006 | A1 |
20060187196 | Underkoffler et al. | Aug 2006 | A1 |
20060239670 | Cleveland | Oct 2006 | A1 |
20060248475 | Abrahamsson | Nov 2006 | A1 |
20070078552 | Rosenberg | Apr 2007 | A1 |
20070154116 | Shieh | Jul 2007 | A1 |
20070230789 | Chang et al. | Oct 2007 | A1 |
20080019589 | Yoon et al. | Jan 2008 | A1 |
20080062123 | Bell | Mar 2008 | A1 |
20080094371 | Forstall et al. | Apr 2008 | A1 |
20080123940 | Kundu et al. | May 2008 | A1 |
20080150890 | Bell et al. | Jun 2008 | A1 |
20080150913 | Bell et al. | Jun 2008 | A1 |
20080170776 | Albertson et al. | Jul 2008 | A1 |
20080236902 | Imaizumi | Oct 2008 | A1 |
20080252596 | Bell et al. | Oct 2008 | A1 |
20080256494 | Greenfield | Oct 2008 | A1 |
20080260250 | Vardi | Oct 2008 | A1 |
20080273755 | Hildreth | Nov 2008 | A1 |
20080287189 | Rabin | Nov 2008 | A1 |
20090009593 | Cameron et al. | Jan 2009 | A1 |
20090027335 | Ye | Jan 2009 | A1 |
20090027337 | Hildreth | Jan 2009 | A1 |
20090031240 | Hildreth | Jan 2009 | A1 |
20090040215 | Afzulpurkar et al. | Feb 2009 | A1 |
20090077504 | Bell | Mar 2009 | A1 |
20090078473 | Overgard et al. | Mar 2009 | A1 |
20090083122 | Angell et al. | Mar 2009 | A1 |
20090083622 | Chien et al. | Mar 2009 | A1 |
20090096783 | Shpunt et al. | Apr 2009 | A1 |
20090103780 | Nishihara et al. | Apr 2009 | A1 |
20090183125 | Magal et al. | Jul 2009 | A1 |
20090195392 | Zalewski | Aug 2009 | A1 |
20090228841 | Hildreth | Sep 2009 | A1 |
20090256817 | Perlin et al. | Oct 2009 | A1 |
20090284608 | Hong et al. | Nov 2009 | A1 |
20090297028 | De Haan | Dec 2009 | A1 |
20100002936 | Khomo et al. | Jan 2010 | A1 |
20100007717 | Spektor et al. | Jan 2010 | A1 |
20100034457 | Berliner et al. | Feb 2010 | A1 |
20100036717 | Trest | Feb 2010 | A1 |
20100053151 | Marti et al. | Mar 2010 | A1 |
20100071965 | Hu et al. | Mar 2010 | A1 |
20100149096 | Migos et al. | Jun 2010 | A1 |
20100162181 | Shiplacoff et al. | Jun 2010 | A1 |
20100164897 | Morin et al. | Jul 2010 | A1 |
20100177933 | Willmann et al. | Jul 2010 | A1 |
20100194863 | Lopes et al. | Aug 2010 | A1 |
20100199228 | Latta et al. | Aug 2010 | A1 |
20100231504 | Bloem et al. | Sep 2010 | A1 |
20100234094 | Gagner et al. | Sep 2010 | A1 |
20100235786 | Maizels et al. | Sep 2010 | A1 |
20110006978 | Yuan | Jan 2011 | A1 |
20110018795 | Jang | Jan 2011 | A1 |
20110029918 | Yoo et al. | Feb 2011 | A1 |
20110052006 | Gurman et al. | Mar 2011 | A1 |
20110081072 | Kawasaki et al. | Apr 2011 | A1 |
20110164032 | Shadmi | Jul 2011 | A1 |
20110164141 | Tico et al. | Jul 2011 | A1 |
20110175932 | Yu et al. | Jul 2011 | A1 |
20110178784 | Sato et al. | Jul 2011 | A1 |
20110193939 | Vassigh et al. | Aug 2011 | A1 |
20110211754 | Litvak et al. | Sep 2011 | A1 |
20110225492 | Boettcher et al. | Sep 2011 | A1 |
20110225536 | Shams et al. | Sep 2011 | A1 |
20110227820 | Haddick et al. | Sep 2011 | A1 |
20110231757 | Haddick et al. | Sep 2011 | A1 |
20110248914 | Sherr | Oct 2011 | A1 |
20110254765 | Brand | Oct 2011 | A1 |
20110254798 | Adamson et al. | Oct 2011 | A1 |
20110260965 | Kim et al. | Oct 2011 | A1 |
20110261058 | Luo | Oct 2011 | A1 |
20110279397 | Rimon et al. | Nov 2011 | A1 |
20110291926 | Gokturk et al. | Dec 2011 | A1 |
20110292036 | Sali et al. | Dec 2011 | A1 |
20110293137 | Gurman et al. | Dec 2011 | A1 |
20110310010 | Hoffnung et al. | Dec 2011 | A1 |
20120001875 | Li et al. | Jan 2012 | A1 |
20120019703 | Thorn | Jan 2012 | A1 |
20120078614 | Galor et al. | Mar 2012 | A1 |
20120113104 | Jung et al. | May 2012 | A1 |
20120169583 | Rippel et al. | Jul 2012 | A1 |
20120184335 | Kim et al. | Jul 2012 | A1 |
20120202569 | Maizels et al. | Aug 2012 | A1 |
20120204133 | Guendelman et al. | Aug 2012 | A1 |
20120216151 | Sarkar et al. | Aug 2012 | A1 |
20120223882 | Galor et al. | Sep 2012 | A1 |
20120313848 | Galor et al. | Dec 2012 | A1 |
20130014052 | Frey et al. | Jan 2013 | A1 |
20130044053 | Galor et al. | Feb 2013 | A1 |
20130055120 | Galor et al. | Feb 2013 | A1 |
20130055150 | Galor | Feb 2013 | A1 |
20130058565 | Rafii et al. | Mar 2013 | A1 |
20130106692 | Maizels et al. | May 2013 | A1 |
20130107021 | Maizels et al. | May 2013 | A1 |
20130127854 | Shpunt et al. | May 2013 | A1 |
20130154913 | Genc et al. | Jun 2013 | A1 |
20130155070 | Luo | Jun 2013 | A1 |
20130201108 | Hirsch | Aug 2013 | A1 |
20130207920 | McCann et al. | Aug 2013 | A1 |
20130222239 | Galor | Aug 2013 | A1 |
20130263036 | Berenson et al. | Oct 2013 | A1 |
20130265222 | Berenson et al. | Oct 2013 | A1 |
20130283208 | Bychkov et al. | Oct 2013 | A1 |
20130283213 | Bychkov et al. | Oct 2013 | A1 |
Number | Date | Country |
---|---|---|
9935633 | Jul 1999 | WO |
03071410 | Aug 2003 | WO |
2004107272 | Dec 2004 | WO |
2005003948 | Jan 2005 | WO |
2005094958 | Oct 2005 | WO |
2007078639 | Jul 2007 | WO |
2007135376 | Nov 2007 | WO |
2009083984 | Jul 2009 | WO |
2012107892 | Aug 2012 | WO |
Entry |
---|
Hart, D., U.S. Appl. No. 09/616,606 “Method and System for High Resolution , Ultra Fast 3-D Imaging” filed Jul. 14, 2000. |
International Application PCT/IL2007/000306 Search Report dated Oct. 2, 2008. |
International Application PCT/IL2007/000574 Search Report dated Sep. 10, 2008. |
International Application PCT/IL2006/000335 Preliminary Report on Patentability dated Apr. 24, 2008. |
Avidan et al., “Trajectory triangulation: 3D reconstruction of moving points from a monocular image sequence”, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 22, No. 4, pp. 348-357, Apr. 2000. |
LeClerc et al., “The direct computation of height from shading”, The Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pp. 552-558, USA, Jun. 1991. |
Zhang et al., “Shape from intensity gradient”, IEEE Transactions on Systems, Man and Cybernetics—Part A: Systems and Humans, vol. 29, No. 3, pp. 318-325, May 1999. |
Zhang et al., “Height recovery from intensity gradients”, IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pp. 508-513, Jun. 21-23, 1994. |
Horn, B., “Height and gradient from shading”, International Journal of Computer Vision, vol. 5, No. 1, pp. 37-76, Aug. 1990. |
Bruckstein, A., “On shape from shading”, Computer Vision, Graphics & Image Processing, vol. 44, pp. 139-154, year 1988. |
Zhang et al., “Rapid Shape Acquisition Using Color Structured Light and Multi-Pass Dynamic Programming”, 1st International Symposium on 3D Data Processing Visualization and Transmission (3DPVT), Italy, Jul. 2002. |
Besl, P., “Active, Optical Range Imaging Sensors”, Machine vision and applications, vol. 1, pp. 127-152, year 1988. |
Horn et al., “Toward optimal structured light patterns”, Proceedings of International Conference on Recent Advances in 3D Digital Imaging and Modeling, pp. 28-37, Ottawa, Canada, May 1997. |
Goodman, J.W., “Statistical Properties of Laser Speckle Patterns”, Laser Speckle and Related Phenomena, pp. 9-75, Springer-Verlag, Berlin Heidelberg, 1975. |
Asada et al., “Determining Surface Orientation by Projecting a Stripe Pattern”, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 10, No. 5, pp. 749-754, Sep. 1988. |
Winkelbach et al., “Shape from Single Stripe Pattern Illumination”, Luc Van Gool (Editor), (DAGM 2002), Lecture Notes in Computer Science 2449, p. 240-247, Springer 2002. |
Koninckx et al., “Efficient, Active 3D Acquisition, based on a Pattern-Specific Snake”, Luc Van Gool (Editor), (DAGM 2002), Lecture Notes in Computer Science 2449, pp. 557-565, Springer 2002. |
Kimmel et al., “Analyzing and synthesizing images by evolving curves with the Osher-Sethian method”, International Journal of Computer Vision, vol. 24, No. 1, pp. 37-56, year 1997. |
Zigelman et al., “Texture mapping using surface flattening via multi-dimensional scaling”, IEEE Transactions on Visualization and Computer Graphics, vol. 8, No. 2, pp. 198-207, Apr. 2002. |
Dainty, J.C., “Introduction”, Laser Speckle and Related Phenomena, pp. 1-7, Springer-Verlag, Berlin Heidelberg, 1975. |
Mendlovic et al., “Composite harmonic filters for scale, projection and shift invariant pattern recognition”, Applied Optics Journal, vol. 34, No. 2, Jan. 10, 1995. |
Fua et al., “Human Shape and Motion Recovery Using Animation Models”, 19th Congress, International Society for Photogrammetry and Remote Sensing, Amsterdam, The Netherlands, Jul. 2000. |
Allard et al., “Marker-less Real Time 3D modeling for Virtual Reality”, Immersive Projection Technology, Iowa State University, year 2004. |
Howe et al., “Bayesian Reconstruction of 3D Human Motion from Single-Camera Video”, Advanced in Neural Information Processing Systems, vol. 12, pp. 820-826, USA 1999. |
Li et al., “Real-Time 3D Motion Tracking with Known Geometric Models”, Real-Time Imaging Journal, vol. 5, pp. 167-187, Academic Press 1999. |
Grammalidis et al., “3-D Human Body Tracking from Depth Images Using Analysis by Synthesis”, Proceedings of the IEEE International Conference on Image Processing (ICIP2001), pp. 185-188, Greece, Oct 7-10, 2001. |
Segen et al., “Shadow gestures: 3D hand pose estimation using a single camera”, Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition, pp. 479-485, Fort Collins, USA, 1999. |
Vogler et al., “ASL recognition based on a coupling between HMMs and 3D motion analysis”, Proceedings of IEEE International Conference on Computer Vision, pp. 363-369, Mumbai, India, 1998. |
Nam et al., “Recognition of Hand Gestures with 3D, Nonlinear Arm Movements”, Pattern Recognition Letters, vol. 18, No. 1, pp. 105-113, Elsevier Science B.V. 1997. |
Nesbat, S., “A System for Fast, Full-Text Entry for Small Electronic Devices”, Proceedings of the 5th International Conference on Multimodal Interfaces, ICMI 2003, Vancouver, Nov. 5-7, 2003. |
Ascension Technology Corporation, “Flock of Birds: Real-Time Motion Tracking”, 2008. |
Segen et al., “Human-computer interaction using gesture recognition and 3D hand tracking”, ICIP 98, Proceedings of the IEEE International Conference on Image Processing, vol. 3, pp. 188-192, Oct. 4-7, 1998. |
Dekker, L., “Building Symbolic Information for 3D Human Body Modeling from Range Data”, Proceedings of the Second International Conference on 3D Digital Imaging and Modeling, IEEE Computer Society, pp. 388-397, 1999. |
Holte et al., “Gesture Recognition using a Range Camera”, Technical Report CVMT-07-01 ISSN 1601-3646, Feb. 2007. |
Cheng et al., “Articulated Human Body Pose Inference from Voxel Data Using a Kinematically Constrained Gaussian Mixture Model”, CVPR EHuM2: 2nd Workshop on Evaluation of Articulated Human Motion and Pose Estimation, 2007. |
Microvision Inc., “PicoP® Display Engine—How it Works”, 1996-2012. |
Primesense Corporation, “PrimeSensor NITE 1.1”, USA, year 2010. |
ARM Ltd., “AMBA Specification: AHB”, Version 2, pp. 35-92, year 1999. |
Commission Regulation (EC) No. 1275/2008, Official Journal of the European Union, Dec. 17, 2008. |
Primesense, “Natural Interaction”, YouTube Presentation, Jun. 9, 2010 http://www.youtube.com/watch?v=TzLKsex43zl˜. |
Manning et al., “Foundations of Statistical Natural Language Processing”, chapters 6, 7, 9 and 12, MIT Press 1999. |
U.S. Appl. No. 12/762,336 Official Action dated May 15, 2012. |
Tobii Technology, “The World Leader in Eye Tracking and Gaze Interaction”, Mar. 2012. |
Noveron, “Madison video eyewear”, year 2012. |
International Application PCT/IB2012/050577 Search Report dated Aug. 6, 2012. |
U.S. Appl. No. 12/683,452 Official Action dated Sep. 7, 2012. |
Koutek, M., “Scientific Visualization in Virtual Reality: Interaction Techniques and Application Development”, PhD Thesis, Delft University of Technology, 264 pages, Jan. 2003. |
Azuma et al., “Recent Advances in Augmented Reality”, IEEE Computer Graphics and Applications, vol. 21, issue 6, pp. 34-47, Nov. 2001. |
Breen et al., “Interactive Occlusion and Collision of Real and Virtual Objects in Augmented Reality”, Technical Report ECRC-95-02, ECRC, Munich, Germany, 22 pages, year 1995. |
Burdea et al., “A Distributed Virtual Environment with Dextrous Force Feedback”, Proceedings of Interface to Real and Virtual Worlds Conference, pp. 255-265, Mar. 1992. |
Bleiweiss et al., “Fusing Time-of-Flight Depth and Color for Real-Time Segmentation and Tracking”, Dyn3D 2009, Lecture Notes in Computer Science 5742, pp. 58-69, Jena, Germany, Sep. 9, 2009. |
Bleiweiss et al., “Markerless Motion Capture Using a Single Depth Sensor”, SIGGRAPH Asia 2009, Yokohama, Japan, Dec. 16-19, 2009. |
Bevilacqua et al., “People Tracking Using a Time-Of-Flight Depth Sensor”, Proceedings of the IEEE International Conference on Video and Signal Based Surveillance, Sydney, Australia, Nov. 22-24, 2006. |
Bradski, G., “Computer Vision Face Tracking for Use in a Perceptual User Interface”, Intel Technology Journal, vol. 2, issue 2 (2nd Quarter 1998). |
Comaniciu et al., “Kernel-Based Object Tracking”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, No. 5, pp. 564-577, May 2003. |
GestureTek Inc., “Gesture Control Solutions for Consumer Devices”, Canada, 2009. |
Gokturk et al., “A Time-Of-Flight Depth Sensor—System Description, Issues and Solutions”, Proceedings of the 2004 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW'04), vol. 3, pp. 35, Jun. 27-Jul. 2, 2004. |
Grest et al., “Single View Motion Tracking by Depth and Silhouette Information”, SCIA 2007—Scandinavian Conference on Image Analysis, Lecture Notes in Computer Science 4522, pp. 719-729, Aalborg, Denmark, Jun. 10-14, 2007. |
Haritaoglu et al., “Ghost 3d: Detecting Body Posture and Parts Using Stereo”, Proceedings of the IEEE Workshop on Motion and Video Computing (MOTION'02), pp. 175-180, Orlando, USA, Dec. 5-6, 2002. |
Haritaoglu et al., “W4S: A real-time system for detecting and tracking people in 2½D”, ECCV 98—5th European conference on computer vision, vol. 1407, pp. 877-892, Freiburg, Germany, Jun. 2-6, 1998. |
Harville, M., “Stereo Person Tracking with Short and Long Term Plan-View Appearance Models of Shape and Color”, Proceedings of the IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS 2005), pp. 522-527, Como, Italy, Sep. 15-16, 2005. |
Holte, M., “Fusion of Range and Intensity Information for View Invariant Gesture Recognition”, IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW '08), pp. 1-7, Anchorage, USA, Jun. 23-28, 2008. |
Kaewtrakulpong et al., “An Improved Adaptive Background Mixture Model for Real-Time Tracking with Shadow Detection”, Proceedings of the 2nd European Workshop on Advanced Video Based Surveillance Systems (AVBS'01), Kingston, UK, Sep. 2001. |
Kolb et al., “ToF-Sensors: New Dimensions for Realism and Interactivity”, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 1-6, Anchorage, USA, Jun. 23-28, 2008. |
Kolsch et al., “Fast 2D Hand Tracking with Flocks of Features and Multi-Cue Integration”, IEEE Workshop on Real-Time Vision for Human Computer Interaction (at CVPR'04), Washington, USA, Jun. 27-Jul. 2, 2004. |
Krumm et al., “Multi-Camera Multi-Person Tracking for EasyLiving”, 3rd IEEE International Workshop on Visual Surveillance, Dublin, Ireland, Jul. 1, 2000. |
Leens et al., “Combining Color, Depth, and Motion for Video Segmentation”, ICVS 2009—7th International Conference on Computer Vision Systems, Liege, Belgium, Oct. 13-15, 2009. |
MacCormick et al., “Partitioned Sampling, Articulated Objects, and Interface-Quality Hand Tracking”, ECCV '00—Proceedings of the 6th European Conference on Computer Vision—Part II, pp. 3-19, Dublin, Ireland, Jun. 26-Jul. 1, 2000. |
Malassiotis et al., “Real-Time Hand Posture Recognition Using Range Data”, Image and Vision Computing, vol. 26, No. 7, pp. 1027-1037, Jul. 2, 2008. |
Morano et al., “Structured Light Using Pseudorandom Codes”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, issue 3, pp. 322-327, Mar. 1998. |
Munoz-Salinas et al., “People Detection and Tracking Using Stereo Vision and Color”, Image and Vision Computing, vol. 25, No. 6, pp. 995-1007, Jun. 1, 2007. |
Nanda et al., “Visual Tracking Using Depth Data”, Proceedings of the 2004 Conference on Computer Vision and Pattern Recognition Workshop, vol. 3, Washington, USA, Jun. 27-Jul. 2, 2004. |
Scharstein et al., “High-Accuracy Stereo Depth Maps Using Structured Light”, IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 195-202, Madison, USA, Jun. 2003. |
Shi et al., “Good Features to Track”, IEEE Conference on Computer Vision and Pattern Recognition, pp. 593-600, Seattle, USA, Jun. 21-23, 1994. |
Siddiqui et al., “Robust Real-Time Upper Body Limb Detection and Tracking”, Proceedings of the 4th ACM International Workshop on Video Surveillance and Sensor Networks, Santa Barbara, USA, Oct. 27, 2006. |
Softkinetic S.A., “IISU™—3D Gesture Recognition Platform for Developers of 3D Applications”, Brussels, Belgium, 2007-2010. |
Sudderth et al., “Visual Hand Tracking Using Nonparametric Belief Propagation”, IEEE Workshop on Generative Model Based Vision at CVPR'04, Washington, USA, Jun. 27-Jul. 2, 2004. |
Tsap, L., “Gesture-Tracking in Real Time with Dynamic Regional Range Computation”, Real-Time Imaging, vol. 8, issue 2, pp. 115-126, Apr. 2002. |
Xu et al., “A Multi-Cue-Based Human Body Tracking System”, Proceedings of the 5th International Conference on Computer Vision Systems (ICVS 2007), Germany, Mar. 21-24, 2007. |
Xu et al., “Human Detection Using Depth and Gray Images”, Proceedings of the IEEE Conference on Advanced Video and Signal Based Surveillance (AVSS'03), Miami, USA, Jul. 21-22, 2003. |
Yilmaz et al., “Object Tracking: A Survey”, ACM Computing Surveys, vol. 38, No. 4, article 13, Dec. 2006. |
Zhu et al., “Controlled Human Pose Estimation From Depth Image Streams”, IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 1-8, Anchorage, USA, Jun. 23-27, 2008. |
International Application PCT/IB2010/051055 Search Report dated Sep. 1, 2010. |
La Viola, J. Jr., “Whole-Hand and Speech Input in Virtual Environments”, Computer Science Department, Florida Atlantic University, USA, 1996. |
Martell, C., “Form: An Experiment in the Annotation of the Kinematics of Gesture”, Dissertation, Computer and Information Science, University of Pennsylvania, 2005. |
U.S. Appl. No. 12/352,622 Official Action dated Mar. 31, 2011. |
Prime Sense Inc., “Prime Sensor™ NITE 1.1 Framework Programmer's Guide”, Version 1.2, year 2009. |
PrimeSense Corporation, “PrimeSensor Reference Design 1.08”, USA, year 2010. |
International Application PCT/IB2011/053192 Search Report dated Dec. 6, 2011. |
U.S. Appl. No. 12/352,622 Official Action dated Sep. 30, 2011. |
Gordon et al., “The use of Dense Stereo Range Data in Augmented Reality”, Proceedings of the 1st International Symposium on Mixed and Augmented Reality (ISMAR), Darmstadt, Germany, pp. 1-10, Sep. 30-Oct. 1, 2002. |
Agrawala et al., “The two-user Responsive Workbench: support for collaboration through individual views of a shared space”, Proceedings of the 24th Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 97), Los Angeles, USA, pp. 327-332, Aug. 3-8, 1997. |
Harman et al., “Rapid 2D-to-3D conversion”, Proceedings of SPIE Conference on Stereoscopic Displays and Virtual Reality Systems, vol. 4660, pp. 78-86, Jan. 21-23, 2002. |
Hoff et al., “Analysis of head pose accuracy in augmented reality”, IEEE Transactions on Visualization and Computer Graphics, vol. 6, No. 4, pp. 319-334, Oct.-Dec. 2000. |
Poupyrev et al., “The go-go interaction technique: non-linear mapping for direct manipulation in VR”, Proceedings of the 9th Annual ACM Symposium on User Interface Software and Technology (UIST '96), Washington, USA, pp. 79-80, Nov. 6-8, 1996. |
Wexelblat et al., “Virtual Reality Applications and Explorations”, Academic Press Inc., San Diego, USA, 262 pages, year 1993. |
U.S. Appl. No. 13/161,508 Office Action dated Apr. 10, 2013. |
U.S. Appl. No. 12/683,452 Office Action dated Jun. 7, 2013. |
Miller, R., “Kinect for XBox 360 Review”, Engadget, Nov. 4, 2010. |
U.S. Appl. No. 13/161,508 Office Action dated Sep. 9, 2013. |
International Application PCT/IB2013/052332 Search Report dated Aug. 26, 2013. |
U.S. Appl. No. 13/314,210 Office Action dated Jul. 19, 2013. |
U.S. Appl. No. 13/314,207 Office Action dated Aug. 5, 2013. |
Sun et al., “SRP Based Natural Interaction Between Real and Virtual Worlds in Augmented Reality”, Proceedings of the International Conference on Cyberworlds (CW'08), pp. 117-124, Sep. 22-24, 2008. |
Schmalstieg et al., “The Studierstube Augmented Reality Project”, Presence: Teleoperators and Virtual Environments, vol. 11, No. 1, pp. 33-54, Feb. 2002. |
Ohta et al., “Share-Z: Client/Server Depth Sensing for See-Through Head-Mounted Displays”, Presence: Teleoperators and Virtual Environments, vol. 11, No. 2, pp. 176-188, Apr. 2002. |
Gobbetti et al., “VB2: an Architecture for Interaction in Synthetic Worlds”, Proceedings of the 6th Annual ACM Symposium on User Interface Software and Technology (UIST'93), pp. 167-178, Nov. 3-5, 1993. |
Gargallo et al., “Bayesian 3D Modeling from Images Using Multiple Depth Maps”, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 2, pp. 885-891, Jun. 20-25, 2005. |
ZRRO Ltd., “TeleTouch Device”, year 2011 (http://www.zrro.com/products.html). |
Berliner et al., U.S. Appl. No. 61/732,354, filed Dec. 12, 2012. |
Shpunt et al., U.S. Appl. No. 61/764,554, filed Feb. 14, 2013. |
U.S. Appl. No. 13/244,490 Office Action dated Dec. 6, 2013. |
U.S. Appl. No. 13/423,314 Office Action dated Dec. 4, 2013. |
U.S. Appl. No. 13/423,322 Office Action dated Nov. 1, 2013. |
Yip et al., “Pose Determination and Viewpoint Determination of Human Head in Video Conferencing Based on Head Movement”, IEEE Proceedings of the 10th International Multimedia Modelling Conference, 6 pages, Jan. 2004. |
Bohme et al., “Head tracking with combined face and nose detection”, IEEE International Symposium on Signals, Circuits and Systems, 4 pages, year 2009. |
Breitenstein et al., “Real-time face pose estimation from single range images”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 8 pages, Jun. 2008. |
Feldmann et al., “Immersive Multi-User 3d Video Communication”, Proceedings of International Broadcast Conference (IBC), 8 pages, Sep. 2009. |
Haker et al., “Geometric Invariants for Facial Feature Tracking with 3D TOF cameras”, IEEE International Symposium on Signals, Circuits and Systems, vol. 1, 4 pages, year 2007. |
Malassiotis et al., “Robust Real-Time 3D Head Pose Estimation from Range Data”, Pattern Recognition, vol. 38, No. 8, 31 pages, year 2005. |
Stipes et al., “4D Scan Registration with the SR-3000 LIDAR”, IEEE International Conference on Robotics and Automation, pp. 2988-2993, May 19-23, 2008. |
U.S. Appl. No. 13/984,031 Office Action dated Jul. 2, 2015. |
U.S. Appl. No. 13/960,822 Office Action dated May 29, 2015. |
CN Application # 201280007484.1 Office Action dated Jul. 8, 2015. |
U.S. Appl. No. 13/960,823 Office Action dated Jun. 19, 2015. |
U.S. Appl. No. 13/849,517 Office Action dated Oct. 7, 2015. |
Ridden, P., “Microsoft HoloDesk lets Users Handle Virtual 3D objects”, 7 pages, Oct. 24, 2011, published at http://www.gizmag.com/holodesk-lets-users-handle-virtual-3d-objects/20257/. |
KR Application # 10-2014-7029627 Office Action dated Aug. 28, 2015. |
U.S. Appl. No. 13/960,822 Office Action dated Aug. 13, 2015. |
U.S. Appl. No. 13/960,823 Office Action dated Nov. 20, 2015. |
U.S. Appl. No. 13/960,822 Office Action dated Nov. 24, 2015. |
Irie et al., “Construction of an intelligent room based on gesture recognition: operation of electric appliances with hand gestures”, Proceedings of 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 1, pp. 193-198, Sep. 28-Oct. 2, 2004. |
CN Application # 201280007484.1 Office Action dated Dec. 15, 2015. |
Number | Date | Country
---|---|---
20130283208 A1 | Oct 2013 | US

Number | Date | Country
---|---|---
61615403 | Mar 2012 | US
61663638 | Jun 2012 | US