Embodiments relate generally to machine-user interfaces, and more specifically to the interpretation of free-space user movements as control inputs.
Current computer systems typically include a graphic user interface that can be navigated by a cursor, i.e., a graphic element displayed on the screen and movable relative to other screen content, and which serves to indicate a position on the screen. The cursor is usually controlled by the user via a computer mouse or touch pad. In some systems, the screen itself doubles as an input device, allowing the user to select and manipulate graphic user interface components by touching the screen where they are located. While touch may be convenient and relatively intuitive for many users, it is not particularly accurate: the user's fingers can easily cover multiple links on a crowded display, leading to erroneous selection. Touch is also unforgiving in that it requires the user's motions to be confined to specific areas of space; for example, if the user's hand drifts merely one key-width to the right or left while typing, nonsense appears on the screen.
Mice, touch pads, and touch screens can be cumbersome and inconvenient to use. Touch pads and touch screens require the user to be in close physical proximity to the pad (which is often integrated into a keyboard) or screen so as to be able to reach them, which significantly restricts users' range of motion while providing input to the system. Touch is, moreover, not always reliably detected, sometimes necessitating repeated motions across the pad or screen to effect the input. Mice facilitate user input at some distance from the computer and screen (determined by the length of the connection cable or the range of the wireless connection between computer and mouse), but require a flat surface with suitable surface properties, or even a special mouse pad, to function properly. Furthermore, prolonged use of a mouse, in particular if it is positioned sub-optimally relative to the user, can result in discomfort or even pain.
Accordingly, alternative input mechanisms that provide users with the advantages of intuitive controls but free the user from the many disadvantages of touch-based control are highly desirable.
Aspects of the system and methods described herein provide for improved machine interface and/or control by interpreting the positions, configurations, and/or motions of one or more control objects (or portions thereof) in free space within a field of view of an image-capture device. The control object(s) may be or include a user's body part(s) such as, e.g., the user's hand(s), finger(s), thumb(s), head, etc.; a suitable hand-held pointing device such as a stylus, wand, or some other inanimate object; or generally any animate or inanimate object or object portion (or combinations thereof) manipulated by the user for the purpose of conveying information to the machine. In various embodiments, the shapes, positions, and configurations of one or more control objects are reconstructed in three dimensions (e.g., based on a collection of two-dimensional images corresponding to a set of cross-sections of the object), and tracked as a function of time to discern motion. The shape, configuration, position(s), and motion(s) of the control object(s), when constituting user input to the machine, are herein referred to as “gestures.”
In embodiments, the position, orientation, and/or motion of one or more control objects are tracked relative to one or more virtual control constructs (e.g., virtual control surfaces) defined in space (e.g., programmatically) to facilitate determining whether an engagement gesture has occurred. Engagement gestures can include engaging with a control (e.g., selecting a button or switch), disengaging with a control (e.g., releasing a button or switch), motions that do not involve engagement with any control (e.g., motion that is tracked by the system, possibly followed by a cursor, and/or a single object in an application or the like), environmental interactions (i.e., gestures to direct an environment rather than a specific control, such as scroll up/down), special-purpose gestures (e.g., brighten/darken screen, volume control, etc.), as well as others or combinations thereof.
Engagement gestures can be mapped to one or more controls of a machine or application executing on a machine, or a control-less screen location, of a display device associated with the machine under control. Embodiments provide for mapping of movements in three-dimensional (3D) space conveying control and/or other information to zero, one, or more controls. Controls can include embedded controls (e.g., sliders, buttons, and other control objects in an application) or environmental-level controls (e.g., windowing controls, scrolls within a window, and other controls affecting the control environment). In embodiments, controls may be displayable using two-dimensional (2D) presentations (e.g., a traditional cursor symbol, cross-hairs, icon, graphical representation of the control object, or other displayable object) on, e.g., one or more display screens, and/or 3D presentations using holography, projectors, or other mechanisms for creating 3D presentations. Presentations may also be audible (e.g., mapped to sounds, or other mechanisms for conveying audible information) and/or haptic.
In an embodiment, determining whether motion information defines an engagement gesture can include finding an intersection (also referred to as a contact, pierce, or a “virtual touch”) of motion of a control object with a virtual control surface, whether actually detected or determined to be imminent; dis-intersection (also referred to as a “pull back” or “withdrawal”) of the control object with a virtual control surface; a non-intersection—i.e., motion relative to a virtual control surface (e.g., wave of a hand approximately parallel to the virtual surface to “erase” a virtual chalk board); or other types of identified motions relative to the virtual control surface suited to defining gestures conveying information to the machine. In an embodiment, determining whether motion information defines an engagement gesture can include determining one or more engagement attributes from the motion information about the control object. In an embodiment, engagement attributes include motion attributes (e.g., speed, acceleration, duration, distance, etc.), gesture attributes (e.g., hand, two hands, tools, type, precision, etc.), other attributes and/or combinations thereof. In an embodiment, determining whether motion information defines an engagement gesture can include filtering motion information to determine whether motion comprises an engagement gesture. Filtering may be applied based upon engagement attributes, characteristics of motion, position in space, other criteria, and/or combinations thereof. Filtering can enable identification of engagement gestures, discrimination of engagement gestures from extraneous motions, discrimination of engagement gestures of differing types or meanings, and so forth.
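By way of illustration, the intersection test for a planar virtual control surface can be sketched as a sign change in the signed distance between the control object and the plane. This is a minimal sketch; the helper names, plane geometry, and classification labels are illustrative assumptions, not part of any particular embodiment:

```python
# Sketch: classifying a motion step relative to a virtual control surface
# (a plane defined programmatically in space).

def signed_distance(point, plane_point, plane_normal):
    """Signed distance of a 3D point from a plane (normal assumed unit-length)."""
    return sum((p - q) * n for p, q, n in zip(point, plane_point, plane_normal))

def classify_motion(prev_pos, curr_pos, plane_point, plane_normal):
    """Classify one motion step of the control object against the surface."""
    d0 = signed_distance(prev_pos, plane_point, plane_normal)
    d1 = signed_distance(curr_pos, plane_point, plane_normal)
    if d0 > 0 >= d1:
        return "intersection"      # virtual touch: crossed the surface
    if d0 <= 0 < d1:
        return "dis-intersection"  # pull-back / withdrawal
    return "non-intersection"      # motion beside the surface (e.g., a wave)
```

A gesture recognizer could feed successive tracked positions through such a classifier and treat the resulting labels as engagement attributes for filtering.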
Various embodiments provide high detection sensitivity for the user's gestures to allow the user to accurately and quickly (i.e., without any unnecessary delay time) control an electronic device using gestures of a variety of types and sensitivities (e.g., motions ranging from a few millimeters to over a meter) and, in some embodiments, to control the relationship between the physical span of a gesture and the resulting displayed response. The user's intent may be identified by, for example, comparing the detected gesture against a set of gesture primitives or other definitions that can be stored in a database. Each gesture primitive relates to a detected characteristic or feature of one or more gestures. Primitives can be coded, for example, as one or more vectors, scalars, tensors, and so forth indicating information about an action, command or other input, which is processed by the currently running application—e.g., to invoke a corresponding instruction or instruction sequence, which is thereupon executed, or to provide a parameter value or other input data. Because some gesture-recognition embodiments can provide high detection sensitivity, fine distinctions such as relatively small movements, accelerations, decelerations, velocities, and combinations thereof of a user's body part (e.g., a finger) or other control object can be accurately detected and recognized, thereby allowing the user to accurately interact with an electronic device and/or the applications executed and/or displayed thereon using a comparatively rich vocabulary of gestures.
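A minimal sketch of matching a detected primitive, coded as a vector, against a database of stored primitives might use cosine similarity as the confidence measure. The gesture names, record layout, and the 0.9 threshold below are illustrative assumptions:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two primitive vectors (both nonzero)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match_primitive(observed, database, threshold=0.9):
    """Return the best-matching gesture name, or None when no stored
    template exceeds the confidence threshold."""
    best_name, best_score = None, threshold
    for name, template in database.items():
        score = cosine_similarity(observed, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

The returned name would then be mapped to the corresponding instruction or parameter value by the running application.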
In some embodiments, the gesture-recognition system provides functionality for the user to statically or dynamically adjust the relationship between the user's actual motion and a resulting response, e.g., object movement displayed on the electronic device's screen. In static operation, the user manually sets this sensitivity level by manipulating a displayed slide switch or other icon using, for example, the gesture-recognition system described herein. In dynamic operation, the system automatically responds to the distance between the user and the device, the nature of the activity being displayed, the available physical space, and/or the user's own pattern of response (e.g., scaling the response based on the volume of space in which the user's gestures appear to be confined). For example, when limited space is available, the relationship may be adjusted, automatically or manually by the user, to a ratio smaller than one (e.g., 1:10), such that each unit (e.g., one millimeter) of the user's actual movement results in ten units (e.g., 10 pixels or 10 millimeters) of object movement displayed on the screen. Similarly, when the user is relatively close to the electronic device, the user may adjust (or the device, sensing the user's distance, may autonomously adjust) the relationship to a ratio larger than one (e.g., 10:1) to compensate. Accordingly, adjusting the ratio of the user's actual motion to the resulting action (e.g., object movement) displayed on the screen provides extra flexibility for the user to remotely command the electronic device and/or control the virtual environment displayed thereon.
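The motion-to-response relationship described above might be sketched as follows, where the ratio expresses actual movement to displayed movement (1:10 amplifies, 10:1 attenuates). The function names and the linear distance-based adjustment policy are hypothetical illustrations, not a prescribed implementation:

```python
def displayed_delta(actual_delta, ratio):
    """Map actual control-object movement to displayed movement.
    `ratio` is actual:displayed, so 0.1 (i.e., 1:10) amplifies one unit
    of motion into ten display units, and 10.0 (10:1) attenuates it."""
    return actual_delta / ratio

def auto_ratio(user_distance, reference_distance=1.0):
    """Hypothetical linear policy for dynamic adjustment: nearby users
    get a ratio larger than one (attenuation), distant users a ratio
    smaller than one (amplification)."""
    return reference_distance / user_distance
```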
In some embodiments, the system enables or provides an on-screen indicator showing in real time the degree of gesture completion, providing feedback letting the user know when a particular action is accomplished (e.g., a control is selected or a certain control manipulation effected). For example, the gesture-recognition system may recognize the gesture by matching it to a database record that includes multiple images, each of which is associated with a degree (e.g., from 1% to 100%) of completion of the performed gesture. The degree of completion of the performed gesture is then rendered on the screen. For example, as the user moves a finger closer to an electronic device to perform a clicking or touching gesture, the device display may show a hollow circular icon that a rendering application gradually fills in with a color indicating how close the user's motion is to completing the gesture. When the user has fully performed the clicking or touching gesture, the circle is entirely filled in; this may result in, for example, labeling the desired virtual object as a chosen object. The degree-of-completion indicator thus enables the user to recognize the exact moment when the virtual object is selected.
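A degree-of-completion computation for a "virtual click" toward a surface, with a text stand-in for the gradually filled circular icon, might look like the sketch below. The one-axis geometry and helper names are illustrative assumptions:

```python
def completion_fraction(start_z, current_z, surface_z):
    """Fraction of a 'click' gesture completed as a fingertip moves from
    start_z toward a virtual surface at surface_z, clamped to [0, 1]."""
    total = start_z - surface_z
    done = start_z - current_z
    return max(0.0, min(1.0, done / total))

def render_indicator(fraction, segments=10):
    """Text stand-in for the fill-in circular icon: '#' marks progress."""
    filled = round(fraction * segments)
    return "[" + "#" * filled + "-" * (segments - filled) + "]"
```

When the fraction reaches 1.0, the rendering application would mark the targeted virtual object as selected.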
Some embodiments discern, in real time, a dominant gesture from unrelated movements that may each qualify as a gesture, and may output a signal indicative of the dominant gesture. In various embodiments, the gesture-recognition system identifies a user's dominant gesture when more than one gesture (e.g., an arm-waving gesture and a finger-flexing gesture) is detected. For example, the gesture-recognition system may computationally represent the waving gesture as a waving trajectory and the finger-flexing gestures as five separate (and smaller) trajectories. Each trajectory may be converted into a vector along, for example, six Euler degrees of freedom in Euler space. The vector with the largest magnitude represents the dominant component of the motion (e.g., waving in this case) and the rest of the vectors may be ignored. In some embodiments, a vector filter that can be implemented using conventional filtering techniques is applied to the multiple vectors to filter out the small vectors and identify the dominant vector. This process may be repetitive, iterating until one vector—the dominant component of the motion—is identified. The identified dominant component can then be used to manipulate the electronic device or the applications thereof.
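The iterative vector-filtering step can be sketched as follows. The 0.5 cut-off fraction and the fallback rule for ties are illustrative choices, not prescribed by any embodiment:

```python
import math

def magnitude(v):
    """Euclidean magnitude of a motion vector."""
    return math.sqrt(sum(x * x for x in v))

def dominant_gesture(vectors, keep=0.5):
    """Iteratively filter out small motion vectors until the dominant one
    remains. `vectors` maps gesture names to (e.g., six-component)
    trajectory vectors."""
    candidates = dict(vectors)
    while len(candidates) > 1:
        largest = max(magnitude(v) for v in candidates.values())
        kept = {n: v for n, v in candidates.items()
                if magnitude(v) >= keep * largest}
        if len(kept) == len(candidates):  # no progress: fall back to the largest
            return max(candidates, key=lambda n: magnitude(candidates[n]))
        candidates = kept
    return next(iter(candidates))
```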
Accordingly, in one aspect, embodiments provide a method of controlling a machine. The method includes sensing a variation of position of at least one control object using an imaging system; determining from the variation one or more primitives describing at least one of a motion made by the control object and the character of the control object; comparing the primitive(s) to one or more templates in a library of gesture templates; selecting from a result of the comparing a set of templates of possible gestures corresponding to the one or more primitives; and providing at least one of the set of templates of possible gestures as an indication of a command to issue to a machine under control responsive to the variation. The one or more control objects may include a body part of a user.
In some embodiments, sensing a variation of position of at least one control object using an imaging system comprises capturing a plurality of temporally sequential images of one or more control objects manipulated by the user. Determining from the variation one or more primitives describing a motion made by the control object and/or the character of the control object may involve computationally analyzing the images of the control object(s) to recognize a gesture primitive including at least a portion of a trajectory (trajectory portion) describing motion made by the control object. The analysis may include identifying a scale associated with the gesture primitive, the scale being indicative of an actual distance traversed by the control object; the scale may be identified, for instance, by comparing the recognized gesture with records in a gesture database, which may include a series of electronically stored records each relating a gesture to an input parameter. The gestures may be stored in the records as vectors. The analysis may further include computationally determining a ratio between the scale and a displayed movement corresponding to an action to be displayed on a presentation device. The action may then be displayed based on the ratio. The ratio may be adjusted based on an external parameter such as, e.g., the actual gesture distance, or the ratio of a pixel distance in the captured images corresponding to performance of the gesture to the size, in pixels, of the display screen. Analyzing the images of the control object(s) may also include identifying a shape and position of the control object(s) in the images, and reconstructing the position and the shape of the control object(s) in 3D space based on correlations between the identified shapes and positions of the control object(s) in the images. 
The method may also involve defining a 3D model of the control object(s); the position and shape of the control object(s) may then be reconstructed in 3D space based on the 3D model. In some embodiments, analyzing the images of the control object(s) further includes temporally combining the reconstructed positions and shapes of the control object(s) in 3D space. In certain embodiments, determining from the variation one or more primitives describing a motion made by the control object and/or the character of the control object comprises determining a position or motion of the control object(s) relative to a virtual control construct.
Comparing the primitive(s) to one or more templates in a library of gesture templates may include disassembling at least a portion of a trajectory into a set of frequency components (e.g., by applying Fourier analysis to the trajectory portion as a signal over time to determine the set of frequency components), and searching for the set of frequency components among the template(s) stored in the library. Alternatively or additionally, comparing the primitive(s) to one or more templates in a library of gesture templates may include disassembling at least a portion of a trajectory into a set of frequency components, fitting a set of one or more functions to a set of frequency components representing at least a portion of a trajectory (e.g., fitting a Gaussian function to the set of frequency components), and searching for the set of functions among the template(s) stored in the library. In yet another alternative implementation, comparing the primitive(s) to one or more templates in a library of gesture templates may include disassembling at least a portion of a trajectory into a set of time dependent frequency components (e.g., by applying wavelet analysis to the trajectory portion as a signal over time), and searching for the set of time dependent frequency components among the template(s) stored in the library. In yet another embodiment, comparing the primitive(s) to one or more templates in a library of gesture templates includes distorting at least a portion of a trajectory based at least in part upon frequency of motion components, and searching for the distorted trajectory among the template(s) stored in the library.
In some embodiments, selecting from a result of the comparison a set of templates of possible gestures corresponding to the primitive(s) involves determining a similarity between the one or more primitives and the set of templates by applying at least one similarity determiner (such as a correlation, a convolution, and/or a dot product), and providing the similarity as an indication of quality of correspondence between the primitives and the set of templates. Selecting a set of templates may also include performing at least one of scaling and shifting to at least one of the primitives and the set of templates. Further, selecting a set of templates may involve disassembling at least a portion of a trajectory into a set of frequency components, filtering the set of frequency components to remove motions associated with jitter (e.g., by applying a Frenet-Serret filter), and searching for the filtered set of frequency components among the template(s) stored in the library.
In various embodiments, the method further includes computationally determining a degree of completion of at least one gesture, and modifying contents of a display in accordance with the determined degree of completion; the contents may include, e.g., an icon, a bar, a color gradient, or a color brightness. Further, the degree of completion may be compared to a threshold value, and a command to be performed upon the degree of completion may be indicated. Further, an action responsive to the gesture may be displayed based on the degree of gesture completion and in accordance with a physics simulation model and/or a motion model (which may be constructed, e.g., based on a simulated physical force, gravity, and/or a friction force).
In various embodiments, the method further includes computationally determining a dominant gesture (e.g., by filtering the plurality of gestures); and presenting an action on a presentation device based on the dominant gesture. For instance, each of the gestures may be computationally represented as a trajectory, and each trajectory may be computationally represented as a vector along six Euler degrees of freedom in Euler space, the vector having a largest magnitude being determined to be the dominant gesture.
In some embodiments, providing at least one of the set of templates of possible gestures as an indication of a command to issue to a machine under control responsive to the variation comprises filtering one or more gestures based at least in part upon one or more characteristics to determine a set of gestures of interest, and providing the set of gestures of interest (e.g., via an API). The characteristics may include the configuration, shape, and/or position of an object making the gesture. Gestures may be associated with primitives in a data structure.
In some embodiments, providing at least one of the set of templates of possible gestures as an indication of a command to issue to a machine under control responsive to the variation further includes detecting a conflict between a template corresponding to a user-defined gesture and a template corresponding to a predetermined gesture; and applying a resolution determiner to resolve the conflict, e.g., by ignoring a predetermined gesture when the conflict is between a predetermined gesture and a user-defined gesture and/or by providing the user-defined gesture when the conflict is between a predetermined gesture and a user-defined gesture.
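One possible resolution determiner implementing the "prefer the user-defined gesture" policy might look like this. The record layout (`name`, `user_defined`) is a hypothetical assumption for illustration:

```python
def resolve_conflict(matches):
    """Resolution-determiner sketch: when a motion matches both a
    user-defined and a predetermined gesture template, provide the
    user-defined gesture and ignore the predetermined one."""
    user_defined = [m for m in matches if m.get("user_defined")]
    if user_defined:
        return user_defined[0]
    return matches[0] if matches else None
```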
In another aspect, embodiments relate to a system enabling dynamic user interactions with a device having a display screen. The system includes at least one camera oriented toward a field of view and at least one source to direct illumination onto at least one control object in the field of view. Further, the system includes a gesture database comprising a series of electronically stored records, each of the records relating a gesture to an input parameter, and an image analyzer coupled to the camera and the database. The image analyzer is generally any suitable combination of hardware and/or software for performing the functions of the methods described above (including, e.g., image analysis and gesture recognition). The image analyzer is configured to operate the camera to capture a plurality of temporally sequential images of the control object(s); analyze the images of the control object(s) to recognize a gesture performed by the user; compare the recognized gesture with records in the gesture database to identify an input parameter associated therewith, the input parameter corresponding to an action for display on the display screen in accordance with a ratio between an actual gesture distance traversed in performance of the gesture and a displayed movement corresponding to the action; and adjust the ratio based on an external parameter. The external parameter may be the actual gesture distance, or a ratio of a pixel distance in the captured images corresponding to performance of the gesture to a size, in pixels, of the display screen. The ratio may be local to each gesture and may be stored in each gesture record in the database, or the ratio may be global across all gestures in the gesture database.
The image analyzer may be further configured to (i) identify shapes and positions of the at least one control object in the images and (ii) reconstruct a position and a shape of the at least one control object in 3D space based on correlations between the identified shapes and positions of the at least one control object in the images. Further, the image analyzer may be configured to define a 3D model of the control object(s) and reconstruct the position and shape of the control object(s) in 3D space based on the 3D model, and/or to estimate a trajectory of the at least one control object in 3D space. In some embodiments, the image analyzer is further configured to determine a position or motion of the control object(s) relative to a virtual control construct.
In various embodiments, a system enabling dynamic user interactions with a device includes one or more cameras and sources (e.g., light sources or sonic sources) for direct illumination (broadly understood, e.g., so as to include irradiation with ultrasound) of one or more control objects; a gesture database comprising a series of electronically stored records, each specifying a gesture; and an image analyzer coupled to the camera and the database and configured to operate the camera to capture a plurality of images of the control object(s); analyze the images to recognize a gesture; compare the recognized gesture with records in the gesture database to identify the gesture; determine a degree of completion of the recognized gesture; and display an indicator (such as an icon, a bar, a color gradient, or a color brightness) on a screen of the device reflecting the determined degree of completion. The image analyzer may be further configured to determine whether the degree of completion is above a predetermined threshold value and, if so, to cause the device to take a completion-triggered action. Further, the image analyzer may be further configured to display an action responsive to the gesture in accordance with a physics simulation model and based on the degree of gesture completion. The displayed action may be further based on a motion model. The image analyzer may be further configured to determine a position or motion of the control object(s) relative to a virtual control construct.
In various embodiments, a system for controlling dynamic user interactions with a device includes one or more cameras and (e.g., light or sonic) sources for direct illumination (again, broadly understood) of one or more control object(s) manipulated by the user in the field of view; a gesture database comprising a series of electronically stored records each specifying a gesture; and an image analyzer coupled to the camera and the database and configured to operate the camera to capture a plurality of temporally sequential images of the control object(s); analyze the images of the control object(s) to recognize a plurality of user gestures; determine a dominant gesture; and display an action on the device based on the dominant gesture.
The image analyzer may be further configured to determine the dominant gesture by filtering the plurality of gestures (e.g., iteratively), and/or to represent each of the gestures as a trajectory (e.g., as a vector along six Euler degrees of freedom in Euler space, the vector having the largest magnitude being determined to be the dominant gesture). The image analyzer may be further configured to determine a position or motion of the at least one control object relative to a virtual control construct.
Reference throughout this specification to “one example,” “an example,” “one embodiment,” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the example is included in at least one example of the present technology. Thus, the occurrences of the phrases “in one example,” “in an example,” “one embodiment,” or “an embodiment” in various places throughout this specification are not necessarily all referring to the same example. Furthermore, the particular features, structures, routines, steps, or characteristics may be combined in any suitable manner in one or more examples of the technology. The headings provided herein are for convenience only and are not intended to limit or interpret the scope or meaning of the claimed technology.
In the drawings, like reference characters generally refer to like parts throughout the different views. Also, the drawings are not necessarily to scale, with an emphasis instead generally being placed upon illustrating the principles disclosed herein. In the following description, various embodiments are described with reference to the following drawings, in which:
System and methods in accordance herewith generally utilize information about the motion of a control object, such as a user's finger or a stylus, in three-dimensional space to operate a user interface and/or components thereof based on the motion information. A “control object” as used herein with reference to an embodiment is generally any three-dimensionally movable object or appendage with an associated position and/or orientation (e.g., the orientation of its longest axis) suitable for pointing at a certain location and/or in a certain direction. Control objects include, e.g., hands, fingers, feet, or other anatomical parts, as well as inanimate objects such as pens, styluses, handheld controls, portions thereof, and/or combinations thereof. Where a specific type of control object, such as the user's finger, is used hereinafter for ease of illustration, it is to be understood that, unless otherwise indicated or clear from context, any other type of control object may be used as well.
Various embodiments take advantage of motion-capture technology to track the motions of the control object in real time (or near real time, i.e., sufficiently fast that any residual lag between the control object and the system's response is unnoticeable or practically insignificant). Other embodiments may use synthetic motion data (e.g., generated by a computer game) or stored motion data (e.g., previously captured or generated). References to motions in “free space” or “touchless” motions are used herein with reference to an embodiment to distinguish them from motions tied to and/or requiring physical contact of the moving object with a physical surface to effect input; however, in some applications, the control object may contact a physical surface ancillary to providing input, in which case the motion is still considered a “free-space” motion. Further, in some embodiments, the motion is tracked and analyzed relative to a virtual control construct, such as a virtual surface, programmatically defined in space and not necessarily corresponding to a physical surface or object; intersection of the control object with that virtual control construct defines a “virtual touch.” The virtual surface may, in some instances, be defined to co-reside with or be placed near a physical surface (e.g., a virtual touch screen may be created by defining a (substantially planar) virtual surface at or very near the screen of a display (e.g., television, monitor, or the like); or a virtual active table top may be created by defining a (substantially planar) virtual surface at or very near a table top convenient to the machine receiving the input).
In more detail, the system 110 may include an image-analysis module 114 that reconstructs the shapes and positions of the user's hand in 3D space and in real time; suitable systems and methods are described, e.g., in U.S. Ser. Nos. 61/587,554, 13/414,485, and 61/724,091, filed on Jan. 17, 2012, Mar. 7, 2012, and Nov. 8, 2012, respectively, the entire disclosures of which are hereby incorporated by reference. Based on the reconstructed shape, configuration, position, and orientation of the control object as a function of time, object and motion attributes may be derived. For example, the configuration of the user's hand (or other control object) may be characterized by a three-dimensional surface model or simply the position of a few key points (e.g., the finger tips) or other key parameters; and the trajectory of a gesture may be characterized with one or more vectors and/or scaling parameters (e.g., a normalized vector from the start to the end point of the motion, a parameter indicating the overall scale of the motion, and a parameter indicating any rotation of the control object during the motion). Other parameters that can be associated with gesture primitives include an acceleration, a deceleration, a velocity, a rotational velocity, a rotational acceleration, other parameters of motion, parameters of appearance of the control object such as color, apparent surface texture, temperature, other qualities or quantities capable of being sensed, and/or various combinations thereof. In some embodiments, the raw motion data is filtered prior to ascertaining motion attributes, e.g., in order to eliminate unintended jitter.
A gesture-recognition module 116 takes the object and motion attributes, or other information from the image-analysis module, as input to identify gestures. In one embodiment, the gesture-recognition module 116 compares attributes of motion or character detected from imaging or sensing a control object to gestures of a library of gesture templates electronically stored in a database 120 (e.g., a relational database, an object-oriented database, or any other kind of database), which is implemented in the system 110, the electronic device 104, or on an external storage system. (As used herein, the term “electronically stored” includes storage in volatile or non-volatile storage, the latter including disks, Flash memory, etc., and extends to any computationally addressable storage media (including, for example, optical storage).) For example, gesture primitives may be stored as vectors, i.e., mathematically specified spatial trajectories, and the gesture information recorded may include the relevant part of the user's body making the gesture; thus, similar trajectories executed by a user's hand and head may be stored in the database as different gestures, so that an application can interpret them differently. In one embodiment, one or more components of trajectory information about a sensed gesture—and potentially other gesture primitives—are mathematically compared against the stored trajectories to find potential matches from which a best match (or best matches) may be selected, and the gesture is recognized as corresponding to the located database entry based upon qualitative, statistical confidence factors or other quantitative criteria indicating a degree of match. For example, a confidence factor that exceeds a threshold can indicate a potential match.
Accordingly, as illustrated in
One technique for comparison (154) comprises dynamic time warping, in which observed trajectory information is temporally distorted and the distorted versions are compared against stored gesture information (in a database, for example). One type of distortion comprises frequency distortion, in which the trajectory information is distorted across frequencies of motion to yield a set of distorted trajectories. The set of distorted trajectories can be searched for matches in the database. Such frequency distortions enable finding gestures made at frequencies of motion different from those of the template or templates stored in the database.
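The classic dynamic-time-warping distance can be sketched for one-dimensional trajectory signals as below; the temporal distortion lets a gesture performed faster or slower than the stored template still produce a small distance:

```python
def dtw_distance(a, b):
    """Dynamic-time-warping distance between two sampled trajectories.

    Builds the standard cumulative-cost table; temporal stretching or
    compression of either sequence incurs no penalty beyond the
    pointwise differences it introduces.
    """
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]
```

For instance, the trajectory [0, 1, 2] performed at half speed, [0, 0, 1, 1, 2, 2], matches the template with zero distance.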
Another technique employs Fourier analysis to disassemble a portion of a trajectory (viewed as a signal over time) into frequency components. The set of frequencies can be searched for among the template(s) stored in the database.
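Disassembling a trajectory portion, viewed as a signal over time, into frequency components might be sketched with a naive discrete Fourier transform; a production system would likely use an FFT library, so this is an illustrative sketch only:

```python
import cmath

def frequency_components(trajectory):
    """Naive discrete Fourier transform of one trajectory coordinate.

    Returns the magnitude of each frequency component; the resulting
    set of frequencies can then be searched for among stored templates.
    """
    n = len(trajectory)
    mags = []
    for k in range(n):
        s = sum(trajectory[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
        mags.append(abs(s) / n)
    return mags
```

A pure sinusoidal motion concentrates its energy at a single frequency bin, so gestures with similar rhythmic content yield similar component sets.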
A further technique employs wavelet analysis to disassemble a portion of a trajectory (viewed as a signal over time) into time dependent frequency components. The set of frequencies can be searched for among the template(s) stored in the database.
In a yet further embodiment, Gaussian (or other) functions can be fit to the set of frequencies representing the trajectory portion to form a set of Gaussian functions at the frequencies of the trajectory. The functions can be cepstral envelopes in some embodiments. The functions fit to the frequencies can be searched for among the template(s) stored in the database.
In still further embodiments, techniques for finding similarity between two or more signal portions can facilitate locating template(s) in the database corresponding to the trajectory. For example, without limitation, correlation, convolution, sliding dot product, fixed dot product, or combinations thereof can be computed from the trajectory information and one or more template(s) in the database to determine a quality of match.
Of course, frequency components may be scaled and/or shifted to facilitate finding appropriate templates in the database corresponding to the gesture(s) to be recognized. Further, in some embodiments, frequency filtering can be applied to frequency components to facilitate finding template(s) stored in the database. For example, filtering can be used to eliminate jitter from shaking hands by eliminating high frequency components from the trajectory spectrum. In an embodiment, trajectories can be smoothed by applying Frenet-Serret filtering techniques described in U.S. Provisional Application No. 61/856,976, filed on Jul. 22, 2013 and entitled “Filtering Motion Using Frenet-Serret Frames,” the entire disclosure of which is hereby incorporated herein by reference.
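Eliminating high-frequency jitter can be illustrated with a simple moving-average low-pass filter; this is a generic sketch, distinct from the Frenet-Serret filtering technique referenced above:

```python
def low_pass(trajectory, window=3):
    """Moving-average low-pass filter over one sampled coordinate.

    Averaging each sample with its neighbors suppresses the
    high-frequency components (e.g., jitter from shaking hands) while
    preserving the slower intended motion.
    """
    half = window // 2
    out = []
    for i in range(len(trajectory)):
        lo, hi = max(0, i - half), min(len(trajectory), i + half + 1)
        out.append(sum(trajectory[lo:hi]) / (hi - lo))
    return out
```

A rapidly oscillating input such as [0, 10, 0, 10, 0] is flattened toward its local mean, while a slow drift passes through nearly unchanged.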
In brief, as is known in the art, the Frenet-Serret formulas describe the kinematic properties of a particle moving along a continuous, differentiable curve in 3D space. This representation of motion is better tailored to gestural movements than the conventional Cartesian (x, y, z) representation. Accordingly, embodiments convert captured motion from Cartesian space to Frenet-Serret space by attaching Frenet-Serret reference frames to a plurality of locations on the control object's path. The Frenet-Serret frame consists of (i) a tangent unit vector (T) that is tangent to the path, (ii) a normal unit vector (N) that is the derivative of T with respect to an arclength parameter of the path, divided by its length, and (iii) a binormal unit vector (B) that is the cross-product of T and N. Alternatively, the tangent vector may be determined by normalizing a velocity vector if it is known at a given location on the path. The unit vectors T, N, and B collectively form the orthonormal basis of the Frenet-Serret frame in 3D space. The Frenet-Serret coordinate system rotates continually as the object traverses the path, and so may provide a more natural coordinate system for an object's trajectory than a strictly Cartesian system.
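Attaching a Frenet-Serret frame at a sampled location on the path can be sketched with finite differences; the central-difference velocity and acceleration estimates are an assumption of this sketch, not a prescribed discretization:

```python
import math

def _sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _unit(a):
    n = math.sqrt(sum(x * x for x in a))
    return tuple(x / n for x in a)

def frenet_frame(p_prev, p, p_next):
    """Approximate the Frenet-Serret frame (T, N, B) at sample p.

    T is the unit tangent (normalized velocity estimate), B the unit
    binormal (normalized cross product of velocity and acceleration
    estimates), and N completes the orthonormal basis as B x T.
    """
    v = _sub(p_next, p_prev)                     # central-difference velocity
    a = _sub(_sub(p_next, p), _sub(p, p_prev))   # second-difference acceleration
    t = _unit(v)
    b = _unit(_cross(v, a))
    n = _cross(b, t)
    return t, n, b
```

On samples taken from a unit circle in the x-y plane, the computed frame approaches T = (0, 1, 0), N = (-1, 0, 0), B = (0, 0, 1) at the point (1, 0, 0), as expected.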
Once converted to Frenet-Serret space, the object's motion is filtered. The filtered data may then be converted back to Cartesian space or another desired reference frame. In one embodiment, filtering includes applying a smoothing filter to a set of sequential unit vectors corresponding to the tangent, normal, and/or binormal direction of the Frenet-Serret frame. For some filters, each unit vector is specified by one scalar value per dimension (i.e., by three scalar values in 3D) and filtered separately. The smoothing filter may be applied to each set of scalar values, the direction of the vector may thereafter be reconstructed from its filtered values, and the other two vectors of the frame at each point may be recalculated accordingly. A 3D curve interpolation method may then be applied to generate a 3D curve that passes through the points in the given order, matching the filtered Frenet-Serret frame at each point and representing the object's path of motion.
In various alternative embodiments, noise filtering may be achieved by determining the rotation between consecutive Frenet-Serret frames along the path using the Frenet-Serret formulas describing curvature and torsion. The total rotation of the Frenet-Serret frame is the combination of the rotations of each of the three Frenet vectors described by the formulas

T′ = κN
N′ = −κT + τB
B′ = −τN

where ′ denotes the derivative with respect to arclength, κ is the curvature, and τ is the torsion of the curve. The two scalars κ and τ may define the curvature and torsion of a 3D curve, in that the curvature measures how sharply a curve is turning while the torsion measures the extent of its twist in 3D space. Alternatively, the curvature and torsion parameters may be calculated directly from the derivatives of a best-fit curve function r (i.e., the velocity r′ and its higher derivatives) using, for example, the equations

κ = |r′ × r″| / |r′|³
τ = ((r′ × r″) · r‴) / |r′ × r″|²
The curvature and torsion parameters describing the twists and turns of the Frenet-Serret frames in 3D space may be filtered, and a smooth path depicting the object's motion may be constructed therefrom.
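The curvature and torsion at a point can be computed from velocity, acceleration, and jerk estimates using the standard formulas κ = |v × a| / |v|³ and τ = ((v × a) · j) / |v × a|²; the following is a minimal sketch under that assumption:

```python
import math

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _norm(a):
    return math.sqrt(sum(x * x for x in a))

def curvature_torsion(v, a, j):
    """Curvature kappa and torsion tau from velocity v, acceleration a,
    and jerk j at a point on the path.

    kappa = |v x a| / |v|^3 measures how sharply the curve turns;
    tau = ((v x a) . j) / |v x a|^2 measures its twist out of the plane.
    """
    va = _cross(v, a)
    kappa = _norm(va) / _norm(v) ** 3
    cross_sq = sum(x * x for x in va)
    tau = sum(x * y for x, y in zip(va, j)) / cross_sq if cross_sq else 0.0
    return kappa, tau
```

For a circle of radius 2 traversed in a plane, this yields κ = 0.5 (the reciprocal of the radius) and τ = 0 (no twist), the values one would then filter along the path.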
In some embodiments, additional filtering, modification or smoothing may be applied to the resulting path, e.g., utilizing the principles of an Euler spiral (or similar construct), to create aesthetically pleasing curves and transitions before converting the coordinates back to Cartesian coordinates. In one embodiment, the filtered Frenet-Serret path (with or without modification by, for example, application of the Euler spiral) may be used to better predict future motion of the object. By removing or reducing any noise, inconsistencies, or unintended motion in the path, the filtered path may better predict a user's intent in executing a gestural motion. The predicted future motion along the Frenet-Serret path is therefore based on past-detected motion and a kinematic estimate of the user's intent behind the motion.
Returning to the discussion of gestures stored in the database, gesture templates can comprise one or more frequencies, combinations of frequency and motion information, and/or characteristics of control objects (e.g., apparent texture, color, size, or combinations thereof). Templates can be created to embody one or more components from taught gestures using techniques described in U.S. Provisional Application No. 61/872,538, filed on Nov. 20, 2013 and entitled "Interactive Training Recognition of Free Space Gestures for Interface and Control," the entire disclosure of which is hereby incorporated herein by reference. In brief, a (typically computer-implemented) gesture training system may help application developers and/or end-users to define their own gestures and/or customize gestures to their needs and preferences—in other words, to go outside the realm of pre-programmed, or "canned," gestures. The gesture training system may interact with the user through natural language, e.g., a series of questions, to better define the action the user wants the system to be able to recognize. By answering these questions in a prescribed setup process, the user defines parameters and/or parameter ranges for the respective gesture, thereby resolving ambiguities. Advantageously, this approach affords reliable gesture recognition without the algorithmic complexity normally associated with the need for the computer to guess the answers; thus, it helps reduce software complexity and cost. In one embodiment, once the system has been trained to recognize a particular gesture or action, it may create an object (e.g., a file, data structure, etc.) for this gesture or action, facilitating recognition of the gesture or action thereafter. The object may be used by an application programming interface (API), and may be employed by both developers and non-developer users.
In some embodiments, the data is shared or shareable between developers and non-developer users, facilitating collaboration and the like.
In some embodiments, gesture training is conversational, interactive, and dynamic; based on the responses the user gives, the next question, or the next parameter to be specified, may be selected. The questions may be presented to the user in visual or audio format, e.g., as text displayed on the computer screen or via speaker output. User responses may likewise be given in various modes, e.g., via text input through a keyboard, selection of graphic user-interface elements (e.g., using a mouse), voice commands, or, in some instances, via basic gestures that the system is already able to recognize. (For example, a "thumbs-up" or "thumbs-down" gesture may be used to answer any yes-no question.) Furthermore, as illustrated by way of example below, certain questions elicit an action—specifically, performance of an exemplary gesture (e.g., a typical gesture or the extremes of a range of gestures)—rather than a verbal response. In this case, the system may utilize, e.g., machine-learning approaches, as are well known to persons of skill in the art, to distill the relevant information from the camera images or video stream capturing the action.
In one embodiment, vector(s) or other mathematical constructs representing portions of gesture(s) may be scaled so that, for example, large and small arcs traced by a user's hand will be recognized as the same gesture (i.e., corresponding to the same database record) but the gesture recognition module will return both the identity and a value, reflecting the scaling, for the gesture. The scale may correspond to an actual gesture distance traversed in performance of the gesture, or may be normalized to some canonical distance. Comparison of a tracked motion against a gesture template stored in the library facilitates determining a degree of completion of the gesture (discussed in more detail below), and can enable some embodiments to provide increased accuracy with which detected motions are interpreted as control input.
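Normalizing a gesture path to a canonical size before template lookup, while preserving the original extent as a scaling value, might be sketched as follows; the 2D point-list representation and bounding-box normalization are assumptions of this sketch:

```python
def normalize_gesture(points):
    """Rescale a 2D gesture path to a canonical size.

    Large and small arcs normalize to the same shape, so both can match
    the same stored template; the returned scale preserves the actual
    extent of the motion for use as an input parameter.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    cx = (max(xs) + min(xs)) / 2.0
    cy = (max(ys) + min(ys)) / 2.0
    normalized = [((x - cx) / scale, (y - cy) / scale) for x, y in points]
    return normalized, scale
```

A recognizer built this way would return both the gesture identity (from the normalized shape) and the scale value, as described above.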
In various embodiments, if the gesture-recognition module 116 is implemented as part of a specific application (such as a game or controller logic for a television), the stored gesture information may contain an input parameter corresponding to the gesture (which may be scaled using the scaling value). In systems where the gesture-recognition module 116 is implemented as a utility available to multiple applications, this application-specific parameter is omitted: when an application invokes the gesture-recognition module 116, it interprets the identified gesture in accordance with its own programming.
In some embodiments, the gesture-recognition module 116 detects more than one gesture. Referring to
With renewed reference to
Gesture recognition and/or interpretation as control input may be context-dependent, i.e., the same motion may correspond to different control inputs, even for the same electronic device 104 under control, depending, e.g., on the application, application environment, window, or menu that is currently active; user settings and preferences; the presence or absence and the configuration or state of motion of one or more additional control objects; the motion relative to one or more virtual constructs (as discussed in detail below); and/or the recent history of control input. For example, a particular gesture performed with one hand may affect the interpretation of a gesture performed simultaneously with another hand; a finger swipe parallel to the screen may have different meanings in different operational modes as distinguished based on whether the finger pierces a virtual control surface; and a clicking gesture that normally causes selection of a virtual control may have a different effect if made during the course of a video game.
Of course, the functionality of the image-analysis module 114, gesture-recognition module 116, and device and user-interface control module 118 may be organized, grouped, and distributed among various devices and between the electronic device 104 and the gesture-based machine-control system 110 in many different ways, and the depiction of
To further illustrate gesture-based machine control in accordance herewith, consider the following exemplary user interaction with an electronic device 104: To initiate communication with the electronic device 104, the user may first move a hand in a repetitive or distinctive way (e.g., performing a waving hand gesture). Upon detecting and recognizing this hand gesture, the gesture-recognition module 116 transmits a signal indicative thereof to the electronic device 104, which, in response, renders an appropriate display (e.g., a control panel 126). The user then performs another gesture (e.g., moving her hand in an “up” or “down” direction). The gesture-recognition module 116 detects and identifies the gesture and a scale associated therewith, and transmits this data to the electronic device 104; the device 104, in turn, interprets this information as an input parameter (as if the user had pressed a button on a remote control device) indicative of a desired action, enabling the user to manipulate the data displayed on the control panel 126 (such as selecting a channel of interest, adjusting the audio sound, or varying the brightness of the screen). In various embodiments, the device 104 connects to a source of video games (e.g., a video game console or CD or web-based video game); the user can perform various gestures to remotely interact with the virtual objects 112 in the virtual environment (video game). The detected gestures and scales are provided as input parameters to the currently running game, which interprets them and takes context-appropriate action, i.e., generates screen displays responsive to the gestures.
In various embodiments, after the user successfully initiates communications with the electronic device 104 via the gesture-based machine-control system 110, the system 110 generates a form of feedback (e.g., visual, aural, haptic or other sensory feedback or combinations thereof) for presentation on appropriate presentation mechanism(s). In the example embodiment illustrated by
Thus, mapping movements of the control object to those of the cursor on-screen can be accomplished in different ways. In some embodiments, the position and orientation of the control object—e.g., a stretched-out index finger—relative to the screen are used to compute the intersection of a straight line through the axis of the finger with the screen, and a cursor symbol is displayed at the point of intersection. If the range of motion causes the intersection point to move outside the boundaries of the screen, the intersection with a (virtual) plane through the screen may be used, and the cursor motions may be re-scaled or translated, relative to the finger motions, to remain within the screen boundaries. Alternatively to extrapolating the finger towards the screen, the position of the finger (or control object) tip may be projected perpendicularly onto the screen; in this embodiment, the control object orientation may be disregarded. As will be readily apparent to one of skill in the art, many other ways of mapping the control object position and/or orientation onto a screen location may, in principle, be used; a particular mapping may be selected based on considerations such as, without limitation, the requisite amount of information about the control object, the intuitiveness of the mapping to the user, and the complexity of the computation. For example, in some embodiments, the mapping is based on intersections with or projections onto a (virtual) plane defined relative to the camera or other image-capture hardware, under the assumption that the screen is located within that plane (which is correct, at least approximately, if the camera is correctly aligned relative to the screen), whereas, in other embodiments, the screen location relative to the camera is established via explicit calibration (e.g., based on camera images including the screen).
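The straight-line extrapolation of the finger axis onto the screen reduces to a ray-plane intersection; a minimal sketch follows, assuming (hypothetically) that the screen lies in the plane z = 0 of the camera coordinate system:

```python
def finger_to_screen(tip, direction, screen_z=0.0):
    """Intersect a ray through the finger axis with the screen plane.

    tip is the 3D fingertip position and direction the finger axis;
    returns the (x, y) cursor position on the plane z = screen_z, or
    None if the finger is parallel to or points away from the screen.
    """
    dz = direction[2]
    if dz == 0:
        return None
    t = (screen_z - tip[2]) / dz
    if t < 0:
        return None
    return (tip[0] + t * direction[0], tip[1] + t * direction[1])
```

The perpendicular-projection alternative described above corresponds to simply dropping the z coordinate of the fingertip, disregarding orientation entirely.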
In various embodiments, certain gestures have an associated threshold of completion that needs to be exceeded before the gesture is recognized as such; this completion requirement may serve to enhance the reliability of gesture recognition, in particular, the elimination of false positives in gesture detection. As an example, consider the selection by the user of an on-screen virtual object, using a “finger click” in free space. With reference to
The degree of completion of the performed gesture (e.g., how much the user has moved her finger or hand) may be rendered on the screen, and indeed, the assessment of gestural completion may be handled by the rendering application running on the device 316 rather than by the gesture-recognition system 314. For example, the electronic device 316 may display a hollow circular icon 318 that the rendering application gradually fills in with a color or multiple colors as the device receives simple motion (position-change) signals from the gesture-recognition system 314 as the user moves a finger closer to the device 316, while performing a clicking or “touching” gesture. The degree to which the circle is filled indicates how close the user's motion is to completing the gesture (or how far the user's finger has moved away from its original location). When the user fully performs the clicking or touching gesture, the circle is entirely filled in; this may result in, for example, labeling the virtual object 312 as a chosen object.
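The degree-of-completion computation that drives such an indicator might be sketched as the fraction of the template's path length already travelled; the function name and list-of-positions input are illustrative assumptions:

```python
import math

def completion_fraction(path, template_length):
    """Degree of completion of a gesture relative to its template.

    path is the list of positions traversed so far; the returned
    fraction can drive an on-screen indicator (e.g., how much of a
    hollow circle is filled), with 1.0 meaning the gesture is complete.
    """
    travelled = sum(math.dist(path[i], path[i + 1])
                    for i in range(len(path) - 1))
    return min(travelled / template_length, 1.0)
```

The rendering application would fill the indicator in proportion to this value and trigger selection when it reaches 1.0.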
In some embodiments, the device temporarily displays a second indication (e.g., changing the shape, color or brightness of the indicator) to confirm the object selection. The indication of the degree of gesture completion and/or the confirming indication of object selection thus enable the user to easily predict the exact moment when the virtual object is selected; accordingly, the user can subsequently manipulate the selected object on-screen in an intuitive fashion. Although the discussion herein focuses on filling of the hollow circle 318, embodiments can include virtually any type of representation displayed on the screen that can indicate the completion of the performed gesture. For example, a hollow bar 320 progressively filled in by color, a gradient of color 322, the brightness of a color or any suitable indicator may be used to illustrate a degree of gesture completion performed by the user.
The gesture-recognition system 314 detects and identifies the user's gestures based on the shapes and positions of the gesturing part of the user's body in the captured 2D images. A 3D image of the gesture can be reconstructed by analyzing the temporal correlations of the identified shapes and positions of the user's gesturing body part in consecutively acquired images. Because the reconstructed 3D image can accurately detect and recognize all types of gestures (e.g., moving a finger a distance of less than one centimeter to greater than a meter) in real time, embodiments of the gesture-recognition system 314 provide high detection sensitivity as well as selectivity. In various embodiments, once the gesture is recognized and the instruction associated therewith is identified, the gesture-recognition system 314 transmits signals to the device 316 to activate an on-screen indicator displaying a degree of completion of the user's gesture. The on-screen indicator provides feedback that allows the user to control the electronic device 316 and/or manipulate the displayed virtual objects 312 using various degrees of movement. For example, the user gesture may be as large as a body-length jump or as small as a finger click.
In one embodiment, once the object 312 is labeled as a chosen object, the gesture-recognition system 314 locks the object 312 together with the cursor 310 on the screen to reflect the user's subsequently performed movement. For example, when the user moves a hand in the downward direction, the displayed cursor 310 and the selected virtual object 312 also move downward together on the display screen in response. Again, this allows the user to accurately manipulate the virtual objects 312 in the virtual environment.
In another embodiment, when a virtual object is labeled as a chosen item, the user's subsequent movement is converted computationally to a simulated physical force applied to the selected object. Referring to
It should be stressed that the foregoing functional division between the gesture-recognition system 314 and the rendering application running on the device 316 is exemplary only; in some embodiments the two entities are more tightly coupled or even unified so that, rather than simply passing generic force data to the application, the gesture-recognition system 314 has world knowledge of the environment as rendered on the device 316. In this way, the gesture-recognition system 314 can apply object-specific knowledge (e.g., friction forces and inertia) to the force data so that the physical effects of user movements on the rendered objects are computed directly (rather than based on generic force data generated by the gesture-recognition system 314 and processed on an object-by-object basis by the device 316). Moreover, in various embodiments, the motion-capture and gesture-recognition functionality is implemented on the device 316, e.g., as a separate application that provides gesture information to the rendering application (such as a game) running on the device 316, or, as discussed above, as a module integrated within the rendering application (e.g., a game application may be provided with suitable motion-capture and gesture-recognition functionality). The division of computational responsibility between different hardware devices as well as between hardware and software represents a design choice.
A representative method 350 for supporting a user's interaction with an electronic device by means of free-space gestures, and particularly to monitor the degree of gesture completion so that on-screen action can be deferred until the gesture is finished, is shown in
Referring to
In still other embodiments, the ratio adjustment is achieved using a conventional remote-control device, which the user controls by pushing buttons, or using a wireless device such as a tablet or smart phone. A different scaling ratio may be associated with each gesture and stored in association therewith, e.g., as part of the specific gesture record in the database (i.e., the scaling ratio may be local and potentially differ between gestures). Alternatively, the scaling ratio may be applicable to several or all gestures stored in the gesture database (i.e., the scaling ratio may be global and shared among several or all of the gestures).
Alternatively, the relationship between physical and on-screen movements may be determined, at least in part, based on the characteristics of the display and/or the rendered environment. For example, with reference to
The scaling relationship between the user's actual movement and the resulting action taking place on the display screen may result in performance challenges, especially when limited space is available to the user. For example, when two family members sit together on a couch playing a video game displayed on a TV, each user's effective range of motion is limited by the presence of the other user. Accordingly, the scaling factor may be altered to reflect a restricted range of motion, so that small physical movements correspond to larger on-screen movements. This can take place automatically upon detection, by the machine-control system, of multiple adjacent users. The scaling ratio may also depend, in various embodiments, on the rendered content of the screen. For example, in a busy rendered environment with many objects, a small scaling ratio may be desired to allow the user to navigate with precision; whereas for simpler or more open environments, such as where the user pretends to throw a ball or swing a golf club and the detected action is rendered on the screen, a large scaling ratio may be preferred.
As noted above, the proper relationship between the user's movement and the corresponding motion displayed on the screen may depend on the user's position relative to the recording camera. For example, the ratio of the user's actual movement m to the pixel size M in the captured image may depend on the viewing angle of the camera as well as the distance between the camera and the user. If the viewing angle is wide or the user is far away from the camera, the detected relative movement of the user's gesture (i.e., m/M) is smaller than it would be if the viewing angle were narrower or the user were closer to the camera. Accordingly, in the former case, the virtual object moves too little on the display in response to a gesture, whereas in the latter case the virtual object moves too far. In various embodiments, the ratio of the user's actual movement to the corresponding movement displayed on the screen is automatically coarsely adjusted based on, for example, the distance between the user and the camera (which may be tracked by ranging); this allows the user to move toward or away from the camera without disrupting the intuitive feel that the user has acquired for the relationship between actual and rendered movements.
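One hypothetical rule for the coarse automatic adjustment is to scale the ratio linearly with the ranged camera-to-user distance, so the same physical gesture yields roughly constant on-screen travel; the reference distance and linear form are assumptions of this sketch, not a disclosed calibration:

```python
def scaling_ratio(distance_mm, reference_mm=600.0, base_ratio=1.0):
    """Coarsely adjust the motion scaling ratio by camera distance.

    A user standing twice as far from the camera subtends half the
    image-plane movement for the same physical gesture, so the ratio
    is doubled to compensate (linear model, illustrative only).
    """
    return base_ratio * (distance_mm / reference_mm)
```

Finer, per-gesture or user-requested adjustments would then be applied on top of this coarse value.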
In various embodiments, when the gesture is recognized but the detected user movement is minuscule (i.e., below a predetermined threshold), the gesture-based machine-control system switches from a low-sensitivity detection mode to a high-sensitivity mode where a 3D image of the hand gesture is accurately reconstructed based on the acquired 2D images and/or a 3D model. Because the high-sensitivity system can accurately detect small movements (e.g., less than a few millimeters) performed by a small part of the body, e.g., a finger, the ratio of the user's actual movement to the resulting movement displayed on the screen may be adjusted within a large range, for example, between 1000:1 and 1:1000.
A representative method 450 for a user to dynamically adjust the relationship between her actual motion and the resulting object movement displayed on the electronic device's screen in accordance with embodiments is shown in
As discussed above with respect to
In one embodiment, only a subset of the gestures captured by the gesture-recognition system is sent to the application running on the electronic device. The recognized gestures may be sent from the gesture-recognition module 116 to a gesture filter 130, as illustrated in
The characteristics of the filter 130 may be defined to suit a particular application or group of applications. In various embodiments, the features may be received from a menu interface, read from a command file or configuration file, communicated via an API, or any other similar method. The filter 130 may include sets of preconfigured characteristics and allow a user or application to select one of the sets. Examples of filter characteristics include the path that a gesture makes (the filter 130 may pass gestures having only relatively straight paths, for example, and block gestures having curvilinear paths); the velocity of a gesture (the filter 130 may pass gestures having high velocities, for example, and block gestures having low velocities); and/or the direction of a gesture (the filter may pass gestures having left-right motions, for example, and block gestures having forward-back motions). Further filter characteristics may be based on the configuration, shape, or disposition of the object making the gesture; for example, the filter 130 may pass only gestures made using a hand pointing with a certain finger (e.g., a third finger), a hand making a fist, or an open hand. The filter 130 may further pass only gestures made using a thumbs-up or thumbs-down gesture, for example for a voting application.
The filtering performed by the filter 130 may be implemented in accordance with any method known in the art. In one embodiment, gestures detected by the gesture-recognition module 116 are assigned a set of one or more characteristics (e.g., velocity or path) and the gestures and characteristics are maintained in a data structure. The filter 130 detects which of the assigned characteristics meet its filter characteristics and passes the gestures associated with those characteristics. The gestures that pass the filter 130 may be returned to one or more applications via an API or via a similar method. The gestures may, instead or in addition, be displayed on the display 106 and/or shown in a menu (e.g., for a live teaching UI application).
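The characteristic-based filtering described above might be sketched as follows; the dict-based gesture records with hypothetical "velocity" and "direction" keys stand in for whatever data structure an implementation actually maintains:

```python
def filter_gestures(gestures, min_velocity=None, allowed_directions=None):
    """Pass only gestures whose assigned characteristics meet the
    filter characteristics.

    gestures is a list of dicts carrying per-gesture characteristics;
    gestures failing any configured criterion are blocked.
    """
    passed = []
    for g in gestures:
        if min_velocity is not None and g["velocity"] < min_velocity:
            continue
        if (allowed_directions is not None
                and g["direction"] not in allowed_directions):
            continue
        passed.append(g)
    return passed
```

An application could configure such a filter to pass, for example, only fast left-right swipes while blocking slow forward-back motions.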
As described above, the gesture-recognition module 116 compares a detected motion of an object to a library of known gestures and, if there is a match, returns the matching gesture. In one embodiment, a user, programmer, application developer, or other person supplements, changes, or replaces the known gestures with user-defined gestures. If the gesture-recognition module 116 recognizes a user-defined gesture, it returns the gesture to one or more programs via an API (or similar method). In one embodiment, still with reference again to
The user-defined characteristics may include any number of a variety of different attributes of a gesture. For example, the characteristics may include a path of a gesture (e.g., relatively straight, curvilinear; circle vs. swipe); parameters of a gesture (e.g., a minimum or maximum length); spatial properties of the gesture (e.g., a region of space in which the gesture occurs); temporal properties of the gesture (e.g., a minimum or maximum duration of the gesture); and/or a velocity of the gesture (e.g., a minimum or maximum velocity). Embodiments are not limited to only these attributes, however.
A conflict between a user-defined gesture and a predetermined gesture may be resolved in any number of ways. A programmer may, for example, specify that a predetermined gesture should be ignored. In another embodiment, a user-defined gesture is given precedence over a predetermined gesture such that, if a gesture matches both, the user-defined gesture is returned.
In various embodiments, gestures are interpreted based on their location and orientation relative to a virtual control construct. A "virtual control construct" as used herein with reference to an embodiment denotes a geometric locus defined (e.g., programmatically) in space and useful in conjunction with a control object, but not corresponding to a physical object; its purpose is to discriminate between different operational modes of the control object (and/or a user-interface element controlled therewith, such as a cursor) based on whether the control object intersects the virtual control construct. The virtual control construct, in turn, may be, e.g., a virtual surface construct (a plane oriented relative to a tracked orientation of the control object or an orientation of a screen displaying the user interface) or a point along a line or line segment extending from the tip of the control object. The term "intersect" is herein used broadly with reference to an embodiment to denote any instance in which the control object, which is an extended object, has at least one point in common with the virtual control construct and, in the case of an extended virtual control construct such as a line or two-dimensional surface, is not parallel thereto. This includes "touching" as an extreme case, but typically involves portions of the control object falling on both sides of the virtual control construct.
In an embodiment and by way of example, one or more virtual control constructs can be defined computationally (e.g., programmatically using a computer or other intelligent machinery) based upon one or more geometric constructs to facilitate determining occurrence of engagement gestures from information about one or more control objects. Virtual control constructs in an embodiment can include virtual surface constructs, virtual linear or curvilinear constructs, virtual point constructs, virtual solid constructs, and complex virtual constructs comprising combinations thereof. Virtual surface constructs can comprise one or more surfaces, e.g., a plane, curved open surface, closed surface, bounded open surface, or generally any multi-dimensional virtual surface definable in two or three dimensions. Virtual linear or curvilinear constructs can comprise any one-dimensional virtual line, curve, line segment or curve segment definable in one, two, or three dimensions. Virtual point constructs can comprise any zero-dimensional virtual point definable in one, two, or three dimensions. Virtual solids can comprise one or more solids, e.g., spheres, cylinders, cubes, or generally any three-dimensional virtual solid definable in three dimensions.
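As a concrete illustration of the simplest case above, a planar virtual surface construct can be represented computationally by a point and a unit normal, with “intersection” by an extended control object reduced to a sign test on signed distances. This is a minimal sketch under assumed conventions; the class and method names (`VirtualPlane`, `signed_distance`, `intersects`) are hypothetical and not drawn from any particular implementation.

```python
from dataclasses import dataclass

@dataclass
class VirtualPlane:
    point: tuple    # any point on the plane (x, y, z) -- illustrative representation
    normal: tuple   # unit normal vector (x, y, z)

    def signed_distance(self, p):
        # Positive on the normal side of the plane, negative on the other side.
        return sum((pi - qi) * ni for pi, qi, ni in zip(p, self.point, self.normal))

    def intersects(self, tip, base):
        # An extended control object (base -> tip) intersects the plane when its
        # endpoints lie on opposite sides of it (or one endpoint touches it).
        return self.signed_distance(tip) * self.signed_distance(base) <= 0

plane = VirtualPlane(point=(0, 0, 10), normal=(0, 0, 1))
print(plane.intersects(tip=(0, 0, 11), base=(0, 0, 5)))  # tip has pierced: True
print(plane.intersects(tip=(0, 0, 8), base=(0, 0, 5)))   # both points in front: False
```

The same sign test extends naturally to the other construct types listed above (points, curves, solids) by substituting the appropriate distance function.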
In an embodiment, an engagement target can be defined using one or more virtual construct(s) coupled with a virtual control (e.g., slider, button, rotatable knob, or any graphical user interface component) for presentation to user(s) by a presentation system (e.g., displays, 3D projections, holographic presentation devices, non-visual presentation systems such as haptics, audio, and the like, any other devices for presenting information to users, or combinations thereof). Coupling a virtual control with a virtual construct enables the control object to “aim” for, or move relative to, the virtual control—and therefore the virtual control construct. Engagement targets in an embodiment can include engagement volumes, engagement surfaces, engagement lines, engagement points, or the like, as well as complex engagement targets comprising combinations thereof. An engagement target can be associated with an application or non-application (e.g., OS, systems software, etc.) so that virtual control managers (i.e., program routines, classes, objects, etc. that manage the virtual control) can trigger differences in interpretation of engagement gestures including presence, position and/or shape of control objects, control object motions, or combinations thereof to conduct machine control.
Engagement targets can be used to determine engagement gestures by providing the capability to discriminate between engagement and non-engagement (e.g., virtual touches, moves in relation to, and/or virtual pierces) of the engagement target by the control object. Thus, the user can, for example, operate a cursor in at least two modes: a disengaged mode in which it merely indicates a position on the screen, typically without otherwise affecting the screen content; and one or more engaged modes, which allow the user to manipulate the screen content. In the engaged mode, the user may, for example, drag graphical user-interface elements (such as icons representing files or applications, controls such as scroll bars, or displayed objects) across the screen, or draw or write on a virtual canvas. Further, transient operation in the engaged mode may be interpreted as a click event. Thus, operation in the engaged mode may correspond to, or emulate, touching a touch screen or touch pad, or controlling a mouse with a mouse button held down. Different or additional operational modes may also be defined, and may go beyond the modes available with traditional contact-based user input devices. The disengaged mode may simulate contact with a virtual control and/or a hover (in which the control is selected but not actuated). Other modes useful in various embodiments include an “idle” mode, in which no control is selected or virtually touched, and a “lock” mode, in which the last control to be engaged with remains engaged until disengaged. Yet further, hybrid modes can be created from the definitions of the foregoing modes in embodiments.
The term “cursor,” as used in this discussion, refers generally to the cursor functionality rather than the visual element; in other words, the cursor is a control element operable to select a screen position (whether or not the control element is actually displayed) and to manipulate screen content via movement across the screen, i.e., changes in the selected position. The cursor need not always be visible in the engaged mode. In some instances, a cursor symbol still appears, e.g., overlaid onto another graphical element that is moved across the screen, whereas in other instances, cursor motion is implicit in the motion of other screen elements or in newly created screen content (such as a line that appears on the screen as the control object moves), obviating the need for a special symbol. In the disengaged mode, a cursor symbol is typically used to visualize the current cursor location. Alternatively or additionally, a screen element or portion presently co-located with the cursor (and thus the selected screen location) may change brightness, color, or some other property to indicate that it is being pointed at. However, in certain embodiments, the symbol or other visual indication of the cursor location may be omitted so that the user has to rely on his own observation of the control object relative to the screen to estimate the screen location pointed at. (For example, in a shooter game, the player may have the option to shoot with or without a “virtual sight” indicating a pointed-to screen location.)
In various embodiments, to trigger an engaged mode—corresponding to, e.g., touching an object or a virtual object displayed on a screen—the control object's motion toward an engagement target such as a virtual surface construct (i.e., a plane, plane portion, or other (non-planar or curved) surface computationally or programmatically defined in space, but not necessarily corresponding to any physical surface) may be tracked; the motion may be, e.g., a forward motion starting from a disengaged mode, or a backward retreating motion. When the control object reaches a spatial location corresponding to this virtual surface construct—i.e., when the control object intersects (“touches” or “pierces”) the virtual surface construct—the user interface (or a component thereof, such as a cursor, user-interface control, or user-interface environment) is operated in the engaged mode; as the control object retracts from the virtual surface construct, user-interface operation switches back to the disengaged mode.
In embodiments, the virtual surface construct may be fixed in space, e.g., relative to the screen; for example, it may be defined as a plane (or portion of a plane) parallel to and located several inches in front of the screen in one application, or as a curved surface defined in free space convenient to one or more users and optionally proximate to display(s) associated with one or more machines under control. The user can engage this plane while remaining at a comfortable distance from the screen (e.g., without needing to lean forward to reach the screen). The position of the plane may be adjusted by the user from time to time. In embodiments, however, the user is relieved of the need to explicitly change the plane's position; instead, the plane (or other virtual surface construct) automatically moves along with, as if tethered to, the user's control object. For example, a virtual plane may be computationally defined as perpendicular to the orientation of the control object and located a certain distance, e.g., 3-4 millimeters, in front of its tip when the control object is at rest or moving with constant velocity. As the control object moves, the plane follows it, but with a certain time lag (e.g., 0.2 second). As a result, as the control object accelerates, the distance between its tip and the virtual touch plane changes, allowing the control object, when moving towards the plane, to eventually “catch” the plane—that is, the tip of the control object to touch or pierce the plane. Alternatively, instead of being based on a fixed time lag, updates to the position of the virtual plane may be computed based on a virtual energy potential defined to accelerate the plane towards (or away from) the control object tip depending on the plane-to-tip distance, likewise allowing the control object to touch or pierce the plane. Either way, such virtual touching or piercing can be interpreted as engagement events.
Further, in some embodiments, the degree of piercing (i.e., the distance beyond the plane that the control object reaches) is interpreted as an intensity level. To guide the user as she engages with or disengages from the virtual plane (or other virtual surface construct), the cursor symbol may encode the distance from the virtual surface visually, e.g., by changing in size with varying distance.
In an embodiment, once engaged, further movements of the control object may serve to move graphical components across the screen (e.g., drag an icon, shift a scroll bar, etc.), change perceived “depth” of the object to the viewer (e.g., resize and/or change shape of objects displayed on the screen in connection, alone, or coupled with other visual effects) to create perception of “pulling” objects into the foreground of the display or “pushing” objects into the background of the display, create new screen content (e.g., draw a line), or otherwise manipulate screen content until the control object disengages (e.g., by pulling away from the virtual surface, by indicating disengagement with some other gesture of the control object (e.g., curling the forefinger backward), and/or with some other movement of a second control object (e.g., waving the other hand)). Advantageously, tying the virtual surface construct to the control object (e.g., the user's finger), rather than fixing it relative to the screen or other stationary objects, allows the user to consistently use the same motions and gestures to engage and manipulate screen content regardless of his precise location relative to the screen. To eliminate the jitter that typically accompanies the control object's movements and might otherwise result in switching back and forth between the modes unintentionally, the control object's movements may be filtered and the cursor position thereby stabilized. Since faster movements will generally result in more jitter, the strength of the filter may depend on the speed of motion.
In an embodiment and by way of example, as illustrated in
Transitions between the different operational modes may, but need not, be visually indicated by a change in the shape, color (as in
Of course, the system under control need not be a desktop computer.
The virtual surface construct need not be planar, but may be curved in space, e.g., to conform to the user's range of movements.
The location and/or orientation of the virtual surface construct (or other virtual control construct) may be defined relative to the room and/or stationary objects (e.g., a screen) therein, relative to the user, relative to the device 514 or relative to some combination. For example, a planar virtual surface construct may be oriented parallel to the screen, perpendicular to the direction of the control object, or at some angle in between. The location of the virtual surface construct can, in some embodiments, be set by the user, e.g., by means of a particular gesture recognized by the motion-capture system. To give just one example, the user may, with her index finger stretched out, have her thumb and middle finger touch so as to pin the virtual surface construct at a certain location relative to the current position of the index-finger-tip. Once set in this manner, the virtual surface construct may be stationary until reset by the user via performance of the same gesture in a different location.
In some embodiments, the virtual surface construct is tied to and moves along with the control object, i.e., the position and/or orientation of the virtual surface construct are updated based on the tracked control object motion. This affords the user maximum freedom of motion by allowing the user to control the user interface from anywhere (or almost anywhere) within the space monitored by the motion-capture system. To enable the relative motion between the control object and virtual surface construct that is necessary for piercing the surface, the virtual surface construct follows the control object's movements with some delay. Thus, starting from a steady-state distance between the virtual surface construct and the control object tip in the disengaged mode, the distance generally decreases as the control object accelerates towards the virtual surface construct, and increases as the control object accelerates away from the virtual surface construct. If the control object's forward acceleration (i.e., towards the virtual surface construct) is sufficiently fast and/or prolonged, the control object eventually pierces the virtual surface construct. Once pierced, the virtual surface construct again follows the control object's movements. However, whereas, in the disengaged mode, the virtual surface construct is “pushed” ahead of the control object (i.e., is located in front of the control object tip), it is “pulled” behind the control object in the engaged mode (i.e., is located behind the control object tip). To disengage, the control object generally needs to be pulled back through the virtual surface construct with sufficient acceleration to exceed the surface's responsive movement.
In an embodiment, an engagement target can be defined as merely the point where the user touches or pierces a virtual control construct. For example, a virtual point construct may be defined along a line extending from or through the control object tip, or any other point or points on the control object, located a certain distance from the control object tip in the steady state, and moving along the line to follow the control object. The line may, e.g., be oriented in the direction of the control object's motion, perpendicularly project the control object tip onto the screen, extend in the direction of the control object's axis, or connect the control object tip to a fixed location, e.g., a point on the display screen. Irrespective of how the line and virtual point construct are defined, the control object can, when moving sufficiently fast and in a certain manner, “catch” the virtual point construct. Similarly, a virtual line construct (straight or curved) may be defined as a line within a surface intersecting the control object at its tip, e.g., as a line lying in the same plane as the control object and oriented perpendicular (or at some other non-zero angle) to the control object. Defining the virtual line construct within a surface tied to and intersecting the control object tip ensures that the control object can eventually intersect the virtual line construct.
In an embodiment, engagement targets defined by one or more virtual point constructs or virtual line (i.e., linear or curvilinear) constructs can be mapped onto engagement targets defined as virtual surface constructs, in the sense that the different mathematical descriptions are functionally equivalent. For example, a virtual point construct may correspond to the point of a virtual surface construct that is pierced by the control object (and a virtual line construct may correspond to a line in the virtual surface construct going through the virtual point construct). If the virtual point construct is defined on a line projecting the control object tip onto the screen, control object motions perpendicular to that line move the virtual point construct in a plane parallel to the screen, and if the virtual point construct is defined along a line extending in the direction of the control object's axis, control object motions perpendicular to that line move the virtual point construct in a plane perpendicular to that axis; in either case, control object motions along the line move the control object tip towards or away from the virtual point construct and, thus, the respective plane. Thus, the user's experience interacting with a virtual point construct may be little (or no) different from interacting with a virtual surface construct. Hereinafter, the description will, for ease of illustration, focus on virtual surface constructs. A person of skill in the art will appreciate, however, that the approaches, methods, and systems described can be straightforwardly modified and applied to other virtual control constructs (e.g., virtual point constructs or virtual linear/curvilinear constructs).
The position and/or orientation of the virtual surface construct (or other virtual control construct) are typically updated continuously or quasi-continuously, i.e., as often as the motion-capture system determines the control object location and/or direction (which, in visual systems, corresponds to the frame rate of image acquisition and/or image processing). However, the virtual surface construct may also be updated less frequently (e.g., only every other frame, to save computational resources) or more frequently (e.g., based on interpolations between the measured control object positions) in some embodiments.
In some embodiments, the virtual surface construct follows the control object with a fixed time lag, e.g., between 0.1 and 1.0 second. In other words, the location of the virtual surface construct is updated, for each frame, based on where the control object tip was a certain amount of time (e.g., 0.2 second) in the past. This is illustrated in
At a first point t=t0 in time, when the control object is at rest, the virtual plane is located at its steady-state distance d in front of the control object tip; this distance may be, e.g., a few millimeters. At a second point t=t1 in time—after the control object has started moving towards the virtual plane, but before the lag period has passed—the virtual plane is still in the same location, but its distance from the control object tip has decreased due to the control object's movement. One lag period later, at t=t1+Δtlag, the virtual plane is positioned the steady-state distance away from the location of the control object tip at the second point in time, but due to the control object's continued forward motion, the distance between the control object tip and the virtual plane has further decreased. At a fourth point in time t=t2, the control object has pierced the virtual plane. One lag time after the control object has come to a halt, at t=t2+Δtlag, the virtual plane is again a steady-state distance away from the control object tip—but now on the other side. When the control object is subsequently pulled backwards, the distance between its tip and the virtual plane decreases again (t=t3 and t=t4), until the control object tip emerges at the first side of the virtual plane (t=t5). The control object may stop at a different position than where it started, and the virtual plane will eventually follow it and be, once more, a steady-state distance away from the control object tip (t=t6). Even if the control object continues moving, if it does so at a constant speed, the virtual plane will, after an initial lag period to “catch up,” follow the control object at a constant distance.
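The fixed-time-lag behavior traced through t0 to t6 above can be sketched in one dimension (along the plane normal): for each frame, the plane is placed the steady-state distance in front of where the tip was a fixed number of frames earlier. The function name, lag, and distances below are illustrative assumptions, not values from any particular embodiment.

```python
def plane_position(tip_history, lag_frames, steady_distance):
    # The plane sits the steady-state distance in front of where the tip was
    # `lag_frames` frames ago (clamped to the start of the history).
    lagged_tip = tip_history[max(0, len(tip_history) - 1 - lag_frames)]
    return lagged_tip + steady_distance

tips = [0.0, 0.0, 0.0, 2.0, 4.0, 6.0, 8.0]  # tip at rest, then accelerating forward
lag, d = 2, 3.0                             # illustrative lag (frames) and distance
history = []
pierced = False
for tip in tips:
    history.append(tip)
    plane = plane_position(history, lag, d)
    if tip >= plane:        # the tip has caught up with and pierced the plane
        pierced = True
print(pierced)              # the accelerating tip eventually pierces: True
```

A tip moving at constant speed never pierces in this sketch, because after the initial lag period the plane trails it at a constant distance, mirroring the last sentence above.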
The steady-state distances in the disengaged mode and the engaged mode may, but need not, be the same. In some embodiments, for instance, the steady-state distance in the engaged mode is larger, such that disengaging from the virtual plane (i.e., “unclicking”) appears harder to the user than engaging (i.e., “clicking”) because it requires a larger motion. Alternatively or additionally, to achieve a similar result, the lag times may differ between the engaged and disengaged modes. Further, in some embodiments, the steady-state distance is not fixed, but adjustable based on the control object's speed of motion, generally being greater for higher control object speeds. As a result, when the control object moves very fast, motions toward the plane are “buffered” by the rather long distance that the control object has to traverse relative to the virtual plane before an engagement event is recognized (and, similarly, backwards motions for disengagement are buffered by a long disengagement steady-state distance). A similar effect can also be achieved by decreasing the lag time, i.e., increasing the responsiveness of touch-surface position updates, as the control object speed increases. Such speed-based adjustments may serve to avoid undesired switching between the modes that may otherwise be incidental to fast control object movements.
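A speed-dependent steady-state distance of the kind just described might, for example, be a capped linear ramp; the base distance, gain, and cap below are assumed constants for illustration only.

```python
def steady_distance(speed, base=3.0, gain=0.02, max_extra=12.0):
    # Faster control objects get a larger "buffer zone" before an engagement
    # event registers; the extra distance saturates at max_extra.
    return base + min(max_extra, gain * speed)

print(steady_distance(0.0))      # at rest: 3.0
print(steady_distance(400.0))    # fast motion: 3.0 + 8.0 = 11.0
print(steady_distance(2000.0))   # capped: 3.0 + 12.0 = 15.0
```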
In various embodiments, the position of the virtual plane (or other virtual surface construct) is updated not based on a time lag, but based on its current distance from the control object tip. That is, for any image frame, the distance between the current control object tip position and the virtual plane is computed (e.g., with the virtual-plane position being taken from the previous frame), and, based thereon, a displacement or shift to be applied to the virtual plane is determined. In some embodiments, the update rate as a function of distance may be defined in terms of a virtual “potential-energy surface” or “potential-energy curve.” In
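A distance-based update of this sort can be sketched with an assumed quadratic potential: each frame, the plane shifts by the gradient of U(d) = k/2·(d − d0)², so it relaxes toward sitting the steady-state distance d0 from the tip. The quadratic form and the stiffness value are illustrative assumptions; the text above leaves the shape of the potential-energy curve open.

```python
def plane_shift(distance, steady_distance, stiffness=0.3):
    # Gradient of the assumed potential: positive when the plane is farther
    # than d0 (move toward the tip), negative when nearer (move away).
    return stiffness * (distance - steady_distance)

plane, tip, d0 = 12.0, 0.0, 3.0   # plane starts far from a stationary tip
for _ in range(60):               # iterate per-frame updates
    plane -= plane_shift(plane - tip, d0)
print(round(plane, 2))            # relaxes to the steady-state distance: 3.0
```

Because the shift grows with the plane-to-tip distance, a fast-approaching tip can still outrun the plane's relaxation and pierce it, which is the engagement mechanism described above.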
The potential-energy curve need not be symmetric, of course.
Furthermore, the potential piercing energy need not, or not only, be a function of the distance from the control object tip to the virtual surface construct, but may depend on other factors. For example, in some embodiments, a stylus with a pressure-sensitive grip is used as the control object. In this case, the pressure with which the user squeezes the stylus may be mapped to the piercing energy.
Whichever way the virtual surface construct is updated, jitter in the control object's motions may result in unintentional transitions between the engaged and disengaged modes. While such modal instability may be combatted by increasing the steady-state distance (i.e., the “buffer zone” between control object and virtual surface construct), this comes at the cost of requiring the user, when she intends to switch modes, to perform larger movements that may feel unnatural. The trade-off between modal stability and user convenience may be improved by filtering the tracked control object movements. Specifically, jitter may be filtered out, based on the generally more frequent changes in direction associated with it, with some form of time averaging. Accordingly, in one embodiment, a moving-average filter spanning, e.g., a few frames, is applied to the tracked movements, such that only a net movement within each time window is used as input for cursor control. Since jitter generally increases with faster movements, the time-averaging window may be chosen to likewise increase as a function of control object velocity (such as a function of overall control object speed or of a velocity component, e.g., perpendicular to the virtual plane). In another embodiment, the control object's previous and newly measured position are averaged with weighting factors that depend, e.g., on velocity, frame rate, and/or other factors. For example, the old and new positions may be weighted with multipliers of x and (1−x), respectively, where x varies between 0 and 1 and increases with velocity. At one extreme, for x=1, the cursor remains completely still, whereas at the other extreme, x=0, no filtering is performed at all.
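The weighted-average variant just described can be sketched as follows. The saturating velocity-to-weight mapping is an assumption for illustration, since the text leaves the exact dependence of x on velocity open.

```python
def smooth(old_pos, new_pos, velocity, v_scale=100.0):
    # Weight x on the old position grows with velocity (faster motion means
    # more jitter, hence stronger smoothing) and stays in [0, 1).
    x = velocity / (velocity + v_scale)
    return x * old_pos + (1.0 - x) * new_pos

print(smooth(0.0, 10.0, velocity=0.0))    # x = 0, no filtering: 10.0
print(smooth(0.0, 10.0, velocity=100.0))  # x = 0.5, halfway blend: 5.0
```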
In some embodiments, temporary piercing of the virtual surface construct—i.e., a clicking motion including penetration of the virtual surface construct immediately followed by withdrawal from the virtual surface construct—switches between modes and locks in the new mode. For example, starting in the disengaged mode, a first click event may switch the control object into the engaged mode, where it may then remain until the virtual surface construct is clicked again.
Further, in some embodiments, the degree of piercing (i.e., the distance beyond the virtual surface construct that the control object initially reaches, before the virtual surface construct catches up) is interpreted as an intensity level that can be used to refine the control input. For example, the intensity (of engagement) in a swiping gesture for scrolling through screen content may determine the speed of scrolling. Further, in a gaming environment or other virtual world, different intensity levels when touching a virtual object (by penetrating the virtual surface construct while the cursor is positioned on the object as displayed on the screen) may correspond to merely touching the object versus pushing the object over. As another example, when hitting the keys of a virtual piano displayed on the screen, the intensity level may translate into the volume of the sound created. Thus, touching or engagement of a virtual surface construct (or other virtual control construct) may provide user input beyond the binary discrimination between engaged and disengaged modes.
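A piercing depth can be turned into such an intensity level by normalizing and clamping it; the full-scale depth below is an assumed value, and the function name is illustrative.

```python
def piercing_intensity(depth_beyond_surface, full_scale=50.0):
    # 0.0 at the virtual surface, 1.0 at full_scale (e.g., millimeters) beyond
    # it; negative depths (not yet pierced) clamp to 0.0.
    return max(0.0, min(1.0, depth_beyond_surface / full_scale))

print(piercing_intensity(25.0))   # half-depth press: 0.5
print(piercing_intensity(80.0))   # deep press, clamped: 1.0
```

The resulting scalar could then drive, e.g., scrolling speed or the volume of a virtual piano note, as in the examples above.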
As will be readily apparent to those of skill in the art, the methods described above can be readily extended to the control of a user interface with multiple simultaneously tracked control objects. For instance, both left and right index fingers of a user may be tracked, each relative to its own associated virtual touch surface, to operate two cursors simultaneously and independently. As another example, the user's hand may be tracked to determine the positions and orientations of all fingers; each finger may have its own associated virtual surface construct (or other virtual control construct) or, alternatively, all fingers may share the same virtual surface construct, which may follow the overall hand motions. A joint virtual plane may serve, e.g., as a virtual drawing canvas on which multiple lines can be drawn by the fingers at once.
In an embodiment and by way of example, one or more control parameter(s) and the control object are applied to some control mechanism to determine the distance of the virtual control construct to a portion of the control object (e.g., tool tip(s), point(s) of interest on a user's hand or other points of interest). In some embodiments, a lag (e.g., filter or filtering function) is introduced to delay, or modify, application of the control mechanism according to a variable or a fixed increment of time, for example. Accordingly, embodiments can provide enhanced verisimilitude to the human-machine interaction, and/or increased fidelity of tracking control object(s) and/or control object portion(s).
In one example, the control object portion is a user's finger-tip. A control parameter is also the user's finger-tip. A control mechanism includes equating a plane-distance between virtual control construct and finger-tip to a distance between finger-tip and an arbitrary coordinate (e.g., center (or origin) of an interaction zone of the controller). Accordingly, the closer the finger-tip approaches to the arbitrary coordinate, the closer the virtual control construct approaches the finger-tip.
In another example, the control object is a hand, which includes a control object portion, e.g., a palm, determined by a “palm-point” or center of mass of the entire hand. A control parameter includes a velocity of the hand, as measured at the control object portion, i.e., the center of mass of the hand. A control mechanism includes filtering forward velocity over the last one (1) second. Accordingly, the faster the palm has recently been travelling forward, the closer the virtual control construct approaches to the control object (i.e., the hand).
In a further example, a control object includes a control object portion (e.g., a finger-tip). A control mechanism includes determining a distance between a thumb-tip (e.g., a first control object portion) and an index finger (e.g., a second control object portion). This distance can be used as a control parameter. Accordingly, the closer the thumb-tip and index-finger, the closer the virtual control construct is determined to be to the index finger. When the thumb-tip and index finger touch one another, the virtual control construct is determined to be partially pierced by the index finger. A lag (e.g., filter or filtering function) can introduce a delay in the application of the control mechanism by some time-increment proportional to any quantity of interest, for example horizontal jitter (i.e., the random motion of the control object in a substantially horizontal dimension). Accordingly, the greater the shake in a user's hand, the more lag will be introduced into the control mechanism.
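The first control mechanism above (equating the plane-to-fingertip distance to the fingertip's distance from an arbitrary coordinate) can be sketched directly; the interaction-zone center coordinate is an assumed value, and the function name is illustrative.

```python
import math

def plane_distance(fingertip, zone_center):
    # The plane-to-fingertip distance equals the fingertip's Euclidean
    # distance from the zone center: the closer the fingertip approaches
    # that coordinate, the closer the construct approaches the fingertip.
    return math.dist(fingertip, zone_center)

print(plane_distance((0.0, 0.0, 30.0), (0.0, 0.0, 0.0)))  # far from center: 30.0
print(plane_distance((0.0, 0.0, 5.0), (0.0, 0.0, 0.0)))   # near center: 5.0
```

The other two mechanisms follow the same pattern with a different control parameter: recent forward palm velocity in the second example, and thumb-to-index distance in the third.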
Machine and user-interface control via free-space motions relies generally on a suitable motion-capture device or system for tracking the positions, orientations, and motions of one or more control objects. For a description of tracking positions, orientations, and motions of control objects, reference may be had to U.S. patent application Ser. No. 13/414,485, filed on Mar. 7, 2012, the entire disclosure of which is incorporated herein by reference. In various embodiments, motion capture can be accomplished visually, based on a temporal sequence of images of the control object (or a larger object of interest including the control object, such as the user's hand) captured by one or more cameras. In one embodiment, images acquired from two (or more) vantage points are used to define tangent lines to the surface of the object and approximate the location and shape of the object based thereon, as explained in more detail below. Other vision-based approaches that can be used in embodiments include, without limitation, stereo imaging, detection of patterned light projected onto the object, or the use of sensors and markers attached to or worn by the object (such as, e.g., markers integrated into a glove) and/or combinations thereof. Alternatively or additionally, the control object may be tracked acoustically or ultrasonically, or using inertial sensors such as accelerometers, gyroscopes, and/or magnetometers (e.g., MEMS sensors) attached to or embedded within the control object. Embodiments can be built employing one or more of particular motion-tracking approaches that provide control object position and/or orientation (and/or derivatives thereof) tracking with sufficient accuracy, precision, and responsiveness for the particular application.
As mentioned above, the control object may, alternatively, be tracked acoustically. In this case, the light sources 900, 902 are replaced by sonic sources. The sonic sources transmit sound waves (e.g., ultrasound that is not audible by the user) to the user; the user either blocks or alters the sound waves that impinge upon her, i.e., causes “sonic shadowing” or “sonic deflection.” Such sonic shadows and/or deflections can also be sensed and analyzed to reconstruct the shape, configuration, position, and orientation of the control object, and, based thereon, detect the user's gestures.
The computer 906 processing the images acquired by the cameras 900, 902 may be a suitably programmed general-purpose computer. As shown in
In one embodiment, an image analysis module 936 may analyze pairs of image frames acquired by the two cameras 900, 902 (and stored, e.g., in image buffers in memory 922) to identify the control object (or an object including the control object or multiple control objects, such as a user's hand) therein (e.g., as a non-stationary foreground object) and detect its edges. Next, the module 936 may, for each pair of corresponding rows in the two images, find an approximate cross-section of the control object by defining tangent lines on the control object that extend from the vantage points (i.e., the cameras) to the respective edge points of the control object, and inscribe an ellipse (or other geometric shape defined by only a few parameters) therein. The cross-sections may then be computationally connected in a manner that is consistent with certain heuristics and known properties of the control object (e.g., the requirement of a smooth surface) and resolves any ambiguities in the fitted ellipse parameters. As a result, the control object is reconstructed or modeled in three dimensions. This method, and systems for its implementation, are described in more detail in U.S. patent application Ser. No. 13/414,485, filed on Mar. 7, 2012, the entire disclosure of which is incorporated herein by reference. A larger object including multiple control objects can similarly be reconstructed with respective tangent lines and fitted ellipses, typically exploiting information of internal constraints of the object (such as a maximum physical separation between the fingertips of one hand). The image-analysis module 936 may, further, extract relevant control object parameters, such as tip positions and orientations as well as velocities, from the three-dimensional model. In some embodiments, this information can be inferred from the images at a lower level, prior to or without the need for fully reconstructing the control object.
These operations are readily implemented by those skilled in the art without undue experimentation. In some embodiments, a filter module 938 receives input from the image-analysis module 936, and smooths or averages the tracked control object motions; the degree of smoothing or averaging may depend on the control object's velocity as determined by the image-analysis module 936.
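A minimal sketch of such velocity-dependent smoothing follows; the exponential-averaging scheme and the tuning constants are assumptions for illustration, not values prescribed by the text:

```python
class VelocityAdaptiveFilter:
    """Exponential smoothing whose strength relaxes as the object speeds up.

    Slow motions are heavily averaged (suppressing sensor jitter); fast
    motions pass through nearly unfiltered (minimizing perceived lag).
    min_alpha and gain are illustrative tuning constants.
    """
    def __init__(self, min_alpha=0.1, gain=0.05):
        self.min_alpha = min_alpha
        self.gain = gain
        self.state = None

    def update(self, position, velocity):
        # Blend factor grows with speed, capped at 1.0 (no smoothing).
        alpha = min(1.0, self.min_alpha + self.gain * abs(velocity))
        if self.state is None:
            self.state = position
        else:
            self.state += alpha * (position - self.state)
        return self.state
```

At rest the filter tracks only a fraction of each new sample, while a fast-moving tip is passed through essentially unchanged.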
A gesture-recognition module 940 may receive the tracking data about the control object from the image-analysis module 936 (or, after filtering, from the filter module 938), and use it to identify gestures, e.g., by comparison with gesture records stored in a database 941 on the permanent storage devices 924 and/or loaded into system memory 922. The gesture-recognition module may also include, e.g., as sub-modules, a gesture filter 942 that ascertains a dominant gesture among multiple simultaneously detected gestures, and a completion tracker 943 that determines a degree of completion of the gesture as the gesture is being performed.
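Comparison against stored gesture records can be as simple as nearest-template matching on normalized trajectories. The sketch below assumes equal-length 2D sample paths and is far cruder than a production recognizer:

```python
import numpy as np

def normalize(path):
    """Translate a sampled trajectory to its centroid and scale it to unit
    size, making matching invariant to where and how large the gesture was."""
    p = np.asarray(path, dtype=float)
    p -= p.mean(axis=0)
    scale = np.abs(p).max()
    return p / scale if scale > 0 else p

def recognize(path, templates):
    """Return the name of the stored gesture record closest to `path`.

    `templates` maps gesture names to trajectories with the same number of
    sample points as `path`; a fuller recognizer would resample first.
    """
    q = normalize(path)
    return min(templates,
               key=lambda name: np.linalg.norm(q - normalize(templates[name])))
```

A real gesture database would also normalize rotation and sampling rate, and report a match score that a dominant-gesture filter could compare across candidates.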
An engagement-target module 944 may likewise receive data about the control object's location and/or orientation from the image-analysis module 936 and/or the filter module 938, and use the data to compute a representation of the virtual control construct, i.e., to define and/or update the position and orientation of the virtual control construct relative to the control object (and/or the screen); the representation may be stored in memory in any suitable mathematical form. A touch-detection module 945 in communication with the engagement-target module 944 may determine, for each frame, whether the control object touches or pierces the virtual control construct.
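A per-frame touch/pierce test of this kind can be sketched as a sign change of the signed distance to the virtual plane, assuming a planar construct with a unit normal pointing toward the user (the function names are illustrative):

```python
def signed_distance(point, plane_point, plane_normal):
    """Signed distance from `point` to the plane through `plane_point` with
    unit normal `plane_normal` (positive on the normal's side)."""
    return sum((p - q) * n for p, q, n in zip(point, plane_point, plane_normal))

def pierced(prev_pos, cur_pos, plane_point, plane_normal):
    """True when the control object crosses from the user's side of the
    virtual control construct through the plane between two frames."""
    d_prev = signed_distance(prev_pos, plane_point, plane_normal)
    d_cur = signed_distance(cur_pos, plane_point, plane_normal)
    return d_prev > 0 >= d_cur
```

Because the construct's position is updated per frame by the engagement-target module, `plane_point` and `plane_normal` would be re-evaluated each time before the test.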
A user-interface control module 946 may map detected motions in the engaged mode into control input for the applications 934 running on the computer 906. Collectively, the end-user application 934 and the user-interface control module 946 may compute the screen content, i.e., an image for display on the screen 526, which may be stored in a display buffer (e.g., in memory 922 or in the buffer of a GPU included in the system). In particular, the user-interface control module 946 may include a cursor (sub)module 947 that determines a cursor location on the screen based on tracking data from the image-analysis module 936 (e.g., by computationally projecting the control object tip onto the screen), and visualizes the cursor at the computed location, optionally in a way that discriminates, based on output from the touch-detection module 945, between the engaged and disengaged modes (e.g., by using different colors). The cursor module 947 may also modify the cursor appearance based on the control object's distance from the virtual control construct; for instance, the cursor may take the form of a circle having a radius proportional to the distance between the control object tip and the virtual control construct. Further, the user-interface control module 946 may include a completion-indicator (sub)module 948, which depicts the degree of completion of a gesture, as determined by the completion tracker 943, with a suitable indicator (e.g., a partially filled circle). Additionally, the user-interface control module 946 may include a scaling (sub)module 949 that determines the scaling ratio between actual control-object movements and on-screen movements (e.g., based on direct user input via a scale-control panel) and causes adjustments to the displayed content based thereon.
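Computationally projecting the control object tip onto the screen can be sketched as a ray-plane intersection. The coordinate convention here, with the screen at z = 0 and the user at positive z, is an assumption for illustration:

```python
def project_to_screen(tip, direction, screen_z=0.0):
    """Intersect the pointing ray tip + t*direction with the plane z = screen_z.

    Returns the (x, y) screen-plane coordinates, or None when the ray is
    parallel to the screen or points away from it.
    """
    dz = direction[2]
    if dz == 0:
        return None
    t = (screen_z - tip[2]) / dz
    if t < 0:
        return None
    return (tip[0] + t * direction[0], tip[1] + t * direction[1])
```

The resulting plane coordinates would then be mapped to pixels, and scaled by the ratio maintained by a scaling submodule such as 949.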
The functionality of the different modules can, of course, be grouped and organized in many different ways, as a person of skill in the art would readily understand. Further, it need not necessarily be implemented on a single computer, but may be distributed between multiple computers. For example, the image-analysis and gesture-recognition functionality provided by modules 936, 938, 940, 944, 945, and optionally also the user-interface control functionality of module 946, may be implemented by a separate computer in communication with the computer on which the end-user applications 934 controlled via free-space control object motions are executed, and/or integrated with the cameras 900, 902 and light sources 912 into a single motion-capture device (which, typically, utilizes an application-specific integrated circuit (ASIC) or other special-purpose computer for image-processing). In another exemplary embodiment, the camera images are sent from a client terminal over a network to a remote server computer for processing, and the tracked control object positions and orientations are sent back to the client terminal as input into the user interface. Embodiments can be realized using any number and arrangement of computers (broadly understood to include any kind of general-purpose or special-purpose processing device, including, e.g., microcontrollers, ASICs, programmable gate arrays (PGAs), or digital signal processors (DSPs) and associated peripherals) executing the methods described herein, and any implementation of the various functional modules in hardware, software, or a combination thereof.
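In the client-server arrangement just described, the tracked control object state sent back to the client terminal might be serialized as follows; the JSON field names are assumptions for illustration, not a protocol defined here:

```python
import json

def encode_pose(tip, orientation, velocity):
    """Serialize one frame of tracked control-object state for transport."""
    return json.dumps({"tip": list(tip),
                       "orientation": list(orientation),
                       "velocity": velocity})

def decode_pose(message):
    """Recover the tracked state on the receiving end."""
    d = json.loads(message)
    return tuple(d["tip"]), tuple(d["orientation"]), d["velocity"]
```

Any transport (sockets, web services) could carry such messages; the point is only that the image-processing and user-interface stages need not share a machine.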
Computer programs incorporating various features or functionality described herein may be encoded on various computer readable storage media; suitable media include magnetic disk or tape, optical storage media such as compact disk (CD) or DVD (digital versatile disk), flash memory, and any other non-transitory medium capable of holding data in a computer-readable form. Computer-readable storage media encoded with the program code may be packaged with a compatible device or provided separately from other devices. In addition, program code may be encoded and transmitted via wired, optical, and/or wireless networks conforming to a variety of protocols, including the Internet, thereby allowing distribution, e.g., via Internet download, and/or provision on demand as web services.
The systems and methods described herein may find application in a variety of computer-user-interface contexts, and may replace mouse operation or other traditional means of user input as well as provide new user-input modalities. Free-space control object motions and virtual-touch recognition may be used, for example, to provide input to commercial and industrial legacy applications (such as, e.g., business applications, including Microsoft Outlook™; office software, including Microsoft Office™, Windows™, Excel™, etc.; and graphic design programs, including Microsoft Visio™); operating systems such as Microsoft Windows™; web applications (e.g., browsers, such as Internet Explorer™); and other applications (such as, e.g., audio, video, and graphics programs); to navigate virtual worlds (e.g., in video games) or computer representations of the real world (e.g., Google Street View™); or to interact with three-dimensional virtual objects (e.g., Google Earth™).
The motion sensing device (e.g., 1000a-1, 1000a-2 and/or 1000a-3) is capable of detecting position as well as motion of hands and/or portions of hands and/or other detectable objects (e.g., a pen, a pencil, a stylus, a paintbrush, an eraser, a virtualized tool, and/or a combination thereof), within a region of space 510a from which it is convenient for a user to interact with system 500a. Region 510a can be situated in front of, nearby, and/or surrounding system 500a. In some embodiments, the position and motion sensing device can be integrated directly into display device 1004a as integrated device 1000a-2 and/or keyboard 1006a as integrated device 1000a-3. While
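Whether a tracked point falls within an interaction region such as 510a can be sketched as a simple bounds check. Modeling the region as an axis-aligned box is an assumption; region 510a need not be box-shaped:

```python
def in_interaction_region(point, region_min, region_max):
    """True when a tracked 3D point lies inside the axis-aligned box
    [region_min, region_max] standing in for the interaction region."""
    return all(lo <= p <= hi
               for p, lo, hi in zip(point, region_min, region_max))
```

Only points passing such a test would be forwarded to gesture interpretation; everything outside the region is ignored as incidental motion.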
Tower 1002a and/or the position and motion sensing device and/or other elements of system 500a can implement functionality to provide a virtual control surface 1000a within region 510a, with which engagement gestures are sensed and interpreted to facilitate user interactions with system 500a. Accordingly, objects and/or motions occurring relative to virtual control surface 1000a within region 510a can be afforded different interpretations than like (and/or similar) objects and/or motions occurring elsewhere.
As illustrated in
The above-described 3D user-interaction technique enables the user to intuitively control and manipulate the electronic device and virtual objects simply by performing body gestures. Because the gesture-recognition system facilitates rendering of reconstructed 3D images of the gestures with high detection sensitivity, dynamic user interactions for display control are achieved in real time without excessive computational complexity. For example, the user can dynamically control the relationship between his actual movement and the corresponding action displayed on the screen. In addition, the device may display an on-screen indicator to reflect a degree of completion of the user's gesture in real time. Accordingly, embodiments can enable the user to dynamically interact with virtual objects displayed on the screen and advantageously enhance the realism of the virtual environment.
The terms and expressions employed herein are used as terms and expressions of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described or portions thereof. In addition, having described certain embodiments, it will be apparent to those of ordinary skill in the art that other embodiments incorporating the concepts disclosed herein may be used without undue experimentation. Accordingly, the described embodiments are to be considered in all respects as only illustrative and not restrictive.
This application is a continuation of U.S. patent application Ser. No. 16/195,755, filed Nov. 19, 2018, which is a continuation of U.S. patent application Ser. No. 15/279,363, filed Sep. 28, 2016, which is a continuation of U.S. patent application Ser. No. 14/155,722, filed Jan. 15, 2014. U.S. patent application Ser. No. 14/155,722 claims priority to and the benefit of, and incorporates herein by reference in their entireties, U.S. Provisional Application Nos. 61/825,515 and 61/825,480, both filed on May 20, 2013; No. 61/873,351, filed on Sep. 3, 2013; No. 61/877,641, filed on Sep. 13, 2013; No. 61/816,487, filed on Apr. 26, 2013; No. 61/824,691, filed on May 17, 2013; Nos. 61/752,725, 61/752,731, and 61/752,733, all filed on Jan. 15, 2013; No. 61/791,204, filed on Mar. 15, 2013; Nos. 61/808,959 and 61/808,984, both filed on Apr. 5, 2013; and No. 61/872,538, filed on Aug. 30, 2013. U.S. patent application Ser. No. 14/155,722 is a Continuation-in-Part of U.S. patent application Ser. No. 14/154,730, filed Jan. 14, 2014.
Number | Name | Date | Kind |
---|---|---|---|
2665041 | Maffucci | Jan 1954 | A |
4175862 | DiMatteo et al. | Nov 1979 | A |
4876455 | Sanderson et al. | Oct 1989 | A |
4879659 | Bowlin et al. | Nov 1989 | A |
4893223 | Arnold | Jan 1990 | A |
5038258 | Koch et al. | Aug 1991 | A |
5134661 | Reinsch | Jul 1992 | A |
5282067 | Liu | Jan 1994 | A |
5434617 | Bianchi | Jul 1995 | A |
5454043 | Freeman | Sep 1995 | A |
5574511 | Yang et al. | Nov 1996 | A |
5581276 | Cipolla et al. | Dec 1996 | A |
5594469 | Freeman et al. | Jan 1997 | A |
5659475 | Brown | Aug 1997 | A |
5691737 | Ito et al. | Nov 1997 | A |
5742263 | Wang et al. | Apr 1998 | A |
5900863 | Numazaki | May 1999 | A |
5940538 | Spiegel et al. | Aug 1999 | A |
6002808 | Freeman | Dec 1999 | A |
6031161 | Baltenberger | Feb 2000 | A |
6031661 | Tanaami | Feb 2000 | A |
6072494 | Nguyen | Jun 2000 | A |
6075895 | Qiao et al. | Jun 2000 | A |
6147678 | Kumar et al. | Nov 2000 | A |
6154558 | Hsieh | Nov 2000 | A |
6181343 | Lyons | Jan 2001 | B1 |
6184326 | Razavi et al. | Feb 2001 | B1 |
6184926 | Khosravi et al. | Feb 2001 | B1 |
6195104 | Lyons | Feb 2001 | B1 |
6204852 | Kumar et al. | Mar 2001 | B1 |
6252598 | Segen | Jun 2001 | B1 |
6263091 | Jain et al. | Jul 2001 | B1 |
6346933 | Lin | Feb 2002 | B1 |
6417970 | Travers et al. | Jul 2002 | B1 |
6463402 | Bennett et al. | Oct 2002 | B1 |
6492986 | Metaxas et al. | Dec 2002 | B1 |
6493041 | Hanko et al. | Dec 2002 | B1 |
6498628 | Iwamura | Dec 2002 | B2 |
6578203 | Anderson, Jr. et al. | Jun 2003 | B1 |
6603867 | Sugino et al. | Aug 2003 | B1 |
6629065 | Gadh et al. | Sep 2003 | B1 |
6661918 | Gordon et al. | Dec 2003 | B1 |
6674877 | Jojic et al. | Jan 2004 | B1 |
6702494 | Dumler et al. | Mar 2004 | B2 |
6734911 | Lyons | May 2004 | B1 |
6738424 | Allmen et al. | May 2004 | B1 |
6771294 | Pull et al. | Aug 2004 | B1 |
6798628 | Macbeth | Sep 2004 | B1 |
6804654 | Kobylevsky et al. | Oct 2004 | B2 |
6804656 | Rosenfeld et al. | Oct 2004 | B1 |
6814656 | Rodriguez | Nov 2004 | B2 |
6819796 | Hong et al. | Nov 2004 | B2 |
6901170 | Terada et al. | May 2005 | B1 |
6919880 | Morrison et al. | Jul 2005 | B2 |
6950534 | Cohen et al. | Sep 2005 | B2 |
6993157 | Oue et al. | Jan 2006 | B1 |
7152024 | Marschner et al. | Dec 2006 | B2 |
7213707 | Hubbs et al. | May 2007 | B2 |
7215828 | Luo | May 2007 | B2 |
7244233 | Krantz et al. | Jul 2007 | B2 |
7257237 | Luck et al. | Aug 2007 | B1 |
7259873 | Sikora et al. | Aug 2007 | B2 |
7308112 | Fujimura et al. | Dec 2007 | B2 |
7340077 | Gokturk et al. | Mar 2008 | B2 |
7483049 | Aman et al. | Jan 2009 | B2 |
7519223 | Dehlin et al. | Apr 2009 | B2 |
7532206 | Morrison et al. | May 2009 | B2 |
7536032 | Bell | May 2009 | B2 |
7542586 | Johnson | Jun 2009 | B2 |
7598942 | Underkoffler et al. | Oct 2009 | B2 |
7606417 | Steinberg et al. | Oct 2009 | B2 |
7646372 | Marks et al. | Jan 2010 | B2 |
7656372 | Sato et al. | Feb 2010 | B2 |
7665041 | Wilson et al. | Feb 2010 | B2 |
7692625 | Morrison et al. | Apr 2010 | B2 |
7831932 | Josephsoon et al. | Nov 2010 | B2 |
7840031 | Albertson et al. | Nov 2010 | B2 |
7861188 | Josephsoon et al. | Dec 2010 | B2 |
7940885 | Stanton et al. | May 2011 | B2 |
7948493 | Klefenz et al. | May 2011 | B2 |
7961174 | Markovic et al. | Jun 2011 | B1 |
7961934 | Thrun et al. | Jun 2011 | B2 |
7971156 | Albertson et al. | Jun 2011 | B2 |
7980885 | Gattwinkel et al. | Jul 2011 | B2 |
8023698 | Niwa et al. | Sep 2011 | B2 |
8035624 | Bell et al. | Oct 2011 | B2 |
8045825 | Shimoyama et al. | Oct 2011 | B2 |
8064704 | Kim et al. | Nov 2011 | B2 |
8085339 | Marks | Dec 2011 | B2 |
8086971 | Radivojevic et al. | Dec 2011 | B2 |
8111239 | Pryor et al. | Feb 2012 | B2 |
8112719 | Hsu et al. | Feb 2012 | B2 |
8144233 | Fukuyama | Mar 2012 | B2 |
8185176 | Mangat et al. | May 2012 | B2 |
8213707 | Li et al. | Jul 2012 | B2 |
8218858 | Gu | Jul 2012 | B2 |
8229134 | Duraiswami et al. | Jul 2012 | B2 |
8235529 | Raffle et al. | Aug 2012 | B1 |
8244233 | Chang et al. | Aug 2012 | B2 |
8249345 | Wu et al. | Aug 2012 | B2 |
8270669 | Aichi et al. | Sep 2012 | B2 |
8289162 | Mooring et al. | Oct 2012 | B2 |
8290208 | Kurtz et al. | Oct 2012 | B2 |
8304727 | Lee et al. | Nov 2012 | B2 |
8319832 | Nagata et al. | Nov 2012 | B2 |
8363010 | Nagata | Jan 2013 | B2 |
8395600 | Kawashima et al. | Mar 2013 | B2 |
8432377 | Newton | Apr 2013 | B2 |
8471848 | Tschesnok | Jun 2013 | B2 |
8514221 | King et al. | Aug 2013 | B2 |
8553037 | Smith et al. | Oct 2013 | B2 |
8582809 | Halimeh et al. | Nov 2013 | B2 |
8593417 | Kawashima et al. | Nov 2013 | B2 |
8605202 | Muijs et al. | Dec 2013 | B2 |
8631355 | Murillo et al. | Jan 2014 | B2 |
8638989 | Holz | Jan 2014 | B2 |
8659594 | Kim et al. | Feb 2014 | B2 |
8659658 | Vassigh et al. | Feb 2014 | B2 |
8693731 | Holz et al. | Apr 2014 | B2 |
8738523 | Sanchez et al. | May 2014 | B1 |
8744122 | Salgian et al. | Jun 2014 | B2 |
8768022 | Miga et al. | Jul 2014 | B2 |
8817087 | Weng et al. | Aug 2014 | B2 |
8842084 | Andersson et al. | Sep 2014 | B2 |
8843857 | Berkes et al. | Sep 2014 | B2 |
8854433 | Rafii | Oct 2014 | B1 |
8872914 | Gobush | Oct 2014 | B2 |
8878749 | Wu et al. | Nov 2014 | B1 |
8891868 | Ivanchenko | Nov 2014 | B1 |
8907982 | Zontrop et al. | Dec 2014 | B2 |
8922590 | Luckett, Jr. et al. | Dec 2014 | B1 |
8929609 | Padovani et al. | Jan 2015 | B2 |
8930852 | Chen et al. | Jan 2015 | B2 |
8942881 | Hobbs et al. | Jan 2015 | B2 |
8954340 | Sanchez et al. | Feb 2015 | B2 |
8957857 | Lee et al. | Feb 2015 | B2 |
9014414 | Katano et al. | Apr 2015 | B2 |
9056396 | Linnell | Jun 2015 | B1 |
9070019 | Holz | Jun 2015 | B2 |
9119670 | Yang et al. | Sep 2015 | B2 |
9122354 | Sharma | Sep 2015 | B2 |
9124778 | Crabtree | Sep 2015 | B1 |
9182812 | Zepeda | Nov 2015 | B2 |
9182838 | Kikkeri | Nov 2015 | B2 |
9342160 | Bailey et al. | May 2016 | B2 |
9389779 | Anderson et al. | Jul 2016 | B2 |
9459697 | Bedikian et al. | Oct 2016 | B2 |
9501152 | Bedikian et al. | Nov 2016 | B2 |
10042430 | Bedikian et al. | Aug 2018 | B2 |
10281987 | Yang et al. | May 2019 | B1 |
10739862 | Bedikian et al. | Aug 2020 | B2 |
11353962 | Bedikian et al. | Jun 2022 | B2 |
20010044858 | Rekimoto | Nov 2001 | A1 |
20010052985 | Ono | Dec 2001 | A1 |
20020008139 | Albertelli | Jan 2002 | A1 |
20020008211 | Kask | Jan 2002 | A1 |
20020021287 | Tomasi et al. | Feb 2002 | A1 |
20020041327 | Hildreth et al. | Apr 2002 | A1 |
20020080094 | Biocca et al. | Jun 2002 | A1 |
20020105484 | Navab et al. | Aug 2002 | A1 |
20030053658 | Pavlidis | Mar 2003 | A1 |
20030053659 | Pavlidis et al. | Mar 2003 | A1 |
20030081141 | Mazzapica | May 2003 | A1 |
20030123703 | Pavlidis et al. | Jul 2003 | A1 |
20030152289 | Luo | Aug 2003 | A1 |
20030202697 | Simard et al. | Oct 2003 | A1 |
20040103111 | Miller et al. | May 2004 | A1 |
20040125228 | Dougherty | Jul 2004 | A1 |
20040125984 | Ito et al. | Jul 2004 | A1 |
20040145809 | Brenner | Jul 2004 | A1 |
20040155877 | Hong et al. | Aug 2004 | A1 |
20040212725 | Raskar | Oct 2004 | A1 |
20050007673 | Chaoulov et al. | Jan 2005 | A1 |
20050068518 | Baney et al. | Mar 2005 | A1 |
20050094019 | Grosvenor et al. | May 2005 | A1 |
20050131607 | Breed | Jun 2005 | A1 |
20050156888 | Xie et al. | Jul 2005 | A1 |
20050168578 | Gobush | Aug 2005 | A1 |
20050236558 | Nabeshima et al. | Oct 2005 | A1 |
20050238201 | Shamale | Oct 2005 | A1 |
20060017807 | Lee et al. | Jan 2006 | A1 |
20060028656 | Venkatesh et al. | Feb 2006 | A1 |
20060029296 | King et al. | Feb 2006 | A1 |
20060034545 | Mattes et al. | Feb 2006 | A1 |
20060050979 | Kawahara | Mar 2006 | A1 |
20060072105 | Wagner | Apr 2006 | A1 |
20060098899 | King et al. | May 2006 | A1 |
20060204040 | Freeman et al. | Sep 2006 | A1 |
20060210112 | Cohen et al. | Sep 2006 | A1 |
20060262421 | Matsumoto et al. | Nov 2006 | A1 |
20060290950 | Platt et al. | Dec 2006 | A1 |
20070014466 | Baldwin | Jan 2007 | A1 |
20070042346 | Weller | Feb 2007 | A1 |
20070086621 | Aggarwal et al. | Apr 2007 | A1 |
20070130547 | Boillot | Jun 2007 | A1 |
20070206719 | Suryanarayanan et al. | Sep 2007 | A1 |
20070211023 | Boillot | Sep 2007 | A1 |
20070230929 | Niwa et al. | Oct 2007 | A1 |
20070238956 | Haras et al. | Oct 2007 | A1 |
20080013826 | Hillis et al. | Jan 2008 | A1 |
20080019576 | Senftner et al. | Jan 2008 | A1 |
20080030429 | Hailpern et al. | Feb 2008 | A1 |
20080031492 | Lanz | Feb 2008 | A1 |
20080056752 | Denton et al. | Mar 2008 | A1 |
20080064954 | Adams et al. | Mar 2008 | A1 |
20080106637 | Nakao et al. | May 2008 | A1 |
20080106746 | Shpunt et al. | May 2008 | A1 |
20080110994 | Knowles et al. | May 2008 | A1 |
20080111710 | Boillot | May 2008 | A1 |
20080118091 | Serfaty et al. | May 2008 | A1 |
20080126937 | Pachet | May 2008 | A1 |
20080187175 | Kim et al. | Aug 2008 | A1 |
20080244468 | Nishihara et al. | Oct 2008 | A1 |
20080246759 | Summers | Oct 2008 | A1 |
20080273764 | Scholl | Nov 2008 | A1 |
20080278589 | Thorn | Nov 2008 | A1 |
20080291160 | Rabin | Nov 2008 | A1 |
20080304740 | Sun et al. | Dec 2008 | A1 |
20080319356 | Cain et al. | Dec 2008 | A1 |
20090002489 | Yang et al. | Jan 2009 | A1 |
20090093307 | Miyaki | Apr 2009 | A1 |
20090102840 | Li | Apr 2009 | A1 |
20090103780 | Nishihara et al. | Apr 2009 | A1 |
20090116742 | Nishihara | May 2009 | A1 |
20090122146 | Zalewski et al. | May 2009 | A1 |
20090128564 | Okuno | May 2009 | A1 |
20090153655 | Ike et al. | Jun 2009 | A1 |
20090203993 | Mangat et al. | Aug 2009 | A1 |
20090203994 | Mangat et al. | Aug 2009 | A1 |
20090217211 | Hildreth | Aug 2009 | A1 |
20090257623 | Tang et al. | Oct 2009 | A1 |
20090274339 | Cohen et al. | Nov 2009 | A9 |
20090309710 | Kakinami | Dec 2009 | A1 |
20100001998 | Mandella et al. | Jan 2010 | A1 |
20100013662 | Stude | Jan 2010 | A1 |
20100013832 | Xiao et al. | Jan 2010 | A1 |
20100020078 | Shpunt | Jan 2010 | A1 |
20100023015 | Park | Jan 2010 | A1 |
20100026963 | Faulstich | Feb 2010 | A1 |
20100027845 | Kim et al. | Feb 2010 | A1 |
20100046842 | Conwell | Feb 2010 | A1 |
20100053164 | Imai et al. | Mar 2010 | A1 |
20100053209 | Rauch et al. | Mar 2010 | A1 |
20100053612 | Ou-Yang et al. | Mar 2010 | A1 |
20100058252 | Ko | Mar 2010 | A1 |
20100066676 | Kramer et al. | Mar 2010 | A1 |
20100066737 | Liu | Mar 2010 | A1 |
20100066975 | Rehnstrom | Mar 2010 | A1 |
20100091110 | Hildreth | Apr 2010 | A1 |
20100095206 | Kim | Apr 2010 | A1 |
20100118123 | Freedman et al. | May 2010 | A1 |
20100121189 | Ma et al. | May 2010 | A1 |
20100125815 | Wang et al. | May 2010 | A1 |
20100127995 | Rigazio et al. | May 2010 | A1 |
20100141762 | Siann et al. | Jun 2010 | A1 |
20100158372 | Kim et al. | Jun 2010 | A1 |
20100162165 | Addala et al. | Jun 2010 | A1 |
20100177929 | Kurtz et al. | Jul 2010 | A1 |
20100194863 | Lopes et al. | Aug 2010 | A1 |
20100199221 | Yeung et al. | Aug 2010 | A1 |
20100199230 | Latta et al. | Aug 2010 | A1 |
20100199232 | Mistry et al. | Aug 2010 | A1 |
20100201880 | Iwamura | Aug 2010 | A1 |
20100208942 | Porter et al. | Aug 2010 | A1 |
20100219934 | Matsumoto | Sep 2010 | A1 |
20100222102 | Rodriguez | Sep 2010 | A1 |
20100248836 | Suzuki et al. | Sep 2010 | A1 |
20100264833 | Van Endert et al. | Oct 2010 | A1 |
20100275159 | Matsubara et al. | Oct 2010 | A1 |
20100277411 | Yee et al. | Nov 2010 | A1 |
20100296698 | Lien et al. | Nov 2010 | A1 |
20100302015 | Kipman et al. | Dec 2010 | A1 |
20100302357 | Hsu et al. | Dec 2010 | A1 |
20100303298 | Marks et al. | Dec 2010 | A1 |
20100306712 | Snook et al. | Dec 2010 | A1 |
20100309097 | Raviv et al. | Dec 2010 | A1 |
20100321377 | Gay et al. | Dec 2010 | A1 |
20110007072 | Khan et al. | Jan 2011 | A1 |
20110025818 | Gallmeier et al. | Feb 2011 | A1 |
20110026765 | Ivanich et al. | Feb 2011 | A1 |
20110043806 | Guetta et al. | Feb 2011 | A1 |
20110057875 | Shigeta et al. | Mar 2011 | A1 |
20110066984 | Li | Mar 2011 | A1 |
20110080337 | Matsubara et al. | Apr 2011 | A1 |
20110080470 | Kuno et al. | Apr 2011 | A1 |
20110080490 | Clarkson et al. | Apr 2011 | A1 |
20110093820 | Zhang et al. | Apr 2011 | A1 |
20110107216 | Bi | May 2011 | A1 |
20110115486 | Frohlich et al. | May 2011 | A1 |
20110116684 | Coffman et al. | May 2011 | A1 |
20110119640 | Berkes et al. | May 2011 | A1 |
20110134112 | Koh et al. | Jun 2011 | A1 |
20110148875 | Kim et al. | Jun 2011 | A1 |
20110169726 | Holmdahl et al. | Jul 2011 | A1 |
20110173574 | Clavin et al. | Jul 2011 | A1 |
20110176146 | Alvarez Diez et al. | Jul 2011 | A1 |
20110181509 | Rautiainen et al. | Jul 2011 | A1 |
20110193778 | Lee et al. | Aug 2011 | A1 |
20110205151 | Newton et al. | Aug 2011 | A1 |
20110213664 | Osterhout et al. | Sep 2011 | A1 |
20110228978 | Chen et al. | Sep 2011 | A1 |
20110234840 | Klefenz et al. | Sep 2011 | A1 |
20110243451 | Oyaizu | Oct 2011 | A1 |
20110251896 | Impollonia et al. | Oct 2011 | A1 |
20110261178 | Lo et al. | Oct 2011 | A1 |
20110267259 | Tidemand et al. | Nov 2011 | A1 |
20110279397 | Rimon et al. | Nov 2011 | A1 |
20110286676 | El Dokor | Nov 2011 | A1 |
20110289455 | Reville et al. | Nov 2011 | A1 |
20110289456 | Reville et al. | Nov 2011 | A1 |
20110291925 | Israel et al. | Dec 2011 | A1 |
20110291988 | Bamji et al. | Dec 2011 | A1 |
20110296353 | Ahmed et al. | Dec 2011 | A1 |
20110299737 | Wang et al. | Dec 2011 | A1 |
20110304600 | Yoshida | Dec 2011 | A1 |
20110304650 | Campillo et al. | Dec 2011 | A1 |
20110310007 | Margolis et al. | Dec 2011 | A1 |
20110310220 | McEldowney | Dec 2011 | A1 |
20110314427 | Sundararajan | Dec 2011 | A1 |
20110317871 | Tossell et al. | Dec 2011 | A1 |
20120038637 | Marks | Feb 2012 | A1 |
20120050157 | Latta et al. | Mar 2012 | A1 |
20120065499 | Chono | Mar 2012 | A1 |
20120068914 | Jacobsen et al. | Mar 2012 | A1 |
20120098744 | Stinson, III | Apr 2012 | A1 |
20120113223 | Hilliges et al. | May 2012 | A1 |
20120113316 | Ueta et al. | May 2012 | A1 |
20120159380 | Kocienda et al. | Jun 2012 | A1 |
20120163675 | Joo et al. | Jun 2012 | A1 |
20120194517 | Izadi et al. | Aug 2012 | A1 |
20120204133 | Guendelman | Aug 2012 | A1 |
20120218263 | Meier et al. | Aug 2012 | A1 |
20120223959 | Lengeling | Sep 2012 | A1 |
20120236288 | Stanley | Sep 2012 | A1 |
20120250936 | Holmgren | Oct 2012 | A1 |
20120270654 | Padovani et al. | Oct 2012 | A1 |
20120274781 | Shet et al. | Nov 2012 | A1 |
20120281873 | Brown et al. | Nov 2012 | A1 |
20120293667 | Baba et al. | Nov 2012 | A1 |
20120314030 | Datta et al. | Dec 2012 | A1 |
20120320080 | Giese et al. | Dec 2012 | A1 |
20130019204 | Kotler et al. | Jan 2013 | A1 |
20130033483 | Im et al. | Feb 2013 | A1 |
20130038694 | Nichani et al. | Feb 2013 | A1 |
20130044951 | Cherng et al. | Feb 2013 | A1 |
20130050425 | Im et al. | Feb 2013 | A1 |
20130086531 | Sugita et al. | Apr 2013 | A1 |
20130097566 | Berglund | Apr 2013 | A1 |
20130120319 | Givon | May 2013 | A1 |
20130148852 | Partis et al. | Jun 2013 | A1 |
20130181897 | Izumi | Jul 2013 | A1 |
20130182079 | Holz | Jul 2013 | A1 |
20130182897 | Holz | Jul 2013 | A1 |
20130187952 | Berkovich et al. | Jul 2013 | A1 |
20130191911 | Dellinger et al. | Jul 2013 | A1 |
20130194173 | Zhu et al. | Aug 2013 | A1 |
20130208948 | Berkovich et al. | Aug 2013 | A1 |
20130222233 | Park et al. | Aug 2013 | A1 |
20130222640 | Baek et al. | Aug 2013 | A1 |
20130239059 | Chen et al. | Sep 2013 | A1 |
20130241832 | Rimon et al. | Sep 2013 | A1 |
20130252691 | Alexopouios | Sep 2013 | A1 |
20130257736 | Hou et al. | Oct 2013 | A1 |
20130258140 | Lipson et al. | Oct 2013 | A1 |
20130271397 | MacDougall et al. | Oct 2013 | A1 |
20130283213 | Guendelman et al. | Oct 2013 | A1 |
20130300831 | Mavromatis et al. | Nov 2013 | A1 |
20130307935 | Rappel et al. | Nov 2013 | A1 |
20130321265 | Bychkov | Dec 2013 | A1 |
20140002365 | Ackley et al. | Jan 2014 | A1 |
20140010441 | Shamaie | Jan 2014 | A1 |
20140015831 | Kim et al. | Jan 2014 | A1 |
20140055385 | Duheille | Feb 2014 | A1 |
20140055396 | Aubauer et al. | Feb 2014 | A1 |
20140063055 | Osterhout et al. | Mar 2014 | A1 |
20140063060 | Maciocci et al. | Mar 2014 | A1 |
20140064566 | Shreve et al. | Mar 2014 | A1 |
20140081521 | Frojdh et al. | Mar 2014 | A1 |
20140085203 | Kobayashi | Mar 2014 | A1 |
20140095119 | Lee et al. | Apr 2014 | A1 |
20140098018 | Kim et al. | Apr 2014 | A1 |
20140125775 | Holz | May 2014 | A1 |
20140125813 | Holz | May 2014 | A1 |
20140132738 | Ogura et al. | May 2014 | A1 |
20140134733 | Wu et al. | May 2014 | A1 |
20140139425 | Sakai | May 2014 | A1 |
20140139641 | Holz | May 2014 | A1 |
20140157135 | Lee et al. | Jun 2014 | A1 |
20140161311 | Kim | Jun 2014 | A1 |
20140168062 | Katz et al. | Jun 2014 | A1 |
20140176420 | Zhou et al. | Jun 2014 | A1 |
20140177913 | Holz | Jun 2014 | A1 |
20140189579 | Rimon et al. | Jul 2014 | A1 |
20140192024 | Holz | Jul 2014 | A1 |
20140201666 | Bedikian et al. | Jul 2014 | A1 |
20140201689 | Bedikian et al. | Jul 2014 | A1 |
20140222385 | Muenster et al. | Aug 2014 | A1 |
20140223385 | Ton et al. | Aug 2014 | A1 |
20140225826 | Juni | Aug 2014 | A1 |
20140225918 | Mittal et al. | Aug 2014 | A1 |
20140240215 | Tremblay et al. | Aug 2014 | A1 |
20140240225 | Eilat | Aug 2014 | A1 |
20140248950 | Tosas Bautista | Sep 2014 | A1 |
20140249961 | Zagel et al. | Sep 2014 | A1 |
20140253512 | Narikawa et al. | Sep 2014 | A1 |
20140253785 | Chan et al. | Sep 2014 | A1 |
20140267098 | Na et al. | Sep 2014 | A1 |
20140282282 | Holz | Sep 2014 | A1 |
20140307920 | Holz | Oct 2014 | A1 |
20140320408 | Zagorsek et al. | Oct 2014 | A1 |
20140344762 | Grasset et al. | Nov 2014 | A1 |
20140364209 | Perry | Dec 2014 | A1 |
20140364212 | Osman et al. | Dec 2014 | A1 |
20140369558 | Holz | Dec 2014 | A1 |
20140375547 | Katz et al. | Dec 2014 | A1 |
20150003673 | Fletcher | Jan 2015 | A1 |
20150009149 | Gharib et al. | Jan 2015 | A1 |
20150016777 | Abovitz et al. | Jan 2015 | A1 |
20150022447 | Hare et al. | Jan 2015 | A1 |
20150029091 | Nakashima et al. | Jan 2015 | A1 |
20150040040 | Balan et al. | Feb 2015 | A1 |
20150054729 | Minnen et al. | Feb 2015 | A1 |
20150084864 | Geiss et al. | Mar 2015 | A1 |
20150097772 | Starner | Apr 2015 | A1 |
20150103004 | Cohen et al. | Apr 2015 | A1 |
20150115802 | Kuti et al. | Apr 2015 | A1 |
20150116214 | Grunnet-Jepsen et al. | Apr 2015 | A1 |
20150131859 | Kim et al. | May 2015 | A1 |
20150172539 | Neglur | Jun 2015 | A1 |
20150193669 | Gu et al. | Jul 2015 | A1 |
20150205358 | Lyren | Jul 2015 | A1 |
20150205400 | Hwang et al. | Jul 2015 | A1 |
20150206321 | Scavezze et al. | Jul 2015 | A1 |
20150227795 | Starner et al. | Aug 2015 | A1 |
20150234569 | Hess | Aug 2015 | A1 |
20150253428 | Holz | Sep 2015 | A1 |
20150258432 | Stafford et al. | Sep 2015 | A1 |
20150261291 | Mikhailov et al. | Sep 2015 | A1 |
20150293597 | Mishra et al. | Oct 2015 | A1 |
20150304593 | Sakai | Oct 2015 | A1 |
20150309629 | Amariutei et al. | Oct 2015 | A1 |
20150323785 | Fukata et al. | Nov 2015 | A1 |
20150363070 | Katz | Dec 2015 | A1 |
20160062573 | Dascola et al. | Mar 2016 | A1 |
20160086046 | Holz et al. | Mar 2016 | A1 |
20160093105 | Rimon et al. | Mar 2016 | A1 |
20170102791 | Hosenpud et al. | Apr 2017 | A1 |
Number | Date | Country |
---|---|---|
1984236 | Jun 2007 | CN |
201332447 | Oct 2009 | CN |
101729808 | Jun 2010 | CN |
101930610 | Dec 2010 | CN |
101951474 | Jan 2011 | CN |
102053702 | May 2011 | CN |
201859393 | Jun 2011 | CN |
102201121 | Sep 2011 | CN |
102236412 | Nov 2011 | CN |
4201934 | Jul 1993 | DE |
10326035 | Jan 2005 | DE |
102007015495 | Oct 2007 | DE |
102007015497 | Jan 2014 | DE |
0999542 | May 2000 | EP |
1477924 | Nov 2004 | EP |
1837665 | Sep 2007 | EP |
2369443 | Sep 2011 | EP |
2378488 | Oct 2011 | EP |
2419433 | Apr 2006 | GB |
2480140 | Nov 2011 | GB |
2519418 | Apr 2015 | GB |
H02236407 | Sep 1990 | JP |
H08261721 | Oct 1996 | JP |
H09259278 | Oct 1997 | JP |
2000023038 | Jan 2000 | JP |
2002133400 | May 2002 | JP |
2003256814 | Sep 2003 | JP |
2004246252 | Sep 2004 | JP |
2006019526 | Jan 2006 | JP |
2006259829 | Sep 2006 | JP |
2007272596 | Oct 2007 | JP |
2008227569 | Sep 2008 | JP |
2009031939 | Feb 2009 | JP |
2009037594 | Feb 2009 | JP |
2010060548 | Mar 2010 | JP |
2011010258 | Jan 2011 | JP |
2011065652 | Mar 2011 | JP |
2011107681 | Jun 2011 | JP |
4906960 | Mar 2012 | JP |
2012527145 | Nov 2012 | JP |
101092909 | Jun 2011 | KR |
2422878 | Jun 2011 | RU |
200844871 | Nov 2008 | TW |
9426057 | Nov 1994 | WO |
2004114220 | Dec 2004 | WO |
2006020846 | Feb 2006 | WO |
2007137093 | Nov 2007 | WO |
2010007662 | Jan 2010 | WO |
2010032268 | Mar 2010 | WO |
2010076622 | Jul 2010 | WO |
2010088035 | Aug 2010 | WO |
2010138741 | Dec 2010 | WO |
2010148155 | Dec 2010 | WO |
2011024193 | Mar 2011 | WO |
2011036618 | Mar 2011 | WO |
2011044680 | Apr 2011 | WO |
2011045789 | Apr 2011 | WO |
2011119154 | Sep 2011 | WO |
2012027422 | Mar 2012 | WO |
2013109608 | Jul 2013 | WO |
2013109609 | Jul 2013 | WO |
2014208087 | Dec 2014 | WO |
2015026707 | Feb 2015 | WO |
Entry |
---|
U.S. Appl. No. 16/195,755—Office Action dated Jun. 8, 2020, 15 pages. |
U.S. Appl. No. 16/659,468—Response to Office Action dated Jun. 19, 2020 filed Sep. 18, 2020, 10 pages. |
U.S. Appl. No. 14/155,722—Response to Office Action dated Nov. 20, 2015, filed Feb. 19, 2016, 15 pages. |
U.S. Appl. No. 16/195,755—Response to Final Office Action dated Jun. 8, 2020 filed Sep. 21, 2020, 17 pages. |
U.S. Appl. No. 17/093,490 Office Action, dated Dec. 17, 2021, 101 pages. |
U.S. Appl. No. 17/093,490—Final Office Action dated May 2, 2022, 42 pages. |
U.S. Appl. No. 16/659,468—Final Office Action dated Nov. 20, 2020, 18 pages. |
U.S. Appl. No. 16/659,468—Response to Office Action dated Nov. 20, 2020 filed Mar. 22, 2021, 13 pages. |
U.S. Appl. No. 16/659,468—Notice of Allowance dated Apr. 23, 2021, 11 pages. |
U.S. Appl. No. 16/195,755—Advisory Action dated Sep. 30, 2020, 3 pages. |
U.S. Appl. No. 16/195,755—Non-Final Office Action dated May 25, 2021, 27 pages. |
U.S. Appl. No. 16/195,755—Response to Non-Final Office Action dated May 25, 2021, filed Aug. 25, 2021, 15 pages. |
U.S. Appl. No. 16/195,755—Notice of Allowance dated Sep. 29, 2021, 6 pages. |
U.S. Appl. No. 16/195,755—Supplemental Notice of Allowance dated Oct. 14, 2021, 9 pages. |
U.S. Appl. No. 14/154,730—Notice of Allowance dated May 3, 2016, 5 pages. |
U.S. Appl. No. 14/154,730—Office Action dated Nov. 6, 2015, 10 pages. |
U.S. Appl. No. 14/154,730—Response to Office Action dated Nov. 6, 2015, filed Feb. 4, 2016, 9 pages. |
U.S. Appl. No. 14/155,722—Office Action dated Nov. 20, 2015, 14 pages. |
U.S. Appl. No. 14/154,730—Notice of Allowance dated Jul. 14, 2016, 5 pages. |
U.S. Appl. No. 15/358,104—Notice of Allowance dated Apr. 11, 2018, 5 pages. |
U.S. Appl. No. 15/358,104—Office Action dated Nov. 2, 2017, 9 pages. |
U.S. Appl. No. 15/358,104—Response to Office Action dated Nov. 2, 2017, filed Mar. 2, 2018, 9 pages. |
U.S. Appl. No. 16/987,289—Non-Final Office Action dated Oct. 14, 2021, 12 pages. |
U.S. Appl. No. 16/987,289—Notice of Allowance dated Feb. 11, 2022, 83 pages. |
U.S. Appl. No. 16/987,289—Response to Non-Final Office Action dated Oct. 14, 2021 filed Jan. 11, 2022, 93 pages. |
U.S. Appl. No. 16/054,891—Notice of Allowance dated Apr. 1, 2020, 6 pages. |
U.S. Appl. No. 16/054,891—Office Action dated Oct. 24, 2019, 26 pages. |
U.S. Appl. No. 16/054,891—Response to Office Action dated Oct. 24, 2019, filed Feb. 24, 2020, 15 pages. |
U.S. Appl. No. 14/476,694—Advisory Action dated Jun. 22, 2017, 8 pages. |
U.S. Appl. No. 14/476,694—Final Office Action dated Feb. 26, 2018, 53 pages. |
U.S. Appl. No. 14/476,694—Final Office Action dated Apr. 7, 2017, 32 pages. |
U.S. Appl. No. 14/476,694—Notice of Allowance dated Dec. 28, 2018, 22 pages. |
U.S. Appl. No. 14/476,694—Office Action dated Aug. 10, 2017, 71 pages. |
U.S. Appl. No. 14/476,694—Office Action dated Jul. 30, 2018, 68 pages. |
U.S. Appl. No. 14/476,694—Office Action dated Nov. 1, 2016, 29 pages. |
U.S. Appl. No. 14/476,694—Response to Final Office Action dated Feb. 26, 2018 filed Jun. 19, 2018, 16 pages. |
U.S. Appl. No. 14/476,694—Response to Office Action dated Aug. 10, 2017, filed Nov. 10, 2017, 14 pages. |
U.S. Appl. No. 14/476,694—Response to Office Action dated Jul. 30, 2018 filed Nov. 9, 2018, 19 pages. |
U.S. Appl. No. 14/476,694—Response to Office Action dated Apr. 7, 2017 filed Jun. 6, 2017, 18 pages. |
U.S. Appl. No. 14/476,694—Response to Office Action dated Nov. 1, 2016 filed Jan. 31, 2017, 15 pages. |
U.S. Appl. No. 14/476,694—Response to Final Office Action dated Feb. 26, 2018 filed May 30, 2018, 15 pages. |
U.S. Appl. No. 16/402,134—Non-Final Office Action dated Jan. 27, 2020, 19 pages. |
U.S. Appl. No. 16/402,134—Notice of Allowance dated Jul. 15, 2020, 9 pages. |
U.S. Appl. No. 16/402,134—Response to Office Action dated Jan. 27, 2020, filed May 27, 2020, 7 pages. |
U.S. Appl. No. 14/262,691—Office Action dated Dec. 11, 2015, 31 pages. |
U.S. Appl. No. 14/262,691—Response to Office Action dated Dec. 11, 2015, filed May 11, 2016, 15 pages. |
U.S. Appl. No. 14/262,691—Office Action dated Jan. 31, 2017, 27 pages. |
U.S. Appl. No. 14/262,691—Response to Office Action dated Jan. 31, 2017, filed Jun. 30, 2017, 20 pages. |
U.S. Appl. No. 14/262,691—Notice of Allowance dated Oct. 30, 2017, 35 pages. |
U.S. Appl. No. 14/155,722—Notice of Allowance dated May 27, 2016, 10 pages. |
U.S. Appl. No. 15/279,363—Office Action dated Jan. 25, 2018, 8 pages. |
U.S. Appl. No. 15/279,363—Response to Office Action dated Jan. 25, 2018, filed May 24, 2018, 11 pages. |
U.S. Appl. No. 15/279,363—Notice of Allowance dated Jul. 10, 2018, 5 pages. |
U.S. Appl. No. 15/917,066—Office Action dated Nov. 1, 2018, 31 pages. |
U.S. Appl. No. 14/262,691—Office Action dated Aug. 19, 2016, 36 pages. |
U.S. Appl. No. 14/262,691—Response to Office Action dated Aug. 19, 2016, filed Nov. 21, 2016, 13 pages. |
U.S. Appl. No. 14/262,691—Supplemental Response to Office Action dated Jan. 31, 2017, filed Jul. 20, 2017, 22 pages. |
U.S. Appl. No. 15/917,066—Response to Office Action dated Nov. 1, 2018, filed Mar. 1, 2019, 12 pages. |
U.S. Appl. No. 15/917,066—Office Action dated Mar. 19, 2019, 37 pages. |
U.S. Appl. No. 15/917,066—Response to Office Action dated Mar. 19, 2019, filed May 23, 2019, 12 pages. |
U.S. Appl. No. 15/917,066—Notice of Allowance dated Jun. 14, 2019, 5 pages. |
U.S. Appl. No. 16/659,468—Office Action dated Jun. 19, 2020, 75 pages. |
U.S. Appl. No. 16/195,755—Office Action dated Nov. 29, 2019, 16 pages. |
U.S. Appl. No. 16/195,755—Response to Office Action dated Nov. 29, 2019, filed Feb. 27, 2020, 13 pages. |
Arthington, et al., "Cross-section Reconstruction During Uniaxial Loading," Measurement Science and Technology, vol. 20, No. 7, Jun. 10, 2009, Retrieved from the internet: http://iopscience.iop.org/0957-0233/20/7/075701, pp. 1-9. |
Barat et al., “Feature Correspondences From Multiple Views of Coplanar Ellipses”, 2nd International Symposium on Visual Computing, Author Manuscript, 2006, 10 pages. |
Bardinet, et al., "Fitting of Iso-Surfaces Using Superquadrics and Free-Form Deformations" [on-line], Jun. 24-25, 1994 [retrieved Jan. 9, 2014], 1994 Proceedings of IEEE Workshop on Biomedical Image Analysis, Retrieved from the Internet: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=315882&tag=1, pp. 184-193. |
Butail, S., et al., "Three-Dimensional Reconstruction of the Fast-Start Swimming Kinematics of Densely Schooling Fish," Journal of the Royal Society Interface, Jun. 3, 2011, retrieved from the Internet <http://www.ncbi.nlm.nih.gov/pubmed/21642367>, pp. 1-12. |
Cheikh et al., “Multipeople Tracking Across Multiple Cameras”, International Journal on New Computer Architectures and Their Applications (IJNCAA), vol. 2, No. 1, 2012, pp. 23-33. |
Chung, et al., “Recovering LSHGCs and SHGCs from Stereo,” International Journal of Computer Vision, vol. 20, No. 1/2, 1996, pp. 43-58. |
Cumani, A., et al., “Recovering the 3D Structure of Tubular Objects from Stereo Silhouettes,” Pattern Recognition, Elsevier, GB, vol. 30, No. 7, Jul. 1, 1997, 9 pages. |
Davis et al., “Toward 3-D Gesture Recognition”, International Journal of Pattern Recognition and Artificial Intelligence, vol. 13, No. 03, 1999, pp. 381-393. |
Di Zenzo, S., et al., "Advances in Image Segmentation," Image and Vision Computing, Elsevier, Guildford, GB, vol. 1, No. 1, Copyright Butterworth & Co Ltd., Nov. 1, 1983, pp. 196-210. |
Forbes, K., et al., “Using Silhouette Consistency Constraints to Build 3D Models,” University of Cape Town, Copyright De Beers 2003, Retrieved from the internet: <http://www.dip.ee.uct.ac.za/˜kforbes/Publications/Forbes2003Prasa.pdf> on Jun. 17, 2013, 6 pages. |
Heikkila, J., “Accurate Camera Calibration and Feature Based 3-D Reconstruction from Monocular Image Sequences”, Infotech Oulu and Department of Electrical Engineering, University of Oulu, 1997, 126 pages. |
Kanhangad, V., et al., "A Unified Framework for Contactless Hand Verification," IEEE Transactions on Information Forensics and Security, IEEE, Piscataway, NJ, USA, vol. 6, No. 3, Sep. 1, 2011, pp. 1014-1027. |
Kim, et al., “Development of an Orthogonal Double-image Processing Algorithm to Measure Bubble,” Department of Nuclear Engineering and Technology, Seoul National University Korea, vol. 39 No. 4, Published Jul. 6, 2007, pp. 313-326. |
Kulesza, et al., “Arrangement of a Multi Stereo Visual Sensor System for a Human Activities Space,” Source: Stereo Vision, Book edited by: Dr. Asim Bhatti, ISBN 978-953-7619-22-0, Copyright Nov. 2008, I-Tech, Vienna, Austria, www.intechopen.com, pp. 153-173. |
May, S., et al., “Robust 3D-Mapping with Time-of-Flight Cameras,” 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, Piscataway, NJ, USA, Oct. 10, 2009, pp. 1673-1678. |
Olsson, K., et al., “Shape from Silhouette Scanner—Creating a Digital 3D Model of a Real Object, by Analyzing Photos From Multiple Views,” University of Linkoping, Sweden, Copyright VCG 2001, Retrieved from the Internet: <http://liu.diva-portal.org/smash/get/diva2:18671/FULLTEXT01> on Jun. 17, 2013, 52 pages. |
Pavlovic, V.I., et al., "Visual Interpretation of Hand Gestures for Human-Computer Interaction: A Review," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, No. 7, Jul. 1997, pp. 677-695. |
Pedersini, et al., "Accurate Surface Reconstruction from Apparent Contours," Sep. 5-8, 2000 European Signal Processing Conference EUSIPCO 2000, vol. 4, Retrieved from the Internet: http://home.deib.polimi.it/sarti/CV_and_publications.html, pp. 1-4. |
Rasmussen, Matthew K., "An Analytical Framework for the Preparation and Animation of a Virtual Mannequin for the Purpose of Mannequin-Clothing Interaction Modeling", A Thesis Submitted in Partial Fulfillment of the Requirements for the Master of Science Degree in Civil and Environmental Engineering in the Graduate College of the University of Iowa, Dec. 2008, 98 pages. |
U.S. Appl. No. 17/409,767—Office Action dated Nov. 3, 2022, 30 pages. |
U.S. Appl. No. 17/833,556—Office Action dated Oct. 27, 2022, 92 pages. |
Wu, Y., et al., “Vision-Based Gesture Recognition: A Review,” Beckman Institute, Copyright 1999, pp. 103-115. |
Zenzo et al., "Advances in Image Segmentation," Image and Vision Computing, Elsevier, Guildford, GB, Nov. 1, 1983, pp. 196-210. |
Number | Date | Country | |
---|---|---|---|
20220236808 A1 | Jul 2022 | US |
Number | Date | Country | |
---|---|---|---|
61877641 | Sep 2013 | US | |
61873351 | Sep 2013 | US | |
61872538 | Aug 2013 | US | |
61825515 | May 2013 | US | |
61825480 | May 2013 | US | |
61824691 | May 2013 | US | |
61816487 | Apr 2013 | US | |
61808984 | Apr 2013 | US | |
61808959 | Apr 2013 | US | |
61791204 | Mar 2013 | US | |
61752725 | Jan 2013 | US | |
61752731 | Jan 2013 | US | |
61752733 | Jan 2013 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16195755 | Nov 2018 | US |
Child | 17666534 | US | |
Parent | 15279363 | Sep 2016 | US |
Child | 16195755 | US | |
Parent | 14155722 | Jan 2014 | US |
Child | 15279363 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 14154730 | Jan 2014 | US |
Child | 14155722 | US |