Materials incorporated by reference in this filing include the following:
“PREDICTIVE INFORMATION FOR FREE SPACE GESTURE CONTROL AND COMMUNICATION,” U.S. Prov. App. No. 61/871,790, filed 29 Aug. 2013,
“PREDICTIVE INFORMATION FOR FREE-SPACE GESTURE CONTROL AND COMMUNICATION,” U.S. Prov. App. No. 61/873,758, filed 4 Sep. 2013,
“VELOCITY FIELD INTERACTION FOR FREE SPACE GESTURE INTERFACE AND CONTROL,” U.S. Prov. App. No. 61/891,880, filed 16 Oct. 2013,
“VELOCITY FIELD INTERACTION FOR FREE SPACE GESTURE INTERFACE AND CONTROL,” U.S. Non-Provisional application Ser. No. 14/516,493, filed 16 Oct. 2014,
“CONTACTLESS CURSOR CONTROL USING FREE-SPACE MOTION DETECTION,” U.S. Prov. App. No. 61/825,480, filed 20 May 2013,
“FREE-SPACE USER INTERFACE AND CONTROL USING VIRTUAL CONSTRUCTS,” U.S. Prov. App. No. 61/873,351, filed 3 Sep. 2013,
“FREE-SPACE USER INTERFACE AND CONTROL USING VIRTUAL CONSTRUCTS,” U.S. Prov. App. No. 61/877,641, filed 13 Sep. 2013,
“CONTACTLESS CURSOR CONTROL USING FREE-SPACE MOTION DETECTION,” U.S. Prov. App. No. 61/825,515, filed 20 May 2013,
“FREE-SPACE USER INTERFACE AND CONTROL USING VIRTUAL CONSTRUCTS,” U.S. Non-Provisional application Ser. No. 14/154,730, filed 14 Jan. 2014,
“SYSTEMS AND METHODS FOR MACHINE CONTROL,” U.S. Non-Provisional application Ser. No. 14/280,018, filed 16 May 2014,
“DYNAMIC, FREE-SPACE USER INTERACTIONS FOR MACHINE CONTROL,” U.S. Non-Provisional application Ser. No. 14/155,722, filed 1 Jan. 2014, and
“PREDICTIVE INFORMATION FOR FREE SPACE GESTURE CONTROL AND COMMUNICATION,” U.S. Non-Provisional application Ser. No. 14/474,077, filed 29 Aug. 2014.
Embodiments relate generally to image analysis, and in particular embodiments to identifying shapes and capturing motions of objects in three-dimensional space.
Conventional motion capture approaches rely on markers or sensors worn by the subject while executing activities and/or on the strategic placement of numerous bulky and/or complex equipment in specialized environments to capture subject movements. Unfortunately, such systems tend to be expensive to construct. In addition, markers or sensors worn by the subject can be cumbersome and interfere with the subject's natural movement. Further, systems involving large numbers of cameras tend not to operate in real time, due to the volume of data that needs to be analyzed and correlated. Such considerations of cost, complexity and convenience have limited the deployment and use of motion capture technology.
Consequently, there is a need for improved techniques for capturing the motion of objects in real time without attaching sensors or markers thereto.
Among other aspects, embodiments can provide for improved image-based machine interface and/or communication by interpreting a control object's position and/or motion (including objects having one or more articulating members, e.g., humans, animals, and/or machines). Among other aspects, embodiments can automatically (e.g., programmatically) refine predictive information to determine improved predictive information based upon a discrepancy determined from characteristics of observed information. Predictive information can comprise radial solids and/or other shapes includable in a model. Embodiments can enable conformance of the model to real-world changes in a control object (i.e., the object being modeled), facilitating real-time or near-real-time control, communication and/or interaction with machines. Inputs can be interpreted from one or a sequence of images, scans, etc., in conjunction with receiving input, commands, communications and/or other user-machine interfacing, gathering information about objects, events and/or actions existing or occurring within an area being explored, monitored, or controlled, and/or combinations thereof.
The technology disclosed relates to simplifying updating of a predictive model by clustering observed points. In particular, it relates to observing a set of points in a three-dimensional (3D) sensory space, determining surface normal directions from the points, clustering the points by their surface normal directions and adjacency, accessing a predictive model of a hand, refining positions of segments of the predictive model, matching the clusters of the points to the segments, and using the matched clusters to refine the positions of the matched segments.
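By way of illustration only, the following non-limiting Python sketch shows one way the clustering step described above could be carried out; the function name, the thresholds, and the assumption that the surface normals are unit-length are illustrative choices rather than part of the disclosure.

    import numpy as np

    def cluster_by_normals(points, normals, angle_thresh_deg=20.0, adjacency_thresh=0.01):
        # Greedy flood-fill clustering: two points join the same cluster when
        # their (unit-length) surface normals differ by less than the angle
        # threshold and the points are within the adjacency threshold.
        points = np.asarray(points, dtype=float)
        normals = np.asarray(normals, dtype=float)
        n = len(points)
        labels = -np.ones(n, dtype=int)
        cos_thresh = np.cos(np.radians(angle_thresh_deg))
        next_label = 0
        for i in range(n):
            if labels[i] != -1:
                continue
            labels[i] = next_label
            stack = [i]
            while stack:
                j = stack.pop()
                close = np.linalg.norm(points - points[j], axis=1) < adjacency_thresh
                aligned = normals @ normals[j] > cos_thresh
                for k in np.where(close & aligned & (labels == -1))[0]:
                    labels[k] = next_label
                    stack.append(k)
            next_label += 1
        return labels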
The technology disclosed also relates to selecting a reference vector and determining a difference in angle between the surface normal directions from the points and the reference vector and using a magnitude of the difference to cluster the points.
In one embodiment, the reference vector is orthogonal to a field of view of a camera used to capture the points in an image. In another embodiment, the reference vector is along a longitudinal axis of the hand. In yet another embodiment, the reference vector is along a longitudinal axis of a portion of the hand.
In some embodiments, refining positions of segments of the predictive model further includes calculating an error indication by determining whether the points and points on the segments of the predictive model are within a threshold closest distance.
In other embodiments, refining positions of segments of the predictive model further includes calculating an error indication by pairing the points in the set with points on axes of the segments of the predictive model, wherein the points in the set lie on vectors that are normal to the axes and determining a reduced root mean squared deviation (RMSD) of distances between paired point sets.
In yet other embodiments, refining positions of segments of the predictive model further includes calculating an error indication by pairing the points in the set with points on the segments of the predictive model, wherein normal vectors to the points in the set are parallel to each other and determining a reduced root mean squared deviation (RMSD) of distances between bases of the normal vectors.
In some other embodiments, refining positions of segments of the predictive model further includes determining physical proximity between points in the set based on the matched clusters; based on the determined physical proximity, identifying co-located segments of the predictive model that change positions together; and refining positions of segments of the predictive model responsive to the co-located segments.
In one embodiment, the co-located segments represent adjoining fingers of the hand.
In another embodiment, the co-located segments represent subcomponents of a same finger.
The technology disclosed also relates to distinguishing between alternative motions between two observed locations of a control object in a three-dimensional (3D) sensory space. In particular, it relates to accessing first and second positions of a segment of a predictive model of a control object such that motion between the first position and the second position was at least partially occluded from observation in a three-dimensional (3D) sensory space. It further relates to receiving two or more alternative interpretations of movement from the first position to the second position, estimating entropy or extent of motion involved in the alternative interpretations, selecting an alternative interpretation with lower entropy or extent of motion than other interpretations, and applying the selected interpretation to predicting further positioning of the segment and of other segments of the predictive model from additional observations in the 3D sensory space.
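As a minimal, non-limiting sketch of the selection step described above (the path representation and the use of total path length as the extent-of-motion estimate are assumptions made for illustration):

    import numpy as np

    def select_interpretation(interpretations):
        # Each interpretation is a sequence of 3D segment positions describing a
        # candidate path from the first observed position to the second; the
        # interpretation with the smallest total path length is selected.
        def extent(path):
            path = np.asarray(path, dtype=float)
            return float(np.linalg.norm(np.diff(path, axis=0), axis=1).sum())
        return min(interpretations, key=extent)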
In one embodiment, the control object is a hand. In another embodiment, the control object is a tool.
The technology disclosed also relates to a system enabling simplified updating of a predictive model using clustering of observed points. The system comprises at least one camera oriented towards a field of view, a gesture database comprising a series of electronically stored records, each of the records relating a predictive model of a hand, and an image analyzer coupled to the camera and the database and configured to observe a set of points in a three-dimensional (3D) sensory space using at least one image captured by the camera, determine surface normal directions from the points, cluster the points by their surface normal directions and adjacency, access a particular predictive model of the hand, refine positions of segments of the particular predictive model, match the clusters of the points to the segments, and use the matched clusters to refine the positions of the matched segments.
The technology disclosed also relates to a system that distinguishes between alternative motions between two observed locations of a control object in a three-dimensional (3D) sensory space. The system comprises at least one camera oriented towards a field of view, a gesture database comprising a series of electronically stored records, each of the records relating a predictive model of a hand, and an image analyzer coupled to the camera and the database and configured to access first and second positions of a segment of a predictive model of a control object such that motion between the first position and the second position was at least partially occluded from observation in the 3D sensory space, receive two or more alternative interpretations of movement from the first position to the second position, estimate entropy or extent of motion involved in the alternative interpretations, select an alternative interpretation with lower entropy or extent of motion than other interpretations, and apply the selected interpretation to predicting further positioning of the segment and of other segments of the predictive model from additional observations in the 3D sensory space.
According to one aspect, a method embodiment for improving predictive information includes receiving predictive information and observed information of an object movable in space. A weighting function can be applied to the predictive information and the observed information to determine a discrepancy. The predictive information can be refined to determine improved predictive information based at least in part on the discrepancy.
In an embodiment, applying a weighting function can include selecting one or more points from a surface portion as represented in the observed information. A score can be determined for each point selected. For each point selected, a distance between a model surface of the predictive information and a corresponding surface portion represented in the observed information can be determined based upon the scores and points. A discrepancy for one or more surface portions can be determined based upon the distances computed from the scores and points.
In an embodiment, determining a score for a point can include assigning to the point a point parameter based on the observed information. A weighting function can be applied to the point parameter and a reference parameter to determine a score for the point. When the score is applied to the point, a scored point results. Reference parameters can be determined in a variety of ways in embodiments: for example and without limitation, from a characteristic or property of a detection device or mechanism (e.g., scanner, imaging camera, etc.), such as a vector normal to the field of view; alternatively or in addition, from a parameter representing an orientation of a detection device (e.g., determinable using the detection functions of the device itself, auxiliary position-awareness functions of the device, other sources of analogous information and/or combinations thereof), or from a parameter derived from a physical surface (e.g., desk, table, monitor, etc.) on which a detection device is positioned, and so forth.
In an embodiment, applying a weighting function to a point parameter and a reference parameter can include determining an angle between a normal vector for the point and a reference vector and weighting the point by a size of an angle formed between the normal vector for the point and the reference vector.
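A non-limiting Python sketch of such a weighting function follows; the specific mapping of angle to score (here, one minus the angle normalized by pi) is an illustrative assumption.

    import numpy as np

    def score_point(point_normal, reference_vector):
        # Weight a point by the angle between its surface normal and a
        # reference vector (e.g., a vector normal to the field of view);
        # aligned normals score 1.0, opposed normals score 0.0.
        n = np.asarray(point_normal, dtype=float)
        r = np.asarray(reference_vector, dtype=float)
        n = n / np.linalg.norm(n)
        r = r / np.linalg.norm(r)
        angle = np.arccos(np.clip(np.dot(n, r), -1.0, 1.0))
        return 1.0 - angle / np.pi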
Advantageously, some embodiments can enable quicker, crisper gesture-based or “free space” (i.e., not requiring physical contact) interfacing with a variety of machines (e.g., computing systems, including desktop, laptop, and tablet computing devices; special purpose computing machinery, including graphics processors, embedded microcontrollers, gaming consoles, audio mixers, or the like; wired or wirelessly coupled networks of one or more of the foregoing; and/or combinations thereof), obviating or reducing the need for contact-based input devices such as a mouse, joystick, touch pad, or touch screen. Some embodiments can provide for interfacing with computing and/or other machinery that is improved over what would be possible with heretofore known techniques. In some embodiments, a richer human-machine interface experience can be provided.
A more complete understanding of the subject matter can be derived by referring to the detailed description and claims when considered in conjunction with the following figures, wherein like reference numbers refer to similar elements throughout the figures.
Among other aspects, embodiments described herein with reference to example implementations can provide for automatically (e.g., programmatically) refining predictive information to determine improved predictive information based upon a discrepancy determined from characteristics of observed information. Predictive information can comprise radial solids and/or other shapes includable in a model. Embodiments can enable conformance of the model to real world changes in a control object (i.e., object being modeled) facilitating real time or near real time control, communication and/or interaction with machines.
In a block 102, a cost function is applied to the predictive information and the observed information to determine a discrepancy. In an embodiment and by way of example, one method for applying a cost function is described below with reference to flowchart 102 of
One method for determining a score for a point is illustrated by flowcharts 122, 132 of
In a block 103, the predictive information is refined to determine improved predictive information based at least in part on the discrepancy 207 (of block 25 of
A gesture-recognition system recognizes gestures for purposes of providing input to the electronic device, but can also capture the position and shape of the user's hand 114 in consecutive video images in order to characterize a hand gesture in 3D space and reproduce it on the display screen of
In an implementation, observation information including observation of the control object can be compared against the model at one or more of periodically, randomly or substantially continuously (i.e., in real time). A “control object” as used herein with reference to an implementation is generally any three-dimensionally movable object or appendage with an associated position and/or orientation (e.g., the orientation of its longest axis) suitable for pointing at a certain location and/or in a certain direction. Control objects include, e.g., hands, fingers, feet, or other anatomical parts, as well as inanimate objects such as pens, styluses, handheld controls, portions thereof, and/or combinations thereof. Where a specific type of control object, such as the user's finger, is used hereinafter for ease of illustration, it is to be understood that, unless otherwise indicated or clear from context, any other type of control object can be used as well.
Observational information can include without limitation observed values of attributes of the control object corresponding to the attributes of one or more model subcomponents in the predictive information for the control object. In an implementation, comparison of the model with the observation information provides an error indication. In an implementation, an error indication can be computed by determining a closest distance between a first point A belonging to a set of points defining the virtual surface 322 and a second point B belonging to a model subcomponent 330 determined to correspond to the first point (e.g., nearest to the first point). In an implementation, the error indication can be applied to the predictive information to correct the model to more closely conform to the observation information. In an implementation, the error indication can be applied to the predictive information repeatedly until the error indication falls below a threshold, a measure of conformance with the observation information rises above a threshold, or for a fixed or variable number of times, or a fixed or variable number of times per time period, or combinations thereof.
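For illustration only, a sketch of the closest-distance error indication and of its repeated application appears below; the model methods sample_surface_points() and apply_correction() are hypothetical placeholders rather than disclosed interfaces.

    import numpy as np

    def error_indication(observed_points, model_points):
        # For each observed point A, find the closest model point B and
        # average the closest distances.
        observed_points = np.asarray(observed_points, dtype=float)
        model_points = np.asarray(model_points, dtype=float)
        dists = [np.min(np.linalg.norm(model_points - p, axis=1)) for p in observed_points]
        return float(np.mean(dists))

    def refine_until_converged(model, observed_points, threshold=1e-3, max_iters=50):
        # Apply corrections until the error indication falls below a threshold
        # or a fixed number of iterations is reached.
        for _ in range(max_iters):
            err = error_indication(observed_points, model.sample_surface_points())
            if err < threshold:
                break
            model.apply_correction(observed_points)  # hypothetical model interface
        return model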
In one implementation and with reference to
n = (p2 − p1) × (p3 − p1),
Another technique that can be used: (i) start with the set of points; (ii) form a first vector p2 − p1; and (iii) apply a rotation matrix to rotate the first vector 90 degrees away from the center of mass of the set of points. (The center of mass of the set of points can be determined as an average of the points.) A yet further technique that can be used includes: (i) determining a first vector tangent to a point on a contour in a first image; (ii) determining from the point on the contour a second vector from that point to a virtual camera object in space; and (iii) determining a cross product of the first vector and the second vector. The cross product is a normal vector to the contour.
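The two cross-product constructions described above can be sketched as follows (a non-limiting illustration; passing the virtual camera position in explicitly is an assumption of this sketch).

    import numpy as np

    def normal_from_three_points(p1, p2, p3):
        # n = (p2 - p1) x (p3 - p1): a vector normal to the plane through the points.
        p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
        return np.cross(p2 - p1, p3 - p1)

    def normal_from_contour_tangent(tangent, contour_point, camera_position):
        # Cross product of a contour tangent with the vector from the contour
        # point to a virtual camera object yields a normal to the contour.
        view = np.asarray(camera_position, dtype=float) - np.asarray(contour_point, dtype=float)
        return np.cross(np.asarray(tangent, dtype=float), view)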
Again with reference to
Again with reference to block 35 in
In one implementation, as illustrated by
s² = 2ac(−2a² − 2c² + b² − 2a − 2b − 2c + 4ac) − 2b²(a² + c²)
α = β = tan2⁻¹(s, −(a + c)b)
φ=x1/norm(x)
θ=x2/norm(x)
Wherein norm(x) can be described as the norm of a 3D point x (470 in
Predictive information of the 3D hand model can be aligned to the observation information using any of a variety of techniques. Aligning techniques bring model portions (e.g., capsules, capsuloids, capsoodles) into alignment with the information from the image source (e.g., edge samples, edge rays, interior points, 3D depth maps, and so forth). In one implementation, the model is rigidly aligned to the observation information using an iterative closest point (ICP) technique. The model can be non-rigidly aligned to the observation information by sampling techniques.
One ICP implementation includes finding an optimal rotation R and translation T from one set of points A to another set of points B. First each point from A is matched to a point in set B. A mean square error is computed by adding the error of each match:
MSE = sqrt(Σ (R*xi + T − yi)ᵀ * (R*xi + T − yi))
An optimal R and T can be computed and applied to the set of points A or B, in some implementations.
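A non-limiting Python sketch of a single ICP step, using nearest-neighbor matching and a Kabsch-style estimate of R and T as described above, is shown below; the brute-force matching and the SVD-based solver are illustrative choices, not the disclosed implementation.

    import numpy as np

    def icp_step(A, B):
        # Match each point in A to its nearest neighbor in B, estimate the
        # rigid rotation R and translation T that best map A onto the matches,
        # and report the alignment error.
        A = np.asarray(A, dtype=float)
        B = np.asarray(B, dtype=float)
        idx = np.array([np.argmin(np.linalg.norm(B - a, axis=1)) for a in A])
        matched = B[idx]
        ca, cb = A.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((A - ca).T @ (matched - cb))
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        T = cb - R @ ca
        residual = (A @ R.T + T) - matched
        mse = float(np.sqrt(np.sum(residual * residual)))
        return R, T, mse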
In order to enable the ICP to match points to points on the model, a capsule matching technique can be employed. One implementation of the capsule matcher includes a class that “grabs” the set of data and computes the closest point on each tracked hand (using information like the normal). Then the minimum of those closest points is associated to the corresponding hand and saved in a structure called “Hand Data.” Other points that don't meet a minimal distance threshold can be marked as unmatched.
In an implementation, motion(s) of the control object can be rigid transformations, in which case points on the virtual surface(s) remain at the same distance(s) from one another through the motion. Motion(s) can be non-rigid transformations, in which points on the virtual surface(s) can vary in distance(s) from one another during the motion. In an implementation, observation information can be used to adjust (and/or recompute) predictive information, thereby enabling “tracking” of the control object. In implementations, the control object can be tracked by determining whether a rigid transformation or a non-rigid transformation occurs. In an implementation, when a rigid transformation occurs, a transformation matrix is applied to each point of the model uniformly. Otherwise, when a non-rigid transformation occurs, an error indication can be determined, and an error minimization technique such as described herein above can be applied.
In some implementations, rigid transformations and/or non-rigid transformations can be composed. One example composition implementation includes applying a rigid transformation to predictive information. Then an error indication can be determined, and an error minimization technique such as described herein above can be applied. In an implementation, determining a transformation can include determining a rotation matrix that provides a reduced RMSD (root mean squared deviation) between two paired sets of points. One implementation can include using the Kabsch algorithm to produce a rotation matrix. The Kabsch algorithm can be used to find an optimal rotation R and translation T that minimizes the error:
RMS = sqrt(Σ wi * (R*xi + T − yi)ᵀ * (R*xi + T − yi))
The transformation (both R and T) is applied rigidly to the model, according to one implementation. The capsule matching and rigid alignment can be repeated until convergence. In one implementation, the Kabsch algorithm can be extended to rays or covariances by minimizing the following:
Σ (R*xi + T − yi)ᵀ * Mi * (R*xi + T − yi)
In the equation above, Mi is a positive definite symmetric matrix. In other implementations and by way of example, one or more force lines can be determined from one or more portions of a virtual surface.
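For illustration, the extended objective above, with per-point positive definite matrices Mi, can be evaluated as in the following sketch (evaluation only; the minimization itself is not shown and the input representation is an assumption).

    import numpy as np

    def generalized_alignment_error(R, T, X, Y, M):
        # Evaluate sum_i (R*x_i + T - y_i)^T M_i (R*x_i + T - y_i), where each
        # M_i is a positive definite symmetric matrix (e.g., derived from rays
        # or covariances).
        err = 0.0
        for xi, yi, Mi in zip(X, Y, M):
            r = np.asarray(R) @ np.asarray(xi) + np.asarray(T) - np.asarray(yi)
            err += float(r @ np.asarray(Mi) @ r)
        return err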
One implementation applies non-rigid alignment to the observed information by sampling the parameters of each finger. A finger is represented by a 3D vector whose entries are the pitch, yaw and bend of the finger. The pitch and yaw can be defined trivially. The bend is the angle between the first and second capsules and between the second and third capsules, which are set to be equal. The mean of the samples weighted by the RMS is taken to be the new finger parameter. After rigid alignment, all data that has not been assigned to a hand can be used to initialize a new object (hand or tool).
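A minimal sketch of the finger-parameter sampling step follows; whether higher or lower RMS should receive more weight is not specified above, and this sketch assumes that samples with lower error are weighted more heavily.

    import numpy as np

    def update_finger_parameters(samples, rms_errors):
        # samples: candidate (pitch, yaw, bend) vectors for one finger;
        # rms_errors: alignment error for each sample. The new finger parameter
        # is a weighted mean of the samples (lower error -> larger weight here).
        samples = np.asarray(samples, dtype=float)
        weights = 1.0 / (np.asarray(rms_errors, dtype=float) + 1e-9)
        weights /= weights.sum()
        return weights @ samples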
In an implementation, predictive information can include collision information concerning two or more capsoloids. By means of illustration, several possible fits of predicted information to observation information can be removed from consideration based upon a determination that these potential solutions would result in collisions of capsoloids.
In an implementation, a relationship between neighboring capsoloids, each having one or more attributes (e.g., determined minima and/or maxima of intersection angles between capsoloids), can be determined. In an implementation, determining a relationship between a first capsoloid having a first set of attributes and a second capsoloid having a second set of attributes includes detecting and resolving conflicts between the first attributes and the second attributes. For example, a conflict can include a capsoloid having one type of angle value with a neighbor having a second type of angle value incompatible with the first type of angle value. Attempts to attach a capsoloid to a neighboring capsoloid having attributes such that the combination will exceed what is allowed in the observation information—or to pair incompatible angles, lengths, shapes, or other such attributes—can be removed from the predicted information without further consideration.
In an implementation, predictive information can be artificially constrained to capsoloids positioned in a subset of the observation information—thereby enabling creation of a “lean model”. For example, as illustrated in
In an implementation, a lean model can be associated with a full predictive model. The lean model (or topological information, or properties described above) can be extracted from the predictive model to form a constraint. Then, the constraint can be imposed on the predictive information thereby enabling the predictive information to be constrained in one or more of behavior, shape, total (system) energy, structure, orientation, compression, shear, torsion, other properties, and/or combinations thereof.
In an implementation, the observation information can include components reflecting portions of the control object which are occluded from view of the device (“occlusions” or “occluded components”). In one implementation, the predictive information can be “fit” to the observation information as described herein above with the additional constraint(s) that some total property of the predictive information (e.g., potential energy) be minimized or maximized (or driven to lower or higher value(s) through iteration or solution). Properties can be derived from nature, properties of the control object being viewed, others, and/or combinations thereof. In another implementation, a deformation of the predictive information can be allowed subject to an overall permitted value of compression, deformation, flexibility, others, and/or combinations thereof.
In one implementation, raw image information and a fast lookup table can be used to find a lookup region that gives constant-time computation of the closest point on the contour given a position. Fingertip positions are used to compute point(s) on the contour, from which it can then be determined whether the finger is extended or non-extended, according to some implementations. A signed distance function can be used to determine whether points lie outside or inside a hand region, in another implementation. An implementation includes checking to see if points are inside or outside the hand region.
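As a non-limiting sketch of the signed-distance test described above (assuming a sampled contour with outward-pointing normals; these inputs are illustrative assumptions):

    import numpy as np

    def signed_distance(point, contour_points, contour_normals):
        # Distance from a point to the closest contour sample, signed by the
        # outward contour normal at that sample (negative inside the hand
        # region, positive outside).
        point = np.asarray(point, dtype=float)
        contour_points = np.asarray(contour_points, dtype=float)
        contour_normals = np.asarray(contour_normals, dtype=float)
        d = np.linalg.norm(contour_points - point, axis=1)
        i = int(np.argmin(d))
        sign = np.sign(np.dot(point - contour_points[i], contour_normals[i]))
        return float(sign * d[i])

    def is_inside_hand(point, contour_points, contour_normals):
        return signed_distance(point, contour_points, contour_normals) < 0.0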
In another implementation, a variety of information types can be abstracted from the 3D solid model of a hand. For example, velocities of a portion of a hand (e.g., velocity of one or more fingers, and a relative motion of a portion of the hand), state (e.g., position, an orientation, and a location of a portion of the hand), pose (e.g., whether one or more fingers are extended or non-extended, one or more angles of bend for one or more fingers, a direction to which one or more fingers point, a configuration indicating a pinch, a grab, an outside pinch, and a pointing finger), and whether a tool or object is present in the hand can be abstracted in various implementations.
In one implementation, the predictive information including the 3D solid model is filtered by applying various constraints based on known (or inferred) physical properties of the system. For example, some solutions would place the object outside the field of view of the cameras, and such solutions can readily be rejected. As another example, in some implementations, the type of object being modeled is known (e.g., it can be known that the object is or is expected to be a human hand). Techniques for determining object type are described below; for now, it is noted that where the object type is known, properties of that object can be used to rule out instances of the 3D solid model where the geometry is inconsistent with objects of that type. For example, human hands have a certain range of sizes and expected eccentricities, and such ranges can be used to filter the solutions in a particular slice. These constraints can be represented in any suitable format, e.g., the 3D solid model, an ordered list of parameters based on such a model, etc. As another example, if it is assumed that the object being modeled is a particular type of object (e.g., a hand), a parameter value can be assumed based on typical dimensions for objects of that type (e.g., an average cross-sectional dimension of a palm or finger). An arbitrary assumption can also be used, and any assumption can be improved or refined through iterative analysis.
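By way of illustration, a simple filter of the kind described above might look as follows; the numeric ranges are placeholder assumptions, not values taken from the disclosure.

    def plausible_hand_slice(cross_section_width, eccentricity,
                             width_range=(0.01, 0.12), ecc_range=(1.0, 6.0)):
        # Reject candidate model slices whose geometry is inconsistent with the
        # known object type (here, a human hand); ranges are illustrative only.
        return (width_range[0] <= cross_section_width <= width_range[1]
                and ecc_range[0] <= eccentricity <= ecc_range[1])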
In some implementations, known topological information of a control object can also be used to filter (or further filter) the 3D solid model. For example, if the object is known to be a hand, constraints on the spatial relationship between various parts of the hand (e.g., fingers have a limited range of motion relative to each other and/or to the palm of the hand) as represented in a physical model or explicit set of constraint parameters can be used to constrain one iteration of the 3D solid model based on results from other iterations.
In some implementations, multiple 3D solid models can be constructed over time for a control object. It is likely that the “correct” solution (i.e., the 3D solid model that best corresponds to the actual position and/or pose of the object) will interpolate well with other iterations, while any “spurious” solutions (i.e., models that do not correspond to the actual position and/or pose of the object) will not. Incorrect or least correct solutions can be discarded in other implementations.
In one embodiment, a motion sensing and controller system provides for detecting that some variation(s) in one or more portions of interest of a user has occurred, for determining that an interaction with one or more machines corresponds to the variation(s), for determining if the interaction should occur, and, if so, for affecting the interaction. The Machine Sensory and Control System (MSCS) typically includes a portion detection system, a variation determination system, an interaction system and an application control system.
As
In one embodiment, the detection module 92 includes one or more capture device(s) 190A, 190B (e.g., devices sensitive to light or other electromagnetic radiation) that are controllable via the controller 96. The capture device(s) 190A, 190B can comprise individual or multiple arrays of image capture elements 190A (e.g., pixel arrays, CMOS or CCD photo sensor arrays, or other imaging arrays) or individual or arrays of photosensitive elements 190B (e.g., photodiodes, photo sensors, single detector arrays, multi-detector arrays, or other configurations of photo sensitive elements) or combinations thereof. Arrays of image capture device(s) 190C (of
While illustrated with reference to a particular embodiment in which control of emission module 91 and detection module 92 are co-located within a common controller 96, it should be understood that these functions will be separate in some embodiments, and/or incorporated into one or a plurality of elements comprising emission module 91 and/or detection module 92 in some embodiments. Controller 96 comprises control logic (hardware, software or combinations thereof) to conduct selective activation/de-activation of emitter(s) 180A, 180B (and/or control of active directing devices) in on-off or other activation states or combinations thereof to produce emissions of varying intensities in accordance with a scan pattern which can be directed to scan an area of interest 5. Controller 96 can comprise control logic (hardware, software or combinations thereof) to conduct selection, activation and control of capture device(s) 190A, 190B (and/or control of active directing devices) to capture images or otherwise sense differences in reflectance or other illumination. Signal processing module 94 determines whether captured images and/or sensed differences in reflectance and/or other sensor-perceptible phenomena indicate a possible presence of one or more objects of interest 98, including control objects 99; the presence and/or variations thereof can be used to control machines and/or other applications 95.
In various embodiments, the variation of one or more portions of interest of a user can correspond to a variation of one or more attributes (position, motion, appearance, surface patterns) of a user hand 99, finger(s), points of interest on the hand 99, facial portion 98, other control objects (e.g., styli, tools), and so on (or some combination thereof) that is detectable by, or directed at, but otherwise occurs independently of the operation of the machine sensory and control system. Thus, for example, the system is configurable to ‘observe’ ordinary user locomotion (e.g., motion, translation, expression, flexing, deformation, and so on), locomotion directed at controlling one or more machines (e.g., gesturing, intentionally system-directed facial contortion, etc.), and attributes thereof (e.g., rigidity, deformation, fingerprints, veins, pulse rates and/or other biometric parameters). In one embodiment, the system provides for detecting that some variation(s) in one or more portions of interest (e.g., fingers, fingertips, or other control surface portions) of a user has occurred, for determining that an interaction with one or more machines corresponds to the variation(s), for determining if the interaction should occur, and, if so, for at least one of initiating, conducting, continuing, discontinuing and/or modifying the interaction and/or a corresponding interaction.
For example and with reference to
A model management module 197 embodiment comprises a model refiner 197F to update one or more models 197B (or portions thereof) from sensory information (e.g., images, scans, other sensory-perceptible phenomenon) and environmental information (i.e., context, noise, etc.); enabling a model analyzer 197I to recognize object, position, motion and attribute information that might be useful in controlling a machine. Model refiner 197F employs an object library 197A to manage objects including one or more models 197B (i.e., of user portions (e.g., hand, face), other control objects (e.g., styli, tools)) or the like (see e.g., model 197B-1, 197B-2 of
One or more attributes 197-5 can define characteristics of a model subcomponent 197-3. Attributes can include e.g., attach points, neighbors, sizes (e.g., length, width, depth), rigidity, flexibility, torsion, zero or more degrees of freedom of motion with respect to one or more defined points, which can include endpoints for example, and other attributes defining a salient characteristic or property of a portion of control object 99 being modeled by predictive information 197B-1. In an embodiment, predictive information about the control object can include a model of the control object together with attributes defining the model and values of those attributes.
In an embodiment, observation information including observation of the control object can be compared against the model at one or more of periodically, randomly or substantially continuously (i.e., in real time). Observational information can include without limitation observed values of attributes of the control object corresponding to the attributes of one or more model subcomponents in the predictive information for the control object. In an embodiment, comparison of the model with the observation information provides an error indication. In an embodiment, an error indication can be computed by determining a closest distance between a first point A belonging to a set of points defining the virtual surface 194 and a second point B belonging to a model subcomponent 197-2 determined to correspond to the first point (e.g., nearest to the first point). In an embodiment, the error indication can be applied to the predictive information to correct the model to more closely conform to the observation information. In an embodiment, the error indication can be applied to the predictive information repeatedly until the error indication falls below a threshold, a measure of conformance with the observation information rises above a threshold, or for a fixed or variable number of times, or a fixed or variable number of times per time period, or combinations thereof.
In an embodiment and with reference to
In an embodiment, when the control object morphs, conforms, and/or translates, motion information reflecting such motion(s) is included into the observed information. Points in space can be recomputed based on the new observation information. The model subcomponents can be scaled, sized, selected, rotated, translated, moved, or otherwise re-ordered to enable portions of the model corresponding to the virtual surface(s) to conform within the set of points in space.
In an embodiment, motion(s) of the control object can be rigid transformations, in which case points on the virtual surface(s) remain at the same distance(s) from one another through the motion. Motion(s) can be non-rigid transformations, in which points on the virtual surface(s) can vary in distance(s) from one another during the motion. In an embodiment, observation information can be used to adjust (and/or recompute) predictive information, thereby enabling “tracking” of the control object. In embodiments, the control object can be tracked by determining whether a rigid transformation or a non-rigid transformation occurs. In an embodiment, when a rigid transformation occurs, a transformation matrix is applied to each point of the model uniformly. Otherwise, when a non-rigid transformation occurs, an error indication can be determined, and an error minimization technique such as described herein above can be applied.
In an embodiment, rigid transformations and/or non-rigid transformations can be composed. One example composition embodiment includes applying a rigid transformation to predictive information. Then an error indication can be determined, and an error minimization technique such as described herein above can be applied. In an embodiment, determining a transformation can include calculating a rotation matrix that provides a reduced RMSD (root mean squared deviation) between two paired sets of points. One embodiment can include using the Kabsch algorithm to produce a rotation matrix.
In an embodiment and by way of example, one or more force lines can be determined from one or more portions of a virtual surface.
Collisions
In an embodiment, predictive information can include collision information concerning two or more capsoloids. By means of illustration, several possible fits of predicted information to observed information can be removed from consideration based upon a determination that these potential solutions would result in collisions of capsoloids.
In an embodiment, a relationship between neighboring capsoloids, each having one or more attributes (e.g., determined minima and/or maxima of intersection angles between capsoloids), can be determined. In an embodiment, determining a relationship between a first capsoloid having a first set of attributes and a second capsoloid having a second set of attributes includes detecting and resolving conflicts between the first attributes and the second attributes. For example, a conflict can include a capsoloid having one type of angle value with a neighbor having a second type of angle value incompatible with the first type of angle value. Attempts to attach a capsoloid to a neighboring capsoloid having attributes such that the combination will exceed what is allowed in the observed information—or to pair incompatible angles, lengths, shapes, or other such attributes—can be removed from the predicted information without further consideration.
Lean Model
In an embodiment, predictive information can be artificially constrained to capsoloids positioned in a subset of the observed information—thereby enabling creation of a “lean model”. For example, as illustrated in
In an embodiment, a lean model can be associated with a full predictive model. The lean model (or topological information, or properties described above) can be extracted from the predictive model to form a constraint. Then, the constraint can be imposed on the predictive information thereby enabling the predictive information to be constrained in one or more of behavior, shape, total (system) energy, structure, orientation, compression, shear, torsion, other properties, and/or combinations thereof.
Occlusion and Clustering
In an embodiment, the observed can include components reflecting portions of the control object which are occluded from view of the device (“occlusions” or “occluded components”). In one embodiment, the predictive information can be “fit” to the observed as described herein above with the additional constraint(s) that some total property of the predictive information (e.g., potential energy) be minimized or maximized (or driven to lower or higher value(s) through iteration or solution). Properties can be derived from nature, properties of the control object being viewed, others, and/or combinations thereof. In another embodiment, as shown by
Friction
In an embodiment, a “friction constraint” is applied on the model 197B-1. For example, if fingers of a hand being modeled are close together (in position or orientation), corresponding portions of the model will have more “friction”. The more friction a model subcomponent has in the model, the less the subcomponent moves in response to new observed information. Accordingly the model is enabled to mimic the way portions of the hand that are physically close together move together, and move less overall.
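One non-limiting way to realize such a friction constraint is sketched below; the linear damping of the update is an illustrative assumption rather than the disclosed mechanism.

    import numpy as np

    def apply_friction(current_position, observed_position, friction):
        # friction in [0, 1]: the higher a subcomponent's friction, the less it
        # moves toward the newly observed position, so closely spaced fingers
        # (given higher friction) move together and move less overall.
        current = np.asarray(current_position, dtype=float)
        observed = np.asarray(observed_position, dtype=float)
        return current + (1.0 - friction) * (observed - current)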
In an embodiment and by way of example,
Image analysis can be achieved by various algorithms and/or mechanisms. For example,
As shown by
For example and according to one embodiment illustrated by
The ellipse equation (1) is solved for θ, subject to the constraints that: (1) (xC, yC) must lie on the centerline determined from the four tangents 195A, 195B, 195C, and 195D (i.e., centerline 189A of
A1x + B1y + D1 = 0
A2x + B2y + D2 = 0
A3x + B3y + D3 = 0
A4x + B4y + D4 = 0   (2)
Four column vectors r13, r23, r14 and r24 are obtained from the coefficients Ai, Bi and Di of equations (2) according to equations (3), in which the “\” operator denotes matrix left division, which is defined for a square matrix M and a column vector v such that M \ v=r, where r is the column vector that satisfies Mr=v:
Four-component vectors G and H are defined in equations (4) from the vectors of tangent coefficients A, B and D and scalar quantities p and q, which are defined using the column vectors r13, r23, r14 and r24 from equations (3).
c1 = (r13 + r24)/2
c2 = (r14 + r23)/2
δ1 = (c2)₁ − (c1)₁
δ2 = (c2)₂ − (c1)₂
p = δ1/δ2
q = (c1)₁ − (c1)₂*p
G = A*p + B
H = A*q + D   (4)
Six scalar quantities vA2, vAB, vB2, wA2, wAB, and wB2 are defined by equation (5) in terms of the components of vectors G and H of equation (4).
Using the parameters defined in equations (1)-(5), solving for θ is accomplished by solving the eighth-degree polynomial equation (6) for t, where the coefficients Qi (for i=0 to 8) are defined as shown in equations (7)-(15).
0 = Q8·t⁸ + Q7·t⁷ + Q6·t⁶ + Q5·t⁵ + Q4·t⁴ + Q3·t³ + Q2·t² + Q1·t + Q0   (6)
The parameters A1, B1, G1, H1, vA2, vAB, vB2, wA2, wAB, and wB2 used in equations (7)-(15) are defined as shown in equations (1)-(4). The parameter n is the assumed semi-major axis (in other words, a0). Once the real roots t are known, the possible values of θ are defined as θ=atan(t).
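For illustration, once the coefficients Q0 through Q8 have been computed from equations (7)-(15) (not reproduced here), the root-finding and recovery of θ can be sketched as:

    import numpy as np

    def solve_for_theta(Q):
        # Q = [Q8, Q7, ..., Q0]: coefficients of the eighth-degree polynomial (6).
        # Keep the real roots t and recover candidate orientations theta = atan(t).
        roots = np.roots(Q)
        real_t = roots[np.abs(roots.imag) < 1e-9].real
        return np.arctan(real_t)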
In this exemplary embodiment, equations (6)-(15) have at most three real roots; thus, for any four tangent lines, there are at most three possible ellipses that are tangent to all four lines and that satisfy the a=a0 constraint. (In some instances, there may be fewer than three real roots.) For each real root θ, the corresponding values of (xC, yC) and b can be readily determined. Depending on the particular inputs, zero or more solutions will be obtained; for example, in some instances, three solutions can be obtained for a typical configuration of tangents. Each solution is completely characterized by the parameters {θ, a=a0, b, (xC, yC)}. Alternatively, or additionally, a model builder 197C and model updater 197D provide functionality to define, build and/or customize model(s) 197B using one or more components in object library 197A. Once built, model refiner 197F updates and refines the model, bringing the predictive information of the model in line with observed information from the detection system 90A.
The model subcomponents 197-1, 197-2, 197-3, and 197-4 can be scaled, sized, selected, rotated, translated, moved, or otherwise re-ordered to enable portions of the model corresponding to the virtual surface(s) to conform within the points 193 in space. Model refiner 197F employs a variation detector 197G to substantially continuously determine differences between sensed information and predictive information and provide to model refiner 197F a variance useful to adjust the model 197B accordingly. Variation detector 197G and model refiner 197F are further enabled to correlate among model portions to preserve continuity with characteristic information of a corresponding object being modeled, continuity in motion, and/or continuity in deformation, conformation and/or torsional rotations.
An environmental filter 197H reduces extraneous noise in sensed information received from the detection system 90A using environmental information to eliminate extraneous elements from the sensory information. Environmental filter 197H employs contrast enhancement, subtraction of a difference image from an image, software filtering, and background subtraction (using background information provided by objects of interest determiner 198H (see below) to enable model refiner 197F to build, refine, manage and maintain model(s) 197B of objects of interest from which control inputs can be determined.
A model analyzer 197I determines that a reconstructed shape of a sensed object portion matches an object model in an object library; and interprets the reconstructed shape (and/or variations thereon) as user input. Model analyzer 197I provides output in the form of object, position, motion and attribute information to an interaction system 90C.
Again with reference to
A context determiner 198G and object of interest determiner 198H provide functionality to determine, from the object, position, motion and attribute information, objects of interest (e.g., control objects, or other objects to be modeled and analyzed) and objects not of interest (e.g., background), based upon a detected context. For example, when the context is determined to be an identification context, a human face will be determined to be an object of interest to the system and will be determined to be a control object. On the other hand, when the context is determined to be a fingertip control context, the fingertips will be determined to be object(s) of interest and will be determined to be control objects, whereas the user's face will be determined not to be an object of interest (i.e., background). Further, when the context is determined to be a stylus (or other tool) held in the fingers of the user, the tool tip will be determined to be an object of interest and a control object, whereas the user's fingertips might be determined not to be objects of interest (i.e., background). Background objects can be included in the environmental information provided to environmental filter 197H of model management module 197.
A virtual environment manager 198E provides creation, selection, modification and de-selection of one or more virtual constructs 198B (see
Further with reference to
A control module 199 embodiment comprises a command engine 199F to determine whether to issue command(s) and what command(s) to issue based upon the command information, related information and other information discernable from the object, position, motion and attribute information, as received from an interaction interpretation module 198. Command engine 199F employs command/control repository 199A (e.g., application commands, OS commands, commands to MSCS, misc. commands) and related information indicating context received from the interaction interpretation module 198 to determine one or more commands corresponding to the gestures, context, etc. indicated by the command information. For example, engagement gestures can be mapped to one or more controls, or a control-less screen location, of a presentation device associated with a machine under control. Controls can include embedded controls (e.g., sliders, buttons, and other control objects in an application) or environmental-level controls (e.g., windowing controls, scrolls within a window, and other controls affecting the control environment). In embodiments, controls may be displayed using 2D presentations (e.g., a cursor, cross-hairs, icon, graphical representation of the control object, or other displayable object) on display screens, presented in 3D forms using holography, projectors or other mechanisms for creating 3D presentations, made audible (e.g., mapped to sounds, or other mechanisms for conveying audible information) and/or made touchable via haptic techniques.
Further, an authorization engine 199G employs biometric profiles 199B (e.g., users, identification information, privileges, etc.) and biometric information received from the interaction interpretation module 198 to determine whether commands and/or controls determined by the command engine 199F are authorized. A command builder 199C and biometric profile builder 199D provide functionality to define, build and/or customize command/control repository 199A and biometric profiles 199B.
Selected authorized commands are provided to machine(s) under control (i.e., “client”) via interface layer 196. Commands/controls to the virtual environment (i.e., interaction control) are provided to virtual environment manager 198E. Commands/controls to the emission/detection systems (i.e., sensory control) are provided to emission module 91 and/or detection module 92 as appropriate.
In various embodiments and with reference to
As shown, computer system 900 comprises elements coupled via communication channels (e.g. bus 901) including one or more general or special purpose processors 902, such as a Pentium® or Power PC®, digital signal processor (“DSP”), or other processing. System 900 elements also include one or more input devices 903 (such as a mouse, keyboard, joystick, microphone, remote control unit, Non-tactile sensors 910, biometric or other sensors 93 of
System 900 elements also include a computer readable storage media reader 905 coupled to a computer readable storage medium 906, such as a storage/memory device or hard or removable storage/memory media; examples are further indicated separately as storage device 908 and non-transitory memory 909, which can include hard disk variants, floppy/compact disk variants, digital versatile disk (“DVD”) variants, smart cards, read only memory, random access memory, cache memory or others, in accordance with a particular application (e.g. see data store(s) 197A, 198A, 199A and 199B of
System 900 element implementations can include hardware, software, firmware or a suitable combination. When implemented in software (e.g. as an application program, object, downloadable, servlet, and so on, in whole or part), a system 900 element can be communicated transitionally or more persistently from local or remote storage to memory for execution, or another suitable mechanism can be utilized, and elements can be implemented in compiled, simulated, interpretive or other suitable forms. Input, intermediate or resulting data or functional elements can further reside more transitionally or more persistently in a storage media or memory, (e.g. storage device 908 or memory 909) in accordance with a particular application.
Certain potential interaction determination, virtual object selection, authorization issuances and other aspects enabled by input/output processors and other element embodiments disclosed herein can also be provided in a manner that enables a high degree of broad or even global applicability; these can also be suitably implemented at a lower hardware/software layer. Note, however, that aspects of such elements can also be more closely linked to a particular application type or machine, or might benefit from the use of mobile code, among other considerations; a more distributed or loosely coupled correspondence of such elements with OS processes might thus be more desirable in such cases.
At action 1002, a set of points in a three-dimensional (3D) sensory space are observed.
At action 1012, surface normal directions from the points are determined.
At action 1022, the points are clustered based on their surface normal directions and adjacency.
At action 1032, positions of segments of the predictive model are refined. In some embodiments, refining positions of segments of the predictive model further includes calculating an error indication by determining whether the points and points on the segments of the predictive model are within a threshold closest distance.
In other embodiments, refining positions of segments of the predictive model further includes calculating an error indication by pairing the points in the set with points on axes of the segments of the predictive model, wherein the points in the set lie on vectors that are normal to the axes and determining a reduced root mean squared deviation (RMSD) of distances between paired point sets.
In yet other embodiments, refining positions of segments of the predictive model further includes calculating an error indication by pairing the points in the set with points on the segments of the predictive model, wherein normal vectors to the points in the set are parallel to each other and determining a reduced root mean squared deviation (RMSD) of distances between bases of the normal vectors.
In some other embodiments, refining positions of segments of the predictive model further includes determining physical proximity between points in the set based on the matched clusters; based on the determined physical proximity, identifying co-located segments of the predictive model that change positions together; and refining positions of segments of the predictive model responsive to the co-located segments. In one embodiment, the co-located segments represent adjoining fingers of the hand. In another embodiment, the co-located segments represent subcomponents of a same finger.
At action 1042, a predictive model of a hand is accessed.
At action 1052, the clusters of the points to the segments are matched.
At action 1062, the matched clusters are used to refine the positions of the matched segments.
At action 1072, a reference vector is selected and a difference in angle between the surface normal directions from the points and the reference vector is determined. Further, a magnitude of the difference is used to cluster the points. In one embodiment, the reference vector is orthogonal to a field of view of a camera used to capture the points in an image. In another embodiment, the reference vector is along a longitudinal axis of the hand. In yet another embodiment, the reference vector is along a longitudinal axis of a portion of the hand.
This method and other implementations of the technology disclosed can include one or more of the following features and/or features described in connection with additional methods disclosed. In the interest of conciseness, the combinations of features disclosed in this application are not individually enumerated and are not repeated with each base set of features. The reader will understand how features identified in this section can readily be combined with sets of base features identified as implementations in sections of this application.
Other implementations can include a non-transitory computer readable storage medium storing instructions executable by a processor to perform any of the methods described above. Yet another implementation can include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above.
At action 1112, two or more alternative interpretations of movement from the first position to the second position are received.
At action 1122, entropy or extent of motion involved in the alternative interpretations is estimated.
At action 1132, an alternative interpretation with lower entropy or extent of motion than other interpretations is selected.
At action 1142, the selected interpretation is applied to predicting further positioning of the segment and of other segments of the predictive model from additional observations in the 3D sensory space.
This method and other implementations of the technology disclosed can include one or more of the following features and/or features described in connection with additional methods disclosed. In the interest of conciseness, the combinations of features disclosed in this application are not individually enumerated and are not repeated with each base set of features. The reader will understand how features identified in this section can readily be combined with sets of base features identified as implementations in sections of this application.
Other implementations can include a non-transitory computer readable storage medium storing instructions executable by a processor to perform any of the methods described above. Yet another implementation can include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the methods described above.
Reference in the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
While the invention has been described by way of example and in terms of the specific embodiments, it is to be understood that the invention is not limited to the disclosed embodiments. To the contrary, it is intended to cover various modifications and similar arrangements as would be apparent to those skilled in the art. Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.
This application is a continuation of U.S. patent application Ser. No. 17/308,903 entitled “PREDICTIVE INFORMATION FOR FREE SPACE GESTURE CONTROL AND COMMUNICATION”, filed on May 5, 2021, which is a continuation of U.S. patent application Ser. No. 16/695,136 entitled “IMPROVING PREDICTIVE INFORMATION FOR FREE SPACE GESTURE CONTROL AND COMMUNICATION”, filed on Nov. 25, 2019, which is a continuation of U.S. patent application Ser. No. 16/004,119 entitled “PREDICTIVE INFORMATION FOR FREE SPACE GESTURE CONTROL AND COMMUNICATION”, filed on Jun. 8, 2018, which is a continuation of U.S. patent application Ser. No. 14/530,690, entitled “PREDICTIVE INFORMATION FOR FREE SPACE GESTURE CONTROL AND COMMUNICATION”, filed on Oct. 31, 2014, which claims the benefit of U.S. Provisional Patent Application No. 61/898,462, entitled, “PREDICTIVE INFORMATION FOR FREE SPACE GESTURE CONTROL AND COMMUNICATION,” filed on Oct. 31, 2013. The non-provisional and provisional applications are hereby incorporated by reference for all purposes.
Number | Name | Date | Kind |
---|---|---|---|
2665041 | Maffucci | Jan 1954 | A |
3638989 | Sandquist | Feb 1972 | A |
3768022 | Lang et al. | Oct 1973 | A |
4175862 | DiMatteo et al. | Nov 1979 | A |
4876455 | Sanderson et al. | Oct 1989 | A |
4879659 | Bowlin et al. | Nov 1989 | A |
4893223 | Arnold | Jan 1990 | A |
5038258 | Koch et al. | Aug 1991 | A |
5134661 | Reinsch | Jul 1992 | A |
5282067 | Liu | Jan 1994 | A |
5434617 | Bianchi | Jul 1995 | A |
5454043 | Freeman | Sep 1995 | A |
5574511 | Yang et al. | Nov 1996 | A |
5581276 | Cipolla et al. | Dec 1996 | A |
5594469 | Freeman et al. | Jan 1997 | A |
5659475 | Brown | Aug 1997 | A |
5691737 | Ito et al. | Nov 1997 | A |
5742263 | Wang et al. | Apr 1998 | A |
5900863 | Numazaki | May 1999 | A |
5940538 | Spiegel et al. | Aug 1999 | A |
6002808 | Freeman | Dec 1999 | A |
6031161 | Baltenberger | Feb 2000 | A |
6031661 | Tanaami | Feb 2000 | A |
6072494 | Nguyen | Jun 2000 | A |
6075895 | Qiao et al. | Jun 2000 | A |
6147678 | Kumar et al. | Nov 2000 | A |
6154558 | Hsieh | Nov 2000 | A |
6181343 | Lyons | Jan 2001 | B1 |
6184326 | Razavi et al. | Feb 2001 | B1 |
6184926 | Khosravi et al. | Feb 2001 | B1 |
6195104 | Lyons | Feb 2001 | B1 |
6204852 | Kumar et al. | Mar 2001 | B1 |
6252598 | Segen | Jun 2001 | B1 |
6256400 | Takata et al. | Jul 2001 | B1 |
6263091 | Jain et al. | Jul 2001 | B1 |
6346933 | Lin | Feb 2002 | B1 |
6417970 | Travers et al. | Jul 2002 | B1 |
6463402 | Bennett et al. | Oct 2002 | B1 |
6492986 | Metaxas et al. | Dec 2002 | B1 |
6493041 | Hanko et al. | Dec 2002 | B1 |
6498628 | Wamura | Dec 2002 | B2 |
6578203 | Anderson, Jr. et al. | Jun 2003 | B1 |
6597801 | Cham et al. | Jul 2003 | B1 |
6603867 | Sugino et al. | Aug 2003 | B1 |
6661918 | Gordon et al. | Dec 2003 | B1 |
6674877 | Jojic et al. | Jan 2004 | B1 |
6702494 | Dumler et al. | Mar 2004 | B2 |
6734911 | Lyons | May 2004 | B1 |
6738424 | Allmen et al. | May 2004 | B1 |
6771294 | Pulli et al. | Aug 2004 | B1 |
6798628 | Macbeth | Sep 2004 | B1 |
6804654 | Kobylevsky et al. | Oct 2004 | B2 |
6804656 | Rosenfeld et al. | Oct 2004 | B1 |
6814656 | Rodriguez | Nov 2004 | B2 |
6819796 | Hong et al. | Nov 2004 | B2 |
6901170 | Terada et al. | May 2005 | B1 |
6919880 | Morrison et al. | Jul 2005 | B2 |
6950534 | Cohen et al. | Sep 2005 | B2 |
6993157 | Due et al. | Jan 2006 | B1 |
7152024 | Marschner et al. | Dec 2006 | B2 |
7213707 | Tubbs et al. | May 2007 | B2 |
7215828 | Luo | May 2007 | B2 |
7244233 | Krantz et al. | Jul 2007 | B2 |
7257237 | Luck et al. | Aug 2007 | B1 |
7259873 | Sikora et al. | Aug 2007 | B2 |
7308112 | Fujimura et al. | Dec 2007 | B2 |
7340077 | Gokturk et al. | Mar 2008 | B2 |
7472047 | Kramer et al. | Dec 2008 | B2 |
7483049 | Aman et al. | Jan 2009 | B2 |
7519223 | Dehlin et al. | Apr 2009 | B2 |
7532206 | Morrison et al. | May 2009 | B2 |
7536032 | Bell | May 2009 | B2 |
7542586 | Johnson | Jun 2009 | B2 |
7598942 | Underkoffler et al. | Oct 2009 | B2 |
7606417 | Steinberg et al. | Oct 2009 | B2 |
7646372 | Marks et al. | Jan 2010 | B2 |
7656372 | Sato et al. | Feb 2010 | B2 |
7665041 | Wilson et al. | Feb 2010 | B2 |
7692625 | Morrison et al. | Apr 2010 | B2 |
7769994 | Peles | Aug 2010 | B2 |
7831932 | Josephsoon et al. | Nov 2010 | B2 |
7840031 | Albertson et al. | Nov 2010 | B2 |
7861188 | Josephsoon et al. | Dec 2010 | B2 |
7940885 | Stanton et al. | May 2011 | B2 |
7948493 | Klefenz et al. | May 2011 | B2 |
7961934 | Thrun et al. | Jun 2011 | B2 |
7971156 | Albertson et al. | Jun 2011 | B2 |
7980885 | Gattwinkel et al. | Jul 2011 | B2 |
8005263 | Fujimura et al. | Aug 2011 | B2 |
8023698 | Niwa et al. | Sep 2011 | B2 |
8035624 | Bell et al. | Oct 2011 | B2 |
8045825 | Shimoyama et al. | Oct 2011 | B2 |
8064704 | Kim et al. | Nov 2011 | B2 |
8085339 | Marks | Dec 2011 | B2 |
8086971 | Radivojevic et al. | Dec 2011 | B2 |
8111239 | Pryor et al. | Feb 2012 | B2 |
8112719 | Hsu et al. | Feb 2012 | B2 |
8144233 | Fukuyama | Mar 2012 | B2 |
8185176 | Mangat et al. | May 2012 | B2 |
8213707 | Li et al. | Jul 2012 | B2 |
8218858 | Gu | Jul 2012 | B2 |
8229134 | Duraiswami et al. | Jul 2012 | B2 |
8235529 | Raffle et al. | Aug 2012 | B1 |
8244233 | Chang et al. | Aug 2012 | B2 |
8249345 | Wu et al. | Aug 2012 | B2 |
8253564 | Lee et al. | Aug 2012 | B2 |
8270669 | Aichi et al. | Sep 2012 | B2 |
8289162 | Mooring et al. | Oct 2012 | B2 |
8290208 | Kurtz et al. | Oct 2012 | B2 |
8304727 | Lee et al. | Nov 2012 | B2 |
8319832 | Nagata et al. | Nov 2012 | B2 |
8363010 | Nagata | Jan 2013 | B2 |
8395600 | Kawashima et al. | Mar 2013 | B2 |
8432377 | Newton | Apr 2013 | B2 |
8514221 | King et al. | Aug 2013 | B2 |
8553037 | Smith et al. | Oct 2013 | B2 |
8582809 | Halimeh et al. | Nov 2013 | B2 |
8593417 | Kawashima et al. | Nov 2013 | B2 |
8605202 | Muijs et al. | Dec 2013 | B2 |
8620024 | Huang et al. | Dec 2013 | B2 |
8631355 | Murillo et al. | Jan 2014 | B2 |
8659594 | Kim et al. | Feb 2014 | B2 |
8659658 | Vassigh et al. | Feb 2014 | B2 |
8693731 | Holz et al. | Apr 2014 | B2 |
8738523 | Sanchez et al. | May 2014 | B1 |
8744122 | Salgian et al. | Jun 2014 | B2 |
8786596 | House | Jul 2014 | B2 |
8817087 | Weng et al. | Aug 2014 | B2 |
8842084 | Andersson et al. | Sep 2014 | B2 |
8843857 | Berkes et al. | Sep 2014 | B2 |
8872914 | Gobush | Oct 2014 | B2 |
8878749 | Wu et al. | Nov 2014 | B1 |
8891868 | Ivanchenko | Nov 2014 | B1 |
8902224 | Wyeld | Dec 2014 | B2 |
8907982 | Zontrop et al. | Dec 2014 | B2 |
8922590 | Luckett, Jr. et al. | Dec 2014 | B1 |
8929609 | Padovani et al. | Jan 2015 | B2 |
8930852 | Chen et al. | Jan 2015 | B2 |
8942881 | Tobbs et al. | Jan 2015 | B2 |
8954340 | Sanchez et al. | Feb 2015 | B2 |
8957857 | Lee et al. | Feb 2015 | B2 |
9014414 | Katano et al. | Apr 2015 | B2 |
9056396 | Innell | Jun 2015 | B1 |
9070019 | Holz | Jun 2015 | B2 |
9119670 | Yang et al. | Sep 2015 | B2 |
9122354 | Sharma | Sep 2015 | B2 |
9124778 | Crabtree | Sep 2015 | B1 |
9135503 | Sundaresan et al. | Sep 2015 | B2 |
9305229 | DeLean et al. | Apr 2016 | B2 |
9459697 | Bedikian et al. | Oct 2016 | B2 |
9721383 | Horowitz et al. | Aug 2017 | B1 |
10846942 | Horowitz et al. | Nov 2020 | B1 |
11461966 | Horowitz et al. | Oct 2022 | B1 |
20010044858 | Rekimoto | Nov 2001 | A1 |
20010052985 | Ono | Dec 2001 | A1 |
20020008139 | Albertelli | Jan 2002 | A1 |
20020008211 | Kask | Jan 2002 | A1 |
20020041327 | Hildreth et al. | Apr 2002 | A1 |
20020080094 | Biocca et al. | Jun 2002 | A1 |
20020105484 | Navab et al. | Aug 2002 | A1 |
20030053658 | Pavlidis | Mar 2003 | A1 |
20030053659 | Pavlidis et al. | Mar 2003 | A1 |
20030081141 | Mazzapica | May 2003 | A1 |
20030123703 | Pavlidis et al. | Jul 2003 | A1 |
20030152289 | Luo | Aug 2003 | A1 |
20030202697 | Simard et al. | Oct 2003 | A1 |
20040103111 | Miller et al. | May 2004 | A1 |
20040125228 | Dougherty | Jul 2004 | A1 |
20040125984 | Ito et al. | Jul 2004 | A1 |
20040145809 | Brenner | Jul 2004 | A1 |
20040155877 | Tong et al. | Aug 2004 | A1 |
20040212725 | Raskar | Oct 2004 | A1 |
20050007673 | Chaoulov et al. | Jan 2005 | A1 |
20050068518 | Baney et al. | Mar 2005 | A1 |
20050094019 | Grosvenor et al. | May 2005 | A1 |
20050131607 | Breed | Jun 2005 | A1 |
20050156888 | Kie et al. | Jul 2005 | A1 |
20050168578 | Gobush | Aug 2005 | A1 |
20050236558 | Nabeshima et al. | Oct 2005 | A1 |
20050238201 | Shamaie | Oct 2005 | A1 |
20050271279 | Fujimura et al. | Dec 2005 | A1 |
20060017807 | Lee et al. | Jan 2006 | A1 |
20060028656 | Venkatesh et al. | Feb 2006 | A1 |
20060029296 | King et al. | Feb 2006 | A1 |
20060034545 | Mattes et al. | Feb 2006 | A1 |
20060050979 | Kawahara | Mar 2006 | A1 |
20060072105 | Wagner | Apr 2006 | A1 |
20060098899 | King et al. | May 2006 | A1 |
20060111878 | Pendyala et al. | May 2006 | A1 |
20060204040 | Freeman et al. | Sep 2006 | A1 |
20060210112 | Cohen et al. | Sep 2006 | A1 |
20060262421 | Matsumoto et al. | Nov 2006 | A1 |
20060290950 | Platt et al. | Dec 2006 | A1 |
20070014466 | Baldwin | Jan 2007 | A1 |
20070042346 | Weller | Feb 2007 | A1 |
20070086621 | Aggarwal et al. | Apr 2007 | A1 |
20070130547 | Boillot | Jun 2007 | A1 |
20070206719 | Suryanarayanan et al. | Sep 2007 | A1 |
20070230929 | Niwa et al. | Oct 2007 | A1 |
20070238956 | Haras et al. | Oct 2007 | A1 |
20080013826 | Hillis et al. | Jan 2008 | A1 |
20080019576 | Senftner et al. | Jan 2008 | A1 |
20080030429 | Hailpern et al. | Feb 2008 | A1 |
20080031492 | Lanz | Feb 2008 | A1 |
20080056752 | Denton et al. | Mar 2008 | A1 |
20080064954 | Adams et al. | Mar 2008 | A1 |
20080106637 | Nakao et al. | May 2008 | A1 |
20080106746 | Shpunt et al. | May 2008 | A1 |
20080110994 | Knowles et al. | May 2008 | A1 |
20080118091 | Serfaty et al. | May 2008 | A1 |
20080126937 | Pachet | May 2008 | A1 |
20080187175 | Kim et al. | Aug 2008 | A1 |
20080244468 | Nishihara et al. | Oct 2008 | A1 |
20080246759 | Summers | Oct 2008 | A1 |
20080273764 | Scholl | Nov 2008 | A1 |
20080278589 | Thorn | Nov 2008 | A1 |
20080291160 | Rabin | Nov 2008 | A1 |
20080304740 | Sun et al. | Dec 2008 | A1 |
20080319356 | Cain et al. | Dec 2008 | A1 |
20090002489 | Yang et al. | Jan 2009 | A1 |
20090093307 | Miyaki | Apr 2009 | A1 |
20090102840 | Li | Apr 2009 | A1 |
20090116742 | Nishihara | May 2009 | A1 |
20090122146 | Zalewski et al. | May 2009 | A1 |
20090153655 | Ke et al. | Jun 2009 | A1 |
20090203993 | Mangat et al. | Aug 2009 | A1 |
20090203994 | Mangat et al. | Aug 2009 | A1 |
20090217211 | Hildreth et al. | Aug 2009 | A1 |
20090257623 | Tang et al. | Oct 2009 | A1 |
20090274339 | Cohen et al. | Nov 2009 | A9 |
20090309710 | Kakinami | Dec 2009 | A1 |
20100013832 | Xiao et al. | Jan 2010 | A1 |
20100020078 | Shpunt | Jan 2010 | A1 |
20100023015 | Park | Jan 2010 | A1 |
20100026963 | Faulstich | Feb 2010 | A1 |
20100027845 | Kim et al. | Feb 2010 | A1 |
20100046842 | Conwell | Feb 2010 | A1 |
20100053164 | Mai et al. | Mar 2010 | A1 |
20100053209 | Rauch et al. | Mar 2010 | A1 |
20100053612 | Ou-Yang et al. | Mar 2010 | A1 |
20100058252 | Ko | Mar 2010 | A1 |
20100066737 | Liu | Mar 2010 | A1 |
20100066975 | Rehnstrom | Mar 2010 | A1 |
20100091110 | Hildreth | Apr 2010 | A1 |
20100118123 | Freedman et al. | May 2010 | A1 |
20100121189 | Ma et al. | May 2010 | A1 |
20100125815 | Wang et al. | May 2010 | A1 |
20100127995 | Rigazio et al. | May 2010 | A1 |
20100141762 | Siann et al. | Jun 2010 | A1 |
20100158372 | Kim et al. | Jun 2010 | A1 |
20100177929 | Kurtz et al. | Jul 2010 | A1 |
20100194863 | Opes et al. | Aug 2010 | A1 |
20100199232 | Mistry et al. | Aug 2010 | A1 |
20100201880 | Iwamura | Aug 2010 | A1 |
20100208942 | Porter et al. | Aug 2010 | A1 |
20100219934 | Matsumoto | Sep 2010 | A1 |
20100222102 | Rodriguez | Sep 2010 | A1 |
20100264833 | Van Endert et al. | Oct 2010 | A1 |
20100277411 | Yee et al. | Nov 2010 | A1 |
20100296698 | Lien et al. | Nov 2010 | A1 |
20100302357 | Hsu et al. | Dec 2010 | A1 |
20100303298 | Marks et al. | Dec 2010 | A1 |
20100306712 | Snook et al. | Dec 2010 | A1 |
20100309097 | Raviv et al. | Dec 2010 | A1 |
20100329509 | Fahn et al. | Dec 2010 | A1 |
20110007072 | Khan et al. | Jan 2011 | A1 |
20110025818 | Gallmeier et al. | Feb 2011 | A1 |
20110026765 | Ivanich et al. | Feb 2011 | A1 |
20110043806 | Guetta et al. | Feb 2011 | A1 |
20110057875 | Shigeta et al. | Mar 2011 | A1 |
20110066984 | Li | Mar 2011 | A1 |
20110080470 | Kuno et al. | Apr 2011 | A1 |
20110080490 | Clarkson et al. | Apr 2011 | A1 |
20110093820 | Zhang et al. | Apr 2011 | A1 |
20110107216 | Bi | May 2011 | A1 |
20110115486 | Frohlich et al. | May 2011 | A1 |
20110116684 | Coffman et al. | May 2011 | A1 |
20110119640 | Berkes et al. | May 2011 | A1 |
20110134112 | Koh et al. | Jun 2011 | A1 |
20110148875 | Kim et al. | Jun 2011 | A1 |
20110169726 | Holmdahl et al. | Jul 2011 | A1 |
20110173574 | Clavin et al. | Jul 2011 | A1 |
20110176146 | Alvarez Diez et al. | Jul 2011 | A1 |
20110181509 | Rautiainen et al. | Jul 2011 | A1 |
20110193778 | Lee et al. | Aug 2011 | A1 |
20110205151 | Newton et al. | Aug 2011 | A1 |
20110213664 | Osterhout et al. | Sep 2011 | A1 |
20110228978 | Chen et al. | Sep 2011 | A1 |
20110234840 | Klefenz et al. | Sep 2011 | A1 |
20110243451 | Oyaizu | Oct 2011 | A1 |
20110251896 | Impollonia et al. | Oct 2011 | A1 |
20110261178 | Lo et al. | Oct 2011 | A1 |
20110267259 | Tidemand et al. | Nov 2011 | A1 |
20110279397 | Rimon et al. | Nov 2011 | A1 |
20110286676 | El Dokor | Nov 2011 | A1 |
20110289455 | Reville et al. | Nov 2011 | A1 |
20110289456 | Reville et al. | Nov 2011 | A1 |
20110291925 | Srael et al. | Dec 2011 | A1 |
20110291988 | Bamji et al. | Dec 2011 | A1 |
20110296353 | Ahmed et al. | Dec 2011 | A1 |
20110299737 | Wang et al. | Dec 2011 | A1 |
20110304600 | Yoshida | Dec 2011 | A1 |
20110304650 | Campillo et al. | Dec 2011 | A1 |
20110310007 | Margolis et al. | Dec 2011 | A1 |
20110310220 | McEldowney | Dec 2011 | A1 |
20110314427 | Sundararajan | Dec 2011 | A1 |
20120038637 | Marks | Feb 2012 | A1 |
20120050157 | Atta et al. | Mar 2012 | A1 |
20120062736 | Xiong | Mar 2012 | A1 |
20120065499 | Chono | Mar 2012 | A1 |
20120068914 | Jacobsen et al. | Mar 2012 | A1 |
20120113316 | Ueta et al. | May 2012 | A1 |
20120159380 | Kocienda et al. | Jun 2012 | A1 |
20120163675 | Joo et al. | Jun 2012 | A1 |
20120194517 | Zadi et al. | Aug 2012 | A1 |
20120204133 | Guendelman et al. | Aug 2012 | A1 |
20120223959 | Engeling | Sep 2012 | A1 |
20120236288 | Stanley | Sep 2012 | A1 |
20120250936 | Holmgren | Oct 2012 | A1 |
20120270654 | Padovani et al. | Oct 2012 | A1 |
20120274781 | Shet et al. | Nov 2012 | A1 |
20120281873 | Brown et al. | Nov 2012 | A1 |
20120293667 | Baba et al. | Nov 2012 | A1 |
20120314030 | Datta et al. | Dec 2012 | A1 |
20130019204 | Kotler et al. | Jan 2013 | A1 |
20130038694 | Nichani et al. | Feb 2013 | A1 |
20130044951 | Cherng et al. | Feb 2013 | A1 |
20130050425 | Im et al. | Feb 2013 | A1 |
20130086531 | Sugita et al. | Apr 2013 | A1 |
20130097566 | Berglund | Apr 2013 | A1 |
20130120319 | Givon | May 2013 | A1 |
20130148852 | Partis et al. | Jun 2013 | A1 |
20130182079 | Holz | Jul 2013 | A1 |
20130182897 | Holz | Jul 2013 | A1 |
20130182902 | Holz | Jul 2013 | A1 |
20130187952 | Berkovich et al. | Jul 2013 | A1 |
20130191911 | Dellinger et al. | Jul 2013 | A1 |
20130208948 | Berkovich et al. | Aug 2013 | A1 |
20130222640 | Baek et al. | Aug 2013 | A1 |
20130239059 | Chen et al. | Sep 2013 | A1 |
20130241832 | Rimon et al. | Sep 2013 | A1 |
20130252691 | Alexopoulos | Sep 2013 | A1 |
20130257736 | Hou et al. | Oct 2013 | A1 |
20130258140 | Lipson et al. | Oct 2013 | A1 |
20130271397 | MacDougall et al. | Oct 2013 | A1 |
20130300831 | Mavromatis et al. | Nov 2013 | A1 |
20130307935 | Rappel et al. | Nov 2013 | A1 |
20130321265 | Bychkov et al. | Dec 2013 | A1 |
20140010441 | Shamaie | Jan 2014 | A1 |
20140064566 | Shreve et al. | Mar 2014 | A1 |
20140081521 | Frojdh et al. | Mar 2014 | A1 |
20140085203 | Kobayashi | Mar 2014 | A1 |
20140125775 | Holz | May 2014 | A1 |
20140125813 | Holz | May 2014 | A1 |
20140132738 | Ogura et al. | May 2014 | A1 |
20140139425 | Sakai | May 2014 | A1 |
20140139641 | Holz | May 2014 | A1 |
20140157135 | Lee et al. | Jun 2014 | A1 |
20140161311 | Kim | Jun 2014 | A1 |
20140168062 | Katz et al. | Jun 2014 | A1 |
20140176420 | Zhou et al. | Jun 2014 | A1 |
20140177913 | Holz | Jun 2014 | A1 |
20140189579 | Rimon et al. | Jul 2014 | A1 |
20140192024 | Holz | Jul 2014 | A1 |
20140201666 | Bedikian et al. | Jul 2014 | A1 |
20140201689 | Bedikian et al. | Jul 2014 | A1 |
20140222385 | Muenster et al. | Aug 2014 | A1 |
20140223385 | Ton et al. | Aug 2014 | A1 |
20140225826 | Juni | Aug 2014 | A1 |
20140240215 | Tremblay et al. | Aug 2014 | A1 |
20140240225 | Eilat | Aug 2014 | A1 |
20140248950 | Tosas Bautista | Sep 2014 | A1 |
20140253512 | Narikawa et al. | Sep 2014 | A1 |
20140253785 | Chan et al. | Sep 2014 | A1 |
20140267098 | Na et al. | Sep 2014 | A1 |
20140307920 | Holz | Oct 2014 | A1 |
20140344762 | Grasset et al. | Nov 2014 | A1 |
20140364209 | Perry | Dec 2014 | A1 |
20140364212 | Osman et al. | Dec 2014 | A1 |
20140369558 | Holz | Dec 2014 | A1 |
20140375547 | Katz et al. | Dec 2014 | A1 |
20150003673 | Fletcher | Jan 2015 | A1 |
20150009149 | Gharib et al. | Jan 2015 | A1 |
20150016777 | Abovitz et al. | Jan 2015 | A1 |
20150022447 | Hare et al. | Jan 2015 | A1 |
20150029091 | Nakashima et al. | Jan 2015 | A1 |
20150084864 | Geiss et al. | Mar 2015 | A1 |
20150097772 | Starner | Apr 2015 | A1 |
20150103004 | Cohen et al. | Apr 2015 | A1 |
20150115802 | Kuti et al. | Apr 2015 | A1 |
20150116214 | Grunnet-Jepsen et al. | Apr 2015 | A1 |
20150131859 | Kim et al. | May 2015 | A1 |
20150172539 | Neglur | Jun 2015 | A1 |
20150193669 | Gu et al. | Jul 2015 | A1 |
20150205358 | Lyren | Jul 2015 | A1 |
20150205400 | Hwang et al. | Jul 2015 | A1 |
20150206321 | Scavezze et al. | Jul 2015 | A1 |
20150227795 | Starner et al. | Aug 2015 | A1 |
20150234469 | Akiyoshi | Aug 2015 | A1 |
20150234569 | Hess | Aug 2015 | A1 |
20150258432 | Stafford et al. | Sep 2015 | A1 |
20150261291 | Mikhailov et al. | Sep 2015 | A1 |
20150304593 | Sakai | Oct 2015 | A1 |
20150323785 | Fukata et al. | Nov 2015 | A1 |
20160062573 | Dascola et al. | Mar 2016 | A1 |
20160086046 | Holz et al. | Mar 2016 | A1 |
20160093105 | Rimon et al. | Mar 2016 | A1 |
Number | Date | Country |
---|---|---|
1984236 | Jun 2007 | CN |
201332447 | Oct 2009 | CN |
101729808 | Jun 2010 | CN |
101930610 | Dec 2010 | CN |
101951474 | Jan 2011 | CN |
102053702 | May 2011 | CN |
201859393 | Jun 2011 | CN |
102201121 | Sep 2011 | CN |
102236412 | Nov 2011 | CN |
4201934 | Jul 1993 | DE |
10326035 | Jan 2005 | DE |
102007015495 | Oct 2007 | DE |
102007015497 | Jan 2014 | DE |
0999542 | May 2000 | EP |
1477924 | Nov 2004 | EP |
1837665 | Sep 2007 | EP |
2369443 | Sep 2011 | EP |
2419433 | Apr 2006 | GB |
2480140 | Nov 2011 | GB |
2519418 | Apr 2015 | GB |
H02236407 | Sep 1990 | JP |
H08261721 | Oct 1996 | JP |
H09259278 | Oct 1997 | JP |
2000023038 | Jan 2000 | JP |
2002133400 | May 2002 | JP |
2003256814 | Sep 2003 | JP |
2004246252 | Sep 2004 | JP |
2006019526 | Jan 2006 | JP |
2006259829 | Sep 2006 | JP |
2007272596 | Oct 2007 | JP |
2008227569 | Sep 2008 | JP |
2009031939 | Feb 2009 | JP |
2009037594 | Feb 2009 | JP |
2010060548 | Mar 2010 | JP |
2011010258 | Jan 2011 | JP |
2011501316 | Jan 2011 | JP |
2011065652 | Mar 2011 | JP |
2011107681 | Jun 2011 | JP |
4906960 | Mar 2012 | JP |
2012527145 | Nov 2012 | JP |
101092909 | Dec 2011 | KR |
2004114220 | Dec 2004 | NO |
2422878 | Jun 2011 | RU |
200844871 | Nov 2008 | TW |
9426057 | Nov 1994 | WO |
2006020846 | Feb 2006 | WO |
2007137093 | Nov 2007 | WO |
2010007662 | Jan 2010 | WO |
2010032268 | Mar 2010 | WO |
2010076622 | Jul 2010 | WO |
2010088035 | Aug 2010 | WO |
2010138741 | Dec 2010 | WO |
2011024193 | Mar 2011 | WO |
2011036618 | Mar 2011 | WO |
2011044680 | Apr 2011 | WO |
2011045789 | Apr 2011 | WO |
2011119154 | Sep 2011 | WO |
2012027422 | Mar 2012 | WO |
2013109608 | Jul 2013 | WO |
2013109609 | Jul 2013 | WO |
2014200589 | Dec 2014 | WO |
2014208087 | Dec 2014 | WO |
2015026707 | Feb 2015 | WO |
Entry |
---|
U.S. Appl. No. 14/530,690 - Office Action dated Dec. 13, 2017, 7 pgs (LEAP 1018-2). |
U.S. Appl. No. 14/474,068 - Office Action dated Sep. 12, 2016, 23 pages (LEAP 1086-2). |
U.S. Appl. No. 14/474,077 - Office Action dated Jul. 26, 2016, 30 pages (LEAP 1007-2). |
Ballan et al., “Lecture Notes Computer Science: 12th European Conference on Computer Vision: Motion Capture of Hands in Action Using Discriminative Salient Points”, Oct. 7-13, 2012 [retrieved Jul. 14, 2016], Springer Berlin Heidelberg, vol. 7577, pp. 640-653. Retrieved from the Internet: <http://link.springer.com/chapter/10.1007/978-3-642-33783-3_46>. |
Cui et al., “Applications of Evolutionary Computing: Vision-Based Hand Motion Capture Using Genetic Algorithm”, 2004 [retrieved Jul. 15, 2016], Springer Berlin Heidelberg, vol. 3005 of LNCS, pp. 289-300. Retrieved from the Internet: <http://link.springer.com/chapter/10.1007/978-3-540-24653-4_30>. |
Delamarre et al., “Finding Pose of Hand in Video Images: A Stereo-based Approach”, Apr. 14-16, 1998 [retrieved Jul. 15, 2016], Third IEEE Intern Conf on Auto Face and Gesture Recog, pp. 585-590. Retrieved from the Internet: <http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=671011&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D671011>. |
Gorce et al., “Model-Based 3D Hand Pose Estimation from Monocular Video”, Feb. 24, 2011 [retrieved Jul. 15, 2016], IEEE Transac Pattern Analysis and Machine Intell, vol. 33, Issue: 9, pp. 1793-1805. Retrieved from the Internet: <http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5719617&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5719617>. |
Guo et al., Featured Wand for 3D Interaction, Jul. 2-5, 2007 [retrieved Jul. 15, 2016], 2007 IEEE International Conference on Multimedia and Expo, pp. 2230-2233. Retrieved from the Internet: <http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=4285129&tag=1&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D4285129%26tag%3D1>. |
Melax et al., “Dynamics Based 3D Skeletal Hand Tracking”, May 29, 2013 [retrieved Jul. 14, 2016], Proceedings of Graphics Interface, 2013, pp. 63-70. Retrieved from the Internet: <http://dl.acm.org/citation.cfm?id=2532141>. |
Oka et al., “Real-Time Fingertip Tracking and Gesture Recognition”, Nov./Dec. 2002 [retrieved Jul. 15, 2016], IEEE Computer Graphics and Applications, vol. 22, Issue: 6, pp. 64-71. Retrieved from the Internet: <http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1046630&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D1046630>. |
Schlattmann et al., “Markerless 4 gestures 6 DOF real-time visual tracking of the human hand with automatic Initialization”, 2007 [retrieved Jul. 15, 2016], Eurographics 2007, vol. 26, No. 3, 10 pages, Retrieved from the Internet: <http://cg.cs.uni-bonn.de/aigaion2root/attachments/schlattmann-2007-markerless.pdf>. |
Wang et al., “Tracking of Deformable Hand in Real Time as Continuous Input for Gesture-based Interaction”, Jan. 28, 2007 [retrieved Jul. 15, 2016], Proceedings of the 12th International Conference on Intelligent User Interfaces, pp. 235-242. Retrieved from the Internet: <http://dl.acm.org/citation.cfm?id=1216338>. |
Zhao et al., “Combining Marker-Based Mocap and RGB-D Camera for Acquiring High-Fidelity Hand Motion Data”, Jul. 29, 2012 [retrieved Jul. 15, 2016], Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pp. 33-42, Retrieved from the Internet: < http://dl.acm.org/citation.cfm?id=2422363>. |
PCT/US2013/021709 - International Search Report and Written Opinion dated Sep. 12, 2013, 22 pages (Leap 1006-4WO). |
PCT/US2013/021713 - International Search Report and Written Opinion dated Sep. 11, 2013, 7 pages (LEAP 1011-3 WO). |
U.S. Appl. No. 13/742,845 - Office Action dated Jul. 22, 2013, 21 pages (LEAP 1011-1). |
U.S. Appl. No. 13/742,953 - Office Action dated Jun. 14, 2013, 13 pages (LEAP 1006-8). |
U.S. Appl. No. 13/742,953 - Notice of Allowance dated Nov. 4, 2013, 14 pages (LEAP 1006-8). |
PCT/US2013/021709 - International Preliminary Report on Patentability dated Jul. 22, 2014, 22 pages (WO 2013/109608 - LEAP 1006-4). |
U.S. Appl. No. 13/414,485 - Office Action dated May 19, 2014, 16 pages (LEAP 1006-7). |
U.S. Appl. No. 13/414,485 - Final Office Action dated Feb. 12, 2015, 30 pages (LEAP 1006-7). |
U.S. Appl. No. 14/106,148 - Office Action dated Jul. 6, 2015, 14 pages (LEAP 1011-2). |
PCT/US2013/069231 - International Search Report and Written Opinion mailed Mar. 13, 2014, 7 pages (Leap 1003-3WO). |
U.S. Appl. No. 13/744,810 - Office Action dated Jun. 7, 2013, 15 pages (LEAP 1003-2). |
U.S. Appl. No. 13/744,810 - Final Office Action dated Dec. 16, 2013, 18 pages (LEAP 1003-2). |
PCT/US2013/069231 - International Preliminary Report with Written Opinion dated May 12, 2015, 8 pages (Leap 1003-3WO). |
U.S. Appl. No. 14/250,758 - Office Action dated Jul. 6, 2015, 8 pages (LEAP 1076-2). |
U.S. Appl. No. 13/414,485 - Office Action dated Jul. 30, 2015, 22 pages (LEAP 1006-7). |
U.S. Appl. No. 14/106,148 - Notice of Allowance dated Dec. 2, 2015, 41 pages (LEAP 1011-2). |
CN 2013800122765 - Office Action dated Nov. 2, 2015, 17 pages (Leap 1011-5CN). |
U.S. Appl. No. 14/959,880 - Notice of Allowance dated Mar. 2, 2016, 12 pages (LEAP 1011-7). |
U.S. Appl. No. 14/250,758—Final Office Action dated Mar. 10, 2016, 10 pages. |
U.S. Appl. No. 13/414,485—Office Action dated Apr. 21, 2016, 24 pages. |
U.S. Appl. No. 14/710,512—Notice of Allowance dated Apr. 28, 2016, 25 pages. |
U.S. Appl. No. 14/959,891—Office Action dated Apr. 11, 2016, 8 pages. |
U.S. Appl. No. 14/250,758—Response to Final Office Action dated Mar. 10, 2016 filed May 5, 2016, 12 pages. |
U.S. Appl. No. 14/106,148—Response to Office Action dated Jul. 6, 2015 filed Nov. 6, 2015, 41 pages. |
U.S. Appl. No. 14/959,880—Notice of Allowance dated Jul. 12, 2016, 8 pages. |
U.S. Appl. No. 14/106,148—Notice of Allowance dated Jul. 20, 2016, 30 pages. |
U.S. Appl. No. 14/959,891—Notice of Allowance dated Jul. 28, 2016, 19 pages. |
JP 2014-552391—First Office Action dated Dec. 9, 2014, 6 pages. |
U.S. Appl. No. 13/742,845—Response to Office Action dated Jul. 22, 2013 filed Sep. 26, 2013, 7 pages. |
U.S. Appl. No. 14/959,891—Response to Office Action dated Apr. 11, 2016 filed Jun. 8, 2016, 25 pages. |
DE 11 2013 000 590.5—First Office Action dated Nov. 5, 2014, 7 pages. |
DE 11 2013 000 590.5—Response to First Office Action dated Nov. 5, 2014 filed Apr. 24, 2015, 1 page. |
DE 11 2013 000 590.5—Second Office Action dated Apr. 29, 2015, 7 pages. |
DE 11 2013 000 590.5—Response to Second Office Action dated Apr. 29, 2015 filed Sep. 16, 2015, 11 pages. |
DE 11 2013 000 590.5—Third Office Action dated Sep. 28, 2015, 4 pages. |
DE 11 2013 000 590.5—Response to Third Office Action dated Sep. 28, 2015 filed Dec. 14, 2015, 64 pages. |
DE 11 2013 000 590.5—Notice of Allowance dated Jan. 18, 2016, 8 pages. |
CN 2013800122765—Response to First Office Action dated Nov. 2, 2015 filed May 14, 2016, 14 pages. |
JP 2014-552391—Response to First Office Action dated Dec. 9, 2014 filed Jun. 8, 2016, 9 pages. |
JP 2014-552391—Second Office Action dated Jul. 7, 2015, 7 pages. |
JP 2014-552391—Response to Second Office Action dated Jul. 7, 2015 filed Dec. 25, 2015, 4 pages. |
JP 2014-552391—Third Office Action dated Jan. 26, 2016, 5 pages. |
CN 2013800122765—Second Office Action dated Jul. 27, 2016, 6 pages. |
U.S. Appl. No. 14/250,758—Office Action dated Sep. 8, 2016, 9 pages. |
U.S. Appl. No. 14/710,499—Notice of Allowance dated Sep. 12, 2016, 28 pages. |
U.S. Appl. No. 14/710,499—Office Action dated Apr. 14, 2016, 30 pages. |
U.S. Appl. No. 14/710,499—Response to Office Action dated Apr. 14, 2016 filed Jul. 14, 2016, 37 pages. |
CN 2013800122765—Response to Second Office Action dated Jul. 27, 2016 filed Oct. 11, 2016, 3 pages. |
JP 2010-060548 A—Japanese Patent with English Abstract filed Mar. 18, 2010, 19 pages. |
JP 2011-107681 A—Japanese Patent with English Abstract filed Jun. 2, 2011, 16 pages. |
JP 2012-527145 A—Japanese Patent with English Abstract filed Nov. 1, 2012, 30 pages. |
U.S. Appl. No. 14/474,077—Office Action dated Sep. 8, 2017, 16 pages. |
De La Gorce et al., “Model-Based 3D Hand Pose Estimation from Monocular Video”, Feb. 24, 2011 [retrieved Jul. 15, 2016], IEEE Transac Pattern Analysis and Machine Intell, vol. 33, Issue: 9, pp. 1793-1805. Retrieved from the Internet: <http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5719617&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5719617>. |
Stenger et al., “Model-Based 3D Tracking of an Articulated Hand”, Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), vol. 2, IEEE, 2001, pp. 1-6. |
U.S. Appl. No. 14/474,068—Notice of Allowance dated Jan. 25, 2017, 15 pages. |
U.S. Appl. No. 14/712,699—Office Action dated Nov. 7, 2016, 17 pages. |
U.S. Appl. No. 14/712,699—Response to Office Action dated Nov. 7, 2016 filed Mar. 7, 2017, 9 pages. |
U.S. Appl. No. 14/712,699—Notice of Allowance dated Apr. 24, 2017, 8 pages. |
U.S. Appl. No. 14/474,077—Response to Office Action dated Mar. 14, 2017 filed Jun. 9, 2017, 12 pages. |
U.S. Appl. No. 14/474,068—Response to Office Action dated Sep. 12, 2016 filed Dec. 12, 2016, 9 pages. |
U.S. Appl. No. 14/474,077—Response to Office Action dated Jul. 26, 2016 filed Dec. 1, 2016, 10 pages. |
U.S. Appl. No. 14/530,690—Notice of Allowance dated Feb. 12, 2018, 51 pgs. |
U.S. Appl. No. 14/530,690—Response to Office Action dated Dec. 13, 2017, filed Dec. 19, 2017, 8 pgs. |
U.S. Appl. No. 16/004,119—Office Action dated Feb. 25, 2019, 8 pages. |
U.S. Appl. No. 16/004,119—Response to Office Action dated Feb. 25, 2019, filed Jun. 24, 2019, 7 pages. |
U.S. Appl. No. 16/004,119—Notice of Allowance dated Jul. 25, 2019, 5 pages. |
U.S. Appl. No. 16/695,136—Office Action dated Sep. 16, 2020, 6 pages. |
U.S. Appl. No. 16/695,136—Response to Office Action dated Sep. 16, 2020, filed Dec. 18, 2020, 7 pages. |
U.S. Appl. No. 16/695,136—Notice of Allowance dated Jan. 13, 2021, 52 pages. |
U.S. Appl. No. 17/308,903—Notice of Allowance dated Jun. 15, 2022, 9 pages. |
U.S. Appl. No. 17/308,903—Notice of Allowance dated Sep. 28, 2022, 61 pages. |
U.S. Appl. No. 14/530,690—Office Action dated Dec. 13, 2017, 7 pgs. |
U.S. Appl. No. 14/474,068—Office Action dated Sep. 12, 2016, 23 pages. |
U.S. Appl. No. 14/474,077—Office Action dated Jul. 26, 2016, 30 pages. |
Delamarre et al., “Finding Pose of Hand in Video Images: A Stereo-based Approach”, Apr. 14-16, 1998 [retrieved Jul. 15, 2016], Third IEEE Intern Conf on Auto Face and Gesture Recog, pp. 585-590. Retrieved from the Internet: <http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=671011&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D671011>. |
Gorce et al., “Model-Based 3D Hand Pose Estimation from Monocular Video”, Feb. 24, 2011 [retrieved Jul. 15, 2016], IEEE Transac Pattern Analysis and Machine Intell, vol. 33, Issue: 9, pp. 1793-1805. Retrieved from the Internet: <http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5719617&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5719617>. |
Guo et al., Featured Wand for 3D Interaction, Jul. 2-5, 2007 [retrieved Jul. 15, 2016], 2007 IEEE International Conference on Multimedia and Expo, pp. 2230-2233. Retrieved from the Internet: <http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=4285129&tag=1&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D4285129%26tag%3D1>. |
Melax et al., “Dynamics Based 3D Skeletal Hand Tracking”, May 29, 2013 [retrieved Jul. 14, 2016], Proceedings of Graphics Interface, 2013, pp. 63-70. Retrieved from the Internet: <http://dl.acm.org/citation.cfm?id=2532141>. |
Oka et al., “Real-Time Fingertip Tracking and Gesture Recognition”, Nov./Dec. 2002 [retrieved Jul. 15, 2016], IEEE Computer Graphics and Applications, vol. 22, Issue: 6, pp. 64-71. Retrieved from the Internet: <http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1046630&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D1046630>. |
Wang et al., “Tracking of Deformable Hand in Real Time as Continuous Input for Gesture-based Interaction”, Jan. 28, 2007 [retrieved Jul. 15, 2016], Proceedings of the 12th International Conference on Intelligent User Interfaces, pp. 235-242. Retrieved from the Internet: <http://dl.acm.org/citation.cfm?id=1216338>. |
Zhao et al., “Combining Marker-Based Mocap and RGB-D Camera for Acquiring High-Fidelity Hand Motion Data”, Jul. 29, 2012 [retrieved Jul. 15, 2016], Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pp. 33-42. Retrieved from the Internet: <http://dl.acm.org/citation.cfm?id=2422363>. |
PCT/US2013/021709—International Search Report and Written Opinion dated Sep. 12, 2013, 22 pages. |
PCT/US2013/021713—International Search Report and Written Opinion dated Sep. 11, 2013, 7 pages. |
U.S. Appl. No. 13/742,845—Office Action dated Jul. 22, 2013, 21 pages. |
U.S. Appl. No. 13/742,953—Office Action dated Jun. 14, 2013, 13 pages. |
U.S. Appl. No. 13/742,953—Notice of Allowance dated Nov. 4, 2013, 14 pages. |
PCT/US2013/021709—International Preliminary Report on Patentability dated Jul. 22, 2014, 22 pages (WO 2013/109608). |
U.S. Appl. No. 13/414,485—Office Action dated May 19, 2014, 16 pages. |
U.S. Appl. No. 13/414,485—Final Office Action dated Feb. 12, 2015, 30 pages. |
U.S. Appl. No. 14/106,148—Office Action dated Jul. 6, 2015, 14 pages. |
PCT/US2013/069231—International Search Report and Written Opinion dated Mar. 13, 2014, 7 pages. |
U.S. Appl. No. 13/744,810—Office Action dated Jun. 7, 2013, 15 pages. |
U.S. Appl. No. 13/744,810—Final Office Action dated Dec. 16, 2013, 18 pages. |
PCT/US2013/069231—International Preliminary Report with Written Opinion dated May 12, 2015, 8 pages. |
U.S. Appl. No. 14/250,758—Office Action dated Jul. 6, 2015, 8 pages. |
U.S. Appl. No. 13/414,485—Office Action dated Jul. 30, 2015, 22 pages. |
U.S. Appl. No. 14/106,148—Notice of Allowance dated Dec. 2, 2015, 41 pages. |
CN 2013800122765—Office Action dated Nov. 2, 2015, 17 pages. |
U.S. Appl. No. 14/959,880—Notice of Allowance dated Mar. 2, 2016, 12 pages. |
PCT/US2014/028265, Application (Determining Positional Information for an Object in Space), May 9, 2014, 117 pages. |
PCT/US2014/028265, International Search Report and Written Opinion, dated Jan. 7, 2015, 15 pages. |
PCT/US2013/021709, International Preliminary Report on Patentability and Written Opinion, dated Sep. 12, 2013, 22 pages (WO 2013/109608). |
PCT/US2013/021713—International Preliminary Report on Patentability dated Jul. 22, 2014, 13 pages, (WO 2013/109609). |
PCT/US2014/013012—International Search Report and Written Opinion dated May 14, 2014, published as WO 2014116991, 12 pages. |
Yamamoto et al., “A Study for Vision Based Data Glove Considering Hidden Fingertip with Self-Occlusion”, Aug. 8-10, 2012 [retrieved Mar. 16, 2018], 2012 13th ACIS Int. Conf on Soft Eng, AI, Network and Parallel & Dist Computing, pp. 315-320. Retrieved from the Internet: <http://ieeexplore.ieee.org/abstract/document/6299298/>. |
Hong et al., “Variable structure multiple model for articulated human motion tracking from monocular video sequences”, May 2012 [retrieved Mar. 3, 2018], Science China Information Science, vol. 55, Issue 5, pp. 1138-1150. Retrieved from the Internet: <https://link.springer.com/article/10.1007/s11432-011-4529-8>. |
Calinon et al., A probabilistic approach based on dynamical systems to learn and reproduce gestures by imitation, Dec. 19, 2011 [retrieved Mar. 16, 2018], pp. 1-12. Retrieved from the Internet: <https://web.archive.org/web/20111219151051/http://infoscience.epfl.ch/record/147286/files/CalinonEtAIRAM2010.pdf?version=6>. |
Hamer et al., “Data-Driven Animation of Hand-Object Interactions”, Mar. 21-25, 2011, Face and Gesture 2011, pp. 360-367. [retrieved on Jun. 19, 2019], Retrieved from the internet <https://ieeexplore.ieee.org/abstract/document/5771426>. |
Shimada et al., “Hand Gesture Estimation and Model Refinement using Monocular Camera-Ambiguity Limitation by Inequality Constraints”, Apr. 14-16, 1998, Proceedings 3rd IEEE Inter Confer Auto Face and Gesture Recog, pp. 1-6. [retrieved on Jun. 19, 2019], Retrieved from the internet <https://ieeexplore.ieee.org/abstract/document/670960>. |
Delamarre et al., 3D Articulated Models and Multiview Tracking with Physical Forces, Mar. 2001 [retrieved Jun. 12, 2022], Computer Vision and Image Understanding, vol. 81, Issue 3, pp. 328-357. Retrieved: https://www.sciencedirect.com/science/article/pii/S1077314200908920 (Year: 2001). |
Kim et al., RetroDepth: 3D Silhouette Sensing for High-Precision Input On and Above Physical Surfaces, Apr. 26, 2014 [retrieved Jun. 12, 2022], CHI '14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1377-1386. Retrieved: https://dl.acm.org/doi/abs/10.1145/2556288.2557336 (Year: 2014). |
PCT/US2013/021709, International Preliminary Report on Patentability and Written Opinion, dated Jul. 22, 2014, 22 pages. |
Number | Date | Country | |
---|---|---|---|
20230169236 A1 | Jun 2023 | US |
Number | Date | Country | |
---|---|---|---|
61898462 | Oct 2013 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 17308903 | May 2021 | US |
Child | 18161811 | US | |
Parent | 16695136 | Nov 2019 | US |
Child | 17308903 | US | |
Parent | 16004119 | Jun 2018 | US |
Child | 16695136 | US | |
Parent | 14530690 | Oct 2014 | US |
Child | 16004119 | US |