Implementations relate generally to machine-user interfaces, and more specifically to the interpretation of free-space user movements as control inputs.
Current computer systems typically include a graphic user interface that can be navigated by a cursor, i.e., a graphic element displayed on the screen and movable relative to other screen content, and which serves to indicate a position on the screen. The cursor is usually controlled by the user via a computer mouse or touch pad. In some systems, the screen itself doubles as an input device, allowing the user to select and manipulate graphic user interface components by touching the screen where they are located. While touch can be convenient and relatively intuitive for many users, it offers limited accuracy: a fingertip can easily cover multiple links on a crowded display, leading to erroneous selection. Touch is also unforgiving in that it requires the user's motions to be confined to specific areas of space; for example, if the user's hands drift merely one key-width to the right or left while typing, nonsense appears on the screen.
Mice, touch pads, and touch screens can be cumbersome and inconvenient to use. Touch pads and touch screens require the user to be in close physical proximity to the pad (which is often integrated into a keyboard) or screen so as to be able to reach them, which significantly restricts users' range of motion while providing input to the system. Touch is, moreover, not always reliably detected, sometimes necessitating repeated motions across the pad or screen to effect the input. Mice facilitate user input at some distance from the computer and screen (determined by the length of the connection cable or the range of the wireless connection between computer and mouse), but require a flat surface with suitable surface properties, or even a special mouse pad, to function properly. Furthermore, prolonged use of a mouse, in particular if it is positioned sub-optimally relative to the user, can result in discomfort or even pain.
Accordingly, alternative input mechanisms that provide users with the advantages of touch-based controls while freeing them from the many disadvantages of touch-based control are highly desirable.
Aspects of the systems and methods described herein provide for improved machine interface and/or control by interpreting the motions (and/or position and configuration) of one or more control objects or portions thereof relative to one or more virtual control constructs defined (e.g., programmatically) in free space disposed at least partially within a field of view of an image-capture device. In implementations, the position, orientation, and/or motion of control object(s) (e.g., a user's finger(s), thumb, etc.; a suitable hand-held pointing device such as a stylus, wand, or some other control object; portions and/or combinations thereof) are tracked relative to virtual control surface(s) to facilitate determining whether an engagement gesture has occurred. Engagement gestures can include engaging with a control (e.g., selecting a button or switch), disengaging from a control (e.g., releasing a button or switch), motions that do not involve engagement with any control (e.g., motion that is tracked by the system, possibly followed by a cursor, and/or a single object in an application or the like), environmental interactions (i.e., gestures to direct an environment rather than a specific control, such as scroll up/down), special-purpose gestures (e.g., brighten/darken screen, volume control, etc.), as well as others or combinations thereof.
Engagement gestures can be mapped to one or more controls, or a control-less screen location, of a display device associated with the machine under control. Implementations provide for mapping of movements in three-dimensional (3D) space conveying control and/or other information to zero, one, or more controls. Controls can include embedded controls (e.g., sliders, buttons, and other control objects in an application), or environmental-level controls (e.g., windowing controls, scrolls within a window, and other controls affecting the control environment). In implementations, controls can be displayable using two-dimensional (2D) presentations (e.g., a traditional cursor symbol, cross-hairs, icon, graphical representation of the control object, or other displayable object) on, e.g., one or more display screens, and/or 3D presentations using holography, projectors, or other mechanisms for creating 3D presentations. Presentations can also be audible (e.g., mapped to sounds, or other mechanisms for conveying audible information) and/or haptic.
In an implementation, determining whether motion information defines an engagement gesture can include finding an intersection (also referred to as a contact, pierce, or a “virtual touch”) of motion of a control object with a virtual control surface, whether actually detected or determined to be imminent; dis-intersection (also referred to as a “pull back” or “withdrawal”) of the control object with a virtual control surface; a non-intersection—i.e., motion relative to a virtual control surface (e.g., wave of a hand approximately parallel to the virtual surface to “erase” a virtual chalk board); or other types of identified motions relative to the virtual control surface suited to defining gestures conveying information to the machine. In an implementation and by way of example, one or more virtual control constructs can be defined computationally (e.g., programmatically using a computer or other intelligent machinery) based upon one or more geometric constructs to facilitate determining occurrence of engagement gestures from information about one or more control objects (e.g., hand, tool, combinations thereof) captured using imaging systems, scanning systems, or combinations thereof. Virtual control constructs in an implementation can include virtual surface constructs, virtual linear or curvilinear constructs, virtual point constructs, virtual solid constructs, and complex virtual constructs comprising combinations thereof. Virtual surface constructs can comprise one or more surfaces, e.g., a plane, curved open surface, closed surface, bounded open surface, or generally any multi-dimensional virtual surface definable in two or three dimensions. Virtual linear or curvilinear constructs can comprise any one-dimensional virtual line, curve, line segment or curve segment definable in one, two, or three dimensions. Virtual point constructs can comprise any zero-dimensional virtual point definable in one, two, or three dimensions. Virtual solids can comprise one or more solids, e.g., spheres, cylinders, cubes, or generally any three-dimensional virtual solid definable in three dimensions.
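By way of a non-limiting illustration, the following Python sketch shows one way a planar virtual control construct and the intersection, dis-intersection, and non-intersection cases described above might be represented; the class and function names, the coordinate convention (screen toward smaller z), and the example values are assumptions introduced for illustration only, not a definitive implementation.

```python
# Illustrative sketch only: a planar virtual construct and a classifier that labels
# a control-object tip's motion relative to it as an intersection ("virtual touch"),
# a dis-intersection ("withdrawal"), or a non-intersection (motion alongside it).
import numpy as np

class VirtualPlane:
    def __init__(self, point, normal):
        self.point = np.asarray(point, dtype=float)       # any point on the plane
        self.normal = np.asarray(normal, dtype=float)
        self.normal /= np.linalg.norm(self.normal)        # unit normal

    def signed_distance(self, tip):
        # Positive on the user's side of the plane, negative once the plane is pierced.
        return float(np.dot(np.asarray(tip, dtype=float) - self.point, self.normal))

def classify_motion(plane, prev_tip, curr_tip):
    d_prev = plane.signed_distance(prev_tip)
    d_curr = plane.signed_distance(curr_tip)
    if d_prev > 0 >= d_curr:
        return "intersection"      # virtual touch / pierce
    if d_prev <= 0 < d_curr:
        return "dis-intersection"  # pull back / withdrawal
    return "non-intersection"      # e.g., a wave roughly parallel to the surface

# Example: a plane 0.1 m in front of the screen (normal pointing toward the user, +z).
plane = VirtualPlane(point=[0.0, 0.0, 0.1], normal=[0.0, 0.0, 1.0])
print(classify_motion(plane, prev_tip=[0.0, 0.0, 0.15], curr_tip=[0.0, 0.0, 0.08]))
```

Analogous sketches for virtual point, linear/curvilinear, or solid constructs would substitute the appropriate distance computation for the plane's signed distance.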
In an implementation, an engagement target can be defined using one or more virtual construct(s) coupled with a virtual control (e.g., slider, button, rotatable knob, or any graphical user interface component) for presentation to user(s) by a presentation system (e.g., displays, 3D projections, holographic presentation devices, non-visual presentation systems such as haptics, audio, and the like, any other devices for presenting information to users, or combinations thereof). Coupling a virtual control with a virtual construct enables the control object to “aim” for, or move relative to, the virtual control—and therefore the virtual control construct. Engagement targets in an implementation can include engagement volumes, engagement surfaces, engagement lines, engagement points, or the like, as well as complex engagement targets comprising combinations thereof. An engagement target can be associated with an application or non-application (e.g., OS, systems software, etc.) so that virtual control managers (i.e., program routines, classes, objects, etc. that manage the virtual control) can trigger differences in interpretation of engagement gestures including presence, position and/or shape of control objects, control object motions, or combinations thereof to conduct machine control. As explained in more detail below with reference to example implementations, engagement targets can be used to determine engagement gestures by providing the capability to discriminate between engagement and non-engagement (e.g., virtual touches, moves in relation to, and/or virtual pierces) of the engagement target by the control object.
In an implementation, determining whether motion information defines an engagement gesture can include determining one or more engagement attributes from the motion information about the control object. In an implementation, engagement attributes include motion attributes (e.g., speed, acceleration, duration, distance, etc.), gesture attributes (e.g., hand, two hands, tools, type, precision, etc.), other attributes and/or combinations thereof.
In an implementation, determining whether motion information defines an engagement gesture can include filtering motion information to determine whether motion comprises an engagement gesture. Filtering can be applied based upon engagement attributes, characteristics of motion, position in space, other criteria, and/or combinations thereof. Filtering can enable identification of engagement gestures, discrimination of engagement gestures from extraneous motions, discrimination of engagement gestures of differing types or meanings, and so forth.
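As a hypothetical sketch of the attribute extraction and filtering just described (the attribute names and thresholds are assumptions introduced for illustration, not values from this description), simple engagement attributes can be derived from a tracked motion segment and used to discriminate engagement gestures from extraneous motions:

```python
# Illustrative sketch: deriving engagement attributes (duration, distance, speed)
# from a short motion segment and filtering out extraneous motions.
from dataclasses import dataclass

@dataclass
class MotionSample:
    t: float   # timestamp in seconds
    x: float   # tip position in meters
    y: float
    z: float

def engagement_attributes(samples):
    duration = samples[-1].t - samples[0].t
    dx = samples[-1].x - samples[0].x
    dy = samples[-1].y - samples[0].y
    dz = samples[-1].z - samples[0].z
    distance = (dx * dx + dy * dy + dz * dz) ** 0.5
    speed = distance / duration if duration > 0 else 0.0
    return {"duration": duration, "distance": distance, "speed": speed}

def is_engagement_gesture(samples, min_distance=0.02, min_speed=0.05, max_duration=2.0):
    # Filter: ignore tiny jitters, very slow drifts, and overly long meanders.
    attrs = engagement_attributes(samples)
    return (attrs["distance"] >= min_distance
            and attrs["speed"] >= min_speed
            and attrs["duration"] <= max_duration)

samples = [MotionSample(0.0, 0.0, 0.0, 0.20), MotionSample(0.3, 0.0, 0.0, 0.12)]
print(engagement_attributes(samples), is_engagement_gesture(samples))
```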
In an implementation, sensing an engagement gesture provides an indication for selecting a mode to control a user interface of the machine (e.g., an “engaged mode” simulating a touch, or a “disengaged mode” simulating no contact and/or a hover in which a control is selected but not actuated). Other modes useful in various implementations include an “idle” mode, in which no control is selected or virtually touched, and a “lock” mode, in which the last control to be engaged with remains engaged until disengaged. Yet further, hybrid modes can be created from the definitions of the foregoing modes in implementations.
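A minimal sketch of these modes as a small state machine follows; the event names and transition table are illustrative assumptions rather than a prescribed event model.

```python
# Illustrative sketch: the engaged/disengaged/idle/lock modes as a state machine.
from enum import Enum, auto

class Mode(Enum):
    IDLE = auto()        # no control selected or virtually touched
    DISENGAGED = auto()  # hover: a control is selected but not actuated
    ENGAGED = auto()     # simulated touch
    LOCK = auto()        # last engaged control stays engaged until disengaged

def next_mode(mode, event):
    transitions = {
        (Mode.IDLE, "hover"): Mode.DISENGAGED,
        (Mode.DISENGAGED, "pierce"): Mode.ENGAGED,
        (Mode.ENGAGED, "withdraw"): Mode.DISENGAGED,
        (Mode.ENGAGED, "lock_gesture"): Mode.LOCK,
        (Mode.LOCK, "withdraw"): Mode.DISENGAGED,
        (Mode.DISENGAGED, "leave"): Mode.IDLE,
    }
    return transitions.get((mode, event), mode)  # unknown events leave the mode unchanged

mode = Mode.IDLE
for event in ["hover", "pierce", "lock_gesture", "withdraw"]:
    mode = next_mode(mode, event)
    print(event, "->", mode.name)
```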
In various implementations, to trigger an engaged mode—corresponding to, e.g., touching an object or a virtual object displayed on a screen—the control object's motion toward an engagement target such as a virtual surface construct (i.e., a plane, plane portion, or other (non-planar or curved) surface computationally or programmatically defined in space, but not necessarily corresponding to any physical surface) can be tracked; the motion can be, e.g., a forward motion starting from a disengaged mode, or a backward retreating motion. When the control object reaches a spatial location corresponding to this virtual surface construct—i.e., when the control object intersects (“touches” or “pierces”) the virtual surface construct—the user interface (or a component thereof, such as a cursor, user-interface control, or user-interface environment) is operated in the engaged mode; as the control object retracts from the virtual surface construct, user-interface operation switches back to the disengaged mode.
In implementations, the virtual surface construct can be fixed in space, e.g., relative to the screen; for example, it can be defined as a plane (or portion of a plane) parallel to and located several inches in front of the screen in one application, or as a curved surface defined in free space convenient to one or more users and optionally proximately to display(s) associated with one or more machines under control. The user can engage this plane while remaining at a comfortable distance from the screen (e.g., without needing to lean forward to reach the screen). The position of the plane can be adjusted by the user from time to time. In implementations, however, the user is relieved of the need to explicitly change the plane's position; instead, the plane (or other virtual surface construct) automatically moves along with, as if tethered to, the user's control object. For example, a virtual plane can be computationally defined as perpendicular to the orientation of the control object and located a certain distance, e.g., 3-4 millimeters, in front of its tip when the control object is at rest or moving with constant velocity. As the control object moves, the plane follows it, but with a certain time lag (e.g., 0.2 second). As a result, as the control object accelerates, the distance between its tip and the virtual touch plane changes, allowing the control object, when moving towards the plane, to eventually “catch” the plane—that is, the tip of the control object to touch or pierce the plane. Alternatively, instead of being based on a fixed time lag, updates to the position of the virtual plane can be computed based on a virtual energy potential defined to accelerate the plane towards (or away from) the control object tip depending on the plane-to-tip distance, likewise allowing the control object to touch or pierce the plane. Either way, such virtual touching or piercing can be interpreted as engagement events. Further, in some implementations, the degree of piercing (i.e., the distance beyond the plane that the control object reaches) is interpreted as an intensity level. To guide the user as she engages with or disengages from the virtual plane (or other virtual surface construct), the cursor symbol can encode the distance from the virtual surface visually, e.g., by changing in size with varying distance.
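A minimal sketch of the time-lag behavior just described follows, assuming one-dimensional motion along the plane normal, a fixed 0.2-second lag, and a 4-millimeter steady-state offset consistent with the example above; the frame rate and function names are illustrative assumptions.

```python
# Illustrative sketch: a virtual plane tethered to the control object tip with a fixed
# time lag, which the tip can "catch" and pierce when it accelerates toward the plane.
FRAME_DT = 0.01          # seconds per tracked frame (assumed)
LAG = 0.2                # the plane follows where the tip was LAG seconds ago
OFFSET = 0.004           # steady-state distance of the plane in front of the tip (meters)

def plane_positions(tip_positions):
    lag_frames = int(LAG / FRAME_DT)
    planes = []
    for i in range(len(tip_positions)):
        lagged_tip = tip_positions[max(0, i - lag_frames)]
        planes.append(lagged_tip - OFFSET)   # plane sits OFFSET in front of the lagged tip
    return planes

# Tip at rest, then moving steadily toward the screen (decreasing z).
tips = [0.10] * 30 + [0.10 - 0.001 * k for k in range(1, 60)]
for tip, plane in zip(tips, plane_positions(tips)):
    if tip <= plane:                         # tip has caught and pierced the plane
        print(f"pierced: tip={tip:.3f} m, plane={plane:.3f} m")
        break
```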
In an implementation, once engaged, further movements of the control object can serve to move graphical components across the screen (e.g., drag an icon, shift a scroll bar, etc.), change perceived “depth” of the object to the viewer (e.g., resize and/or change shape of objects displayed on the screen in connection, alone, or coupled with other visual effects) to create perception of “pulling” objects into the foreground of the display or “pushing” objects into the background of the display, create new screen content (e.g., draw a line), or otherwise manipulate screen content until the control object disengages (e.g., by pulling away from the virtual surface, by indicating disengagement with some other gesture of the control object (e.g., curling the forefinger backward), and/or with some other movement of a second control object (e.g., waving the other hand, etc.)). Advantageously, tying the virtual surface construct to the control object (e.g., the user's finger), rather than fixing it relative to the screen or other stationary objects, allows the user to consistently use the same motions and gestures to engage and manipulate screen content regardless of his precise location relative to the screen. To eliminate the inevitable jitter that typically accompanies the control object's movements and might otherwise result in switching back and forth between the modes unintentionally, the control object's movements can be filtered and the cursor position thereby stabilized. Since faster movements will generally result in more jitter, the strength of the filter can depend on the speed of motion.
Accordingly, in one aspect, a computer-implemented method of controlling a machine user interface is provided. The method involves receiving information including motion information for a control object; determining from the motion information whether a motion of the control object is an engagement gesture according to an occurrence of an engagement gesture applied to at least one virtual control construct defined within a field of view of an image capturing device; determining a control to which the engagement gesture is applicable; and manipulating the control according to at least the motion information. The method can further include updating at least a spatial position of the virtual control construct(s) based at least in part on a spatial position of the control object determined from the motion information, thereby enabling the spatial position of the virtual control construct(s) to follow tracked motions of the control object.
In some implementations, determining whether a motion of the control object is an engagement gesture includes determining whether an intersection between the control object and the virtual control construct(s), a dis-intersection of the control object from the virtual control construct(s), or a motion of the control object relative to the virtual control construct(s) occurred. The method can further include determining from the motion information whether the engagement includes continued motion after intersection. In some implementations, determining from the motion information whether a motion of the control object is an engagement gesture includes determining from the motion information one or more engagement attributes (e.g., a potential energy) defining an engagement gesture. In some implementations, determining whether a motion of the control object is an engagement gesture includes identifying an engagement gesture by correlating motion information to at least one engagement gesture based at least upon one or more of motion of the control object, occurrence of any of an intersection, a dis-intersection or a non-intersection of the control object with the virtual control construct, and the set of engagement attributes.
Determining a control to which the engagement gesture is applicable can include selecting a control associated with an application, a control associated with an operating environment, and/or a special control. Manipulating a control according to at least the motion information can include controlling a user interface in a first mode, and otherwise controlling the user interface in a second mode different from the first mode.
In another aspect, a computer-implemented method of controlling a machine user interface is provided. The method includes receiving information including motion information for a control object. Further, it includes determining from the motion information whether a motion of the control object is an engagement gesture according to an occurrence of an engagement gesture applied to at least one virtual control construct defined within a field of view of an image capturing device by (i) determining whether an intersection occurred between control object and at least one virtual control construct, and when an intersection has occurred determining from the motion information whether the engagement includes continued motion after intersection; otherwise (ii) determining whether a dis-intersection of the control object from the at least one virtual control construct occurred; otherwise (iii) determining whether motion of the control object occurred relative to at least one virtual control construct; (iv) determining from the motion information a set of engagement attributes defining an engagement gesture; and (v) identifying an engagement gesture by correlating motion information to at least one engagement gesture based at least upon one or more of motion of the control object, occurrence of any of an intersection, a dis-intersection or a non-intersection of the control object with the virtual control construct, and the set of engagement attributes. Further, the method involves determining a control to which the engagement gesture is applicable, and manipulating the control according to at least the engagement gesture.
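The decision cascade (i)-(v) can be summarized, purely as an illustrative sketch and not a definitive implementation, as follows; the Boolean flags stand in for the intersection tests and attribute extraction described elsewhere herein, and the gesture labels are assumptions.

```python
# Illustrative sketch: classifying an engagement gesture by cascading through the
# intersection, dis-intersection, and relative-motion cases, then correlating the
# result with engagement attributes.
def classify_engagement(intersected, continued_after, dis_intersected,
                        moved_relative, attributes):
    # (i) intersection, optionally with continued motion after the pierce
    if intersected:
        kind = "pierce-and-drag" if continued_after else "virtual touch"
    # (ii) otherwise, a dis-intersection (pull back / withdrawal)
    elif dis_intersected:
        kind = "withdrawal"
    # (iii) otherwise, motion relative to the construct (e.g., a parallel wave)
    elif moved_relative:
        kind = "relative motion"
    else:
        return None
    # (iv)-(v) correlate the observed kind and attributes with a known gesture
    if kind == "virtual touch" and attributes.get("speed", 0.0) > 0.5:
        return "flick"
    return kind

print(classify_engagement(True, True, False, False, {"speed": 0.1}))
```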
In another aspect, a computer-implemented method for facilitating control of a user interface via free-space motions of a control object is provided. One method implementation includes receiving data indicative of tracked motions of the control object, and computationally (i.e., using a processor) defining a virtual control construct and updating a spatial position (and, in some implementations, also a spatial orientation) of the virtual control construct based at least in part on the data such that the position of the virtual control construct follows the tracked motions of the control object. Further, implementations of the method involve computationally determining whether the control object intersects the virtual control construct, and, if so, controlling the user interface in a first mode (e.g., an engaged mode), and otherwise controlling the user interface in a second mode different from the first mode (e.g., a disengaged mode).
In some implementations, the virtual control construct follows the tracked motions of the control object with a time lag, which can be fixed or, e.g., depend on a motion parameter of the control object. In alternative implementations, the spatial position of the virtual control construct is updated based on a current distance between the control object and the virtual control construct, e.g., in accordance with a virtual energy potential defined as a function of that distance. The virtual energy potential can have minima at steady-state distances between the control object and the virtual control construct in the engaged mode and the disengaged mode. In some implementations, the steady-state distance in the engaged mode is equal to the steady-state distance in the disengaged mode; in other implementations, the steady-state distance in the engaged mode is larger (or smaller) than the steady-state distance in the disengaged mode.
Determining whether the control object intersects the virtual control construct can involve computing an intersection of a straight line through the axis of the control object with a screen displaying the user interface or, alternatively, computationally projecting a tip of the control object perpendicularly onto the screen. Controlling the user interface can involve updating the screen content based, at least in part, on the tracked control object motions and the operational mode (e.g., the engaged or disengaged mode). For example, in some implementations, it involves operating a cursor variably associated with a screen position; a cursor symbol can be displayed on the screen at that position. The cursor can also be indicative of a distance between the control object and the virtual control construct. (The term “cursor,” as used herein, refers to a control element operable to select a screen position—whether or not the control element is actually displayed—and manipulate screen content via movement across the screen, i.e., changes in the selected position.) In some implementations, the method further includes computationally determining, for a transition from the disengaged mode to the engaged mode, a degree of penetration of the virtual control construct by the control object, and controlling the user interface based at least in part thereon.
The method can also include acquiring a temporal sequence of images of the control object (e.g., with a camera system having depth-sensing capability) and/or computationally tracking the motions of the control object based on the sequence of images. In some implementations, the control object motions are computationally filtered based, at least in part, on the control object's velocity.
In another aspect, implementations pertain to a computer-implemented method for controlling a user interface via free-space motions of a control object. The method involves receiving motion information indicating positions of a control object being tracked in free space, and, using a processor, (i) defining a virtual control construct, at least a portion thereof having a spatial position determined based at least in part on the motion information such that the virtual control construct portion is positioned proximate to the control object, (ii) determining from the motion information whether the tracked motions of the control object indicate that the control object has intersected the virtual control construct, and (iii) switching from conducting control of a user interface in a first mode to conducting control of the user interface in a second mode based at least in part upon an occurrence of the control object intersecting the virtual control construct. The method can further involve updating at least the spatial position of the virtual control construct portion based at least in part on the motion information such that the virtual control construct portion is enabled to follow the control object.
In another aspect, implementations provide a system for controlling a machine user interface via free-space motions of a control object tracked with an image capturing device, the system including a processor and memory. The memory stores (i) motion information for the control object; and (ii) processor-executable instructions for causing the processor to determine from the motion information whether a motion of the control object is an engagement gesture according to an occurrence of an engagement gesture applied to at least one virtual control construct defined within a field of view of the image capturing device, to determine a control to which the engagement gesture is applicable, and to manipulate the control according to at least the motion information.
Yet another aspect pertains to a non-transitory machine-readable medium. In implementations, the medium stores one or more instructions which, when executed by one or more processors, cause the one or more processors to determine from motion information received for a control object whether a motion of the control object is an engagement gesture according to an occurrence of an engagement gesture applied to at least one virtual control construct defined within a field of view of an image capturing device; determine a control to which the engagement gesture is applicable; and manipulate the control according to at least the motion information.
In a further aspect, a system for controlling a user interface via free-space motions of a control object tracked by a motion-capture system is provided. The system includes a processor and associated memory, the memory storing processor-executable instructions for causing the processor to (i) computationally define a virtual control construct relative to the control object and update at least a spatial position thereof, based at least in part on the tracked motions of the control object, such that the spatial position of the virtual control construct follows the tracked motions of the control object, (ii) computationally determine whether the control object, in the current spatial position, intersects the virtual control construct, and (iii) if so, control the user interface in a first mode, and otherwise control the user interface in a second mode different from the first mode. In some implementations, the first and second modes are engaged and disengaged modes, respectively. Execution of the instructions by the processor can cause the processor to compute a position of the virtual control construct relative to the current position of the control object such that the virtual control construct follows the tracked motions of the control object with a time lag, and/or to update the spatial position of the virtual control construct in accordance with a virtual energy potential defined as a function of a distance between the control object and the virtual control construct.
The system can further include the motion-capture system for tracking the motions of the control object in three dimensions based on a temporal sequence of images of the control object. In some implementations, the motion-capture system includes one or more camera(s) acquiring the images and a plurality of image buffers for storing a most recent set of the images. The system can also have a filter for computationally filtering the motions of the control object based, at least in part, on a velocity of these motions. In addition, the system can include a screen for displaying the user interface; execution of the instructions by the processor can cause the processor to update screen content based, at least in part, on the mode and the tracked motions of the control object. In some implementations, execution of the instructions by the processor causes the processor to operate a cursor associated with a position on a screen based, at least in part, on the mode and the tracked motions of the control object. The screen can display a cursor symbol at the associated position; the cursor symbol can be indicative of a distance between the control object and the virtual control construct.
In another aspect, a non-transitory machine-readable medium is provided storing one or more instructions which, when executed by one or more processors, cause the one or more processors to (i) computationally define a virtual control construct and update at least a spatial position thereof based at least in part on data indicative of tracked motions of a control object such that the position of the virtual control construct follows the tracked motions of the control object, (ii) computationally determine whether the control object intersects the virtual control construct, and (iii) if so, control the user interface in a first mode, and otherwise control the user interface in a second mode different from the first mode.
In yet another aspect, a computer-implemented method for facilitating control of a user interface via free-space motions of a control object is provided. The method involves receiving data indicative of tracked motions of the control object, and, using a processor, (i) computationally defining a virtual control construct and updating at least a spatial position thereof based at least in part on the data such that the position of the virtual control construct follows the tracked motions of the control object, (ii) computationally detecting when a tip of the control object transitions from one side of the virtual control construct to another side, and (iii) whenever it does, switching between two modes of controlling the user interface.
In a further aspect, yet another computer-implemented method for facilitating control of a user interface via free-space motions of a control object is provided. The method includes tracking motions of a control object and a gesturer; using a processor to continuously determine computationally whether the control object intersects a virtual control construct located at a temporarily fixed location in space and, if so, controlling the user interface in a first mode and otherwise controlling the user interface in a second mode different from the first mode; and, each time upon recognition of a specified gesture performed by the gesturer, using the processor to relocate the virtual control construct to a specified distance from an instantaneous position of the control object.
Among other aspects, implementations can enable quicker, crisper gesture-based or “free space” (i.e., not requiring physical contact) interfacing with a variety of machines (e.g., computing systems, including desktop, laptop, and tablet computing devices; special purpose computing machinery, including graphics processors, embedded microcontrollers, gaming consoles, audio mixers, or the like; wired or wirelessly coupled networks of one or more of the foregoing; and/or combinations thereof), obviating or reducing the need for contact-based input devices such as a mouse, joystick, touch pad, or touch screen.
The foregoing will be more readily understood from the following detailed description, in particular when taken in conjunction with the accompanying drawings.
Systems and methods in accordance herewith generally utilize information about the motion of a control object, such as a user's finger or a stylus, in three-dimensional space to operate a user interface and/or components thereof based on the motion information. Various implementations take advantage of motion-capture technology to track the motions of the control object in real time (or near real time, i.e., sufficiently fast that any residual lag between the control object and the system's response is unnoticeable or practically insignificant). Other implementations can use synthetic motion data (e.g., generated by a computer game) or stored motion data (e.g., previously captured or generated). References to motions in “free space” or “touchless” motions are used herein with reference to an implementation to distinguish them from motions tied to and/or requiring physical contact of the moving object with a physical surface to effect input; however, in some applications, the control object can contact a physical surface ancillary to providing input, in which case the motion is still considered a “free-space” motion. Further, in some implementations, the virtual surface can be defined to co-reside at or very near a physical surface (e.g., a virtual touch screen can be created by defining a (substantially planar) virtual surface at or very near the screen of a display (e.g., television, monitor, or the like); or a virtual active table top can be created by defining a (substantially planar) virtual surface at or very near a table top convenient to the machine receiving the input).
A “control object” as used herein with reference to an implementation is generally any three-dimensionally movable object or appendage with an associated position and/or orientation (e.g., the orientation of its longest axis) suitable for pointing at a certain location and/or in a certain direction. Control objects include, e.g., hands, fingers, feet, or other anatomical parts, as well as inanimate objects such as pens, styluses, handheld controls, portions thereof, and/or combinations thereof. Where a specific type of control object, such as the user's finger, is used hereinafter for ease of illustration, it is to be understood that, unless otherwise indicated or clear from context, any other type of control object can be used as well.
A “virtual control construct” as used herein with reference to an implementation denotes a geometric locus defined (e.g., programmatically) in space and useful in conjunction with a control object, but not corresponding to a physical object; its purpose is to discriminate between different operational modes of the control object (and/or a user-interface element controlled therewith, such as a cursor) based on whether the control object intersects the virtual control construct. The virtual control construct, in turn, can be, e.g., a virtual surface construct (a plane oriented relative to a tracked orientation of the control object or an orientation of a screen displaying the user interface) or a point along a line or line segment extending from the tip of the control object.
The term “intersect” is herein used broadly with reference to an implementation to denote any instance in which the control object, which is an extended object, has at least one point in common with the virtual control construct and, in the case of an extended virtual control construct such as a line or two-dimensional surface, is not parallel thereto. This includes “touching” as an extreme case, but typically involves portions of the control object falling on both sides of the virtual control construct.
Using the output of a suitable motion-capture system or motion information received from another source, various implementations facilitate user input via gestures and motions performed by the user's hand or a (typically handheld) pointing device. For example, in some implementations, the user can control the position of a cursor and/or other object on the screen by pointing at the desired screen location, e.g., with his index finger, without the need to touch the screen. The position and orientation of the finger relative to the screen, as determined by the motion-capture system, can be used to compute the intersection of a straight line through the axis of the finger with the screen, and a cursor symbol (e.g., an arrow, circle, cross hair, or hand symbol) can be displayed at the point of intersection. If the range of motion causes the intersection point to move outside the boundaries of the screen, the intersection with a (virtual) plane through the screen can be used, and the cursor motions can be re-scaled, relative to the finger motions, to remain within the screen boundaries. Alternatively to extrapolating the finger towards the screen, the position of the finger (or control object) tip can be projected perpendicularly onto the screen; in this implementation, the control object orientation can be disregarded. As will be readily apparent to one of skill in the art, many other ways of mapping the control object position and/or orientation onto a screen location can, in principle, be used; a particular mapping can be selected based on considerations such as, without limitation, the requisite amount of information about the control object, the intuitiveness of the mapping to the user, and the complexity of the computation. For example, in some implementations, the mapping is based on intersections with or projections onto a (virtual) plane defined relative to the camera, under the assumption that the screen is located within that plane (which is correct, at least approximately, if the camera is correctly aligned relative to the screen), whereas, in other implementations, the screen location relative to the camera is established via explicit calibration (e.g., based on camera images including the screen).
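As a non-limiting illustration of the two mappings just described, the following Python sketch computes a cursor location either from the intersection of the finger's axis with an assumed screen plane at z = 0 or from a perpendicular projection of the fingertip onto that plane; the screen dimensions, the scale factor, and the clamping used in lieu of re-scaling are assumptions for illustration.

```python
# Illustrative sketch: mapping a tracked fingertip and finger direction to a cursor
# location on the screen, by axis intersection or by perpendicular projection.
import numpy as np

SCREEN_W, SCREEN_H = 1920, 1080          # pixels (assumed)
METERS_TO_PIXELS = 2000.0                # assumed scale for this toy example

def cursor_from_axis(tip, direction):
    tip, direction = np.asarray(tip, float), np.asarray(direction, float)
    if abs(direction[2]) < 1e-9:
        return None                      # finger parallel to the screen: no intersection
    s = -tip[2] / direction[2]           # solve tip.z + s * dir.z == 0
    hit = tip + s * direction
    return _to_pixels(hit)

def cursor_from_projection(tip):
    # Alternative mapping: disregard orientation, project the tip onto the screen plane.
    return _to_pixels(np.asarray(tip, float))

def _to_pixels(point):
    x = SCREEN_W / 2 + point[0] * METERS_TO_PIXELS
    y = SCREEN_H / 2 - point[1] * METERS_TO_PIXELS
    # Clamp (a simple stand-in for the re-scaling described above) to the screen bounds.
    return int(np.clip(x, 0, SCREEN_W - 1)), int(np.clip(y, 0, SCREEN_H - 1))

print(cursor_from_axis(tip=[0.05, 0.02, 0.4], direction=[0.0, -0.1, -1.0]))
print(cursor_from_projection(tip=[0.05, 0.02, 0.4]))
```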
In some implementations, the cursor can be operated in at least two modes: a disengaged mode in which it merely indicates a position on the screen, typically without otherwise affecting the screen content; and one or more engaged modes, which allow the user to manipulate the screen content. In the engaged mode, the user can, for example, drag graphical user-interface elements (such as icons representing files or applications, controls such as scroll bars, or displayed objects) across the screen, or draw or write on a virtual canvas. Further, transient operation in the engaged mode can be interpreted as a click event. Thus, operation in the engaged mode generally corresponds to, or emulates, touching a touch screen or touch pad, or controlling a mouse with a mouse button held down.
The term “cursor,” as used in this discussion, refers generally to the cursor functionality rather than the visual element; in other words, the cursor is a control element operable to select a screen position (whether or not the control element is actually displayed) and to manipulate screen content via movement across the screen, i.e., changes in the selected position. The cursor need not always be visible in the engaged mode. In some instances, a cursor symbol still appears, e.g., overlaid onto another graphical element that is moved across the screen, whereas in other instances, cursor motion is implicit in the motion of other screen elements or in newly created screen content (such as a line that appears on the screen as the control object moves), obviating the need for a special symbol. In the disengaged mode, a cursor symbol is typically used to visualize the current cursor location. Alternatively or additionally, a screen element or portion presently co-located with the cursor (and thus the selected screen location) can change brightness, color, or some other property to indicate that it is being pointed at. However, in certain implementations, the symbol or other visual indication of the cursor location can be omitted so that the user has to rely on his own observation of the control object relative to the screen to estimate the screen location pointed at. (For example, in a shooter game, the player can have the option to shoot with or without a “virtual sight” indicating a pointed-to screen location.)
Discrimination between the engaged and disengaged modes can be achieved by tracking the control object relative to a virtual control construct such as a virtual plane (or, more generally, a virtual surface construct), as described by way of example in the implementations below.
Transitions between the different operational modes can, but need not, be visually indicated, e.g., by a change in the shape or color of the cursor symbol or another on-screen indicator.
Of course, the system under control need not be a desktop computer.
The virtual surface construct need not be planar, but can be curved in space, e.g., to conform to the user's range of movements.
The location and/or orientation of the virtual surface construct (or other virtual control construct) can be defined relative to the room and/or stationary objects (e.g., a screen) therein, relative to the user, relative to the device 114 or relative to some combination. For example, a planar virtual surface construct can be oriented parallel to the screen, perpendicular to the direction of the control object, or at some angle in between. The location of the virtual surface construct can, in some implementations, be set by the user, e.g., by means of a particular gesture recognized by the motion-capture system. To give just one example, the user can, with her index finger stretched out, have her thumb and middle finger touch so as to pin the virtual surface construct at a certain location relative to the current position of the index-finger-tip. Once set in this manner, the virtual surface construct can be stationary until reset by the user via performance of the same gesture in a different location.
In some implementations, the virtual surface construct is tied to and moves along with the control object, i.e., the position and/or orientation of the virtual surface construct are updated based on the tracked control object motion. This affords the user maximum freedom of motion by allowing the user to control the user interface from anywhere (or almost anywhere) within the space monitored by the motion-capture system. To enable the relative motion between the control object and virtual surface construct that is necessary for piercing the surface, the virtual surface construct follows the control object's movements with some delay. Thus, starting from a steady-state distance between the virtual surface construct and the control object tip in the disengaged mode, the distance generally decreases as the control object accelerates towards the virtual surface construct, and increases as the control object accelerates away from the virtual surface construct. If the control object's forward acceleration (i.e., towards the virtual surface construct) is sufficiently fast and/or prolonged, the control object eventually pierces the virtual surface construct. Once pierced, the virtual surface construct again follows the control object's movements. However, whereas, in the disengaged mode, the virtual surface construct is “pushed” ahead of the control object (i.e., is located in front of the control object tip), it is “pulled” behind the control object in the engaged mode (i.e., is located behind the control object tip). To disengage, the control object generally needs to be pulled back through the virtual surface construct with sufficient acceleration to exceed the surface's responsive movement.
In an implementation, an engagement target can be defined as merely the point where the user touches or pierces a virtual control construct. For example, a virtual point construct can be defined along a line extending from or through the control object tip, or any other point or points on the control object, located a certain distance from the control object tip in the steady state, and moving along the line to follow the control object. The line can, e.g., be oriented in the direction of the control object's motion, perpendicularly project the control object tip onto the screen, extend in the direction of the control object's axis, or connect the control object tip to a fixed location, e.g., a point on the display screen. Irrespective of how the line and virtual point construct are defined, the control object can, when moving sufficiently fast and in a certain manner, “catch” the virtual point construct. Similarly, a virtual line construct (straight or curved) can be defined as a line within a surface intersecting the control object at its tip, e.g., as a line lying in the same plane as the control object and oriented perpendicular (or at some other non-zero angle) to the control object. Defining the virtual line construct within a surface tied to and intersecting the control object tip ensures that the control object can eventually intersect the virtual line construct.
In an implementation, engagement targets defined by one or more virtual point constructs or virtual line (i.e., linear or curvilinear) constructs can be mapped onto engagement targets defined as virtual surface constructs, in the sense that the different mathematical descriptions are functionally equivalent. For example, a virtual point construct can correspond to the point of a virtual surface construct that is pierced by the control object (and a virtual line construct can correspond to a line in the virtual surface construct going through the virtual point construct). If the virtual point construct is defined on a line projecting the control object tip onto the screen, control object motions perpendicular to that line move the virtual point construct in a plane parallel to the screen, and if the virtual point construct is defined along a line extending in the direction of the control object's axis, control object motions perpendicular to that line move the virtual point construct in a plane perpendicular to that axis; in either case, control object motions along the line move the control object tip towards or away from the virtual point construct and, thus, the respective plane. Thus, the user's experience interacting with a virtual point construct can be little (or no) different from interacting with a virtual surface construct. Hereinafter, the description will, for ease of illustration, focus on virtual surface constructs. A person of skill in the art will appreciate, however, that the approaches, methods, and systems described can be straightforwardly modified and applied to other virtual control constructs (e.g., virtual point constructs or virtual linear/curvilinear constructs).
The position and/or orientation of the virtual surface construct (or other virtual control construct) are typically updated continuously or quasi-continuously, i.e., as often as the motion-capture system determines the control object location and/or direction (which, in visual systems, corresponds to the frame rate of image acquisition and/or image processing). However, implementations can also provide for updating the virtual surface construct less frequently (e.g., only every other frame, to save computational resources) or more frequently (e.g., based on interpolations between the measured control object positions).
In some implementations, the virtual surface construct follows the control object with a fixed time lag, e.g., between 0.1 and 1.0 second. In other words, the location of the virtual surface construct is updated, for each frame, based on where the control object tip was a certain amount of time (e.g., 0.2 second) in the past. This is illustrated by the sequence described below.
At a first point t=t0 in time, when the control object is at rest, the virtual plane is located at its steady-state distance d in front of the control object tip; this distance can be, e.g., a few millimeters. At a second point t=t1 in time—after the control object has started moving towards the virtual plane, but before the lag period has passed—the virtual plane is still in the same location, but its distance from the control object tip has decreased due to the control object's movement. One lag period later, at t=t1+Δtlag, the virtual plane is positioned the steady-state distance away from the location of the control object tip at the second point in time, but due to the control object's continued forward motion, the distance between the control object tip and the virtual plane has further decreased. Finally, at a fourth point in time t=t2, the control object has pierced the virtual plane. One lag time after the control object has come to a halt, at t=t2+Δtlag, the virtual plane is again a steady-state distance away from the control object tip—but now on the other side. When the control object is subsequently pulled backwards, the distance between its tip and the virtual plane decreases again (t=t3 and t=t4), until the control object tip emerges at the first side of the virtual plane (t=t5). The control object can stop at a different position than where it started, and the virtual plane will eventually follow it and be, once more, a steady-state distance away from the control object tip (t=t6). Even if the control object continues moving, if it does so at a constant speed, the virtual plane will, after an initial lag period to “catch up,” follow the control object at a constant distance.
The steady-state distances in the disengaged mode and the engaged mode can, but need not be the same. In some implementations, for instance, the steady-state distance in the engaged mode is larger, such that disengaging from the virtual plane (i.e., “unclicking”) appears harder to the user than engaging (i.e., “clicking”) because it requires a larger motion. Alternatively or additionally, to achieve a similar result, the lag times can differ between the engaged and disengaged modes. Further, in some implementations, the steady-state distance is not fixed, but adjustable based on the control object's speed of motion, generally being greater for higher control object speeds. As a result, when the control object moves very fast, motions toward the plane are “buffered” by the rather long distance that the control object has to traverse relative to the virtual plane before an engagement event is recognized (and, similarly, backwards motions for disengagement are buffered by a long disengagement steady-state distance). A similar effect can also be achieved by decreasing the lag time, i.e., increasing the responsiveness of touch-surface position updates, as the control object speed increases. Such speed-based adjustments can serve to avoid undesired switching between the modes that can otherwise be incidental to fast control object movements.
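A small sketch, under assumed constants, of the speed-based adjustments just described: the steady-state distance grows, and the lag shrinks, as the control object moves faster, buffering fast motions against accidental mode switches. The specific coefficients below are illustrative assumptions.

```python
# Illustrative sketch: speed-dependent steady-state distance and lag time.
def steady_state_distance(speed, base=0.004, gain=0.02, cap=0.03):
    # e.g., 4 mm at rest, growing with speed (m/s) up to a 3 cm ceiling
    return min(base + gain * speed, cap)

def lag_time(speed, base=0.2, gain=0.1, floor=0.05):
    # more responsive (shorter lag) for faster motions
    return max(base - gain * speed, floor)

for v in (0.0, 0.5, 2.0):
    print(f"speed={v} m/s -> distance={steady_state_distance(v)*1000:.1f} mm, "
          f"lag={lag_time(v)*1000:.0f} ms")
```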
In various implementations, the position of the virtual plane (or other virtual surface construct) is updated not based on a time lag, but based on its current distance from the control object tip. That is, for any image frame, the distance between the current control object tip position and the virtual plane is computed (e.g., with the virtual-plane position being taken from the previous frame), and, based thereon, a displacement or shift to be applied to the virtual plane is determined. In some implementations, the update rate as a function of distance can be defined in terms of a virtual “potential-energy surface” or “potential-energy curve.”
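As an illustrative sketch only (the quadratic well and the step size are assumptions, not the formulation used in any particular implementation), a potential-energy-based update can displace the virtual plane down the gradient of a potential whose minimum lies at the steady-state plane-to-tip distance, so that the plane accelerates toward that distance:

```python
# Illustrative sketch: updating the virtual-plane position from a potential-energy curve.
STEADY_STATE = 0.004     # meters; assumed location of the potential minimum
STEP = 0.3               # fraction of the gradient applied per frame (assumed)

def potential(d):
    # quadratic well centered on the steady-state plane-to-tip distance
    return (d - STEADY_STATE) ** 2

def update_plane(plane_z, tip_z):
    d = plane_z - tip_z                          # current plane-to-tip distance
    gradient = 2.0 * (d - STEADY_STATE)          # dU/dd for the quadratic well
    return plane_z - STEP * gradient             # shift the plane to reduce the potential

plane_z, tip_z = 0.120, 0.100
for frame in range(8):
    plane_z = update_plane(plane_z, tip_z)
    print(f"frame {frame}: plane-to-tip distance = {(plane_z - tip_z)*1000:.2f} mm")
```

An asymmetric curve, or separate minima for the engaged and disengaged sides, can be sketched in the same way by changing the shape of potential(d).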
The potential-energy curve need not be symmetric, of course.
Furthermore, the potential piercing energy need not, or not only, be a function of the distance from the control object tip to the virtual surface construct, but can depend on other factors. For example, in some implementations, a stylus with a pressure-sensitive grip is used as the control object. In this case, the pressure with which the user squeezes the stylus can be mapped to the piercing energy.
Whichever way the virtual surface construct is updated, jitter in the control object's motions can result in unintentional transitions between the engaged and disengaged modes. While such modal instability can be combatted by increasing the steady-state distance (i.e., the “buffer zone” between control object and virtual surface construct), this comes at the cost of requiring the user, when she intends to switch modes, to perform larger movements that can feel unnatural. The trade-off between modal stability and user convenience can be improved by filtering the tracked control object movements. Specifically, jitter can be filtered out, based on the generally more frequent changes in direction associated with it, with some form of time averaging. Accordingly, in one implementation, a moving-average filter spanning, e.g., a few frames, is applied to the tracked movements, such that only a net movement within each time window is used as input for cursor control. Since jitter generally increases with faster movements, the time-averaging window can be chosen to likewise increase as a function of control object velocity (such as a function of overall control object speed or of a velocity component, e.g., perpendicular to the virtual plane). In another implementation, the control object's previous and newly measured positions are averaged with weighting factors that depend, e.g., on velocity, frame rate, and/or other factors. For example, the old and new positions can be weighted with multipliers of x and (1−x), respectively, where x varies between 0 and 1 and increases with velocity. In one extreme, for x=1, the cursor remains completely still, whereas for the other extreme, x=0, no filtering is performed at all.
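A minimal sketch of the velocity-weighted averaging just described, with an assumed velocity normalization: the old and new positions are blended with weights x and (1 - x), where x increases with velocity so that faster (jitterier) motion is smoothed more strongly.

```python
# Illustrative sketch: velocity-dependent weighted averaging of tracked positions.
def smoothing_weight(velocity, v_max=1.0):
    # x in [0, 1]: 0 -> no filtering, 1 -> cursor holds completely still
    return min(max(velocity / v_max, 0.0), 1.0)

def filter_position(old_pos, new_pos, velocity):
    x = smoothing_weight(velocity)
    return tuple(x * o + (1.0 - x) * n for o, n in zip(old_pos, new_pos))

print(filter_position((0.0, 0.0), (0.01, 0.00), velocity=0.2))   # light smoothing
print(filter_position((0.0, 0.0), (0.01, 0.00), velocity=0.9))   # heavy smoothing
```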
In some implementations, temporary piercing of the virtual surface construct—i.e., a clicking motion including penetration of the virtual surface construct immediately followed by withdrawal from the virtual surface construct—switches between modes and locks in the new mode. For example, starting in the disengaged mode, a first click event can switch the control object into the engaged mode, where it can then remain until the virtual surface construct is clicked at again.
Further, in some implementations, the degree of piercing (i.e., the distance beyond the virtual surface construct that the control object initially reaches, before the virtual surface construct catches up) is interpreted as an intensity level that can be used to refine the control input. For example, the intensity (of engagement) in a swiping gesture for scrolling through screen content can determine the speed of scrolling. Further, in a gaming environment or other virtual world, different intensity levels when touching a virtual object (by penetrating the virtual surface construct while the cursor is positioned on the object as displayed on the screen) can correspond to merely touching the object versus pushing the object over. As another example, when hitting the keys of a virtual piano displayed on the screen, the intensity level can translate into the volume of the sound created. Thus, touching or engagement of a virtual surface construct (or other virtual control construct) can provide user input beyond the binary discrimination between engaged and disengaged modes.
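By way of a non-limiting illustration, the piercing depth can be normalized into an intensity level and mapped, e.g., to a scrolling speed or to the volume of a virtual piano key; the maximum depth and output ranges below are assumptions introduced for illustration.

```python
# Illustrative sketch: mapping the degree of piercing to an intensity level.
def intensity(penetration_depth, max_depth=0.05):
    # normalize the distance beyond the virtual surface to a 0..1 intensity
    return min(max(penetration_depth / max_depth, 0.0), 1.0)

def scroll_speed(penetration_depth, max_lines_per_s=40.0):
    return intensity(penetration_depth) * max_lines_per_s

def key_volume(penetration_depth):
    level = intensity(penetration_depth)
    return "soft" if level < 0.3 else "medium" if level < 0.7 else "loud"

print(scroll_speed(0.01), key_volume(0.01))   # shallow pierce
print(scroll_speed(0.04), key_volume(0.04))   # deep pierce
```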
As will be readily apparent to those of skill in the art, the methods described above can be readily extended to the control of a user interface with multiple simultaneously tracked control objects. For instance, both left and right index fingers of a user can be tracked, each relative to its own associated virtual touch surface, to operate two cursors simultaneously and independently. As another example, the user's hand can be tracked to determine the positions and orientations of all fingers; each finger can have its own associated virtual surface construct (or other virtual control construct) or, alternatively, all fingers can share the same virtual surface construct, which can follow the overall hand motions. A joint virtual plane can serve, e.g., as a virtual drawing canvas on which multiple lines can be drawn by the fingers at once.
In an implementation and by way of example, one or more control parameter(s) and the control object are applied to some control mechanism to determine the distance of the virtual control construct to a portion of the control object (e.g., tool tip(s), point(s) of interest on a user's hand or other points of interest). In some implementations, a lag (e.g., filter or filtering function) is introduced to delay, or modify, application of the control mechanism according to a variable or a fixed increment of time, for example. Accordingly, implementations can provide enhanced verisimilitude to the human-machine interaction, and/or increased fidelity of tracking control object(s) and/or control object portion(s).
In one example, the control object portion is a user's finger-tip. A control parameter is also the user's finger-tip. A control mechanism includes equating a plane-distance between virtual control construct and finger-tip to a distance between finger-tip and an arbitrary coordinate (e.g., center (or origin) of an interaction zone of the controller). Accordingly, the closer the finger-tip approaches to the arbitrary coordinate, the closer the virtual control construct approaches the finger-tip.
In another example, the control object is a hand, which includes a control object portion, e.g., a palm, determined by a “palm-point” or center of mass of the entire hand. A control parameter includes a velocity of the hand, as measured at the control object portion, i.e., the center of mass of the hand. A control mechanism includes filtering forward velocity over the last one (1) second. Accordingly, the faster the palm has recently been travelling forward, the closer the virtual control construct approaches to the control object (i.e., the hand).
In a further example, a control object includes a control object portion (e.g., a finger-tip). A control mechanism includes determining a distance between a thumb-tip (e.g., a first control object portion) and an index finger (e.g., a second control object portion). This distance can be used as a control parameter. Accordingly, the closer the thumb-tip and index-finger, the closer the virtual control construct is determined to be to the index finger. When the thumb-tip and index finger touch one another, the virtual control construct is determined to be partially pierced by the index finger. A lag (e.g., filter or filtering function) can introduce a delay in the application of the control mechanism by some time-increment proportional to any quantity of interest, for example horizontal jitter (i.e., the random motion of the control object in a substantially horizontal dimension). Accordingly, the greater the shake in a user's hand, the more lag will be introduced into the control mechanism.
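A sketch of this pinch-based mechanism and jitter-proportional lag (Python; the scale factors are hypothetical) could read:

```python
# Hypothetical sketch: the thumb-to-index distance sets the construct's offset
# from the index fingertip (offset ~ 0 at contact, i.e., partial piercing),
# and horizontal jitter scales an added lag before the mechanism is applied.

import math
import statistics

def construct_offset(thumb_tip, index_tip):
    """Thumb-to-index distance, used directly as the plane offset."""
    return math.dist(thumb_tip, index_tip)

def jitter_lag(recent_x_positions, seconds_per_unit=0.5):
    """More horizontal shake yields more lag (in seconds)."""
    if len(recent_x_positions) < 2:
        return 0.0
    return seconds_per_unit * statistics.stdev(recent_x_positions)
```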
User-interface control via free-space motions relies generally on a suitable motion-capture device or system for tracking the positions, orientations, and motions of one or more control objects. For a description of tracking positions, orientations, and motions of control objects, reference can be had to U.S. patent application Ser. No. 13/414,485, filed on Mar. 7, 2012, the entire disclosure of which is incorporated herein by reference. In various implementations, motion capture can be accomplished visually, based on a temporal sequence of images of the control object (or a larger object of interest including the control object, such as the user's hand) captured by one or more cameras. In one implementation, images acquired from two (or more) vantage points are used to define tangent lines to the surface of the object and approximate the location and shape of the object based thereon, as explained in more detail below. Other vision-based approaches that can be used in implementations include, without limitation, stereo imaging, detection of patterned light projected onto the object, or the use of sensors and markers attached to or worn by the object (such as, e.g., markers integrated into a glove) and/or combinations thereof. Alternatively or additionally, the control object can be tracked acoustically or ultrasonically, or using inertial sensors such as accelerometers, gyroscopes, and/or magnetometers (e.g., MEMS sensors) attached to or embedded within the control object. Implementations can be built employing one or more particular motion-tracking approaches that provide control object position and/or orientation (and/or derivatives thereof) tracking with sufficient accuracy, precision, and responsiveness for the particular application.
The computer 506 processing the images acquired by the cameras 500, 502 can be a suitably programmed general-purpose computer. As shown in
The image-processing and tracking module 536 can analyze pairs of image frames acquired by the two cameras 500, 502 (and stored, e.g., in image buffers in memory 522) to identify the control object (or an object including the control object or multiple control objects, such as a user's hand) therein (e.g., as a non-stationary foreground object) and detect its edges. Next, the module 536 can, for each pair of corresponding rows in the two images, find an approximate cross-section of the control object by defining tangent lines on the control object that extend from the vantage points (i.e., the cameras) to the respective edge points of the control object, and inscribe an ellipse (or other geometric shape defined by only a few parameters) therein. The cross-sections can then be computationally connected in a manner that is consistent with certain heuristics and known properties of the control object (e.g., the requirement of a smooth surface) and resolves any ambiguities in the fitted ellipse parameters. As a result, the control object is reconstructed or modeled in three dimensions. This method, and systems for its implementation, are described in more detail in U.S. patent application Ser. No. 13/414,485, filed on Mar. 7, 2012, the entire disclosure of which is incorporated herein by reference. A larger object including multiple control objects can similarly be reconstructed with respective tangent lines and fitted ellipses, typically exploiting known internal constraints of the object (such as a maximum physical separation between the fingertips of one hand). The image-processing and tracking module 536 can, further, extract relevant control object parameters, such as tip positions and orientations as well as velocities, from the three-dimensional model. In some implementations, this information can be inferred from the images at a lower level, prior to or without the need for fully reconstructing the control object. These operations are readily implemented by those skilled in the art without undue experimentation. In some implementations, a filter module 538 receives input from the image-processing and tracking module 536, and smooths or averages the tracked control object motions; the degree of smoothing or averaging can depend on a control object velocity as determined by the tracking module 536.
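The velocity-dependent smoothing performed by the filter module 538 could, for example, take the form of the following sketch (Python; the thresholds and smoothing factors are hypothetical), in which slow motions are smoothed heavily and fast motions pass through nearly unfiltered:

```python
# Hypothetical sketch: velocity-dependent smoothing of tracked positions.
# Slow motions are smoothed heavily to suppress jitter; fast motions are
# passed through nearly unfiltered.

class VelocityAdaptiveFilter:
    def __init__(self, slow_alpha=0.1, fast_alpha=0.9, v_low=0.05, v_high=0.5):
        self.slow_alpha = slow_alpha    # smoothing factor at/below v_low
        self.fast_alpha = fast_alpha    # smoothing factor at/above v_high
        self.v_low = v_low              # m/s
        self.v_high = v_high            # m/s
        self.state = None

    def update(self, position, speed):
        """position is an (x, y, z) tuple; speed is in m/s. Returns the filtered position."""
        t = min(max((speed - self.v_low) / (self.v_high - self.v_low), 0.0), 1.0)
        alpha = self.slow_alpha + t * (self.fast_alpha - self.slow_alpha)
        if self.state is None:
            self.state = tuple(position)
        else:
            self.state = tuple(s + alpha * (p - s) for s, p in zip(self.state, position))
        return self.state
```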
An engagement-target module 540 can receive tracking data about the control object from the image-processing and tracking module 536 and/or the filter module 538, and use that data to compute a representation of the virtual control construct, i.e., to define and/or update the position and orientation of the virtual control construct relative to the control object (and/or the screen); the representation can be stored in memory in any suitable mathematical form. A touch-detection module 542 in communication with the engagement-target module 540 can determine, for each frame, whether the control object touches or pierces the virtual control construct. A cursor module 544 can, based on tracking data from the image-processing and tracking module 536, determine a cursor location on the screen (e.g., as the projection of the control object tip onto the screen). The cursor module 544 can also include a visualization component that depicts a cursor at the computed location, preferably in a way that discriminates, based on output from the touch-detection module 542, between the engaged and disengaged mode (e.g., by using different colors). The visualization component of the cursor module 544 can also modify the cursor appearance based on the control object distance from the virtual control construct; for instance, the cursor can take the form of a circle having a radius proportional to the distance between the control object tip and the virtual control construct. A user-interface control module 546 can map detected motions in the engaged mode into control input for the applications 534 running on the computer 506. Collectively, the end-user application 534, user-interface control module 546, and cursor module 544 can compute the screen content, i.e., an image for display on the screen 526, which can be stored in a display buffer (e.g., in memory 522 or in the buffer of a GPU included in the system).
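By way of illustration, the cursor computations could resemble the following sketch (Python; the screen geometry and scale factors are hypothetical), which projects the control object tip onto the screen plane and sizes the cursor by its distance from the virtual control construct:

```python
# Hypothetical sketch: project the control object tip onto the screen plane to
# obtain a cursor position, and size the cursor by the tip's distance from the
# virtual control construct. Screen geometry and scale factors are assumed.

def cursor_position(tip, screen_width_px, screen_height_px,
                    screen_width_m=0.60, screen_height_m=0.34):
    """Orthographic projection of the (x, y, z) tip onto the screen plane;
    (0, 0) in tip coordinates maps to the screen center."""
    x, y, _ = tip
    px = (x / screen_width_m + 0.5) * screen_width_px
    py = (0.5 - y / screen_height_m) * screen_height_px
    return int(px), int(py)

def cursor_radius(distance_to_construct, px_per_meter=400, min_radius=4):
    """Radius shrinks as the tip approaches the construct."""
    return max(int(distance_to_construct * px_per_meter), min_radius)
```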
The functionality of the different modules can, of course, be grouped and organized in many different ways, as a person of skill in the art would readily understand. Further, the functionality need not necessarily be implemented on a single computer, but can be distributed among multiple computers. For example, the image-processing and tracking functionality of module 536 can be provided by a separate computer in communication with the computer on which the end-user applications controlled via free-space control object motions are executed. In one exemplary implementation, the cameras 500, 502, light sources 512, and computational facility for image-processing and tracking are integrated into a single motion-capture device (which, typically, utilizes an application-specific integrated circuit (ASIC) or other special-purpose computer for image-processing). In another exemplary implementation, the camera images are sent from a client terminal over a network to a remote server computer for processing, and the tracked control object positions and orientations are sent back to the client terminal as input into the user interface. Implementations can be realized using any number and arrangement of computers (broadly understood to include any kind of general-purpose or special-purpose processing device, including, e.g., microcontrollers, ASICs, programmable gate arrays (PGAs), or digital signal processors (DSPs) and associated peripherals) executing the methods described herein, and any implementation of the various functional modules in hardware, software, or a combination thereof.
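In the client-server arrangement just described, a client could, for example, exchange data with a remote tracking service along the lines of the following sketch (Python; the endpoint URL and payload format are hypothetical, not a defined protocol):

```python
# Hypothetical sketch: a client posts one encoded camera frame to a remote
# tracking service and receives tracked tip positions/orientations in return.
# The endpoint URL and payload format are assumptions, not a defined protocol.

import json
import urllib.request

def track_remotely(frame_bytes, url="http://tracking.example/track"):
    req = urllib.request.Request(
        url, data=frame_bytes,
        headers={"Content-Type": "application/octet-stream"})
    with urllib.request.urlopen(req) as resp:
        # e.g. {"tip": [x, y, z], "orientation": [qx, qy, qz, qw]}
        return json.loads(resp.read())
```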
Computer programs incorporating various features or functionality described herein can be encoded on various computer readable storage media; suitable media include magnetic disk or tape, optical storage media such as compact disk (CD) or DVD (digital versatile disk), flash memory, and any other non-transitory medium capable of holding data in a computer-readable form. Computer-readable storage media encoded with the program code can be packaged with a compatible device or provided separately from other devices. In addition, program code can be encoded and transmitted via wired, optical, and/or wireless networks conforming to a variety of protocols, including the Internet, thereby allowing distribution, e.g., via Internet download, and/or provision on demand as web services.
The systems and methods described herein can find application in a variety of computer-user-interface contexts, and can replace mouse operation or other traditional means of user input as well as provide new user-input modalities. Free-space control object motions and virtual-touch recognition can be used, for example, to provide input to commercial and industrial legacy applications (such as, e.g., business applications, including Microsoft Outlook™; office software, including Microsoft Office™, Windows™, Excel™, etc.; and graphic design programs, including Microsoft Visio™), operating systems such as Microsoft Windows™, web applications (e.g., browsers, such as Internet Explorer™), and other applications (such as, e.g., audio, video, and graphics programs); to navigate virtual worlds (e.g., in video games) or computer representations of the real world (e.g., Google Street View™); or to interact with three-dimensional virtual objects (e.g., Google Earth™).
An example of a compound gesture will be described with reference to an implementation illustrated by
The motion sensing device (e.g., 600a-1, 600a-2 and/or 600a-3) is capable of detecting position as well as motion of hands and/or portions of hands and/or other detectable objects (e.g., a pen, a pencil, a stylus, a paintbrush, an eraser, a virtualized tool, and/or a combination thereof), within a region of space 110a from which it is convenient for a user to interact with system 100a. Region 110a can be situated in front of, nearby, and/or surrounding system 100a. In some implementations, the position and motion sensing device can be integrated directly into display device 604a as integrated device 600a-2 and/or keyboard 106a as integrated device 600a-3. While
Tower 102a and/or position and motion sensing device and/or other elements of system 100a can implement functionality to provide virtual control surface 600a within region 110a with which engagement gestures are sensed and interpreted to facilitate user interactions with system 602a. Accordingly, objects and/or motions occurring relative to virtual control surface 600a within region 110a can be afforded differing interpretations than like (and/or similar) objects and/or motions otherwise occurring.
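A minimal sketch of such differing interpretation (Python; the region bounds and labels are hypothetical) dispatches a motion based on whether it occurs within the sensed region and pierces the virtual control surface:

```python
# Hypothetical sketch: the same motion is interpreted differently depending on
# whether it occurs within the sensed region and pierces the virtual control
# surface. Region bounds and labels are assumed.

def in_region(point, lo=(-0.3, -0.2, 0.1), hi=(0.3, 0.3, 0.6)):
    return all(l <= p <= h for p, l, h in zip(point, lo, hi))

def interpret(motion, tip, pierces_surface):
    if in_region(tip) and pierces_surface:
        return ("engagement-gesture", motion)   # routed to user-interface control
    if in_region(tip):
        return ("tracked-only", motion)         # e.g., cursor follows, no action taken
    return ("ignored", motion)                  # outside the interaction region
```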
As illustrated in
Certain implementations were described above. It is, however, expressly noted that the described implementations are not limiting, nor exhaustive, but rather the intention is that additions and modifications to what was expressly described herein can be provided for in implementations readily apparent to one of ordinary skill having access to the foregoing. Moreover, it is to be understood that the features of the various implementations described herein are not mutually exclusive and can exist in various combinations and permutations, even if such combinations or permutations were not made expressly herein. The implementations described herein have been presented for purposes of illustration and are not intended to be exhaustive or limiting. Many variations and modifications are possible in light of the foregoing teaching. The implementations described herein as well as implementations apparent in view of the foregoing description are limited only by the following claims.
This application is a continuation of U.S. patent application Ser. No. 16/054,891, filed Aug. 3, 2018, entitled “Free-space User Interface and Control Using Virtual Constructs”, which is a continuation of U.S. patent application Ser. No. 15/358,104, filed on Nov. 21, 2016, entitled “Free-space User Interface and Control Using Virtual Constructs”, which is a continuation of U.S. patent application Ser. No. 14/154,730, filed on Jan. 14, 2014, entitled “Free-space User Interface and Control Using Virtual Constructs” which claims priority to and the benefit of, and incorporates herein by reference in their entireties, U.S. Provisional Application Nos. 61/825,515 and 61/825,480, both filed on May 20, 2013; No. 61/873,351, filed on Sep. 3, 2013; No. 61/877,641, filed on Sep. 13, 2013; No. 61/816,487, filed on Apr. 26, 2013; No. 61/824,691, filed on May 17, 2013; Nos. 61/752,725, 61/752,731, and 61/752,733, all filed on Jan. 15, 2013; No. 61/791,204, filed on Mar. 15, 2013; Nos. 61/808,959 and 61/808,984, both filed on Apr. 5, 2013; and No. 61/872,538, filed on Aug. 30, 2013.
Number | Name | Date | Kind |
---|---|---|---|
2665041 | Maffucci | Jan 1954 | A |
4175862 | DiMatteo et al. | Nov 1979 | A |
4876455 | Sanderson et al. | Oct 1989 | A |
4879659 | Bowlin et al. | Nov 1989 | A |
4893223 | Arnold | Jan 1990 | A |
5038258 | Koch et al. | Aug 1991 | A |
5134661 | Reinsch | Jul 1992 | A |
5282067 | Bell | Feb 1994 | A |
5434617 | Bianchi | Jul 1995 | A |
5454043 | Freeman | Sep 1995 | A |
5574511 | Yang et al. | Nov 1996 | A |
5581276 | Cipolla et al. | Dec 1996 | A |
5594469 | Freeman et al. | Jan 1997 | A |
5659475 | Brown | Aug 1997 | A |
5691737 | Ito et al. | Nov 1997 | A |
5742263 | Wang et al. | Apr 1998 | A |
5900863 | Numazaki | May 1999 | A |
5940538 | Spiegel et al. | Aug 1999 | A |
6002808 | Freeman | Dec 1999 | A |
6031161 | Baltenberger | Feb 2000 | A |
6031661 | Tanaami | Feb 2000 | A |
6072494 | Nguyen | Jun 2000 | A |
6075895 | Qiao et al. | Jun 2000 | A |
6147678 | Kumar et al. | Nov 2000 | A |
6154558 | Hsieh | Nov 2000 | A |
6181343 | Lyons | Jan 2001 | B1 |
6184326 | Razavi et al. | Feb 2001 | B1 |
6184926 | Khosravi et al. | Feb 2001 | B1 |
6195104 | Lyons | Feb 2001 | B1 |
6204852 | Kumar et al. | Mar 2001 | B1 |
6252598 | Segen | Jun 2001 | B1 |
6263091 | Jain et al. | Jul 2001 | B1 |
6346933 | Lin | Feb 2002 | B1 |
6417970 | Travers et al. | Jul 2002 | B1 |
6463402 | Bennett et al. | Oct 2002 | B1 |
6492986 | Metaxas et al. | Dec 2002 | B1 |
6493041 | Hanko et al. | Dec 2002 | B1 |
6498628 | Iwamura | Dec 2002 | B2 |
6578203 | Anderson, Jr. et al. | Jun 2003 | B1 |
6603867 | Sugino et al. | Aug 2003 | B1 |
6629065 | Gadh et al. | Sep 2003 | B1 |
6661918 | Gordon et al. | Dec 2003 | B1 |
6674877 | Jojic et al. | Jan 2004 | B1 |
6702494 | Dumler et al. | Mar 2004 | B2 |
6734911 | Lyons | May 2004 | B1 |
6738424 | Allmen et al. | May 2004 | B1 |
6771294 | Pulli et al. | Aug 2004 | B1 |
6798628 | Macbeth | Sep 2004 | B1 |
6804654 | Kobylevsky et al. | Oct 2004 | B2 |
6804656 | Rosenfeld et al. | Oct 2004 | B1 |
6814656 | Rodriguez | Nov 2004 | B2 |
6819796 | Hong et al. | Nov 2004 | B2 |
6901170 | Terada et al. | May 2005 | B1 |
6919880 | Morrison et al. | Jul 2005 | B2 |
6950534 | Cohen et al. | Sep 2005 | B2 |
6993157 | Oue et al. | Jan 2006 | B1 |
7152024 | Marschner et al. | Dec 2006 | B2 |
7213707 | Hubbs et al. | May 2007 | B2 |
7215828 | Luo | May 2007 | B2 |
7244233 | Krantz et al. | Jul 2007 | B2 |
7257237 | Luck et al. | Aug 2007 | B1 |
7259873 | Sikora et al. | Aug 2007 | B2 |
7308112 | Fujimura et al. | Dec 2007 | B2 |
7340077 | Gokturk et al. | Mar 2008 | B2 |
7483049 | Aman et al. | Jan 2009 | B2 |
7519223 | Dehlin et al. | Apr 2009 | B2 |
7532206 | Morrison et al. | May 2009 | B2 |
7536032 | Bell | May 2009 | B2 |
7542586 | Johnson | Jun 2009 | B2 |
7598942 | Underkoffler et al. | Oct 2009 | B2 |
7606417 | Steinberg et al. | Oct 2009 | B2 |
7646372 | Marks et al. | Jan 2010 | B2 |
7656372 | Sato et al. | Feb 2010 | B2 |
7665041 | Wilson et al. | Feb 2010 | B2 |
7692625 | Morrison et al. | Apr 2010 | B2 |
7831932 | Josephsoon et al. | Nov 2010 | B2 |
7840031 | Albertson et al. | Nov 2010 | B2 |
7861188 | Josephsoon et al. | Dec 2010 | B2 |
7940885 | Stanton et al. | May 2011 | B2 |
7948493 | Klefenz et al. | May 2011 | B2 |
7961174 | Markovic et al. | Jun 2011 | B1 |
7961934 | Thrun et al. | Jun 2011 | B2 |
7971156 | Albertson et al. | Jun 2011 | B2 |
7980885 | Gattwinkel et al. | Jul 2011 | B2 |
8023698 | Niwa et al. | Sep 2011 | B2 |
8035624 | Bell et al. | Oct 2011 | B2 |
8045825 | Shimoyama et al. | Oct 2011 | B2 |
8064704 | Kim et al. | Nov 2011 | B2 |
8085339 | Marks | Dec 2011 | B2 |
8086971 | Radivojevic et al. | Dec 2011 | B2 |
8111239 | Pryor et al. | Feb 2012 | B2 |
8112719 | Hsu et al. | Feb 2012 | B2 |
8144233 | Fukuyama | Mar 2012 | B2 |
8185176 | Mangat et al. | May 2012 | B2 |
8213707 | Li et al. | Jul 2012 | B2 |
8218858 | Gu | Jul 2012 | B2 |
8229134 | Duraiswami et al. | Jul 2012 | B2 |
8235529 | Raffle et al. | Aug 2012 | B1 |
8244233 | Chang et al. | Aug 2012 | B2 |
8249345 | Wu et al. | Aug 2012 | B2 |
8270669 | Aichi et al. | Sep 2012 | B2 |
8289162 | Mooring et al. | Oct 2012 | B2 |
8290208 | Kurtz et al. | Oct 2012 | B2 |
8304727 | Lee et al. | Nov 2012 | B2 |
8319832 | Nagata et al. | Nov 2012 | B2 |
8363010 | Nagata | Jan 2013 | B2 |
8395600 | Kawashima et al. | Mar 2013 | B2 |
8432377 | Newton | Apr 2013 | B2 |
8471848 | Tschesnok | Jun 2013 | B2 |
8514221 | King et al. | Aug 2013 | B2 |
8553037 | Smith et al. | Oct 2013 | B2 |
8582809 | Halimeh et al. | Nov 2013 | B2 |
8593417 | Kawashima et al. | Nov 2013 | B2 |
8605202 | Muijs et al. | Dec 2013 | B2 |
8631355 | Murillo et al. | Jan 2014 | B2 |
8638989 | Holz | Jan 2014 | B2 |
8659594 | Kim et al. | Feb 2014 | B2 |
8659658 | Vassigh et al. | Feb 2014 | B2 |
8693731 | Holz et al. | Apr 2014 | B2 |
8738523 | Sanchez et al. | May 2014 | B1 |
8744122 | Salgian et al. | Jun 2014 | B2 |
8768022 | Miga et al. | Jul 2014 | B2 |
8817087 | Weng et al. | Aug 2014 | B2 |
8842084 | Andersson et al. | Sep 2014 | B2 |
8843857 | Berkes et al. | Sep 2014 | B2 |
8854433 | Rafii | Oct 2014 | B1 |
8872914 | Gobush | Oct 2014 | B2 |
8878749 | Wu et al. | Nov 2014 | B1 |
8891868 | Ivanchenko | Nov 2014 | B1 |
8907982 | Zontrop et al. | Dec 2014 | B2 |
8922590 | Luckett, Jr. et al. | Dec 2014 | B1 |
8929609 | Padovani et al. | Jan 2015 | B2 |
8930852 | Chen et al. | Jan 2015 | B2 |
8942881 | Hobbs et al. | Jan 2015 | B2 |
8954340 | Sanchez et al. | Feb 2015 | B2 |
8957857 | Lee et al. | Feb 2015 | B2 |
9014414 | Katano et al. | Apr 2015 | B2 |
9056396 | Linnell | Jun 2015 | B1 |
9070019 | Holz | Jun 2015 | B2 |
9119670 | Yang et al. | Sep 2015 | B2 |
9122354 | Sharma | Sep 2015 | B2 |
9124778 | Crabtree | Sep 2015 | B1 |
9182812 | Ybanez Zepeda | Nov 2015 | B2 |
9182838 | Kikkeri | Nov 2015 | B2 |
9342160 | Bailey | May 2016 | B2 |
9389779 | Anderson et al. | Jul 2016 | B2 |
9459697 | Bedikian et al. | Oct 2016 | B2 |
9501152 | Bedikian et al. | Nov 2016 | B2 |
10281987 | Yang et al. | May 2019 | B1 |
20010044858 | Rekimoto | Nov 2001 | A1 |
20010052985 | Ono | Dec 2001 | A1 |
20020008139 | Albertelli | Jan 2002 | A1 |
20020008211 | Kask | Jan 2002 | A1 |
20020021287 | Tomasi et al. | Feb 2002 | A1 |
20020041327 | Hildreth et al. | Apr 2002 | A1 |
20020080094 | Biocca et al. | Jun 2002 | A1 |
20020105484 | Navab et al. | Aug 2002 | A1 |
20030053658 | Pavlidis | Mar 2003 | A1 |
20030053659 | Pavlidis et al. | Mar 2003 | A1 |
20030081141 | Mazzapica | May 2003 | A1 |
20030123703 | Pavlidis et al. | Jul 2003 | A1 |
20030152289 | Luo | Aug 2003 | A1 |
20030202697 | Simard et al. | Oct 2003 | A1 |
20040103111 | Miller et al. | May 2004 | A1 |
20040125228 | Dougherty | Jul 2004 | A1 |
20040125984 | Ito et al. | Jul 2004 | A1 |
20040145809 | Brenner | Jul 2004 | A1 |
20040155877 | Hong et al. | Aug 2004 | A1 |
20040212725 | Raskar | Oct 2004 | A1 |
20050007673 | Chaoulov et al. | Jan 2005 | A1 |
20050068518 | Baney et al. | Mar 2005 | A1 |
20050094019 | Grosvenor et al. | May 2005 | A1 |
20050131607 | Breed | Jun 2005 | A1 |
20050156888 | Xie et al. | Jul 2005 | A1 |
20050168578 | Gobush | Aug 2005 | A1 |
20050236558 | Nabeshima et al. | Oct 2005 | A1 |
20050238201 | Shamaie | Oct 2005 | A1 |
20060017807 | Lee et al. | Jan 2006 | A1 |
20060028656 | Venkatesh et al. | Feb 2006 | A1 |
20060029296 | King et al. | Feb 2006 | A1 |
20060034545 | Mattes et al. | Feb 2006 | A1 |
20060050979 | Kawahara | Mar 2006 | A1 |
20060072105 | Wagner | Apr 2006 | A1 |
20060098899 | King et al. | May 2006 | A1 |
20060204040 | Freeman et al. | Sep 2006 | A1 |
20060210112 | Cohen et al. | Sep 2006 | A1 |
20060262421 | Matsumoto et al. | Nov 2006 | A1 |
20060290950 | Platt et al. | Dec 2006 | A1 |
20070014466 | Baldwin | Jan 2007 | A1 |
20070042346 | Weller | Feb 2007 | A1 |
20070086621 | Aggarwal et al. | Apr 2007 | A1 |
20070130547 | Boillot | Jun 2007 | A1 |
20070206719 | Suryanarayanan et al. | Sep 2007 | A1 |
20070211023 | Boillot | Sep 2007 | A1 |
20070230929 | Niwa et al. | Oct 2007 | A1 |
20070238956 | Haras et al. | Oct 2007 | A1 |
20080013826 | Hillis et al. | Jan 2008 | A1 |
20080019576 | Senftner et al. | Jan 2008 | A1 |
20080030429 | Hailpern et al. | Feb 2008 | A1 |
20080031492 | Lanz | Feb 2008 | A1 |
20080056752 | Denton et al. | Mar 2008 | A1 |
20080064954 | Adams et al. | Mar 2008 | A1 |
20080106637 | Nakao et al. | May 2008 | A1 |
20080106746 | Shpunt et al. | May 2008 | A1 |
20080110994 | Knowles et al. | May 2008 | A1 |
20080111710 | Boillot | May 2008 | A1 |
20080118091 | Serfaty et al. | May 2008 | A1 |
20080126937 | Pachet | May 2008 | A1 |
20080187175 | Kim et al. | Aug 2008 | A1 |
20080244468 | Nishihara et al. | Oct 2008 | A1 |
20080246759 | Summers | Oct 2008 | A1 |
20080273764 | Scholl | Nov 2008 | A1 |
20080278589 | Thorn | Nov 2008 | A1 |
20080291160 | Rabin | Nov 2008 | A1 |
20080304740 | Sun et al. | Dec 2008 | A1 |
20080319356 | Cain et al. | Dec 2008 | A1 |
20090002489 | Yang et al. | Jan 2009 | A1 |
20090093307 | Miyaki | Apr 2009 | A1 |
20090102840 | Li | Apr 2009 | A1 |
20090103780 | Nishihara et al. | Apr 2009 | A1 |
20090116742 | Nishihara | May 2009 | A1 |
20090122146 | Zalewski et al. | May 2009 | A1 |
20090128564 | Okuno | May 2009 | A1 |
20090153655 | Ike et al. | Jun 2009 | A1 |
20090203993 | Mangat et al. | Aug 2009 | A1 |
20090203994 | Mangat et al. | Aug 2009 | A1 |
20090217211 | Hildreth | Aug 2009 | A1 |
20090257623 | Tang et al. | Oct 2009 | A1 |
20090274339 | Cohen et al. | Nov 2009 | A9 |
20090309710 | Kakinami | Dec 2009 | A1 |
20100001998 | Mandella et al. | Jan 2010 | A1 |
20100013662 | Stude | Jan 2010 | A1 |
20100013832 | Xiao et al. | Jan 2010 | A1 |
20100020078 | Shpunt | Jan 2010 | A1 |
20100023015 | Park | Jan 2010 | A1 |
20100026963 | Faulstich | Feb 2010 | A1 |
20100027845 | Kim et al. | Feb 2010 | A1 |
20100046842 | Conwell | Feb 2010 | A1 |
20100053164 | Imai et al. | Mar 2010 | A1 |
20100053209 | Rauch et al. | Mar 2010 | A1 |
20100053612 | Ou-Yang et al. | Mar 2010 | A1 |
20100058252 | Ko | Mar 2010 | A1 |
20100066676 | Kramer et al. | Mar 2010 | A1 |
20100066737 | Liu | Mar 2010 | A1 |
20100066975 | Rehnstrom | Mar 2010 | A1 |
20100091110 | Hildreth | Apr 2010 | A1 |
20100095206 | Kim | Apr 2010 | A1 |
20100118123 | Freedman et al. | May 2010 | A1 |
20100121189 | Ma et al. | May 2010 | A1 |
20100125815 | Wang et al. | May 2010 | A1 |
20100127995 | Rigazio et al. | May 2010 | A1 |
20100141762 | Siann et al. | Jun 2010 | A1 |
20100158372 | Kim et al. | Jun 2010 | A1 |
20100162165 | Addala et al. | Jun 2010 | A1 |
20100177929 | Kurtz et al. | Jul 2010 | A1 |
20100194863 | Lopes et al. | Aug 2010 | A1 |
20100199221 | Yeung et al. | Aug 2010 | A1 |
20100199230 | Latta et al. | Aug 2010 | A1 |
20100199232 | Mistry et al. | Aug 2010 | A1 |
20100201880 | Iwamura | Aug 2010 | A1 |
20100208942 | Porter et al. | Aug 2010 | A1 |
20100219934 | Matsumoto | Sep 2010 | A1 |
20100222102 | Rodriguez | Sep 2010 | A1 |
20100264833 | Van Endert et al. | Oct 2010 | A1 |
20100275159 | Matsubara et al. | Oct 2010 | A1 |
20100277411 | Yee et al. | Nov 2010 | A1 |
20100296698 | Lien et al. | Nov 2010 | A1 |
20100302015 | Kipman et al. | Dec 2010 | A1 |
20100302357 | Hsu et al. | Dec 2010 | A1 |
20100303298 | Marks et al. | Dec 2010 | A1 |
20100306712 | Snook et al. | Dec 2010 | A1 |
20100309097 | Raviv et al. | Dec 2010 | A1 |
20100321377 | Gay et al. | Dec 2010 | A1 |
20110007072 | Khan et al. | Jan 2011 | A1 |
20110025818 | Gallmeier et al. | Feb 2011 | A1 |
20110026765 | Ivanich et al. | Feb 2011 | A1 |
20110043806 | Guetta et al. | Feb 2011 | A1 |
20110057875 | Shigeta et al. | Mar 2011 | A1 |
20110066984 | Li | Mar 2011 | A1 |
20110080337 | Matsubara et al. | Apr 2011 | A1 |
20110080470 | Kuno et al. | Apr 2011 | A1 |
20110080490 | Clarkson et al. | Apr 2011 | A1 |
20110093820 | Zhang et al. | Apr 2011 | A1 |
20110107216 | Bi | May 2011 | A1 |
20110115486 | Frohlich et al. | May 2011 | A1 |
20110116684 | Coffman et al. | May 2011 | A1 |
20110119640 | Berkes et al. | May 2011 | A1 |
20110134112 | Koh et al. | Jun 2011 | A1 |
20110148875 | Kim et al. | Jun 2011 | A1 |
20110169726 | Holmdahl et al. | Jul 2011 | A1 |
20110173574 | Clavin et al. | Jul 2011 | A1 |
20110176146 | Alvarez Diez et al. | Jul 2011 | A1 |
20110181509 | Rautiainen et al. | Jul 2011 | A1 |
20110193778 | Lee et al. | Aug 2011 | A1 |
20110205151 | Newton et al. | Aug 2011 | A1 |
20110213664 | Osterhout et al. | Sep 2011 | A1 |
20110228978 | Chen et al. | Sep 2011 | A1 |
20110234840 | Klefenz et al. | Sep 2011 | A1 |
20110243451 | Oyaizu | Oct 2011 | A1 |
20110251896 | Impollonia et al. | Oct 2011 | A1 |
20110261178 | Lo et al. | Oct 2011 | A1 |
20110267259 | Tidemand et al. | Nov 2011 | A1 |
20110279397 | Rimon et al. | Nov 2011 | A1 |
20110286676 | El Dokor | Nov 2011 | A1 |
20110289455 | Reville et al. | Nov 2011 | A1 |
20110289456 | Reville et al. | Nov 2011 | A1 |
20110291925 | Israel et al. | Dec 2011 | A1 |
20110291988 | Bamji et al. | Dec 2011 | A1 |
20110296353 | Ahmed et al. | Dec 2011 | A1 |
20110299737 | Wang et al. | Dec 2011 | A1 |
20110304600 | Yoshida | Dec 2011 | A1 |
20110304650 | Campillo et al. | Dec 2011 | A1 |
20110310007 | Margolis et al. | Dec 2011 | A1 |
20110310220 | McEldowney | Dec 2011 | A1 |
20110314427 | Sundararajan | Dec 2011 | A1 |
20110317871 | Tossell | Dec 2011 | A1 |
20120038637 | Marks | Feb 2012 | A1 |
20120050157 | Latta et al. | Mar 2012 | A1 |
20120065499 | Chono | Mar 2012 | A1 |
20120068914 | Jacobsen et al. | Mar 2012 | A1 |
20120113223 | Hilliges et al. | May 2012 | A1 |
20120113316 | Ueta et al. | May 2012 | A1 |
20120159380 | Kocienda et al. | Jun 2012 | A1 |
20120163675 | Joo et al. | Jun 2012 | A1 |
20120194517 | Izadi et al. | Aug 2012 | A1 |
20120204133 | Guendelman | Aug 2012 | A1 |
20120218263 | Meier et al. | Aug 2012 | A1 |
20120223959 | Lengeling | Sep 2012 | A1 |
20120236288 | Stanley | Sep 2012 | A1 |
20120250936 | Holmgren | Oct 2012 | A1 |
20120270654 | Padovani et al. | Oct 2012 | A1 |
20120274781 | Shet et al. | Nov 2012 | A1 |
20120281873 | Brown et al. | Nov 2012 | A1 |
20120293667 | Baba et al. | Nov 2012 | A1 |
20120314030 | Datta et al. | Dec 2012 | A1 |
20120320080 | Giese et al. | Dec 2012 | A1 |
20130019204 | Kotler et al. | Jan 2013 | A1 |
20130033483 | Im et al. | Feb 2013 | A1 |
20130038694 | Nichani et al. | Feb 2013 | A1 |
20130044951 | Cherng et al. | Feb 2013 | A1 |
20130050425 | Im et al. | Feb 2013 | A1 |
20130086531 | Sugita et al. | Apr 2013 | A1 |
20130097566 | Berglund | Apr 2013 | A1 |
20130120319 | Givon | May 2013 | A1 |
20130148852 | Partis et al. | Jun 2013 | A1 |
20130181897 | Izumi | Jul 2013 | A1 |
20130182079 | Holz | Jul 2013 | A1 |
20130182897 | Holz | Jul 2013 | A1 |
20130187952 | Berkovich et al. | Jul 2013 | A1 |
20130191911 | Dellinger et al. | Jul 2013 | A1 |
20130194173 | Zhu et al. | Aug 2013 | A1 |
20130208948 | Berkovich et al. | Aug 2013 | A1 |
20130222233 | Park et al. | Aug 2013 | A1 |
20130222640 | Baek et al. | Aug 2013 | A1 |
20130239059 | Chen et al. | Sep 2013 | A1 |
20130241832 | Rimon et al. | Sep 2013 | A1 |
20130252691 | Alexopoulos | Sep 2013 | A1 |
20130257736 | Hou et al. | Oct 2013 | A1 |
20130258140 | Lipson et al. | Oct 2013 | A1 |
20130271397 | MacDougall et al. | Oct 2013 | A1 |
20130283213 | Guendelman et al. | Oct 2013 | A1 |
20130300831 | Mavromatis et al. | Nov 2013 | A1 |
20130307935 | Rappel et al. | Nov 2013 | A1 |
20130321265 | Bychkov et al. | Dec 2013 | A1 |
20140002365 | Ackley et al. | Jan 2014 | A1 |
20140010441 | Shamaie | Jan 2014 | A1 |
20140015831 | Kim et al. | Jan 2014 | A1 |
20140055385 | Duheille | Feb 2014 | A1 |
20140055396 | Aubauer et al. | Feb 2014 | A1 |
20140063055 | Osterhout et al. | Mar 2014 | A1 |
20140063060 | Maciocci et al. | Mar 2014 | A1 |
20140064566 | Shreve et al. | Mar 2014 | A1 |
20140081521 | Frojdh et al. | Mar 2014 | A1 |
20140085203 | Kobayashi | Mar 2014 | A1 |
20140095119 | Lee et al. | Apr 2014 | A1 |
20140098018 | Kim et al. | Apr 2014 | A1 |
20140125775 | Holz | May 2014 | A1 |
20140125813 | Holz | May 2014 | A1 |
20140132738 | Ogura et al. | May 2014 | A1 |
20140134733 | Wu et al. | May 2014 | A1 |
20140139425 | Sakai | May 2014 | A1 |
20140139641 | Holz | May 2014 | A1 |
20140157135 | Lee et al. | Jun 2014 | A1 |
20140161311 | Kim | Jun 2014 | A1 |
20140168062 | Katz et al. | Jun 2014 | A1 |
20140176420 | Zhou et al. | Jun 2014 | A1 |
20140177913 | Holz | Jun 2014 | A1 |
20140189579 | Rimon et al. | Jul 2014 | A1 |
20140192024 | Holz | Jul 2014 | A1 |
20140201666 | Bedikian et al. | Jul 2014 | A1 |
20140201689 | Bedikian et al. | Jul 2014 | A1 |
20140222385 | Muenster et al. | Aug 2014 | A1 |
20140223385 | Ton et al. | Aug 2014 | A1 |
20140225826 | Juni | Aug 2014 | A1 |
20140225918 | Mittal et al. | Aug 2014 | A1 |
20140240215 | Tremblay et al. | Aug 2014 | A1 |
20140240225 | Eilat | Aug 2014 | A1 |
20140248950 | Tosas Bautista | Sep 2014 | A1 |
20140249961 | Zagel et al. | Sep 2014 | A1 |
20140253512 | Narikawa et al. | Sep 2014 | A1 |
20140253785 | Chan et al. | Sep 2014 | A1 |
20140267098 | Na et al. | Sep 2014 | A1 |
20140282282 | Holz | Sep 2014 | A1 |
20140307920 | Holz | Oct 2014 | A1 |
20140320408 | Zagorsek et al. | Oct 2014 | A1 |
20140344762 | Grasset et al. | Nov 2014 | A1 |
20140364209 | Perry | Dec 2014 | A1 |
20140364212 | Osman et al. | Dec 2014 | A1 |
20140369558 | Holz | Dec 2014 | A1 |
20140375547 | Katz et al. | Dec 2014 | A1 |
20150003673 | Fletcher | Jan 2015 | A1 |
20150009149 | Gharib et al. | Jan 2015 | A1 |
20150016777 | Abovitz et al. | Jan 2015 | A1 |
20150022447 | Hare et al. | Jan 2015 | A1 |
20150029091 | Nakashima et al. | Jan 2015 | A1 |
20150040040 | Balan et al. | Feb 2015 | A1 |
20150054729 | Minnen et al. | Feb 2015 | A1 |
20150084864 | Geiss et al. | Mar 2015 | A1 |
20150097772 | Starner | Apr 2015 | A1 |
20150103004 | Cohen et al. | Apr 2015 | A1 |
20150115802 | Kuti et al. | Apr 2015 | A1 |
20150116214 | Grunnet-Jepsen et al. | Apr 2015 | A1 |
20150131859 | Kim et al. | May 2015 | A1 |
20150172539 | Neglur | Jun 2015 | A1 |
20150193669 | Gu et al. | Jul 2015 | A1 |
20150205358 | Lyren | Jul 2015 | A1 |
20150205400 | Hwang et al. | Jul 2015 | A1 |
20150206321 | Scavezze et al. | Jul 2015 | A1 |
20150227795 | Starner et al. | Aug 2015 | A1 |
20150234569 | Hess | Aug 2015 | A1 |
20150253428 | Holz | Sep 2015 | A1 |
20150258432 | Stafford et al. | Sep 2015 | A1 |
20150261291 | Mikhailov et al. | Sep 2015 | A1 |
20150293597 | Mishra et al. | Oct 2015 | A1 |
20150304593 | Sakai | Oct 2015 | A1 |
20150309629 | Amariutei et al. | Oct 2015 | A1 |
20150323785 | Fukata et al. | Nov 2015 | A1 |
20150363070 | Katz | Dec 2015 | A1 |
20160062573 | Dascola et al. | Mar 2016 | A1 |
20160086046 | Holz et al. | Mar 2016 | A1 |
20160093105 | Rimon et al. | Mar 2016 | A1 |
20170102791 | Hosenpud et al. | Apr 2017 | A1 |
Number | Date | Country |
---|---|---|
1984236 | Jun 2007 | CN |
201332447 | Oct 2009 | CN |
101729808 | Jun 2010 | CN |
101930610 | Dec 2010 | CN |
101951474 | Jan 2011 | CN |
102053702 | May 2011 | CN |
201859393 | Jun 2011 | CN |
102201121 | Sep 2011 | CN |
102236412 | Nov 2011 | CN |
4201934 | Jul 1993 | DE |
10326035 | Jan 2005 | DE |
102007015495 | Oct 2007 | DE |
102007015497 | Jan 2014 | DE |
0999542 | May 2000 | EP |
1477924 | Nov 2004 | EP |
1837665 | Sep 2007 | EP |
2369443 | Sep 2011 | EP |
2378488 | Oct 2011 | EP |
2419433 | Apr 2006 | GB |
2480140 | Nov 2011 | GB |
2519418 | Apr 2015 | GB |
H02236407 | Sep 1990 | JP |
H08261721 | Oct 1996 | JP |
H09259278 | Oct 1997 | JP |
2000023038 | Jan 2000 | JP |
2002133400 | May 2002 | JP |
2003256814 | Sep 2003 | JP |
2004246252 | Sep 2004 | JP |
2006019526 | Jan 2006 | JP |
2006259829 | Sep 2006 | JP |
2007272596 | Oct 2007 | JP |
2008227569 | Sep 2008 | JP |
2009031939 | Feb 2009 | JP |
2009037594 | Feb 2009 | JP |
2010060548 | Mar 2010 | JP |
2011010258 | Jan 2011 | JP |
2011065652 | Mar 2011 | JP |
2011107681 | Jun 2011 | JP |
4906960 | Mar 2012 | JP |
2012527145 | Nov 2012 | JP |
101092909 | Jun 2011 | KR |
2422878 | Jun 2011 | RU |
200844871 | Nov 2008 | TW |
9426057 | Nov 1994 | WO |
2004114220 | Dec 2004 | WO |
2006020846 | Feb 2006 | WO |
2007137093 | Nov 2007 | WO |
2010007662 | Jan 2010 | WO |
2010032268 | Mar 2010 | WO |
2010076622 | Jul 2010 | WO |
2010088035 | Aug 2010 | WO |
2010138741 | Dec 2010 | WO |
2010148155 | Dec 2010 | WO |
2011024193 | Mar 2011 | WO |
2011036618 | Mar 2011 | WO |
2011044680 | Apr 2011 | WO |
2011045789 | Apr 2011 | WO |
2011119154 | Sep 2011 | WO |
2012027422 | Mar 2012 | WO |
2013109608 | Jul 2013 | WO |
2013109609 | Jul 2013 | WO |
2014208087 | Dec 2014 | WO |
2015026707 | Feb 2015 | WO |
Entry |
---|
U.S. Appl. No. 14/155,722—Notice of Allowance dated May 27, 2016, 10 pages. |
U.S. Appl. No. 14/626,820—Office Action dated Jan. 22, 2016, 13 pages. |
U.S. Appl. No. 14/626,820—Response to Office Action dated Jan. 22, 2016 filed May 21, 2016, 12 pages. |
U.S. Appl. No. 14/626,820—Final Office Action dated Sep. 8, 2016, 21 pages. |
U.S. Appl. No. 14/997,454—Office Action dated Dec. 1, 2016, 13 pages. |
U.S. Appl. No. 14/626,683—Office Action dated Jan. 20, 2016, 15 pages. |
U.S. Appl. No. 14/626,683—Final Office Action dated Sep. 12, 2016, 21 pages. |
U.S. Appl. No. 14/626,683—Response to Office Action dated Jan. 20, 2016 filed May 20, 2016, 15 pages. |
U.S. Appl. No. 14/626,898—Office Action dated Sep. 8, 2016, 29 pages. |
U.S. Appl. No. 14/626,898—Response to Office Action dated Sep. 8, 2016 filed Dec. 8, 2016, 21 pages. |
PCT/US2016/017632—Written Opinion of the International Searching Authority dated Jul. 27, 2016, 10 pages. |
U.S. Appl. No. 14/626,904—Office Action dated Jan. 25, 2017, 23 pages. |
U.S. Appl. No. 14/626,820—Response to Final Office Action dated Sep. 8, 2016, filed Jan. 9, 2017, 15 pages. |
U.S. Appl. No. 14/626,820—Nonfinal Office Action dated Mar. 24, 2017, 25 pages. |
U.S. Appl. No. 14/626,820—Advisory Action dated Jan. 26, 2017, 4 pages. |
PCT/US2016/017632—International Search Report and Written Opinion dated Jul. 27, 2016, 13 pages. |
U.S. Appl. No. 14/626,898—Notice of Allowance dated Feb. 15, 2017, 13 pages. |
PCT/US2016/017632—International Preliminary Report on Patentability dated Aug. 24, 2017, 12 pages. |
U.S. Appl. No. 15/358,104—Response to Office Action dated Nov. 2, 2017, filed Mar. 2, 2018, 9 pages. |
U.S. Appl. No. 15/358,104—Notice of Allowance dated Apr. 11, 2018, 41 pages. |
U.S. Appl. No. 14/476,694—Office Action dated Apr. 7, 2017, 32 pages. |
U.S. Appl. No. 14/155,722—Response to Office Action dated Nov. 20, 2015, filed Feb. 2, 2016, 15 pages. |
U.S. Appl. No. 15/279,363—Office Action dated Jan. 25, 2018, 29 pages. |
U.S. Appl. No. 15/279,363—Response to Office Action dated Jan. 25, 2018, filed May 24, 2018, 11 pages. |
U.S. Appl. No. 15/279,363—Notice of Allowance dated Jul. 10, 2018, 10 pages. |
U.S. Appl. No. 14/476,694—Final Office Action dated Apr. 7, 2017, 32 pages. |
U.S. Appl. No. 14/476,694—Response to Final Office Action dated Apr. 7, 2017 filed Jul. 6, 2017, 22 pages. |
U.S. Appl. No. 14/262,691—Final Office Action dated Aug. 19, 2016, 36 pages. |
U.S. Appl. No. 14/262,691—Response to Final Office Action dated Aug. 19, 2016, filed Nov. 21, 2016, 13 pages. |
U.S. Appl. No. 14/476,694—Final Office Action dated Feb. 26, 2018, 53 pages. |
U.S. Appl. No. 14/476,694—Office Action dated Jul. 30, 2018, 68 pages. |
U.S. Appl. No. 14/476,694—Response to Final Office Action dated Feb. 26, 2018 filed Jun. 19, 2018, 16 pages. |
U.S. Appl. No. 14/476,694—Response to Office Action dated Jul. 30, 2018 filed Sep. 9, 2018, 19 pages. |
U.S. Appl. No. 15/917,066—Office Action dated Nov. 1, 2018, 31 pages. |
U.S. Appl. No. 14/262,691—Supplemental Response to Office Action dated Jan. 31, 2017, filed Jul. 20, 2018, 22 pages. |
U.S. Appl. No. 16/054,891—Office Action dated Oct. 24, 2019, 26 pages. |
U.S. Appl. No. 16/054,891—Response to Office Action dated Oct. 24, 2019, filed Feb. 24, 2020, 15 pages. |
U.S. Appl. No. 16/054,891—Notice of Allowance dated Apr. 1, 2020, 6 pages. |
U.S. Appl. No. 15/917,066—Response to Office Action dated Nov. 1, 2018, filed Mar. 1, 2019, 12 pages. |
U.S. Appl. No. 15/917,066—Office Action dated Mar. 19, 2019, 71 pages. |
U.S. Appl. No. 15/917,066—Response to Office Action dated Mar. 19, 2019, filed May 23, 2019, 12 pages. |
U.S. Appl. No. 15/917,066—Notice of Allowance dated Jun. 14, 2019, 5 pages. |
U.S. Appl. No. 16/659,468—Office Action dated Jun. 19, 2020, 111 pages. |
U.S. Appl. No. 15/917,066—Nonfinal Office Action dated Nov. 1, 2018, 31 pages. |
U.S. Appl. No. 16/659,468—Non-Final Office Action dated Jun. 19, 2020, 111 pages. |
U.S. Appl. No. 16/659,468—Response to Office Action dated Jun. 19, 2020 filed Sep. 18, 2020, 12 pages. |
U.S. Appl. No. 16/659,468—Final Office Action dated Nov. 20, 2020, 18 pages. |
U.S. Appl. No. 16/659,468—Response to Office Action dated Nov. 20, 2020 filed Mar. 22, 2021, 13 pages. |
U.S. Appl. No. 16/659,468—Notice of Allowance dated Apr. 23, 2021, 11 pages. |
U.S. Appl. No. 14/262,691, filed Apr. 25, 2014, U.S. Pat. No. 9,916,009, Mar. 13, 2018, Issued. |
U.S. Appl. No. 15/917,066, filed, Mar. 9, 2018, U.S. Pat. No. 10,452,151, Oct. 22, 2019, Issued. |
U.S. Appl. No. 16/659,468, filed Oct. 21, 2019, U.S. Pat. No. 11,099,653, Aug. 24, 2021, Issued. |
U.S. Appl. No. 17/409,767, filed Aug. 23, 2021, US-2021-0382563-A1, Dec. 9, 2021, Published. |
U.S. Appl. No. 14/457,015, filed Aug. 11, 2014, Abandoned. |
U.S. Appl. No. 14/476,694, filed Sep. 3, 2014, U.S. Pat. No. 10,281,987, May 7, 2019, Issued. |
U.S. Appl. No. 16/402,134, filed May 2, 2019, U.S. Pat. No. 10,831,281, Nov. 10, 2020, Issued. |
U.S. Appl. No. 17/093,490, filed Nov. 9, 2020, US-2021-0081054-A1, Mar. 18, 2021, Published. |
U.S. Appl. No. 14/154,730, filed Jan. 14, 2014, U.S. Pat. No. 9,501,152, Nov. 22, 2016, Issued. |
U.S. Appl. No. 15/358,104, filed Nov. 21, 2016, U.S. Pat. No. 10,042,430, Aug. 7, 2018, Issued. |
U.S. Appl. No. 16/054,891, filed Aug. 3, 2018, U.S. Pat. No. 10,739,862, Aug. 11, 2020, Issued. |
U.S. Appl. No. 14/155,722, filed Jan. 15, 2014, U.S. Pat. No. 9,459,697, Oct. 4, 2016, Issued. |
U.S. Appl. No. 15/279,363, filed Sep. 28, 2016, U.S. Pat. No. 10,139,918, Nov. 27, 2018, Issued. |
U.S. Appl. No. 16/195,755, filed Nov. 19, 2018, US-2019-0155394-A1, May 23, 2019, Allowed. |
U.S. Appl. No. 14/476,694—Notice of Allowance dated Dec. 28, 2018, 22 pages. |
U.S. Appl. No. 16/195,755—Office Action dated Nov. 29, 2019, 46 pages. |
U.S. Appl. No. 16/195,755—Office Action dated Jun. 8, 2020, 15 pages. |
U.S. Appl. No. 16/195,755—Response to Office Action dated Nov. 29, 2019, filed Feb. 27, 2020, 13 pages. |
U.S. Appl. No. 16/402,134—Non-Final Office Action dated Jan. 27, 2020, 58 pages. |
U.S. Appl. No. 16/402,134—Notice of Allowance dated Jul. 15, 2020, 9 pages. |
U.S. Appl. No. 14/476,694—Response to Office Action dated Jul. 30, 2018 filed Nov. 9, 2018, 19 pages. |
U.S. Appl. No. 16/659,468—Response to Office Action dated Jun. 19, 2020 filed , 12 pages. |
U.S. Appl. No. 16/195,755—Response to Final Office Action dated Jun. 8, 2020 filed Sep. 21, 2020, 17 pages. |
U.S. Appl. No. 17/093,490 Office Action, dated Dec. 17, 2021, 101 pages. |
U.S. Appl. No. 16/195,755—Advisory Action dated Sep. 30, 2020, 3 pages. |
U.S. Appl. No. 16/195,755—Non Final Office Action dated May 25, 2021, 19 pages. |
U.S. Appl. No. 16/195,755—Response to Non-Final Office Action dated May 25, 2021, filed Aug. 25, 2021, 15 pages. |
U.S. Appl. No. 16/195,755—Notice of Allowance, dated Sep. 29, 2021, 6 pages. |
U.S. Appl. No. 16/195,755—Supplemental Notice of Allowance, dated Oct. 14, 2021, 9 pages. |
U.S. Appl. No. 14/154,730—Office Action dated Nov. 6, 2015, 9 pages. |
U.S. Appl. No. 14/155,722—Office Action dated Nov. 20, 2015, 14 pages. |
U.S. Appl. No. 14/281,817—Office Action dated Sep. 28, 2015, 5 pages. |
U.S. Appl. No. 14/262,691—Office Action dated Dec. 11, 2015, 31 pages. |
U.S. Appl. No. 14/154,730—Response to Office Action dated Nov. 6, 2016, filed Feb. 4, 2016, 9 pages. |
U.S. Appl. No. 14/154,730—Notice of Allowance dated May 3, 2016, 5 pages. |
U.S. Appl. No. 14/476,694—Office Action dated Nov. 1, 2016, 28 pages. |
U.S. Appl. No. 14/476,694—Response to Office Action dated Nov. 1, 2016 filed Jan. 31, 2017, 15 pages. |
U.S. Appl. No. 15/358,104—Office Action dated Nov. 2, 2017, 9 pages. |
U.S. Appl. No. 14/476,694—Response to Office Action dated Apr. 7, 2017 filed Jul. 6, 2017, 22 pages. |
U.S. Appl. No. 14/476,694—Advisory Action dated Jun. 22, 2017, 8 pages. |
U.S. Appl. No. 14/516,493—Office Action dated May 9, 2016, 21 pages. |
U.S. Appl. No. 14/516,493—Response to Office Action dated May 9, 2016, filed Aug. 9, 2016, 18 pages. |
U.S. Appl. No. 14/516,493—Office Action dated Nov. 17, 2016, 30 pages. |
Pavlovic, V.I., et al., “Visual Interpretation of Hand Gestures for Human-Computer Interaction: A Review,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, No. 7, Jul. 1997, pp. 677-695. |
Wu, Y., et al., “Vision-Based Gesture Recognition: A Review,” Beckman Institute, Copyright 1999, pp. 103-115. |
U.S. Appl. No. 14/280,018—Office Action dated Feb. 12, 2016, 38 pages. |
PCT/US2013/021713—International Preliminary Report on Patentability dated Jul. 22, 2014, 13 pages. |
PCT/US2013/021713—International Search Report and Written Opinion dated Sep. 11, 2013, 7 pages. |
Arthington, et al., “Cross-section Reconstruction During Uniaxial Loading,” Measurement Science and Technology, vol. 20, No. 7, Jun. 10, 2009, Retrieved from the Internet: http:iopscience.iop.org/0957-0233/20/7/075701, pp. 1-9. |
Barat et al., “Feature Correspondences From Multiple Views of Coplanar Ellipses”, 2nd International Symposium on Visual Computing, Author Manuscript, 2006, 10 pages. |
Bardinet, et al., “Fitting of iso-Surfaces Using Superquadrics and Free-Form Deformations” [on-line], Jun. 24-25, 1994 [retrieved Jan. 9, 2014], 1994 Proceedings of IEEE Workshop on Biomedical Image Analysis, Retrieved from the Internet: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=315882&tag=1, pp. 184-193. |
Butail, S., et al., “Three-Dimensional Reconstruction of the Fast-Start Swimming Kinematics of Densely Schooling Fish,” Journal of the Royal Society Interface, Jun. 3, 2011, retrieved from the Internet <http://www.ncbi.nlm.nih.gov/pubmed/21642367>, pp. 0, 1-12. |
Cheikh et al., “Multipeople Tracking Across Multiple Cameras”, International Journal on New Computer Architectures and Their Applications (IJNCAA), vol. 2, No. 1, 2012, pp. 23-33. |
Chung, et al., “Recovering LSHGCs and SHGCs from Stereo,” International Journal of Computer Vision, vol. 20, No. 1/2, 1996, pp. 43-58. |
Cumani, A., et al., “Recovering the 3D Structure of Tubular Objects from Stereo Silhouettes,” Pattern Recognition, Elsevier, GB, vol. 30, No. 7, Jul. 1, 1997, 9 pages. |
Davis et al., “Toward 3-D Gesture Recognition”, International Journal of Pattern Recognition and Artificial Intelligence, vol. 13, No. 3, 1999, pp. 381-393. |
Di Zenzo, S., et al., “Advances in Image Segmentation,” Image and Vision Computing, Elsevier, Guildford, GBN, vol. 1, No. 1, Copyright Butterworth & Co Ltd., Nov. 1, 1983, pp. 196-210. |
Forbes, K., et al., “Using Silhouette Consistency Constraints to Build 3D Models,” University of Cape Town, Copyright De Beers 2003, Retrieved from the internet: <http://www.dip.ee.uct.ac.za/˜kforbes/Publications/Forbes2003Prasa.pdf> on Jun. 17, 2013, 6 pages. |
Heikkila, J., “Accurate Camera Calibration and Feature Based 3-D Reconstruction from Monocular Image Sequences”, Infotech Oulu and Department of Electrical Engineering, University of Oulu, 1997, 126 pages. |
Kanhangad, V., et al., “A Unified Framework for Contactless Hand Verification,” IEEE Transactions on Information Forensics and Security, IEEE, Piscataway, NJ, US., vol. 6, No. 3, Sep. 1, 2011, pp. 1014-1027. |
Kim, et al., “Development of an Orthogonal Double-Image Processing Algorithm to Measure Bubble,” Department of Nuclear Engineering and Technology, Seoul National University Korea, vol. 39 No. 4, Published Jul. 6, 2007, pp. 313-326. |
Kulesza, et al., “Arrangement of a Multi Stereo Visual Sensor System for a Human Activities Space,” Source: Stereo Vision, Book edited by: Dr. Asim Bhatti, ISBN 978-953-7619-22-0, Copyright Nov. 2008, I-Tech, Vienna, Austria, www.intechopen.com, pp. 153-173. |
May, S., et al., “Robust 3D-Mapping with Time-of-Flight Cameras,” 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, Piscataway, NJ, USA, Oct. 10, 2009, pp. 1673-1678. |
Olsson, K., et al., “Shape from Silhouette Scanner—Creating a Digital 3D Model of a Real Object by Analyzing Photos From Multiple Views,” University of Linkoping, Sweden, Copyright VCG 2001, Retrieved from the Internet: <http://liu.diva-portal.org/smash/get/diva2:18671/FULLTEXT01> on Jun. 17, 2013, 52 pages. |
Pedersini, et al., Accurate Surface Reconstruction from Apparent Contours, Sep. 5-8, 2000 European Signal Processing Conference EUSIPCO 2000, vol. 4, Retrieved from the Internet: http://home.deib.polimi.it/sarti/CV_and_publications.html, pp. 1-4. |
Rasmussen, Matthew K., “An Analytical Framework for the Preparation and Animation of a Virtual Mannequin for the Purpose of Mannequin-Clothing Interaction Modeling”, A Thesis Submitted in Partial Fulfillment of the Requirements for the Master of Science Degree in Civil and Environmental Engineering in the Graduate College of the University of Iowa, Dec. 2008, 98 pages. |
U.S. Appl. No. 14/280,018—Replacement Response to Office Action, dated Feb. 12, 2016, filed Jun. 8, 2016, 16 pages. |
U.S. Appl. No. 14/280,018—Notice of Allowance dated Sep. 7, 2016, 7 pages. |
U.S. Appl. No. 14/280,018—Response to Office Action dated Feb. 12, 2016, filed May 12, 2016, 15 pages. |
U.S. Appl. No. 14/262,691—Response to Office Action dated Dec. 11, 2015, filed May 11, 2016, 15 pages. |
U.S. Appl. No. 14/262,691—Office Action dated Aug. 19, 2016, 36 pages. |
U.S. Appl. No. 14/262,691—Response to Office Action dated Aug. 19, 2016, filed Nov. 21, 2016, 13 pages. |
U.S. Appl. No. 14/262,691—Office Action dated Jan. 31, 2017, 27 pages. |
U.S. Appl. No. 14/262,691—Response to Office Action dated Jan. 31, 2017, filed Jun. 30, 2017, 20 pages. |
U.S. Appl. No. 14/262,691—Notice of Allowance dated Oct. 30, 2017, 35 pages. |
U.S. Appl. No. 14/476,694—Office Action dated Aug. 10, 2017, 71 pages. |
U.S. Appl. No. 14/476,694—Response to Office Action dated Aug. 10, 2017, filed Nov. 10, 2017, 14 pages. |
U.S. Appl. No. 14/155,722—Response to Office Action dated Nov. 20, 2015, filed Feb. 19, 2016, 15 pages. |
Number | Date | Country | |
---|---|---|---|
20200363874 A1 | Nov 2020 | US |
Number | Date | Country | |
---|---|---|---|
61877641 | Sep 2013 | US | |
61873351 | Sep 2013 | US | |
61872538 | Aug 2013 | US | |
61825480 | May 2013 | US | |
61825515 | May 2013 | US | |
61824691 | May 2013 | US | |
61816487 | Apr 2013 | US | |
61808959 | Apr 2013 | US | |
61808984 | Apr 2013 | US | |
61791204 | Mar 2013 | US | |
61752733 | Jan 2013 | US | |
61752731 | Jan 2013 | US | |
61752725 | Jan 2013 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16054891 | Aug 2018 | US |
Child | 16987289 | US | |
Parent | 15358104 | Nov 2016 | US |
Child | 16054891 | US | |
Parent | 14154730 | Jan 2014 | US |
Child | 15358104 | US |