The disclosure relates to input systems and methods and, more particularly, to input systems and methods based on detection of three-dimensional (3D) motion of a 3D object.
A computer user often needs to interact with the computer, which may be realized using an interactive input device, such as a keyboard, a mouse, or a touch screen. However, these devices have limitations. For example, conventional touch screens are usually based on technologies such as capacitive sensing or electric-field sensing. Such technologies can only track objects, such as the user's fingers, near the screen (that is, they have a short operational range), and cannot recognize the objects' 3D structure. Moreover, touch screens are usually used in small computers such as tablet computers. For a larger computer, such as a desktop or a workstation, it is often inconvenient for the user to reach the screen.
Therefore, there is a need for a human-computer interactive input system that has a larger operational range, is accurate and fast enough to resolve fine objects, such as a user's fingers, and is able to track an object's 3D motion and interaction with a surface.
In accordance with the disclosure, there is provided a method for generating and displaying a graphic representation of an object on a display screen. The method includes capturing at least one image of the object using at least one image sensor, determining, according to the at least one image, three-dimensional (3D) coordinates of a 3D point on the object in a 3D coordinate system defined in a space containing the object, defining a touch interactive surface in the space, performing a projection of the 3D point onto a projection point on the touch interactive surface, determining 3D coordinates of the projection point in the 3D coordinate system according to the projection, determining a displaying position of the graphic representation on the display screen according to the 3D coordinates of the projection point, and displaying the graphic representation at the displaying position on the display screen.
Also in accordance with the disclosure, there is provided a non-transitory computer-readable storage medium storing a program for generating and displaying a graphic representation of an object on a display screen. The program, when executed by a computer, instructs the computer to capture at least one image of the object using at least one image sensor, determine, according to the at least one image, three-dimensional (3D) coordinates of a 3D point on the object in a 3D coordinate system defined in a space containing the object, define a touch interactive surface in the space, perform a projection of the 3D point onto a projection point on the touch interactive surface, determine 3D coordinates of the projection point in the 3D coordinate system according to the projection, determine a displaying position of the graphic representation on the display screen according to the 3D coordinates of the projection point, and display the graphic representation at the displaying position on the display screen.
Further in accordance with the disclosure, there is provided an apparatus for generating and displaying a graphic representation of an object on a display screen. The apparatus includes a processor and a non-transitory computer-readable storage medium storing a program. The program, when executed, instructs the processor to capture at least one image of the object using at least one image sensor, determine, according to the at least one image, three-dimensional (3D) coordinates of a 3D point on the object in a 3D coordinate system defined in a space containing the object, define a touch interactive surface in the space, perform a projection of the 3D point onto a projection point on the touch interactive surface, determine 3D coordinates of the projection point in the 3D coordinate system according to the projection, determine a displaying position of the graphic representation on the display screen according to the 3D coordinates of the projection point, and display the graphic representation at the displaying position on the display screen.
Features and advantages consistent with the disclosure will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the disclosure. Such features and advantages will be realized and attained by means of the elements and combinations particularly pointed out in the appended claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate several embodiments of the invention and together with the description, serve to explain the principles of the invention.
Embodiments consistent with the disclosure include an interactive input system and a method for interactive input.
Hereinafter, embodiments consistent with the disclosure will be described with reference to drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.
The computer 104 may include other components, such as a CPU 108 and a memory 110. Other applications, such as application 112, may also be installed on the computer 104. The computer 104 is also connected to a display 114, which may be used to graphically show the tracking results output by the sensing device 102.
At 201, the user places the sensing device 102 at a certain location. For example, the sensing device 102 may be placed on a table top and face up. The sensing device 102 may alternatively be mounted on the computer 104 or on the top of the display 114.
At 202, after the sensing device 102 is placed, the interactive system 100 begins the environment calibration process. In some embodiments, in the environment calibration process, the interactive system 100 detects background environment information, and calibrates a touch interactive surface. More details about the touch interactive surface will be described later in this disclosure. The environment calibration process may be fully automated to detect certain known environment objects, such as, for example, the display 114, a keyboard, or an optically marked touch pad. Alternatively, the environment calibration process may be manual. For example, the user may define an environment object as the touch interactive surface, or define a virtual plane, i.e., an imaginary plane not on any actual environment object, as the touch interactive surface. If the environment calibration process is manual, instructions may be displayed on, for example, the display 114, or may be delivered to the user in an audio format through, for example, a speaker (not shown).
At 203, during a normal usage period, the interactive system 100 continuously detects a foreground object, such as the user's hand or finger, and recognizes the foreground object's 3D structure and associated 3D movement. The interactive system 100 also detects changes in the background environment and recalibrates the background when needed.
At 204, the sensing device driver 106 translates the detected information into “3D interaction events” and sends the events to applications installed on, and the operating system (OS) of, the computer 104. For example, a 3D interaction event may include a 3D position, a 3D orientation, a size (such as length or width), and fine details of the foreground object, e.g., the user's hand or finger. The applications and the OS may change state according to the received events, and may update a graphical user interface (GUI) displayed on the display 114 accordingly.
At 205, the sensing device driver 106 compares the detected 3D position of the foreground object with the touch interactive surface, and determines object-to-surface information such as, for example, a distance between the foreground object and the surface and a projected two-dimensional (2D) position of the foreground object on the surface. The sensing device driver 106 then converts the object-to-surface information to touch events, multi-touch events, or mouse events (206).
At 207, the sensing device driver 106 delivers the events to the applications or the OS, and translates the touch events into a handwriting process. Since the interactive system 100 can detect the foreground object's distance to, and projected position on, the touch interactive surface, the interactive system 100 can predict a touch before it actually occurs, e.g., when the touch will occur and where on the touch interactive surface it will occur. The interactive system 100 can also determine and display a “hovering” feedback on the display 114.
At 208, the sensing device driver 106 compares the position of the foreground object with positions of the environment objects, such as positions of keys of a keyboard. The interactive system 100 may generate hovering feedback about which key the user will press before the user actually presses the key. In some embodiments, the interactive system 100 may display a virtual keyboard and such hovering feedback in a GUI on the display 114.
Consistent with embodiments of the disclosure, the sensing device 102 may be a stand-alone device separate from the computer 104 that can be coupled to the computer 104 via a wired connection (such as a USB cable) or a wireless connection (such as Bluetooth or WiFi). In some embodiments, the sensing device 102 may be integrated into the computer 104, i.e., may be part of the computer 104.
Consistent with embodiments of the disclosure, the sensing device 102 may include multiple imaging sensors, such as cameras. The imaging sensors may be visible light imaging sensors which are more responsive to visible light, or infrared (IR) imaging sensors which are more responsive to IR light. The sensing device 102 may also include one or more illumination sources, which provide illumination in various wavelengths according to the type of the imaging sensors. The illumination sources may be, for example, light-emitting diodes (LED's) or lasers equipped with diffusers. In some embodiments, the illumination sources may be omitted and the imaging sensors detect the environmental light reflected by an object or the light emitted by an object.
The sensing device 300 shown in
In the figures of the disclosure, LED's are illustrated as the illumination sources, as examples. As discussed above, other light sources, such as lasers equipped with diffusers, may also be employed.
In some embodiments, illumination in the IR band is needed. Such illumination may be invisible to naked human eyes. In such embodiments, the illumination sources 306 may include, for example, LED's emitting IR light. Alternatively, the illumination sources 306 may include LED's emitting light with broader bands that may encompass visible light. In such a situation, the illumination sources 306 may each be accompanied by an IR transmissive filter (not shown) placed, for example, in front of the corresponding illumination source 306.
In some embodiments, the sensing device 300 may also include an IR transmissive filter (not shown) placed in front of the imaging sensors 304 to filter out visible light. In some embodiments, the sensing device 300 may also include lenses (not shown) placed in front of the imaging sensors 304 for focusing light. The IR transmissive filter may be placed in front of the lenses, or between the lenses and the imaging sensors 304.
Consistent with embodiments of the disclosure, the sensing device 300 may also include a controlling electronic circuit (not shown). The controlling electronic circuit may control the operation parameters of the imaging sensors 304, such as, for example, shutter duration or gain. The controlling electronic circuit may also control the synchronization between or among the multiple imaging sensors 304. Moreover, the controlling electronic circuit may control the illumination brightness of the illumination sources 306, the on/off or duration of the illumination from the illumination sources 306, or the synchronization between the illumination sources 306 and the imaging sensors 304. The controlling electronic circuit may also perform other functions such as, for example, power management, image data acquiring and processing, output of data to other devices, such as the computer 104, or receipt of commands from other devices, such as the computer 104.
In some embodiments, the sensing device 300 may further include one or more buttons configured to turn on/off or reset the sensing device 300, or to force recalibration of the environment. For example, one button may be configured to allow the user to forcibly start the manual calibration process to calibrate the touch interactive surface.
In some embodiments, the sensing device 300 may also include one or more indicator lights showing the state of the sensing device 300 such as, for example, whether the sensing device 300 is on or off, is performing the environment calibration, or is performing the touch interactive surface calibration.
In the examples shown in
In some embodiments, the sensing device 102 may have multiple separated units each having one imaging sensor. Hereinafter, such a design is also referred to as a separate design.
The sensing units 502 and 504 may each include one or more connection ports 510, either wired or wireless, for connecting to other sensing units or directly to the computer 104.
Consistent with embodiments of the disclosure, to detect, recognize, and track a foreground object, such as a hand or a finger of a user, the brightness of the background may need to be lowered. That is, a dark background may need to be created.
In some embodiments, the dark background may be created using polarized light. According to these embodiments, a background surface may be coated with a reflective material that has a “non-depolarizing” property, such as shown in
The light emitted by the illumination source 306 is polarized by the first polarizer 902 to have the first polarization direction. When this polarized light is reflected by the non-depolarizing material coated over the background surface, the polarization direction is preserved. Since the second polarizers 906 have a polarization direction inconsistent with that of the first polarizer 902, the reflected light with the unchanged polarization direction, or at least most of it, cannot pass through the second polarizers 906 to reach the imaging sensors 304. In effect, the background surface appears dark or black to the imaging sensors 304.
On the other hand, when the polarized light is reflected by the foreground object, e.g., the hand or finger of the user, the polarized light will be de-polarized. Such de-polarized reflected light can pass through the second polarizers 906 and be received by the imaging sensors 304. That is, the foreground object appears to be bright to the imaging sensors 304, and thus the imaging sensors 304 can “see” the foreground object.
Another method consistent with embodiments of the disclosure for creating a dark background is to use “invisible” markers. Such “invisible” markers may be invisible to naked human eyes but can be detected by the imaging sensors consistent with embodiments of the disclosure.
A method consistent with embodiments of the disclosure for creating a background surface having “invisible” markers will be described in regard to
Consistent with embodiments of the disclosure, a first pattern is printed on the background surface, e.g., a fabric, using the first ink. The first pattern may, for example, be a pattern shown in
In some embodiments, the first pattern and the second pattern are essentially the reverse of each other. That is, where a point in the first pattern is dark, the corresponding point in the second pattern is bright. As a result, the background surface exhibits a uniform color without patterns to naked human eyes, such as the background surface 1002 shown in
In some embodiments, the printing described above may also be a single-phase printing process using one inkjet printer that contains two types of ink, i.e., a carbon-based ink and a non-carbon-based ink.
The methods for using the interactive system 100 and related algorithms consistent with embodiments of the disclosure will be described below. In some embodiments, the imaging sensors 304 may be calibrated before use. If the sensing device 102 employs a uni-body design, such as that shown in
Consistent with embodiments of the disclosure, the calibration process may generate multi-sensor calibration data that may be used for, e.g., removing distortion in an image output from an imaging sensor due to, e.g., an imperfect lens. This may make the computer vision calculation and image processing easier and more accurate. The multi-sensor calibration data may also be used for calculating the 3D position of an object or a point using the pixel position of the object or the point in the image output from the imaging sensor.
In some embodiments, a static calibration may be performed before the interactive system 100 is used. The static calibration uses a checkerboard and allows the imaging sensors 304 to take synchronized images while the user moves the checkerboard to different locations/orientations. The interactive system 100 analyzes the captured images and generates camera calibration data including, for example, intrinsic information of the imaging sensors 304, distortion of the imaging sensors 304, and rectification of multiple imaging sensors 304.
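A minimal sketch of such a static calibration using OpenCV is shown below; the checkerboard dimensions, square size, and the synchronized frame lists are illustrative assumptions, not values prescribed by the disclosure.

import cv2
import numpy as np

BOARD = (9, 6)          # inner corners per row/column (assumed)
SQUARE = 25.0           # checkerboard square size in mm (assumed)

def calibrate_stereo(frames_left, frames_right, image_size):
    # 3D checkerboard corner positions in the board's own coordinate system
    objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

    obj_pts, left_pts, right_pts = [], [], []
    for img_l, img_r in zip(frames_left, frames_right):
        ok_l, corners_l = cv2.findChessboardCorners(img_l, BOARD)
        ok_r, corners_r = cv2.findChessboardCorners(img_r, BOARD)
        if ok_l and ok_r:                       # keep only synchronized detections
            obj_pts.append(objp)
            left_pts.append(corners_l)
            right_pts.append(corners_r)

    # Per-sensor intrinsics and distortion, then the two-sensor rectification geometry
    _, M1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, image_size, None, None)
    _, M2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, image_size, None, None)
    _, M1, d1, M2, d2, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, left_pts, right_pts, M1, d1, M2, d2, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(M1, d1, M2, d2, image_size, R, T)
    return M1, d1, M2, d2, R, T, Q   # Q is a disparity-to-depth mapping matrix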
In some embodiments, an automatic calibration may be used during the use of the interactive system 100. The automatic calibration does not need a checkerboard and does not need a dedicated calibration session before using the interactive system 100. The automatic calibration is suitable when the user frequently changes the relative positions of the imaging sensors 304 in, e.g., a separate design or an adjustable uni-body design, or when the user adds customized lenses or customized imaging sensors 304 to the interactive system 100. According to the automatic calibration, when the user starts to use the interactive system 100, the imaging sensors 304 each take a synchronized snapshot. The interactive system 100 finds matching features, e.g., a finger tip, between snapshots taken by different imaging sensors, and records paired pixel coordinates of the same feature, e.g., the same finger tip, that appears in different snapshots. This process is repeated to collect a set of paired pixel coordinates, and the set of paired pixel coordinates is used by an imaging sensor calibration algorithm consistent with embodiments of the disclosure.
At 1302, the imaging sensors 304 capture videos or images of a background.
At 1304, the brightness of environment light is observed. The illumination intensity of the illumination sources 306 is adjusted according to the observed environmental brightness. In some embodiments, the illumination intensity is adjusted to be low enough to save energy but high enough to distinguish the foreground objects, e.g., hands or fingers, from the background.
At 1306, the gain level and the shutter duration of the imaging sensors are adjusted so that the final image is bright enough. A higher gain level results in brighter but noisier images. A longer shutter duration results in brighter images, but the images may be blurry when the foreground object is moving. In some embodiments, 1304 and 1306 are performed in a loop to find the optimal illumination intensity of the illumination sources 306 and parameters of the imaging sensors 304.
At 1308, a background model is analyzed and estimated. At 1310, the background model is recorded. When tracking a foreground object, new images will be compared to this background model to distinguish the foreground object from the background.
At 1404, the background model is analyzed based on the accumulated images. In some embodiments, the background model may include, for example, an average brightness and a maximum brightness of each pixel, a brightness variance, i.e., noisiness, of each pixel, or a local texture property and local color property of each pixel.
At 1406, the background model is stored, and the process ends.
Then, at 1508, the analyzing results from each imaging sensor 304 are combined and processed to obtain the foreground object's 3D structure.
At 1602, the background model previously obtained is loaded. The background model may be, for example, a brightness-based background model, where the maximum brightness of each pixel for, e.g., 100 initial frames is stored.
Referring again to
Referring again to
At 1606, the new input image from the imaging sensor 304 is compared with the background model to extract a foreground region. In the background model, each pixel at position (x,y) may have a feature vector B(x,y). For example, if the background model is based on intensity/brightness, then B is a scalar, and the value of B(x,y) is the brightness of the pixel at position (x,y). If the background model is based on noisiness, then B is a scalar, and the value of B(x,y) is the variance at position (x,y). In some embodiments, for the new input image, the feature vector for every pixel, In(x,y), is calculated. Similar to B(x,y), the value of In(x,y) may be brightness or variance depending on what background model is used. A difference between In(x,y) and B(x,y) is calculated for each pixel position. If the difference at a pixel position is greater than a certain threshold, that pixel is determined to belong to the foreground region. Otherwise, that pixel is determined to belong to the background.
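A minimal numpy sketch of this comparison for a brightness-based background model is shown below; the threshold value and the use of the per-pixel maximum over the initial frames are illustrative assumptions.

import numpy as np

def build_background_model(initial_frames):
    # Example brightness-based model: per-pixel maximum brightness over the
    # initial frames (e.g., the first 100 frames mentioned above).
    return np.max(np.stack(initial_frames), axis=0)

def extract_foreground(new_image, background_model, threshold=30.0):
    # A pixel whose feature In(x,y) differs from the model B(x,y) by more than
    # the threshold is marked as foreground; here the feature is brightness.
    diff = np.abs(new_image.astype(np.float32) - background_model.astype(np.float32))
    return diff > threshold          # boolean foreground mask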
Referring again to
In some embodiments, the probabilities P_tip(x,y), P_finger(x,y), and P_palm(x,y) may be calculated by comparing a brightness distribution in a neighbor region around the pixel position (x,y) with a set of pre-defined templates, such as a finger tip template, a finger trunk template, and a palm template. The probability of a pixel being part of a finger tip, a finger trunk, or a palm, i.e., P_tip(x,y), P_finger(x,y), or P_palm(x,y) may be defined by how well the neighbor region fits the respective template, i.e., the finger tip template, the finger trunk template, or the palm template.
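A minimal sketch of the template-based scoring is shown below, assuming OpenCV's normalized cross-correlation as the measure of how well a neighbor region fits a template; the template patches themselves are assumed inputs rather than values given by the disclosure.

import cv2
import numpy as np

def part_probabilities(image, tip_template, finger_template, palm_template):
    # Score how well the neighborhood of each pixel fits each template using
    # normalized cross-correlation, and use the clipped score as a rough
    # probability.  Templates are assumed to be small grayscale patches.
    probs = {}
    for name, tmpl in (("tip", tip_template),
                       ("finger", finger_template),
                       ("palm", palm_template)):
        score = cv2.matchTemplate(image, tmpl, cv2.TM_CCOEFF_NORMED)
        score = np.clip(score, 0.0, 1.0)      # negative correlation -> probability 0
        # Pad back to the full image size so probs[name][y, x] roughly aligns
        # with the neighbor region centered at pixel (x, y).
        pad_y, pad_x = tmpl.shape[0] // 2, tmpl.shape[1] // 2
        probs[name] = np.pad(score, ((pad_y, image.shape[0] - score.shape[0] - pad_y),
                                     (pad_x, image.shape[1] - score.shape[1] - pad_x)))
    return probs  # probs["tip"][y, x] ~ P_tip(x, y), etc.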
In some embodiments, the probabilities P_tip(x,y), P_finger(x,y), and P_palm(x,y) may be calculated by performing a function/operator F on the neighbor region of a pixel position (x,y). The function/operator fits the brightness of the neighbor region to a light reflection model of a finger or a finger tip, and returns a high value if the distribution is close to the reflection of a finger trunk (reflection from a cylinder shape) or a finger tip (reflection from a half-dome shape).
Referring again to
The probabilities P_tip(x,y), P_finger(x,y), and P_palm(x,y), and the segmentation results may be used to calculate a hand structure, including finger skeleton information. As used in this disclosure, a finger skeleton refers to an abstraction of the structure of a finger. In some embodiments, the finger skeleton information may include, for example, a center line (also referred to as a skeleton line) of the finger, a position of the finger tip, and a boundary of the finger.
In some embodiments, after the user's hand is segmented to the fingers and the palm, the 2D boundary of a sub-part of the hand, e.g., a finger or a palm, may be obtained.
Referring again to
After all the scanning lines in the finger are processed, a series of center positions C(y) on the scanning lines L(y) is obtained. Connecting these center positions provides the center line of the finger, i.e., the finger skeleton's center line.
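A minimal numpy sketch of this scanning-line step is shown below, assuming a boolean mask of the segmented finger as input.

import numpy as np

def finger_skeleton_line(finger_mask):
    # For each scanning line L(y), take the center of the finger pixels on that
    # line; connecting the centers C(y) gives the skeleton (center) line.
    centers = []
    for y in range(finger_mask.shape[0]):
        xs = np.flatnonzero(finger_mask[y])          # finger pixels on line L(y)
        if xs.size:
            centers.append((0.5 * (xs[0] + xs[-1]), y))   # midpoint of the boundary
    return centers   # list of (C(y), y) points forming the center line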
Referring again to
In other embodiments, the finger tip's position may be calculated by using the probability P_finger(x,y) as a weighting factor to average the positions of the pixels in the top region of the finger. In the resulting finger tip position (Tx,Ty), such as, for example, the result shown in
At 2502, the 2D sub-structure results, e.g., fingers or palm, from different imaging sensors 304 are compared and an association between sub-parts of the foreground object observed by different imaging sensors 304 is created. For example, finger A observed by imaging sensor A may be associated with finger C observed by imaging sensor B. In some embodiments, the association may be based on minimizing the total finger tip distance between all finger pairs, such as shown in
Referring again to
Referring again to
At 2506, a finger tip pair, T1(Tx1,Ty1) and T2(Tx2,Ty1), is processed to obtain 3D information, such as 3D position T(Tx,Ty,Tz), of the corresponding finger tip. In some embodiments, a 3D reprojection function may be used to calculate the 3D tip position T(Tx,Ty,Tz). The 3D reprojection function may use the 2D positions (Tx1,Ty1) and (Tx2,Ty1) of the finger tip, and information of the imaging sensors 304 and the lenses, such as, for example, focal length, sensor's pitch (e.g., pixels per millimeter), separation between the two imaging sensors 304 (baseline). In some embodiments, a disparity, d=Tx1−Tx2, is calculated and used as an input for the 3D reprojection function. The output of the 3D reprojection function is the 3D position (Tx,Ty,Tz) of the finger tip. The 3D position (Tx,Ty,Tz) may have a physical unit, and thus may also be expressed as (fx,fy,fz).
In some embodiments, the 3D reprojection function may be expressed using a 4×4 perspective transformation matrix obtained during the imaging sensor calibration process. This matrix may be a disparity-to-depth mapping matrix.
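A minimal sketch of such a reprojection is shown below, assuming the 4x4 disparity-to-depth matrix Q produced by stereo rectification (for example, by the calibration sketch earlier) and rectified views, so the tip pair shares the same row.

import cv2
import numpy as np

def reproject_tip(tx1, ty1, tx2, Q):
    # Reproject a matched finger-tip pair into 3D using the disparity-to-depth
    # matrix Q; the disparity d = Tx1 - Tx2 is the third input coordinate.
    d = tx1 - tx2
    pt = np.array([[[tx1, ty1, d]]], dtype=np.float32)   # (x, y, disparity)
    xyz = cv2.perspectiveTransform(pt, Q)                # homogeneous transform by Q
    return xyz[0, 0]                                     # (Tx, Ty, Tz) in physical units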
At 2508, using the skeleton line pair obtained as described above, a 3D skeleton line for the corresponding finger is calculated. In some embodiments, for the skeleton line pair, pixels on the two 2D skeleton lines are paired based on their y direction to obtain pairs of pixels. A pair of pixels may be processed in a manner similar to that described above for the processing of finger tip pairs, to obtain a 3D position of a point corresponding to the pair of pixels, as shown in
Referring back to
The above-obtained information may be combined to generate an output, such as the exemplary output shown in
For some applications such as painting and sculpturing, the user may need to use a finger or a pen as a tool. In such situation, the finger or the pen may need to be abstracted as a cylinder shape, and its direction and length may need to be calculated. Referring again to
In some embodiments, the finger is abstracted as a cylinder shape and its length is defined as the length of the cylinder shape, which may also be referred to as a finger cylinder length. The finger cylinder length may be defined as a distance between a very top point of the skeleton line of the finger or the position of the finger tip, P0(x,y,z), and a stop point P1(x,y,z). In some embodiments, the stop point P1 is the end of the skeleton line or the point where the skeleton line deviates from a straight line, e.g., where a difference from the skeleton line and a straight line is greater than a threshold. Similarly, the direction of the finger may be defined as the direction of a line connecting points P1 and P0.
At 2514, the 3D position and the orientation of the palm are calculated. The 3D position of the palm may also be referred to as a 3D center of the palm, which may be obtained by, for example, averaging the 3D positions of the boundary points shown in
The size and the orientation of the palm may be obtained by comparing the 3D center of the palm, 3D positions of the boundary points of the palm, 3D positions of the finger tips, and the directions of the fingers.
The embodiments discussed above are based on direct matching of multiple views (images) taken by different imaging sensors 304. Embodiments discussed below are related to a model based framework. The model based framework may improve the hand recognition reliability. For example, the model based framework may work for a single imaging sensor 304. That is, the 3D recognition of a hand may still be realized even if only a single imaging sensor 304 is used, because the brightness and the width of a finger from a single image may be used to derive a 3D finger position estimation. Moreover, with the model based framework, when a hand or a finger is partially visible in one view, but fully visible in another view, the interactive system 100 may reliably produce 3D hand tracking results. Even when a finger is obstructed, e.g., the finger merging together with another finger or bending into the palm region, and thus becoming invisible in all views, the position of that finger may still be continuously predicted.
Consistent with embodiments of the disclosure, when the foreground object can only be viewed by a single imaging sensor 304, the distance from the foreground object to the imaging sensor 304 may be estimated based on the brightness of the foreground object or the size of the foreground object. Then, such a distance may be combined with the position, i.e., 2D coordinates, of the foreground object in the view of the imaging sensor 304 to calculate a 3D position (x,y,z) of the foreground object.
Assuming other parameters, e.g., intensity of the illumination light and reflectance of the foreground object, are the same, the brightness of the foreground object, B, is inversely proportional to the square of the distance from the object to the illumination light. In some embodiments, since the illumination light is close to the imaging sensor 304, the distance from the object to the illumination light is approximately equal to the distance from the object to the imaging sensor 304, i.e., Dobj-sensor. This relationship can be expressed using the following equation:
B=K/Dobj-sensor^2  (2)
In the above equation, coefficient K incorporates the effect of other parameters such as the intensity of the illumination light and the reflectance of the foreground object, and may be a constant. The above equation can be rewritten as:
Dobj-sensor=sqrt(K/B)  (3)
Coefficient K can be calculated while the foreground object is able to be viewed by two or more imaging sensors 304. In such a situation, as discussed above, the 3D position of the foreground object can be calculated and thus the distance Dobj-sensor can be obtained. The distance Dobj-sensor may be continuously monitored to record Dobj-sensor at time t: Dobj-sensor(t). Meanwhile, the brightness of the foreground object at time t, B(t), can be obtained from images captured by the two or more imaging sensors 304. Plugging Dobj-sensor (t) and B(t) into Eq. (2) or Eq. (3) above, coefficient K can be calculated.
Then, if at time t′, only one single imaging sensor 304 can detect the foreground object, the brightness of the foreground object at t′, i.e., B(t′), and the coefficient K can be plugged into Eq. (3) to calculate Dobj-sensor(t′).
Similarly, the size of the foreground object in an image captured by an imaging sensor 304 may also be used to estimate Dobj-sensor. The size of the foreground object in an image captured by an imaging sensor 304, L, can be expressed as follows:
L=K′/Dobj-sensor  (4)
where coefficient K′ incorporates the effect of other parameters, such as the actual size of the foreground object. Eq. (4) can be rewritten as:
Dobj-sensor=K′/L  (5)
Similar to the embodiments where the brightness of the foreground object is used to estimate Dobj-sensor, in the embodiments using the size of the foreground object in the image captured by the imaging sensor 304 to estimate Dobj-sensor, coefficient K′ can be calculated while the foreground object is able to be viewed by two or more imaging sensors 304, during which the distance Dobj-sensor may be continuously calculated and monitored to record Dobj-sensor at time t: Dobj-sensor(t). Meanwhile, the size of the foreground object in the image captured by the imaging sensors 304 at time t, L(t), can be obtained from the captured images. Plugging Dobj-sensor(t) and L(t) into Eq. (4) or Eq. (5) above, coefficient K′ can be calculated.
Then, if at time t′, only one single imaging sensor 304 can detect the foreground object, the size of the foreground object in the captured image at t′, i.e., L(t′), and the coefficient K′ can be plugged into Eq. (5) to calculate Dobj-sensor(t′).
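A minimal numpy sketch of calibrating K and K′ while both sensors see the object, and of the single-view distance estimates of Eqs. (3) and (5), is shown below; the simple averaging used to estimate the coefficients is an assumption, not a method prescribed by the disclosure.

import numpy as np

def calibrate_coefficients(distances, brightnesses, sizes):
    # While two or more sensors see the object, Dobj-sensor(t) is known, so K
    # and K' can be estimated from the recorded samples (simple averages here).
    distances = np.asarray(distances, dtype=float)
    K = float(np.mean(np.asarray(brightnesses) * distances ** 2))   # from B = K / D^2
    K_prime = float(np.mean(np.asarray(sizes) * distances))         # from L = K' / D
    return K, K_prime

def distance_from_brightness(B, K):
    return np.sqrt(K / B)            # Eq. (3): D = sqrt(K / B)

def distance_from_size(L, K_prime):
    return K_prime / L               # Eq. (5): D = K' / L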
In some embodiments, the above-described methods for estimating Dobj-sensor may be combined to provide a more accurate result. That is, an estimate Dobj-sensor
Consistent with embodiments of the disclosure, the model based framework may be suitable for any number of views, either one view or two or more views.
For each view, a 2D hand structure analysis (described in the previous framework) is performed. The 2D hand structure analysis produces a 2D hand structure (also referred to as a new 2D hand structure), including a 2D hand skeleton. Similar to the finger skeleton, a hand skeleton refers to an abstraction of the structure of a hand.
Tracking is then applied by combining the last 2D hand structure (obtained during the last update) and the new 2D hand structure (obtained during the current update as described above). The tracking process includes: 1) applying a filter on previous results to predict a 2D hand structure (a predicted 2D hand structure); 2) using the association method to combine the new 2D hand structure with the predicted 2D hand structure; and 3) updating the filter using the combined new result. This tracking process can produce a smooth skeleton position, is resistant to a sudden loss of a finger in a view, and can provide a consistent finger ID. As used in this disclosure, a finger ID refers to an ID assigned to a detected finger. Once a finger is assigned a finger ID, even if it becomes invisible in following updates, that finger will still carry the same finger ID. For example, in one update, a middle finger and an index finger are detected. The middle finger is assigned a finger ID “finger#1” and the index finger is assigned a finger ID “finger#2”. They carry the assigned finger IDs throughout the process, even when one or both of them become invisible during later updates.
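The following is a minimal sketch of such a predict/associate/update loop for 2D finger-tip tracking with persistent finger IDs; the constant-velocity prediction, the nearest-neighbor association, and the match_radius value stand in for the filter and the association method and are assumptions.

import numpy as np

class FingerTracker2D:
    def __init__(self, match_radius=40.0):
        self.tracks = {}            # finger ID -> {"pos", "vel"}
        self.next_id = 1
        self.match_radius = match_radius

    def update(self, detected_tips):
        detected = [np.asarray(p, dtype=float) for p in detected_tips]
        unmatched = set(range(len(detected)))
        for fid, tr in self.tracks.items():
            predicted = tr["pos"] + tr["vel"]           # 1) predict
            best, best_d = None, self.match_radius
            for i in unmatched:
                d = np.linalg.norm(detected[i] - predicted)
                if d < best_d:
                    best, best_d = i, d
            if best is not None:                        # 2) associate
                unmatched.discard(best)
                tr["vel"] = detected[best] - tr["pos"]  # 3) update the filter
                tr["pos"] = detected[best]
            else:
                tr["pos"] = predicted                   # finger lost: keep predicting
        for i in unmatched:                             # new fingers get new IDs
            self.tracks[self.next_id] = {"pos": detected[i], "vel": np.zeros(2)}
            self.next_id += 1
        return {fid: tr["pos"] for fid, tr in self.tracks.items()}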
In some embodiments, filtering is applied on a 3D hand model to produce a smooth 3D result, including a 3D hand skeleton, which is re-projected to create a projected 2D hand skeleton on each view.
Then, for each view, the new 2D hand skeleton and the projected 2D hand skeleton are combined to obtain an association between finger IDs.
Then, 2D results of both views are combined to calculate a new 3D position of the hand and a new 3D finger skeleton. The final result is used as a new 3D hand model, which may be used in the next update.
As described above, the interactive system 100 may be used to recognize and track the 3D position, orientation, etc., of a foreground object (such as a hand or a finger). Using this feature, a user may interact with the computer 104. For example, the user may click and move a finger on a surface of a desk or a table to control the cursor movement and clicking on the display 114, as if the user were using a mouse, without the use of an actual mouse, so as to use such a surface as a physical touch surface. With the interactive system 100, the user may also use the screen of the display 114 as if it were a touch screen, even if the screen of the display 114 is not an actual touch screen. Moreover, the user may specify a virtual surface in an open space (such as in the air) as a virtual touch surface, i.e., an imaginary touch surface. By moving the finger relative to the virtual touch surface, the user may interact with the computer 104 as if there were an actual touch surface at the position of the virtual touch surface. In addition, by combining with eye position tracking (detection of 3D positions of the user's eyes using, for example, head tracking technology), a direct correlation between the user's perceived finger position and the position on the screen of the display 114 may be created. Hereinafter, such an interaction is also referred to as a 2.5D touch interaction, and the surface, either a physical touch surface, a virtual touch surface, or a display screen, mentioned above for realizing the interaction is also referred to as a touch interactive surface. Consistent with embodiments of the disclosure, a 2.5D touch interaction may include information such as, for example, the 2D projected position of a foreground object, such as a finger tip, on the touch interactive surface, the distance between the foreground object, such as a finger tip, and the touch interactive surface, and the 3D direction of a foreground object, such as a finger, relative to the normal direction of the touch interactive surface.
Consistent with embodiments of the disclosure, the 2.5D touch interaction may be realized based on 3D information of a foreground object obtained as described above and by adding a hovering state of the foreground object to a standard touch interaction. The 2.5D touch interaction consistent with embodiments of the disclosure may provide the projected (x,y) position of the foreground object, such as, for example, a finger, on the touch interactive surface, as well as a distance between the foreground object and the touch interactive surface.
In some embodiments, three calibration touch points may be enough to define the touch interactive surface. In some embodiments, four or more touch points may be used to define the touch interactive surface. Using four or more touch points may increase the accuracy when the user tries to define a physical surface as the touch interactive surface. Moreover, using four or more touch points may also allow the user to define a non-planar surface as the touch interactive surface.
Since the defined touch interactive surface may be large, the interactive system 100 also allows the user to define an effective interaction area, which may then be mapped to the size of the screen of the display 114. This process is shown in
Consistent with embodiments of the disclosure, the touch interactive surface may be automatically and progressively detected by detecting the action of the user's finger hitting a surface. That is, the interactive system 100 detects events of the user's finger tapping a hard surface and automatically registers these tapping events. The interactive system 100 stores the 3D position of the finger tip in a touch-surface-calibration database when a tapping event occurs. In some embodiments, the interactive system 100 may dynamically repeat the calibration process to enhance the understanding of the surfaces in the environment. Using this method, the user may simply tap on a surface multiple times at different places and the interactive system 100 would automatically calibrate the surface. Therefore, the interactive system 100 does not need to show instructions to guide the user, and the user does not need to wait for the interactive system 100 to indicate when to put the finger on the surface or when to move to another place on the surface. Moreover, after the calibration phase, when the user is using the input device as normal, the interactive system 100 continues to monitor tapping events and update the surface calibration when needed. Therefore, recognition of the touch interactive surface becomes more and more accurate during the user's continuous use. Moreover, when the environment has changed (e.g., an existing surface is removed, or a new surface is placed), the interactive system 100 automatically updates the touch interactive surface by merging new tapping events with the existing database.
Below, a method for detecting a tapping event is described. Consistent with embodiments of the disclosure, the 3D position of the user's finger is tracked and a time-dependent position value is recorded. To detect a tapping event, the time-dependent position value is converted to a speed value by differentiation.
In some embodiments, a moving window is used to detect the following conditions: 1) the speed drops from a high value (higher than a first speed threshold) to a very small value (lower than a second speed threshold close to zero) within a very short period of time (shorter than a first time threshold), and 2) the speed keeps at the very small value for a time period longer than a certain period of time (longer than a second time threshold). If both conditions are satisfied, then it is determined that a tapping event has occurred.
When the user's finger hits a hard surface, sometimes the finger may continue to slide on the surface instead of coming to a full stop. In such a situation, a tapping event is determined as having occurred if the following two conditions are satisfied: 1) a sudden change of finger speed in the original traveling direction is detected, and 2) the following movement of the finger is constrained in a 2D plane. This can be calculated by applying a dimension reduction method, e.g., principal component analysis (PCA), on the 3D position data of the finger in the time window to map the trajectory from the physical 3D coordinate system into a new 3D coordinate system. The PCA algorithm produces the new 3D coordinate system by analyzing the 3D position data of the finger. The new 3D coordinate system is defined by three axes. Every axis in the new 3D coordinate system has an eigenvalue related to the amount of variation of the data points along that axis. Among the three axes, the one having the smallest eigenvalue is referred to as a “minimum axis.” If the speed value along the minimum axis stays very low (lower than a certain speed threshold) for a relatively long period of time (longer than a certain time threshold), then the time at which the sudden change of finger speed occurs is registered as a time at which a tapping event occurs.
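A minimal numpy sketch of the tapping-event test is shown below; the speed and dwell thresholds are illustrative assumptions, and the PCA step is implemented directly as an eigen-decomposition of the trajectory covariance.

import numpy as np

# Illustrative thresholds; actual values would be tuned for the device and frame rate.
HIGH_SPEED, LOW_SPEED, MIN_DWELL = 0.3, 0.02, 5

def detect_tap(window_positions):
    # Check the two conditions on a moving window of 3D finger-tip positions:
    # a sudden drop from high speed to near zero, and low residual motion along
    # the PCA axis with the smallest eigenvalue (the "minimum axis").
    pos = np.asarray(window_positions, dtype=float)      # shape (N, 3)
    vel = np.diff(pos, axis=0)                           # differentiate position
    speed = np.linalg.norm(vel, axis=1)

    # Condition 1: speed falls from above HIGH_SPEED to below LOW_SPEED quickly.
    drops = np.flatnonzero((speed[:-1] > HIGH_SPEED) & (speed[1:] < LOW_SPEED))
    if drops.size == 0:
        return False

    # Condition 2: after the drop, motion stays confined near a 2D plane, i.e.
    # the speed along the PCA minimum axis stays small for MIN_DWELL frames.
    after = pos[drops[0] + 1:]
    if len(after) < MIN_DWELL + 1:
        return False
    centered = after - after.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    min_axis = eigvecs[:, np.argmin(eigvals)]
    speed_min_axis = np.abs(np.diff(after @ min_axis))
    return bool(np.all(speed_min_axis[:MIN_DWELL] < LOW_SPEED))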
When a new tapping event is detected, the position at which the new tapping occurs (referred to as the new tapping position) is used to update the existing touch interactive surface. Consistent with embodiments of the disclosure, if the new tapping position is consistent with the existing touch interactive surface, the new tapping position is used to increase the resolution and accuracy of the existing touch interactive surface. If the new tapping position conflicts with the existing touch interactive surface (which may mean that the user has slightly moved the surface), the existing touch interactive surface is updated using the new tapping position or is deleted. If the new tapping position is not associated with the existing touch interactive surface, a new touch interactive surface is created.
After the interactive system 100 records the positions of the four corner points, at 4208, the interactive system 100 calculates and records the size, 3D position, and 3D orientation of the virtual touch surface. The interactive system 100 may then display the position, direction, and size of the virtual touch surface.
As one of ordinary skill in the art would have recognized, three points are enough to define a flat surface. Therefore, if the virtual touch surface is a flat surface, only three corner points are needed to define the virtual touch surface. However, these three corner points can be used together with the fourth corner point to define a quadrilateral as an interactive area. After the virtual touch surface and the interactive area are defined, the interactive system 100 will only detect and respond to the action of an object within or above this interactive area.
When manually defining the fourth corner point, sometimes it may not be easy for the user to “touch” a point within the flat surface defined by the other three corner points. In some embodiments, a vertical projection of the user's touch point on the flat surface may be used as the fourth corner point.
As compared to a physical touch screen on a computer monitor, the virtual touch surface has certain advantages. For example, for laptop and desktop PC users, the distance to the touch screen is large, and the angle is close to vertical (70 to 80 degrees). At such a distance and angle, the screen is not suitable for touching: it is hard to reach and easily causes fatigue. In contrast, the virtual touch surface consistent with embodiments of the disclosure may be defined to be closer to the user and at an angle that is easy to operate.
As discussed above, the interactive system consistent with embodiments of the disclosure may be used to realize a 2.5D touch interaction. Details of the 2.5D touch interaction are described below.
In some embodiments, the user's hand is used as the foreground object. The interactive system 100 uses the 3D tracking information of the hand (such as, for example, the 3D positions of finger tips and the 3D cylinder direction and length information of fingers) and the environment calibration data to perform a 3D to 2.5D conversion, so as to obtain 2.5D information such as, for example, a distance from a finger tip to a touch interactive surface defined according to, e.g., methods described above, and the direction of a finger relative to the normal of the touch interactive surface.
a*x+b*y+c*z+d+e*x^2+f*y^2+ . . . =0  (6)
At 4602, the positions of all the calibration points are plugged into the following error function to find an error value:
err=sum[sqr(a*x+b*y+c*z+d+e*x^2+f*y^2+ . . . )]  (7)
In some embodiments, a regression method is used to find the best values for parameters a, b, c, d, e, f . . . that minimize the error value “err”. At 4604, the x, y coordinates of the foreground object (which has a 3D position of (x,y,z)) are plugged into the polynomial surface fitting equation to calculate z′ at the given x and y.
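The following is a minimal numpy sketch of the surface fitting and of step 4604; fixing the coefficient of z at one (so the fit reduces to ordinary least squares) and limiting the polynomial to second-order terms in x and y are assumptions, not requirements of the disclosure.

import numpy as np

def fit_touch_surface(calibration_points):
    # Fit z as a polynomial in (x, y) over the calibration points.  With the
    # coefficient of z fixed at one, Eq. (6) becomes
    # z = -(a*x + b*y + d + e*x^2 + f*y^2), so minimizing Eq. (7) reduces to
    # ordinary least squares.
    pts = np.asarray(calibration_points, dtype=float)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    A = np.column_stack([x, y, np.ones_like(x), x ** 2, y ** 2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def surface_z(coeffs, x, y):
    # Step 4604: plug the foreground object's x, y into the fitted surface to
    # get z'; z - z' then approximates its distance to the touch surface.
    return coeffs @ np.array([x, y, 1.0, x ** 2, y ** 2])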
The 2.5D information obtained according to embodiments consistent with the disclosure, such as those described above, may be used in various applications. For example,
As described above, the interactive system 100 can track the position of a user's hand or finger. In some embodiments, the interactive system 100 also tracks the position of the user's eye, and combines the information about the position of the eye and the information about the position of the hand or finger for 3D/2D input.
Consistent with embodiments of the disclosure, the interactive system 100 can detect the 3D position of the user's eye in a manner similar to that described above for detecting the 3D position of the user's hand or finger. The information about the eye, the hand or finger, and the screen of the display 114 is correlated to create a “3D and 2D direct manipulation” interaction. As used in this disclosure, a “direct manipulation” refers to a manipulation that allows the user to directly manipulate objects presented to them. From the user's eye's point of view, the position of the hand or finger is the same as the position of the object being manipulated, which is displayed on a screen, e.g., a 2D position of an object presented by a conventional display device or a 3D position of an object presented by a 3D display.
With the head tracking and the hand tracking combined, the user can interact with content on a 2D screen directly or via a virtual touch surface. The user can also interact with 3D content presented by a 3D display. Moreover, a head mounted 3D display (HMD) may be realized.
A system consistent with embodiments of the disclosure may also include a head mounted 3D display (HMD), which enables virtual reality interaction, such as, for example, interaction with a virtual touch surface, interaction with a virtual 3D object, or virtual interaction with a physical 3D object.
With the HMD system 5500, the user may interact with a fixed 2D display in a manner similar to those described above with respect to the scenario where an HMD is not used.
As described above, using a 3D interactive system consistent with the disclosure, such as the interactive system 100 described above, a user can define a touch interactive surface (also referred to as a touch surface). As described above, the touch interactive surface may be a virtual surface defined in an open space (e.g., an air touch plane, such as, for example, the virtual touch surface shown in
As indicated above, a touch interactive surface may be defined using three or more corner points, such as the four corner points, Point3D_1, Point3D_2, Point3D_3, and Point3D_4 shown in
z=Ax+By+C (8)
where x, y, and z are spatial coordinates (also referred to as 3D coordinates) in a 3D coordinate system defined in the space (also referred to as a space coordinate system), and A, B, and C are coefficients that need to be determined. The origin of the space coordinate system may be positioned at, for example, a point on the 3D interactive system, such as a middle point between two imaging sensors 304 of the sensing device 300 of the interactive system 100. The 3D coordinates (x,y,z) of a point in the space may be determined using methods consistent with embodiments of the disclosure, such as the methods described above in connection with
In some embodiments, the touch interactive surface can be determined by using three of the four corner points to determine, for example, a normal vector of the touch interactive surface:
Normal Vector=Vector(Point3D_2,Point3D_1)×Vector(Point3D_2,Point3D_3)  (9)
where Vector(Point3D_2, Point3D_1) represents a vector from corner point Point3D_2 to corner point Point3D_1, Vector(Point3D_2, Point3D_3) represents a vector from corner point Point3D_2 to corner point Point3D_3, and “×” means cross product.
In some embodiments, the touch interactive surface can be determined using a Singular Value Decomposition (SVD) method to fit 3D plane parameters, i.e., 3D positions of calibration points, such as the corner points in
In the above, the touch interactive surface and the vectors are expressed in the space coordinate system. Using the above vectors, a 2D coordinate system defined on the touch interactive surface (also referred to as a touch surface coordinate system) may be established. This 2D coordinate system may use, for example, corner point Point3D_2 as the Origin, i.e., the origin of the touch surface coordinate system. A Right Vector and an Up Vector calculated as follows may be defined as the coordinate axes of the touch surface coordinate system:
Right Vector=Point3D_2−Point3D_1  (10)
Up Vector=Right Vector×Normal Vector (11)
where Normal Vector is calculated according to, for example, one of the methods discussed above, and may be expressed as Normal Vector=(−A,−B,1). The calculated Right Vector and Up Vector are schematically shown in
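A minimal numpy sketch of Eqs. (9)-(11) is shown below; normalizing the vectors to unit length, which the disclosure leaves implicit, is assumed here so that the dot products in Eqs. (14), (16), and (17) directly give distances and coordinates.

import numpy as np

def touch_surface_axes(p1, p2, p3):
    # Build the touch surface coordinate system from three corner points
    # (Point3D_1, Point3D_2, Point3D_3); Point3D_2 is used as the Origin.
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    normal = np.cross(p1 - p2, p3 - p2)          # Eq. (9)
    normal /= np.linalg.norm(normal)             # unit length (assumed)
    right = p2 - p1                              # Eq. (10)
    right /= np.linalg.norm(right)
    up = np.cross(right, normal)                 # Eq. (11)
    return p2, right, up, normal                 # Origin and the three axes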
With the touch surface coordinate system defined above, any given 3D point P(x,y,z) may be projected to the touch interactive surface using a mapping function M. The coordinates (u,v) of the projection point in the touch surface coordinate system, as well as the distance d from the 3D point P(x,y,z) to the touch interactive surface, may be determined using the mapping function M:
(u,v,d)=M(x,y,z) (12)
Consistent with embodiments of the disclosure, first, the 3D point P(x,y,z) is projected to a 3D point P′(x′,y′,z′) on the touch interactive surface:
P′=P−(Normal Vector*d) (13)
where “*” means scalar multiplication, and the distance d may be calculated as follows:
d=(P−Origin)·Normal Vector (14)
where “·” means dot product (also referred to as scalar product). Note in Eqs. (13) and (14), the calculations are still performed in the space coordinate system, and therefore the coordinates of each point, i.e., each of points P, P′, and Origin, are the 3D coordinates of such point in the space coordinate system.
Then, the 2D coordinates of P′ in the touch surface coordinate system, i.e., (u,v), are calculated by first calculating a vector Vec according to Eq. (15) below and then calculating u and v by dot product as in Eqs. (16) and (17), respectively.
Vec=(P′−Origin) (15)
u=Vec·Right Vector (16)
v=Vec·Up Vector (17)
In Eqs. (15)-(17), the points and vectors are still expressed in the space coordinate system, but the calculating results, i.e., u and v, are the 2D coordinates of the projection point P′ in the touch surface coordinate system.
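A minimal numpy sketch of the mapping function M of Eqs. (12)-(17) is shown below, assuming the Origin and the unit-length Right, Up, and Normal vectors from the sketch above.

import numpy as np

def mapping_M(p, origin, right, up, normal):
    # Orthogonally project the 3D point P onto the touch interactive surface
    # and return (u, v, d), as in Eq. (12).
    p = np.asarray(p, dtype=float)
    d = np.dot(p - origin, normal)        # Eq. (14): signed distance to the surface
    p_prime = p - normal * d              # Eq. (13): projection point P'
    vec = p_prime - origin                # Eq. (15)
    u = np.dot(vec, right)                # Eq. (16)
    v = np.dot(vec, up)                   # Eq. (17)
    return u, v, d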
The 2D coordinates (u,v) of the projection point P′ in the touch surface coordinate system may then be converted to 2D coordinates of a point in a 2D coordinate system on the display screen (also referred to as an S coordinate system in this disclosure), as discussed below.
First, using the mapping function M discussed above, the 3D coordinates (x,y,z) of each of the four corner points on the touch interactive surface can be converted to 2D coordinates (u,v) in the touch surface coordinate system, i.e.:
P1′(u,v)=M(Point3D_1)  (18)
P2′(u,v)=M(Point3D_2)  (19)
P3′(u,v)=M(Point3D_3)  (20)
P4′(u,v)=M(Point3D_4)  (21)
Note here the distance d for each of P1′, P2′, P3′, and P4′ is zero (0), because these points are on the touch interactive surface. In some embodiments, such as the embodiments discussed above, since Point3D_2 is used as the Origin for the touch surface coordinate system, P2′(u,v) is P2′(0,0). The results of such conversions are schematically shown in
In some embodiments, the four corner points on the touch interactive surface correspond to the four corners on the display screen, which may be expressed as S1(0,0), S2(W,0), S3(W,H), and S4(0,H), respectively. In some embodiments, the S coordinate system may be defined according to actual physical dimensions of the screen, in which W and H represent actual physical width and height of the screen, respectively, and thus have dimension of length (with a unit of, for example, inch or mm). In some embodiments, the S coordinate system may be defined according to the pixel numbers of the screen (which would be dimensionless) rather than physical dimensions. For example, if the screen has a resolution of 1920 by 1080, then the positions of the corners are (0,0), (1919,0), (0,1079), and (1919,1079), respectively. Similarly, a point on the screen may have a position of, for example, (800,500). In some embodiments, the S coordinate system may be defined so that the four corner points are expressed as S1(0,0), S2(1,0), S3(1,1), and S4(0,1), respectively, and a point on the screen is expressed as a percentage or fraction of the width and a percentage or fraction of the height. For example, coordinates (0.5,0.5) represent the center point of the screen.
The correspondences between the corner points on the touch interactive surface and the corner points on the display screen are schematically illustrated in
Using the above correspondences, a homography transform matrix, H, that maps 2D coordinates (u,v) in the touch surface coordinate system to 2D coordinates (X,Y) in the S coordinate system can be obtained. That is, the H matrix can be used to transform any point P′(u,v) on the touch interactive surface to a corresponding point s(X,Y) in the S coordinate system on the screen:
s(X,Y)=perspectiveTransform(P′(u,v),H) (22)
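A minimal sketch of obtaining and applying the homography transform matrix H is shown below, assuming OpenCV; the corner ordering (P1′ through P4′ mapped to S1 through S4) follows the correspondences described above, and a physical-dimension S coordinate system with width W and height H is assumed.

import cv2
import numpy as np

def touch_surface_to_screen_homography(corners_uv, screen_w, screen_h):
    # Compute the homography from the four corner points expressed in the touch
    # surface coordinate system (P1'..P4') to the four screen corners S1..S4.
    src = np.asarray(corners_uv, dtype=np.float32)
    dst = np.array([[0, 0], [screen_w, 0], [screen_w, screen_h], [0, screen_h]],
                   dtype=np.float32)
    return cv2.getPerspectiveTransform(src, dst)

def to_screen(u, v, H):
    # Eq. (22): transform a projection point P'(u, v) to screen coordinates (X, Y).
    pt = np.array([[[u, v]]], dtype=np.float32)
    return cv2.perspectiveTransform(pt, H)[0, 0]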
According to the embodiments discussed above, to transform the projection point P′ to a corresponding point on the screen, first, the 3D coordinates of the projection point P′ in the space coordinate system are converted to the 2D coordinates of the projection point P′ in the touch surface coordinate system, and then such 2D coordinates are transformed to the 2D coordinates of the corresponding point in the S coordinate system. In some embodiments (such as embodiments discussed below), however, the step of converting the 3D coordinates of the projection point P′ in the space coordinate system to the 2D coordinates of the projection point P′ in the touch surface coordinate system can be omitted.
Moreover, in the embodiments discussed above, the projection of the 3D point P onto the touch interactive surface includes an orthogonal projection, i.e., a line connecting the 3D point P (a physical point) and the projection point P′ (a virtual point) is perpendicular to the touch interactive surface. In some embodiments, however, the projection can be a non-orthogonal projection, i.e., the line connecting the 3D point P and the projection point P′ is not perpendicular to the touch interactive surface. For example, the projection may be performed assuming an imaginary light source located at a certain distance away from the touch interactive surface is illuminating the 3D point P, and the projection point P′ is the shadow of the 3D point P on the touch interactive surface. As another example, the 3D point can be projected to the touch interactive surface by projecting a line from one of the user's eyes, or from a point in the middle between the user's two eyes, through the 3D point P to the touch interactive surface, and the intersection point between the line and the touch interactive surface is the projection point P′.
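For the eye-ray variant of the non-orthogonal projection, a minimal sketch of the ray-surface intersection is shown below, assuming a planar touch interactive surface described by its Origin and unit Normal Vector in the space coordinate system.

import numpy as np

def eye_ray_projection(eye, p, origin, normal):
    # Cast a ray from the eye position through the 3D point P and intersect it
    # with the touch interactive surface; the intersection is the projection
    # point P'.  All inputs are in the space coordinate system.
    eye, p, origin, normal = (np.asarray(v, dtype=float)
                              for v in (eye, p, origin, normal))
    direction = p - eye
    denom = np.dot(direction, normal)
    if abs(denom) < 1e-9:
        return None                      # ray is parallel to the surface
    t = np.dot(origin - eye, normal) / denom
    return eye + t * direction           # projection point P'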
Consistent with embodiments of the disclosure, a distance D between the 3D point P and the touch interactive surface may also be determined.
Consistent with embodiments of the disclosure, the projection of the 3D point P on the object to the projection point P′ on the touch interactive surface may be expressed as:
(x′,y′,z′;D)=PF(x,y,z) (23)
where x, y, z represent the 3D coordinates of the 3D point P in the space coordinate system, x′, y′, and z′ represent the 3D coordinates of the projection point P′ in the space coordinate system, and PF is a projection function.
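Equation (23) does not prescribe a particular form for PF. The following is a minimal sketch for the orthogonal-projection case discussed above, again assuming a planar touch interactive surface described by a point and a unit normal, and taking D as the signed perpendicular distance from P to the surface:

```python
import numpy as np

def PF(p, plane_point, plane_normal):
    """Orthogonal-projection sketch of equation (23): returns the projection point
    P'(x', y', z') in the space coordinate system and the distance D from P to the
    touch interactive surface."""
    p = np.asarray(p, float)
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    D = float(np.dot(p - np.asarray(plane_point, float), n))  # signed distance to the plane
    p_prime = p - D * n                                       # foot of the perpendicular
    return p_prime, D
```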
After the projection point P′ is located, its position (expressed in terms of the 3D coordinates in the space coordinate system) on the touch interactive surface is mapped to a position on the actual display screen of the computer, i.e., a point in the S coordinate system. A 2D position indicator is displayed on the screen to mimic the position of the projection point P′.
As described above, the touch interactive surface and the interactive area on the touch interactive surface are defined by four corner points. In some embodiments, the positions of the four corner points in the space coordinate system can be expressed as (x′0,y′0,z′0), (x′1,y′1,z′1), (x′2,y′2,z′2), and (x′3,y′3,z′3), respectively. As discussed above, the four corner points on the touch interactive surface can be mapped to four corners of the screen having positions (0,0), (W,0), (W,H), and (0,H), respectively, defined in the S coordinate system.
Based on the 3D coordinates of the four corner points on the touch interactive surface and the 2D coordinates of the corresponding four corners on the screen, a mapping function F can be obtained by, for example, a fitting method. Using this mapping function F, any point on the touch interactive surface having a position (x′,y′,z′) can be mapped to a corresponding point on the screen having a position (X,Y):
(X,Y)=F(x′,y′,z′) (24)
In some embodiments, the mapping function F may map a point on the touch interactive surface onto the screen proportionally to the four corner points. As a consequence, the position of the 2D position indicator on the screen can be determined using the mapping function F based on the position of the projection point P′ on the touch interactive surface that corresponds to the 3D point P on the object.
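As a non-limiting sketch of one possible fitting method, an affine map from the 3D corner coordinates to the screen corners may be obtained by least squares; this reproduces the corners exactly when the interactive area is a parallelogram, and a projective fit could be substituted for a general quadrilateral. The corner coordinates in the example are placeholders:

```python
import numpy as np

def fit_mapping_F(surface_corners_3d, screen_corners_2d):
    """Fit the mapping function F of equation (24) from the four 3D corner points of the
    touch interactive surface to the four screen corners (affine least-squares sketch)."""
    src = np.asarray(surface_corners_3d, float)               # shape (4, 3)
    dst = np.asarray(screen_corners_2d, float)                # shape (4, 2)
    A = np.hstack([src, np.ones((len(src), 1))])              # homogeneous 3D coordinates
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)               # (4, 2) coefficient matrix
    def F(x, y, z):
        X, Y = np.array([x, y, z, 1.0]) @ M
        return X, Y
    return F

# Placeholder corners: a 300 mm x 200 mm interactive area at z = 500 mm, mapped to a
# 1920x1080 screen.  F(150, 100, 500) then lands near the center of the screen.
F = fit_mapping_F(
    [[0, 0, 500], [300, 0, 500], [300, 200, 500], [0, 200, 500]],
    [[0, 0], [1919, 0], [1919, 1079], [0, 1079]],
)
```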
Consistent with embodiments of the disclosure, a graphic representation of the object, such as a finger, may also be generated and displayed on the screen. In some embodiments, the graphic representation includes an object shadow and an object indicator, which are described below.
Consistent with embodiments of the disclosure, the graphic representation moves with the object, e.g., the finger. The object shadow and the object indicator together give the user a realistic sense of the object's position and of its distance to the touch interactive surface. Transparencies of the object shadow and/or the object indicator may change as the distance between the object and the touch interactive surface changes. For example, the transparencies of the object shadow and/or the object indicator may decrease as the object moves toward the touch interactive surface, and may become zero, i.e., the object shadow and/or the object indicator become opaque, when the object “touches” the touch interactive surface. In addition, sizes of the object shadow and/or the object indicator may also change as the distance between the object and the touch interactive surface changes. For example, the sizes of the object shadow and/or the object indicator may decrease as the distance decreases.
In some embodiments, the distance between the object and the touch interactive surface may be represented by the distance d (or D) described above.
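One simple way to realize the distance-dependent transparency and size described above is sketched below; the maximum distance d_max and the particular linear curves are illustrative design choices, not values specified by the disclosure:

```python
def shadow_appearance(d, d_max=150.0):
    """Toy model of how the object shadow and/or the object indicator may respond to the
    distance d (in the same length unit as d_max) between the object and the touch
    interactive surface."""
    ratio = max(0.0, min(d / d_max, 1.0))
    transparency = ratio          # 0.0 (fully opaque) when the object touches the surface
    scale = 1.0 + 0.5 * ratio     # the graphics shrink as the object approaches the surface
    return transparency, scale
```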
Consistent with embodiments of the disclosure, the position of the object shadow and that of the object indicator may be determined by the position (X,Y) of the 2D position indicator and the distance d (or the distance D). As used in this disclosure, the position of the object shadow or the object indicator may be a point on the object shadow or the object indicator that corresponds to a physical point on the object.
Consistent with embodiments of the disclosure, when the interactive system 100 detects that the user has performed a “click” or a “tap” action, the 2D position indicator may animate, e.g., change size, color, or shape, to confirm detection of the “click” or “tap” action. As used in the disclosure, a “click” or a “tap” action may be a sudden move of the object, e.g., the finger, toward the touch interactive surface. When such an action is detected, the position of the 2D position indicator remains unchanged even though the position of the object has actually changed. As a result, more precise control may be realized.
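A heuristic sketch of such “click”/“tap” detection is given below: a tap is reported when the distance d decreases by more than a threshold within a short time window, and while the gesture is in progress the previously reported indicator position (X,Y) can simply be reused so that the indicator does not drift. The threshold and window values are assumptions for illustration:

```python
from collections import deque
import time

class TapDetector:
    """Report a 'tap'/'click' when the object moves suddenly toward the touch interactive
    surface, i.e., when the distance d drops by at least drop_threshold within window_s."""

    def __init__(self, drop_threshold=20.0, window_s=0.15):
        self.drop_threshold = drop_threshold   # required decrease in d (e.g., in mm)
        self.window_s = window_s               # time window in seconds
        self.samples = deque()                 # (timestamp, d) pairs

    def update(self, d, now=None):
        now = time.monotonic() if now is None else now
        self.samples.append((now, d))
        while self.samples and now - self.samples[0][0] > self.window_s:
            self.samples.popleft()
        oldest_d = self.samples[0][1]
        return (oldest_d - d) >= self.drop_threshold   # True when a tap is detected
```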
In addition to the sizes and positions of the object shadow and the object indicator, other visual effects may also be added, such as rendering the 3D direction of the object, which may be determined according to the methods discussed above. In some embodiments, the 3D direction (in free space) of the object may be converted to a 3D direction relative to the touch interactive surface. The direction of the object shadow and/or that of the object indicator may then be dynamically modified according to the object's 3D direction relative to the touch interactive surface.
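One possible way to express the object's free-space direction relative to the touch interactive surface is to project it onto an orthonormal basis attached to the surface (two in-plane axes and the surface normal, which may be derived from the corner points); the following is a sketch under that assumption:

```python
import numpy as np

def direction_relative_to_surface(direction, e1, e2, normal):
    """Decompose the object's 3D direction into components along two in-plane axes e1, e2
    and the surface normal; e1, e2, and normal are assumed to be orthonormal."""
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    return np.array([np.dot(d, e1), np.dot(d, e2), np.dot(d, normal)])
```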
Moreover, the position of the object, such as the finger, may be combined with the position of the user's head to provide a more realistic rendering of the object shadow and the object indicator. In some embodiments, the position of the user's head, (xH,yH,zH), is assumed and set by the interactive system 100. In some embodiments, the interactive system 100 also includes a head tracker that dynamically provides the position (xH,yH,zH) of the user's head. Based on (xH,yH,zH) and the object's position information, e.g., the position (X,Y) of the 2D position indicator and the distance d (or the distance D), the size, angle, and position of the object shadow and the object indicator can be determined.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
This is a continuation-in-part application of application Ser. No. 14/034,286, titled “Interactive Input System and Method,” filed Sep. 23, 2013, which is based upon and claims the benefit of priority from Provisional Application No. 61/811,680, titled “3D and 2D Interactive Input System and Method,” filed on Apr. 12, 2013, and Provisional Application No. 61/841,864, titled “3D and 2D Interactive Input System and Method,” filed on Jul. 1, 2013. This application is also based upon and claims the benefit of priority from Provisional Application No. 61/869,726, titled “3D and 2D Interactive Input System and Method,” filed on Aug. 25, 2013. The entire contents of the above-referenced applications are incorporated herein by reference.
Number | Date | Country
---|---|---
61/811,680 | Apr. 2013 | US
61/841,864 | Jul. 2013 | US
61/869,726 | Aug. 2013 | US
Relation | Number | Date | Country
---|---|---|---
Parent | 14/034,286 | Sep. 2013 | US
Child | 14/462,324 | | US