The present disclosure relates to human-computer interaction systems. More specifically, this disclosure pertains to methods and systems for use of a conventional video recorder such as a webcam for two-dimensional (2D) and three-dimensional (3D) pointing and command inputs in a computing system relying on passive light detection for both 2D and 3D modes. Embodiments of a pointing/input device for use in the methods and systems are disclosed.
The operation of a conventional mechanical or optical pointing or input device such as a mechanical or optical computer mouse is well known in the art. By use of these devices, the user can select files, programs, or actions from lists, groups of icons, etc., and can “gesturally” move files, programs, etc., issue commands, or map movements to specific actions, for example in drawing programs.
As examples, a mechanical computer mouse relies on one or more wheels and/or balls to track movement or displacement information relative to forward-backward and left-to-right movement of the computer mouse, for example by interrupting infrared beams of light directed at light sensors to create pulses representative of wheel or ball movement. Simple logic circuits interpret the relative timing of the pulses to indicate which direction the wheel(s) or ball(s) is moving, which is then converted by driver software into motion of a visual indicator such as a pointer, cursor, or cross-hair along X and Y axes of a computing device display screen.
An optical computer mouse replaces the mechanical mouse wheels or balls with one or more visible or invisible light sources such as light-emitting diodes (LEDs), laser diodes, infra-red light, etc. to detect movement of the mouse relative to an underlying surface such as a mouse pad. The inertial/gyroscopic computer mouse uses a tuning fork or other accelerometer to detect rotary movement for every axis supported, most commonly using 2 degrees of rotational freedom and being insensitive to spatial translation. The user need only perform small wrist rotations to move a pointer or cursor on a display screen.
Almost all modern 2D pointing devices utilize an active approach for detecting movement of the device. The underlying technology of modern surface-independent (meaning that no specific surface type is required, although some surface is) 2D pointing/input devices such as optical mice (see
Even though a special surface such as a mouse pad is not needed by a modern optical mouse, a surface is still required for operation of the mouse. If a suitable operating surface is not available and an alternative such as a touchpad or trackball is also not available, a conventional optical mouse cannot be used. In turn, certain tasks often done with pointing/input devices such as a computer mouse are difficult or impossible to accomplish with alternative pointing/input devices such as touchpads or trackballs. For example, use of drawing programs without a computer mouse can be difficult if not impossible. Likewise, tasks such as two-dimensional (2D) or 3D sculpting or drawing, “flying” in multi-dimensional space (for example, three-dimensional space defined by X, Y, and Z axes) such as during gaming, etc. would be difficult to accomplish using a conventional touchpad, trackball, etc. Moreover, in the modern world personal computers (PCs) are not merely tools for surfing the internet and sending e-mail. Increasingly, PCs serve as digital media centers to view photos, listen to music, and watch video clips, films, and TV shows. Indeed, notebook, laptop, and desktop computers are rapidly replacing home entertainment centers.
Likewise, the modern television is no longer just a TV: it offers integrated Internet capabilities, and set-top boxes provide more advanced computing ability and connectivity than a contemporary basic TV set. The modern “smart” TV can deliver content from computers or network attached storage devices, such as photos, movies and music. These devices also provide access to Internet-based services including traditional broadcast TV channels, pre-recorded programs, video-on-demand, electronic program guides, interactive advertising, personalization, voting, games, social networking, and other multimedia applications. All of these require a remote control-like device that can provide cursor control, i.e. a pointing/input device as is known for computing devices. Unfortunately, traditional television remote controls cannot conveniently provide such functionality. As noted above, a conventional pointing/input device such as a computer mouse requires a desk or other hard surface to function.
For this reason, attempts have been made to adapt the familiar computer mouse to operate in the air or “on the fly,” to avoid the need for a surface over which to translate the mouse for operation. Indeed, 3D pointing has long been a desired feature in human-computer interaction to allow tasks that are not possible with a 2D pointing device, such as 3D sculpting or space navigation. However, 3D pointing technology has not yet reached a stage at which it is both reasonably affordable and convenient to manipulate.
For 3D pointing, it is necessary to identify the location of the pointing device with respect to a reference point in a 3D space. Unlike 2D pointing, which mainly uses an active approach, 3D pointing has been attempted using both active and passive approaches. The approach taken depends on whether the pointing device includes a displacement detection system that works in 3D space. The optical displacement detection system of an optical mouse can only work on a surface due to the operating mechanism summarized above; it cannot work if suspended in 3D space.
In the active approach, an imager such as an IR camera is typically integrated into the pointing device to detect light from an IR emitter of a console, such as the console of a gaming device, and to calculate spatial coordinates for the pointing device accordingly. The Wii® Remote marketed by Nintendo® falls within that category. A problem with this approach is that the pointing device spatial coordinates can only be calculated when its imager has a direct line of sight to a sensor bar associated with the gaming device console.
Another active type of 3D mouse uses a tuning fork or other accelerometer to detect rotary movement for every axis supported. Logitech® and Gyration's inertial mice (also called gyroscopic mice) fall in this category. The most common models work using 2 degrees of rotational freedom. An operator uses wrist rotations to move the cursor. The inertial mouse is insensitive to spatial translations. More recently, an inertial mouse was developed that is equipped with g-sensors (pairs of accelerometers extended over a region of space) to calculate the mouse position, orientation, and velocity; hence such a mouse can provide at least 9 spatial parameters for pointing purposes. However, the price of such an inertial mouse is quite high, usually 10 times the price of a typical optical mouse.
For pointing devices that do not include a distance measuring component, a passive approach has been evaluated requiring a separate component to measure the distance between the pointing device and, for example, a gaming device or base station, or to identify the location of the pointing device with respect to the gaming device or base station. All gesture-based pointing device approaches, such as the Kinect® device marketed by Microsoft®, belong to this latter category. In this case, the fingers or the hands of a user play the role of a pointing device and a special imaging device is required to identify the locations of the fingers or hands of the user. Three-dimensional mice such as 3Dconnexion/Logitech's® SpaceMouse® in the early 1990s and Kantek's® 3D RingMouse® in the late 1990s, also known as bats, flying mice or wands, also fall in this category. As an example, the RingMouse® was tracked by a base station through ultrasound. This approach has been found to provide insufficient resolution.
Still other attempts have been made to implement passive detection of a pointing device location by combining pointing and imaging functionalities in a single device. In one such device, a digital camera mounted into the housing of a computer mouse includes a mode selection system to switch the device between a 2D mouse function and a digital camera function (see
To date, the present inventors are unaware of any attempts to use a single-lens imaging device to capture the motion and clicking activities of an “on the fly” pointing device for 2D and 3D pointing in a 3D human-computer interaction system.
To solve the foregoing problems and address the identified need in the art, the present disclosure provides a human-computer interaction system supporting 2D and 3D pointing in “air mode.” In the following, for easy reference the term “air mode” refers to the operation of a pointing/input device in the air, i.e. 3D space, and the term “surface mode” refers to the operation of a pointing/input device on a surface such as a desk. The system uses a single lens imaging device and software to process light emitted from a pointing/input device and to determine therefrom a position and/or angle of the pointing/input device. Systems and methods incorporating these devices are provided. In particular, the present disclosure provides systems and methods via which such “air mode” pointing and command input can be achieved using substantially conventional single lens imaging devices such as standard webcams.
In one aspect, a human-computer interface system is provided including at least one pointing/input device and an imaging device operably connected to a computing device. A pointing/input device is provided which lacks an internal displacement detection system. Instead, the pointing/input device includes two sets of actuable visible point light sources, with each set emitting light having a wavelength defining a predetermined color. The individual point light sources of a set of point light sources are typically aligned with one another. The predetermined color of the first visible point light source set is different from the predetermined color of the second visible point light source set.
The at least one pointing/input device is held or moved in a three-dimensional space disposed within a field of view of the imaging device. The imaging device, which may be a conventional webcam, captures a plurality of sequential image frames, each including a view of a position of the pointing/input device (determined by the actuated sets of visible point light sources) within the imaging device field of view. The two sets of aligned visible point light sources are differently actuated according to whether 2D pointing/input or 3D pointing/input is desired, or whether a specific command such as “drag and drop” is to be executed.
Then, from the captured plurality of sequential image frames, the different actuation of the sets of visible point light sources is interpreted by one or more computer program products as corresponding to a particular pointing, movement, or input command. A visual marker is then rendered on a graphical user interface, the visual marker being mapped to the particular pointing, movement, or input command.
One or more computer program products are provided including executable instructions for calculating a 2D or a 3D position and/or motion and/or orientation or angle of a pointing/input device in a captured image including a view of the pointing/input device, and mapping that position and/or orientation to a corresponding visual marker position in a graphical user interface. Likewise, particular combinations of activated point light sources of the pointing/input device can be mapped to particular pointing and/or input commands. Acquired sequential digital image frames are converted to digital data by an imaging device sensor, and analyzed to determine as needed a position, a depth, and an orientation/angle of particular sets of activated point light sources or individual activated point light source combinations on the pointing/input device. From this information, a visual marker such as a cursor is rendered on a graphical user interface, corresponding to a calculated 2D or 3D position and/or motion of the pointing/input device moved in three-dimensional space, and to various input commands of the pointing/input device.
In another aspect, methods are provided for executing pointing and/or input commands using differently actuated sets of aligned visible point light sources of a pointing/input device according to the present disclosure. Sequential images including a position and actuated visible point light source pattern are captured by an imaging device, typically a single lens imaging device such as a conventional webcam. The captured sequential images including a view of the pointing/input device are used to calculate a position, motion, and/or input command of the pointing/input device and to map that position, motion, and/or input command of the pointing/input device to a corresponding visual marker such as a cursor displayed in a graphical user interface. In this manner, 2D and 3D pointing and command input are made possible using only a single lens imaging device such as a webcam, and further using a pointing/input device that does not require translation over a surface as is the case with, e.g. a conventional optical mouse.
In yet another aspect, a pointing/input device is provided including first and second sets of aligned visible point light sources. The first set of aligned visible point light sources emits light in a wavelength defining a first predetermined color. The second set of visible point light sources emits light in a wavelength defining a second predetermined color that is different from the first predetermined color. Actuators are provided that allow the two sets of visible point light sources, and also individual point light sources within the two sets, to be actuated differently. The different light patterns emitted by differently actuating the two sets of visible point light sources and/or individual point light sources are captured in sequential image frames and interpreted by one or more computer program products as motion, pointing, and/or command inputs of the pointing/input device.
These and other embodiments, aspects, advantages, and features of the present invention will be set forth in the description which follows, and in part will become apparent to those of ordinary skill in the art by reference to the following description of the invention and referenced drawings or by practice of the invention. The aspects, advantages, and features of the invention are realized and attained by means of the instrumentalities, procedures, and combinations particularly pointed out in the appended claims. Unless otherwise indicated, any patent and/or non-patent citations discussed herein are specifically incorporated by reference in their entirety into the present disclosure.
The accompanying drawings incorporated in and forming a part of the specification, illustrate several aspects of the present invention, and together with the description serve to explain the principles of the invention. In the drawings:
In the following detailed description of the illustrated embodiments, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration, specific embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. Also, it is to be understood that other embodiments may be utilized and that process, reagent, materials, software, and/or other changes may be made without departing from the scope of the present invention.
The present disclosure relates to a human-computer interaction system 10 that allows 2D and 3D pointing operations in air mode, i.e. without any requirement for translating a pointing device over a surface to measure a displacement thereof. The system 10 comprises a specialized pointing/input device 14, an imaging device 12, and at least one light tracking computer program. The imaging device 12 may be connected as a separate peripheral to a computing device 16 by wired means such as universal serial bus (USB) cables 17 (see
The imaging device 12 is typically a single lens imager such as a conventional webcam, although use of multi-view imaging devices is contemplated. The imaging device 12 includes a digital video recorder operatively coupled to an image sensor which encodes images for later decoding by the computing device 16. Any suitable video recorder which is or can be configured for use with computing devices 16 is contemplated, such as a conventional webcam or other recorder or recorder configuration for providing digital data representative of captured image frames showing a particular view. However, for each captured image frame typically only one view of a taken scene will be used in the light tracking process even if a multi-view imaging device is used in the imaging process. A number of suitable image sensors are known in the art and are contemplated for inclusion in the present system 10, including without limitation conventional charge-coupled device (CCD) or complementary metal oxide semiconductor (CMOS) technology. The resolution of the imaging device 12 will typically be at least VGA level, i.e., 640×480, although it may be higher.
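By way of illustration only, and not as part of the disclosed system, a minimal frame-capture loop for such a conventional webcam might look like the following sketch; Python with the OpenCV library, the device index, and the window handling are assumptions made solely for this example.

```python
# Minimal sketch (assumptions: OpenCV is installed and the webcam is device 0).
# It captures sequential image frames at VGA resolution, which the disclosure
# gives as a typical minimum for imaging device 12; each frame would then be
# handed to the light-tracking computer program products.
import cv2

cap = cv2.VideoCapture(0)                      # hypothetical device index
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)         # at least VGA resolution
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

while True:
    ok, frame = cap.read()                     # one captured image frame (BGR)
    if not ok:
        break
    # ... pass `frame` to the light-tracking routines ...
    cv2.imshow("view", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):      # quit on 'q'
        break

cap.release()
cv2.destroyAllWindows()
```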
In one exemplary embodiment, the imaging device 12 is installed as a component of or a peripheral to a keyboard of a computing device 16 such as a laptop or notebook computer (see
In one embodiment (see
The pointing/input device 14 includes a series of actuable point light sources which when actuated allow the system 10 to interpret various patterns of actuated point light sources or sets of point light sources as specific pointing or input commands. In an embodiment, a first set of aligned point light sources 30 and a second set of aligned point light sources 32 are provided on a surface of the pointing/input device 14. The individual point light sources of first and second sets 30, 32 are typically aligned one with another as shown in
However, use of other colors is contemplated, and the skilled artisan can readily derive the wavelengths of the visible spectrum corresponding to alternative colors. For example, a green point light source set 30 could be provided when an RGB (red-green-blue) color model is used. In such an instance, image frames would be converted into an HSV (hue, saturation, and value) color model, i.e. a cylindrical-coordinate representation of the RGB color model, and an intensity of the new color in the image could be computed.
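As a non-limiting sketch of that idea (not the disclosed computer program product), the conversion and intensity measurement could be performed as follows; the OpenCV calls and the particular hue, saturation, and value thresholds for “green” are assumptions for illustration only.

```python
# Sketch: isolate a green point light source by converting a captured BGR frame
# to the HSV color model and measuring how strongly the target hue is present.
# The threshold values below are illustrative assumptions, not disclosed values.
import cv2
import numpy as np

def green_intensity(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([45, 100, 100])          # assumed lower bound for green hue
    upper = np.array([75, 255, 255])          # assumed upper bound for green hue
    mask = cv2.inRange(hsv, lower, upper)     # bright, saturated "green" pixels
    return mask, int(cv2.countNonZero(mask))  # mask plus a crude intensity count
```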
The pointing/input device 14 also includes various interior mechanisms, including batteries 34 (or an alternative power source as described above), main switch 36 and additional switches 38, 40, and 42, and various circuits (shown generally as control circuit 44) (see
For purposes of the following examples, the point light sources of first set 30 will be referenced herein as blue LEDs 30a, 30b and the point light sources of second set 32 will be referred to as red LEDs 32a, 32b, and the various calculations and computations will be tied to use of red and blue point light sources as described. However, as discussed above, alternate colors are contemplated and alternate types of point light sources may be easily adapted to the invention, and so are contemplated for use herein. As summarized above, the pointing/input device 14 is adapted for both 2D and 3D pointing. For 2D pointing mode, the operator activates the imaging device 12 and then activates the pointing/input device 14 using main switch 36. This actuates a middle blue LED 30a. The operator O then holds the pointing/input device 14 in his/her hand with the front side of the device facing the imaging device 12 and within its vertical and horizontal fields of view, as shown in
The operator O moves the screen cursor (not shown) around by moving the pointing/input device 14 around in 3D space, and conveniently performs clicking or dragging operations by pushing/holding the corresponding buttons 24, 26 the same as for an ordinary computer mouse. The operator O can move the pointing/input device 14 in any direction as long as the first and second point light source sets 30, 32 of the pointing/input device 14 are facing the imaging device 12 and the pointing/input device 14 is within the horizontal and vertical fields of view of the imaging device 12. When the operator moves the pointing/input device 14 in the air, the images taken by the imaging device 12 will be processed by one or more computer program products to determine a 2D location of the middle LED 30a. That information is then used to determine the location of a cursor (not shown) on graphical user interface 18.
Computer program product(s) and calculations for performing this tracking job for 2D pointing are described in detail in the present assignee's co-pending U.S. utility patent application Ser. No. 14/089,881 for “Algorithms, software, and an interaction system that support the operation of an on the fly mouse,” the entirety of which is incorporated herein by reference. Briefly, one or more computer program products include executable instructions for calculating a position of activated middle LED 30a in a captured image including a view of the pointing/input device 14, and mapping that middle LED 30a position to a corresponding visual marker position in a graphical user interface 18. Acquired digital image frames are converted to digital data by an imaging device 12 sensor, and analyzed to determine regions of increased color intensity corresponding to a position of the middle blue LED 30a in the image frame. The data may be subjected to one or more filtering steps to remove areas of lesser color intensity, and to remove areas displaying a color that is other than the predetermined blue color of middle LED 30a. Data representative of a location of the middle LED 30a are scaled in a non-linear fashion to render a visual marker such as a cursor on the graphical user interface 18, corresponding to a calculated position and/or motion of the middle LED 30a moved in three-dimensional space.
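The following is only a schematic sketch of that kind of processing, not the algorithm of the above-referenced co-pending application; the OpenCV/NumPy calls, the color thresholds, the gamma-style non-linear scaling, and the assumed screen resolution are all illustrative assumptions.

```python
# Sketch: locate the activated middle blue LED 30a in a frame by keeping only
# bright, saturated "blue" pixels (filtering out weaker areas and other colors),
# take the centroid of the remaining blob, and scale it non-linearly to screen
# coordinates. Thresholds, GAMMA, and the screen size are assumptions.
import cv2
import numpy as np

SCREEN_W, SCREEN_H = 1920, 1080   # assumed display resolution
GAMMA = 1.5                       # assumed non-linear scaling exponent

def track_middle_led(frame_bgr, frame_w=640, frame_h=480):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([100, 120, 150]), np.array([130, 255, 255]))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None                                    # LED not visible this frame
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # centroid of the LED blob
    # Non-linear scaling: motions near the image center map to finer cursor
    # motion than motions near the edges (an illustrative choice of mapping).
    dx = (cx / frame_w - 0.5) * 2.0                    # normalized to [-1, 1]
    dy = (cy / frame_h - 0.5) * 2.0
    sx = 0.5 + 0.5 * np.sign(dx) * abs(dx) ** GAMMA
    sy = 0.5 + 0.5 * np.sign(dy) * abs(dy) ** GAMMA
    return int(sx * (SCREEN_W - 1)), int(sy * (SCREEN_H - 1))
```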
The use of the pointing/input device 14 for left click, right click, drag, etc. operations will now be described. For a left click operation, actuating left button 26 of pointing/input device 14 actuates left switch 42 which in turn actuates a left LED 32b of the second set of visible point light sources 32, which as noted above in the depicted embodiment emits a red light. The left LED 32b will remain in an “on” status as long as the left button 26 is not released. Therefore, by processing the corresponding images including an activated blue middle LED 30a and an activated red left LED 32b, the pointing computer programs interpret the combination of activated point light sources and the length of time of activation of left LED 32b as a left click command, a double click command, or a drag command. Right click functions are performed and processed similarly, except that operator O actuates right button 24 to activate right (red) LED 32a.
It will be appreciated that the two red LEDs 32a, 32b are not activated simultaneously; therefore, the software differentiates left click and right click commands by the position (in a captured image) of the activated red LED relative to the middle blue LED 30a. Thus, activation of the left-side (relative to the middle blue LED 30a) red LED 32b is interpreted as a left click command, and activation of the right-side LED 32a is interpreted as a right click command. To execute a double click command, it is only required to detect a “click,” i.e. a separate event of activating a point light source, twice within a predetermined time period, for example 100 ms. The following actions are interpreted as a drag command: (a) activate left (red) LED 32b; (b) track the activated left (red) LED 32b; and (c) deactivate left (red) LED 32b. The process of tracking an activated red point light source is substantially as described supra for tracking the middle blue LED 30a. Hence, all familiar pointing functions of a computer mouse can be performed by the pointing/input device 14 in 2D mode, except that these functions are performed in the air rather than requiring a surface over which the device 14 is translated.
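Purely as an illustrative sketch (not the disclosed computer program products), per-frame LED detections could be turned into the button events described above along the following lines; the class structure, the event names, and the use of a monotonic clock are assumptions, while the 100 ms double-click window comes from the text above.

```python
# Sketch: interpret per-frame detections as commands. Which red LED is lit
# (to the left or right of the middle blue LED 30a in the image) selects the
# button; the on/off timing of that LED yields press, release, and double-click
# events. A drag is simply a press followed by tracking while the LED stays lit.
import time

DOUBLE_CLICK_WINDOW = 0.100        # seconds, per the disclosure (100 ms)

class ClickInterpreter:
    def __init__(self):
        self.red_was_on = False
        self.last_release = None
        self.side = None

    def update(self, red_on, red_x=None, blue_x=None):
        """red_on: a red LED is detected this frame; red_x/blue_x: image x-coords."""
        events = []
        if red_on and not self.red_was_on:
            # Rising edge: decide left vs. right by image position vs. the blue LED.
            self.side = "left" if red_x < blue_x else "right"
            events.append((self.side, "press"))
        elif not red_on and self.red_was_on:
            now = time.monotonic()
            events.append((self.side, "release"))      # press + release = one click
            if self.last_release is not None and now - self.last_release <= DOUBLE_CLICK_WINDOW:
                events.append((self.side, "double_click"))
            self.last_release = now
        self.red_was_on = red_on
        return events
```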
For operation of the pointing/input device in 3D mode, in the depicted embodiment operator O actuates a 3D button 46 which in turn actuates switch 40 (see
When in 3D mode, drag commands are interpreted differently from those described above for the pointing/input device 14 in 2D mode. A left click and/or drag is still performed by pushing and holding the left button 26 of the pointing/input device 14. As summarized above, operator O actuates left switch 42 which in turn actuates a left red LED 32b of the second set of visible point light sources 32, which is then seen by the imaging device 12. However, instead of moving the window or object pointed to by the cursor to a new location as described for the pointing/input device 14 in 2D mode, in 3D mode this action causes the computer program to change the dimension of the window or the drawing pointed to by the cursor based on a calculated depth of the left red LED 32b of the pointing/input device 14, until the left button 26 is released. Similarly, by pushing and holding the right button 24 to actuate right red LED 32a, the computer program uses a calculated angle of the right red LED 32a of the pointing/input device 14 to change the orientation of an object or a photo rendered on a graphical user interface 18.
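As a brief, hypothetical sketch of that behavior (the function names, the gain constant, and the object representation are assumptions, not part of the disclosure), the calculated depth and angle could drive resize and rotate operations roughly as follows.

```python
# Sketch: in 3D mode, a held left button resizes the pointed-to window or drawing
# according to the change in calculated depth of LED 32b, and a held right button
# rotates it according to the change in calculated angle of LED 32a. The gain k
# and the (width, height)/angle representation are illustrative assumptions.
def resize_from_depth(base_size, depth_at_press, depth_now, k=0.5):
    """Return a new (width, height) scaled by the relative change in device depth."""
    scale = 1.0 + k * (depth_at_press - depth_now) / max(depth_at_press, 1e-6)
    return base_size[0] * scale, base_size[1] * scale

def rotate_from_angle(base_angle, angle_at_press, angle_now):
    """Return a new orientation offset by the change in device angle (degrees)."""
    return base_angle + (angle_now - angle_at_press)
```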
Exemplary calculations used by the one or more computer program products for using captured images showing the variously actuated point light sources of the first and second sets 30, 32 of visible point light sources will now be described. For this process, one assumption is that a pinhole of the imaging device 12 (Ō) is the center of perspective projection (see
Certain variables in the calculations are as follows:
Given:
Ō=(0, 0, 0): center of projection
V1V2V3: three aligned vertices Vi=(xi, yi, zi), i=1, 2, 3, with V2 being the midpoint of V1 and V3, and the distance between V1 and V2 is w. V1V2V3 is not perpendicular to the projection plane P.
V̄1, V̄2, V̄3: projections of V1, V2, V3 onto the projection plane P, obtained from the locations of the corresponding activated point light sources in the captured image frame.

To calculate Vi from V̄i, note that the projection plane P lies at a distance f (the focal length of the imaging device 12) from Ō. Hence, each projection can be written as

V̄i=(x̄i, ȳi, f), i=1, 2, 3  (1)

On the other hand, since Ō, V̄1, and V1 are collinear and Ō is the origin, V1 can be expressed as

V1=t·V̄1  (2)

for some t>0, and

V3=s·V̄3  (3)

for some s>0. t and s are to be determined. Hence, as the midpoint of V1 and V3, V2 can also be expressed as

V2=½(t·V̄1+s·V̄3)  (4)

Using the fact that Ō, V̄2, and V2 are also collinear, the cross product of V2 and V̄2 must vanish, i.e. (t·V̄1+s·V̄3)×V̄2=0. The condition that the two cross-product terms cancel each other yields

t·p=s·q, where p=|V̄1×V̄2| and q=|V̄2×V̄3|  (5)

(5) is an important result. From (5), s can be expressed as

s=pt/q  (6)

Substituting (6) into (3) for s and using the fact that V1V3 is a line segment of length 2w, we have

|(pt/q)·V̄3−t·V̄1|=2w

Solving this equation for t, we get

t=2wq/|p·V̄3−q·V̄1|  (7)

By substituting (7) into (6), we get

s=2wp/|p·V̄3−q·V̄1|  (8)
Hence, using (7) for t in (2) and (8) for s in (3), we get V1 and V3. As the midpoint of V1 and V3, V2 can be computed using (4). After acquiring the coordinates of the three aligned blue LED lights 30a, 30b as explained above, a vector in 3D mode can be calculated by using the formula V3−V1=<X3−X1, Y3−Y1, Z3−Z1>. Therefore, the orientation or angle of the pointing/input device 14 can be defined as the angle between the vector V3−V1 and the positive x-axis, and calculated from the above information.
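A compact sketch of these calculations is given below; it assumes NumPy, that the projections V̄1, V̄2, V̄3 are supplied as 3-vectors on the projection plane (for example (x̄, ȳ, f) with f the focal length), and that w is the known spacing between adjacent LEDs. It is an illustration of equations (2) through (8) as reconstructed above, not the disclosed computer program product.

```python
# Sketch of the derivation above: recover the 3D positions V1, V2, V3 of the
# three aligned blue LEDs from their image-plane projections and the known
# spacing w = |V1V2| (half of |V1V3|), then compute the device orientation angle.
import numpy as np

def recover_aligned_points(v1_bar, v2_bar, v3_bar, w):
    v1_bar, v2_bar, v3_bar = (np.asarray(v, dtype=float) for v in (v1_bar, v2_bar, v3_bar))
    p = np.linalg.norm(np.cross(v1_bar, v2_bar))               # p = |V̄1 x V̄2|, from (5)
    q = np.linalg.norm(np.cross(v2_bar, v3_bar))               # q = |V̄2 x V̄3|, from (5)
    t = 2.0 * w * q / np.linalg.norm(p * v3_bar - q * v1_bar)  # (7)
    s = p * t / q                                              # (6), equivalently (8)
    V1 = t * v1_bar                                            # (2)
    V3 = s * v3_bar                                            # (3)
    V2 = 0.5 * (V1 + V3)                                       # (4), midpoint
    return V1, V2, V3

def device_angle_deg(V1, V3):
    d = V3 - V1                               # vector along the aligned LEDs
    return float(np.degrees(np.arccos(d[0] / np.linalg.norm(d))))
```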
Thus, by use of digital data rendered from captured images including the activated aligned point light sources of first point light source set 30, a depth of each point light source, and so a depth and/or orientation of the pointing/input device 14 in 3D space, may be calculated as described and rendered as a cursor on a graphical user interface 18. In turn, the relative depths of the point light sources of aligned point light source set 30 are used to determine an orientation or angle of the pointing/input device 14 in 3D space. Those calculated depths/orientations are then interpreted as particular pointing or command inputs of the pointing/input device 14 as described, and rendered on a graphical user interface 18 of a computing device 16.
Summarizing, the present disclosure provides a pointing/input system including an input or pointing device 14 which allows pointing and command input in 2D and 3D mode, without requiring a direct connection to a computing device 16 or a surface over which the pointing/input device 14 must be translated. All standard functions of the pointing/input device 14 such as left click, right click, drag and drop, etc. are performed using buttons and actions corresponding to those with which a user of a conventional computer mouse is familiar. The pointing/input device 14 is inexpensive and simple, requiring only sets of aligned visible point light sources and simple circuitry. Advantageously, the disclosed system 10 is likewise economical and simple, and its hardware is likely already available in many homes. But for the pointing/input device 14 and software, the system 10 requires only a computing device 16 and a conventional imaging device 12 such as a standard webcam, with no requirement for any specific wired or wireless connection (such as wiring or cabling, or a specialized IR or other signal) between the pointing/input device 14 and the imaging device 12. Exemplary advantages of the disclosed system include allowing an operator to point and/or input gesture commands to a computing device, a “smart” television, and the like in either 2D or 3D mode, transitioning between these modes by simple switch actuations. Still further, the system of the present disclosure can be readily retrofitted to existing computing devices as long as the devices support operation of an integrated or peripheral imaging device such as a webcam.
One of ordinary skill in the art will recognize that additional embodiments of the invention are possible without departing from the teachings herein. Thus, the foregoing description is presented for purposes of illustration and description of the various aspects of the invention. This detailed description, and particularly the specific details of the exemplary embodiments, is given primarily for clarity of understanding, and no unnecessary limitations are to be imported, for modifications will become obvious to those skilled in the art upon reading this disclosure and may be made without departing from the spirit or scope of the invention. Relatively apparent modifications, of course, include combining the various features of one or more figures with the features of one or more other figures. All such modifications and variations are within the scope of the invention as determined by the appended claims when interpreted in accordance with the breadth to which they are fairly, legally and equitably entitled.
This utility patent application claims the benefit of priority in U.S. Provisional Patent Application Ser. No. 61/865,630 filed on Aug. 14, 2013, the entirety of the disclosure of which is incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---
5532777 | Zanen | Jul 1996 | A |
5909209 | Dickinson | Jun 1999 | A |
6057540 | Gordon et al. | May 2000 | A |
6072496 | Guenter et al. | Jun 2000 | A |
6525306 | Bohn | Feb 2003 | B1 |
6643396 | Hendriks et al. | Nov 2003 | B1 |
7274800 | Nefian et al. | Sep 2007 | B2 |
7849421 | Yoo et al. | Dec 2010 | B2 |
8269721 | Lin | Sep 2012 | B2 |
8467612 | Susca et al. | Jun 2013 | B2 |
20090315825 | Cauchi | Dec 2009 | A1 |
20100017115 | Gautama | Jan 2010 | A1 |
20110169734 | Cho et al. | Jul 2011 | A1 |
20110310230 | Cheng | Dec 2011 | A1 |
20150022449 | Cheng et al. | Jan 2015 | A1 |
Number | Date | Country |
---|---|---
2226707 | Jan 2013 | EP |
Entry |
---
Pranav Mistry; Mouseless, the ‘invisible’ computer mouse (w/Video); Phys.org; Jul. 8, 2010; 2 pages; http://phys.org/news197792915.html. |
Pranav Mistry and Pattie Maes; “Mouseless”; MIT Media Laboratory, Oct. 2010, pp. 441-442. |
“Mouse (computing)”; Wikipedia; Sep. 25, 2013; 8 pages; http://en.wikipedia.org/wiki/Mouse_(computing). |
Franz Madritsch; “CCD-Camera Based Optical Tracking for Human-Computer Interaction”; Proceedings 1st European Conference on Disability, Virtual Reality and Associated Technologies, Maidenhead; pp. 161-170 (1996). |
Richard Hartley, et al.; “Triangulation”; Computer Vision and Image Understanding, vol. 68, No. 2, pp. 146-157 (1997); Article No. IV970547. |
Philips; “uWand technology”; How uWand Works—Intuitive3 Remotes for SmartTV; 1 page printed on Sep. 16, 2014; http://www.uwand.com/how-uwand-works.html. |
Wii from Wikipedia; 33 pages printed on Sep. 16, 2014; http://en.wikipedia.org/wiki/Wii. |
U.S. Appl. No. 14/089,881, filed Nov. 26, 2013 entitled Algorithms, Software and an Interaction System That Support the Operation of an on the Fly Mouse. |