This description relates to a user interface system and method associated with a computing device.
Many known computing devices can have several mechanisms through which a user may interact with (e.g., trigger) one or more functions of the computing device. For example, dedicated user interface devices such as keyboards, mouse devices, touch screen displays and/or so forth, through which a user may interact with a computing device to perform one or more computing functions, can be connected with and/or integrated into the computing device. Such user interface devices can require a user of the computing device to work within multiple working regions associated with the computing device. For example, a mouse may be located on a surface adjacent a computing device and a keyboard may be located on the computing device itself. Thus, the user must move his or her hand(s) between two different working regions while changing between a keyboard function (e.g., typing) and a cursor function (e.g., mousing). Such user interface devices may be cumbersome to use and/or may not produce results at a desirable speed and/or level of accuracy. Thus, a need exists for a system and methods to allow a user of a computing device to work within a single unified working region.
In one general aspect, a computer program product can be tangibly embodied on a non-transitory computer-readable storage medium and include instructions that, when executed, are configured to perform a process. The instructions can include instructions to detect a gesture defined by an interaction of a user within a working volume defined above a surface, such as a surface of a keyboard portion of a computing device. Responsive to detecting the gesture, a gesture cursor control mode can be initiated within the computing device such that the user can manipulate a cursor by moving a portion of a hand of the user within the working volume. A location of the portion of the hand of the user relative to the surface can be identified within the working volume and the cursor can be positioned within a display portion of the computing device at a location within the display portion corresponding to the identified location of the portion of the hand of the user within the working volume.
In another general aspect, a computer-implemented method can include detecting at a computing device a gesture defined by an interaction of a user within a working volume defined above a surface. Based on detecting the gesture, a gesture cursor control mode within the computing device can be initiated such that the user can manipulate a cursor by moving a portion of a hand of the user within the working volume. A location of the portion of the hand of the user relative to the surface within the working volume can be identified and the cursor can be positioned within a display portion of the computing device at a location within the display portion corresponding to the identified location of the portion of the hand of the user within the working volume.
In yet another general aspect, a system can include instructions recorded on a non-transitory computer-readable medium and executable by at least one processor. The system can include a gesture classification module and a gesture tracking module. The gesture classification module is configured to detect a gesture defined by an interaction of a user within a working volume associated with a computing device. The working volume is defined above a surface. The gesture classification module is further configured to trigger initiation of a gesture cursor control mode when the gesture matches a predetermined gesture signature stored within the computing device. The gesture tracking module is configured to identify a position of a portion of a hand of the user within the working volume relative to the surface and position a cursor within a display portion of the computing device at a location corresponding to the position of the portion of the hand of the user within the working volume. The gesture tracking module is configured to move the cursor within the display portion of the computing device to correspond to movement of the portion of the hand of the user within the working volume when the computing device is in the gesture cursor control mode.
The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features will be apparent from the description and drawings, and from the claims.
A virtual touch user interface system as described herein can use virtual touch input (e.g., gestures) and hand/finger gesturing on a surface, such as the surface of a keyboard portion of a computing device, to enable efficient and ergonomic text entry and selection/manipulation of user interface elements of the computing device. Using a capture device, such as a 3D camera, and recognition software, the selection and manipulation of user interface elements can be triggered using gestures by a user without using a physical input device, such as, for example, a mouse, touchpad or touch screen. The surface of a keyboard portion of the computing device and the working space or volume above the surface can be used for both text entry and selection and manipulation of user interface elements such that minimal hand motion is needed by a user. In other words, the user can work within a single unified working space to switch from one mode of user interaction (e.g., text entry) to another mode of user interaction (e.g., mousing or cursor control).
As described herein, modes of operation of a computing device can be triggered and operated by a virtual touch user interface system and methods. For example, a system and methods for changing between a text based (e.g., keyboard) control mode of operation and a gesture cursor control mode of operation of a computing device are described herein. The text based control mode of operation allows a user of the computing device to perform text entry or typing functions using, for example, a keyboard portion of the computing device. The gesture cursor control mode of operation of the computing device allows a user to maneuver and position a cursor within a display portion of the computing device by moving a portion of the user's hand (e.g., a finger tip) within a working space or region defined above a surface, such as the surface of the keyboard portion of the computing device or a surface next to the computing device. Thus, the user can control the cursor without physical contact with a separate input device such as a mouse, touchpad, trackpad or touch screen.
As shown in
In some implementations, the computing device 120 can represent a cluster of devices. In such an implementation, the functionality and processing of the computing device 120 (e.g., one or more processors 132 of the computing device 120) can be distributed to several computing devices of the cluster of computing devices.
In some implementations, one or more portions of the components shown in the computing device 120 in
The components of the computing device 120 can be configured to operate within an environment that includes an operating system. In some implementations, the operating system can be configured to facilitate, for example, classification of gestures by the gesture classification module 130.
In some implementations, the computing device 120 can be included in a network. In some implementations, the network can include multiple computing devices (such as computing device 120) and/or multiple server devices (not shown). Also, although not shown in
The memory 134 of the computing device 120 can be any type of memory device such as a random-access memory (RAM) component or a disk drive memory. The memory 134 can be a local memory included in the computing device 120. Although not shown, in some implementations, the memory 134 can be implemented as more than one memory component (e.g., more than one RAM component or disk drive memory) within the computing device 120. In some implementations, the memory 134 can be, or can include, a non-local memory (e.g., a memory not physically included within the computing device 120) within a network (not shown). For example, the memory 134 can be, or can include, a memory shared by multiple computing devices (not shown) within a network. In some implementations, the memory 134 can be associated with a server device (not shown) on a client side of a network and configured to serve several computing devices on the client side of the network.
The display portion of the computing device 120 can be, for example, a liquid crystal display (LCD), a light emitting diode (LED) display, a television screen, or other type of display device. In some implementations, the display portion can be projected on a wall or other surface or projected directly into an eye of the user. The optional keyboard portion of the computing device 120 can include, for example, a physical keyboard (e.g., includes physical keys that can be actuated by a user), a virtual keyboard (e.g., includes a touchscreen or sensing area), an optically projected keyboard (e.g., a projected display of a keyboard on a surface), or an optical detection keyboard (e.g., optically detects hand and/or finger motion of a user). In some implementations, the keyboard portion can also include various input devices, such as, for example, a touchpad or trackpad. In some implementations, the keyboard portion can be a device that can be electrically coupled to the computing device 120 (e.g., wired device). In some implementations, the keyboard portion can be integral with the computing device 120 (e.g., such as with a laptop). In some implementations, the keyboard portion can be Wi-Fi enabled to communicate wirelessly with the computing device 120. In further implementations, the computing device 120 can perform its functions without a keyboard portion using solely the virtual touch interface described in this document and/or other means of user interaction.
As used herein, a working volume may be the space or region above a surface associated with or near the computing device that is visible to the capture device 122. The working volume can be, for example, a working space or region in which a user of the computing device 120 places his or her hands during operation of the computing device 120, such as above a keyboard portion of the computing device. In other implementations, the working volume may be defined above a table surface proximate the computing device.
The capture device 122 can be, for example, a device configured to provide 3-dimensional (3D) information associated with the working volume defined above the keyboard portion of or a surface proximate to the computing device 120. For example, the capture device 122 can be a camera, such as, for example, a 3D camera or a stereo camera (e.g., two or more cameras). In some implementations, the capture device 122 can be, for example, an above-the-surface sensing device (e.g., using infrared (IR) or ultrasound sensors embedded in the keyboard), or a time-of-flight camera (e.g., a range imaging camera system that uses the known speed of light and measures the time-of-flight of a light signal between the camera and the subject being imaged). In some implementations, the capture device 122 can be a monocular vision camera, in which case advanced computer vision algorithms are used to interpret the spatial structure of the scene. The capture device 122 can be a separate component that can be coupled to the computing device 120 or can be integrated or embedded within the computing device 120. For example, the capture device 122 can be embedded into a bezel portion of the computing device 120 along a top edge above the display portion of the computing device 120. In some implementations, the capture device 122 can be disposed below the display portion of the computing device 120. For example, the capture device 122 can be embedded within a lower bezel portion of the computing device 120.
The capture device 122 can be used to capture or collect 3D information (e.g., imaging data) associated with the working volume defined above a surface, such as the surface of the keyboard portion of the computing device. The 3D information can be used to, for example, identify hand and/or finger motions of the user, for example, gesture inputs or interactions by the user as described in more detail below. The 3D information can be used by the gesture tracking module 128 and the gesture classification module 130 to identify a gesture input or interaction by a user of the computing device 120, and determine if the gesture input matches a gesture signature 136 stored within the memory 134. For example, one or more gesture signatures 136 can be predefined and stored within the memory 134 of the computing device 120.
In some implementations, a gesture signature 136 can be defined to trigger a change of an operational mode of the computing device 120 from a text based control mode of operation to a gesture cursor control mode of operation (or vice versa). For example, in some implementations, a gesture signature 136 can be a prerecorded and stored signature that includes a clapping motion of a user's hands, and when a user performs a gesture interaction that matches that gesture signature 136, the system can change the mode of operation of the computing device 120 from the text based control mode of operation to the gesture cursor control mode of operation.
In some implementations, a gesture input or interaction (also referred to herein as a “gesture”) by a user can be any type of non-electrical communication with the computing device 120. In some implementations, the gesture can include any type of non-verbal communication of the user such as a hand motion or hand signal of a user that can be detected by, for example, the capture device 122 of the computing device 120. In some implementations, detection of a gesture can be referred to as registration of the gesture, or registering of the gesture.
A gesture signature 136 can be, for example, a prerecorded and stored visual hand or finger motion of the user that can be used to trigger a function within the computing device 120. A gesture signature 136 can include a prerecorded and stored path or trajectory of the motion of a user's hand or a portion of a user's hand. A gesture signature 136 can be, for example, a special hand gesture to trigger a change of mode of operation (as discussed above), such as clapping or waving of the user's hands, a hovering gesture (e.g., the user's hand or finger is hovering or disposed over the surface), a click gesture (e.g., the user brings a finger and thumb together or the user taps a finger on the surface), or a drag gesture (e.g., the user moves a finger along the surface). It should be understood that these are just example gestures and gesture signatures, as other gestures and gesture signatures can also be included.
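For illustration only, the following is a minimal sketch of one way a gesture signature 136 could be represented in memory; the field names, the dataclass layout, and the example signature are assumptions rather than part of the description above.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class GestureSignature:
    name: str      # e.g., "clap", "hover", "click", or "drag"
    action: str    # e.g., "toggle_cursor_mode" or "select"
    # Prerecorded fingertip samples (x, y, z) measured relative to the surface.
    path: List[Tuple[float, float, float]] = field(default_factory=list)

# Example: a signature intended to toggle between the text based control mode
# and the gesture cursor control mode when the user's motion matches it.
toggle_mode = GestureSignature(name="clap", action="toggle_cursor_mode")
```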
When the computing device 120 is in the gesture cursor control mode of operation, the 3D information provided by the capture device 122 can be used to identify a location within the working space of a portion of a user's hand (e.g., a finger tip) and allow the user to maneuver and position a cursor within the display portion of the computing device 120 using that portion of the user's hand. In other words, rather than using a physical input device, such as, for example, a mouse or a trackpad or touchpad, to move the cursor, the user can move a portion of the user's hand, such as a finger tip, within the working volume to maneuver and position the cursor. When the text based control mode of operation is activated, the user can enter text (e.g., type) using, for example, the keyboard portion of the computing device 120. In some implementations, the computing device 120 may also include a physical input device such as a mouse, trackpad, or touchpad, and can use the physical input device to maneuver the cursor while in the text based control mode of operation if desired.
In some implementations, the mode of operation of the computing device 120 can be changed by pressing or touching a selected portion (e.g., a selected key) of the surface (e.g., the keyboard portion of the computing device 120). In some implementations, the same event (e.g., a gesture or actuating a special key) can be used to switch between the gesture cursor control mode of operation and the text based control mode of operation. In some implementations, the mode of operation can be changed when a timeout occurs. For example, if the computing device 120 is in the gesture cursor control mode, the mode can be changed automatically to the text based control mode of operation after a predetermined time period. In some implementations, the text based control mode of operation can automatically be triggered when, for example, a text field within the display portion of the computing device 120 is selected while in the gesture cursor control mode. In some implementations, the gesture cursor control mode of operation can be automatically triggered when the cursor is moved out of a text field within the display portion of the computing device 120. For example, after the user has entered desired text into a text field and moves out of that text field, the gesture cursor control mode can be automatically triggered.
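The mode-switching behavior described above (a special key or gesture toggle, a timeout, and text-field entry or exit) could be organized roughly as in the minimal sketch below; the class, method names, and timeout value are illustrative assumptions, not part of the description.

```python
import time

TEXT_MODE, GESTURE_MODE = "text", "gesture_cursor"
GESTURE_TIMEOUT_S = 10.0  # assumed idle period before reverting to text entry

class ModeController:
    def __init__(self):
        self.mode = TEXT_MODE
        self.last_activity = time.monotonic()

    def toggle(self):
        """Special actuation key pressed, or a mode-change gesture matched."""
        self.mode = GESTURE_MODE if self.mode == TEXT_MODE else TEXT_MODE
        self.last_activity = time.monotonic()

    def on_text_field_selected(self):
        self.mode = TEXT_MODE       # selecting a text field re-enables typing

    def on_cursor_left_text_field(self):
        self.mode = GESTURE_MODE    # leaving a text field re-enables cursor control

    def on_tick(self):
        """Revert to the text based control mode after a period of inactivity."""
        if (self.mode == GESTURE_MODE
                and time.monotonic() - self.last_activity > GESTURE_TIMEOUT_S):
            self.mode = TEXT_MODE
```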
When the computing device 120 is in the gesture cursor control mode of operation, the gesture tracking module 128 can track the movement of a selected portion of the user's hand (e.g., finger tip) within the working volume above the surface, such as the keyboard portion of the computing device 120, and based on the location of the selected portion of the user's hand provide selection and manipulation of a cursor within the display portion of the computing device 120. The gesture tracking module 128 can localize the position of the portion of the user's hand (e.g., finger tip) within the 3D working volume and estimate a distance from that position to the surface, such as a surface of the keyboard portion of the computing device 120. Thus, the gesture tracking module 128 can track and monitor the location of the portion of the user's hand (e.g., finger tip) relative to the surface. The gesture tracking module 128 can map the location or position of the selected portion of the user's hand to the display portion of the computing device 120 to provide absolute cursor positioning, rather than the relative cursor positioning that is typically provided by a mouse or touchpad. In other words, there is a fixed, constant mapping between the working volume (e.g., the region or space above the surface of the keyboard portion) and the display portion of the computing device, which allows the user to immediately position the cursor at the intended position, rather than having to consider the current position of the mouse cursor and navigate it in a relative manner to the desired position within the display portion of the computing device 120. In alternative implementations, the gesture cursor control mode can be implemented using such known relative positioning of the cursor motion.
The mapping between the user's 3D working volume and the 2D display region of the graphical interface may take different forms. In one implementation, the mapping takes the form of a 90 degree rotation around the axis of the display bezel followed by a projection, such that a forward-backward motion of the user's hand is mapped to an up-down motion on the display. In another implementation, the mapping takes a curved (or warped) form to better match the anatomy of the human hand. Here, for example, a curved motion of the finger tip during a “click” down motion towards the surface would be warped, so that the cursor does not move during the “click” but rather remains stationary on top of the currently selected interface element. In yet another implementation, the mapping is translated and scaled, such that a smaller region on the surface is mapped to the display or a larger region, or a region translated to the side. In further implementations, the scaling and translation parameters of the mapping adapt to the user's behavior during use.
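As an illustration of an absolute, scaled-and-translated mapping of the kind described above, the sketch below maps a fingertip position in the working volume to display coordinates; the coordinate conventions, working-volume extents, and display resolution are assumptions for illustration only.

```python
def map_fingertip_to_display(x, y, z,
                             volume_width=0.30,    # metres, left-right extent (assumed)
                             volume_depth=0.20,    # metres, forward-backward extent (assumed)
                             display_w=1920, display_h=1080,
                             scale=1.0, offset=(0.0, 0.0)):
    """Absolute mapping: left-right hand motion moves the cursor horizontally,
    forward-backward motion moves it vertically (the rotation-plus-projection
    form described above). The height z above the surface is not used for
    positioning; it is used elsewhere for virtual touch detection."""
    u = min(max(x / volume_width, 0.0), 1.0)   # normalized left-right position
    v = min(max(y / volume_depth, 0.0), 1.0)   # normalized forward-backward position
    px = (u * scale + offset[0]) * (display_w - 1)
    py = ((1.0 - v) * scale + offset[1]) * (display_h - 1)  # forward maps to up
    return int(px), int(py)

# For example, a fingertip near the middle of the working volume lands near the
# center of the display: map_fingertip_to_display(0.15, 0.10, 0.05) -> (959, 539).
```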
The gesture cursor control mode of operation may also allow the user to perform click and drag functions by moving the portion of the user's hand along a surface, for example a surface of the keyboard portion of the computing device 120. For example, the user can move the selected portion of the user's hand (e.g., finger tip) to a location at or near the surface of the keyboard portion of the computing device 120 (e.g., at a zero or small non-zero distance from the surface of the keyboard portion), and the proximity to the surface of the keyboard portion can be detected to trigger a virtual touch event. For example, if the user wants to select an element on the display portion of the computing device, the user can, for example, point a finger tip to the element (e.g., the finger tip is hovering within the working volume) to place the cursor at a desired location on the display portion, and then move the finger tip to the surface to trigger a select function. The user can move the finger tip along the surface and a continuous dragging action can be performed. For example, the user can drag or move the selected element within the display portion of the computing device 120. In some implementations, the select function can be triggered when the user performs a particular gesture interaction. For example, a user gesture such as touching an index finger to a thumb can be a gesture interaction that triggers a select function.
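A minimal sketch of how proximity to the surface might be turned into virtual touch (press, drag, release) events is shown below, assuming the tracking pipeline reports the fingertip height above the surface each frame; the threshold value and event names are assumptions.

```python
TOUCH_THRESHOLD_M = 0.01  # assumed: fingertip within 1 cm counts as "touching"

class VirtualTouchDetector:
    def __init__(self, threshold=TOUCH_THRESHOLD_M):
        self.threshold = threshold
        self.touching = False

    def update(self, fingertip_height):
        """Return "press", "drag", "release", or None for the current frame."""
        near_surface = fingertip_height <= self.threshold
        if near_surface and not self.touching:
            self.touching = True
            return "press"     # start of a select (virtual touch event)
        if near_surface and self.touching:
            return "drag"      # fingertip stays at the surface: continuous drag
        if not near_surface and self.touching:
            self.touching = False
            return "release"   # fingertip lifted: end of the click or drag
        return None
```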
To terminate the gesture cursor control mode of operation of the computing device 120 and trigger the text based control mode of operation, the user can perform a special gesture (as discussed above to trigger the gesture cursor control mode of operation), use a special key of the keyboard portion, or use a special portion of the surface to trigger the change. When in the text based control mode of operation, the user can key in text, use a mouse or touchpad or trackpad (if included on the computing device), and otherwise use the various functions provided on a text entry device (i.e., a keyboard portion) of the computing device 120 in a typical manner.
In some implementations, in operation, the capture device 122 can bring in raw data (e.g., imaging data) associated with the working volume and provide the raw data to the segmentation module 124. The segmentation module 124 can distinguish between the foreground and background of the raw imaging data and remove static parts of the imaging data, leaving only the dynamic parts of the imaging data. For example, the segmentation module 124 can identify the motion of the hand of the user within the working volume. The segmentation module 124 can then provide the segmented data to the pixel classification module 126. The pixel classification module can use the information provided by the segmentation module 124 to identify and classify various parts of the 3D information (e.g., imaging data). For example, the pixel classification module 126 can assign a class to individual pixels within the imaging data, such as for example, pixels associated with a hand, a finger, a finger tip, etc. The classification results provided by the pixel classification module 126 can be provided to the gesture tracking module 128. The segmentation module 124 and the pixel classification module 126 can each include any hardware and/or software configured to facilitate the processing of the 3D information provided by the capture device 122.
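The per-frame loop through the segmentation, pixel classification, gesture tracking, and gesture classification stages could be organized roughly as in the sketch below; the class and method names are illustrative assumptions rather than an actual API.

```python
def process_frame(frame, segmentation, pixel_classifier, tracker, classifier):
    # Keep only the dynamic parts of the raw 3D frame (e.g., the moving hand).
    moving_regions = segmentation.segment(frame)

    # Assign a class (hand, finger, finger tip, background, ...) to pixels.
    labeled_pixels = pixel_classifier.classify(moving_regions)

    # Accumulate the finger tip position into a path/trajectory over time.
    trajectory = tracker.update(labeled_pixels)

    # Compare the trajectory against stored gesture signatures; a match can,
    # for example, toggle the gesture cursor control mode.
    return classifier.match(trajectory)
```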
The gesture tracking module 128 can accumulate the classification results (from the pixel classification module 126) over time and construct a path or trajectory of the movement of a preselected portion of the user's hand (e.g., a finger tip) within the working volume. For example, the capture device 122 can collect 3D information associated with the working volume several tens of times per second (e.g., 30, 40, 50, or 60 times per second), and that information can be provided to the gesture tracking module 128 for each frame. The gesture tracking module 128 can accumulate the 3D information (e.g., imaging data) to construct a path or trajectory of the movement of the preselected portion of the user's hand (e.g., finger tip), and associate with the path various features related to the position and movement of the portion of the user's hand, such as distance from the surface, velocity, acceleration, etc. The gesture tracking module 128 can include any hardware and/or software configured to facilitate processing of the motion of the portion of the user's hand.
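The sketch below illustrates one way such a trajectory could be accumulated and annotated with motion features (height above the surface, velocity, acceleration), assuming a fixed frame rate and simple finite-difference estimates; the class and field names are illustrative assumptions.

```python
import math

class FingertipTrajectory:
    def __init__(self, frame_rate=30.0):     # assumed capture rate
        self.dt = 1.0 / frame_rate
        self.points = []      # (x, y, z) fingertip positions, one per frame
        self.features = []    # per-frame derived features

    def add(self, x, y, z):
        """Append a sample and return its derived motion features."""
        self.points.append((x, y, z))
        feature = {"height": z, "velocity": 0.0, "acceleration": 0.0}
        if len(self.points) >= 2:
            step = math.dist(self.points[-1], self.points[-2])
            feature["velocity"] = step / self.dt
        if self.features:
            feature["acceleration"] = (
                feature["velocity"] - self.features[-1]["velocity"]) / self.dt
        self.features.append(feature)
        return feature
```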
The constructed path(s) and associated features can be analyzed by the gesture classification module 130 to determine an associated gesture signature that matches the path of motion of the selected portion of the user's hand. For example, the path can be associated with a gesture input or interaction by the user as described above, and that gesture interaction can be compared to stored gesture signatures 136 within the memory 134 of the computing device 120.
The gesture classification module 130 can be configured to process (e.g., detect, analyze) one or more gesture interactions by a user with the computing device 120. The gesture classification module 130 can be configured to, for example, detect a gesture (i.e., a gesture interaction), define a representation of the gesture and/or trigger initiation of a gesture cursor control mode of the computing device 120 in response to the gesture. The gesture classification module 130 can include any hardware and/or software configured to facilitate processing of one or more gesture interactions associated with the computing device 120.
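The description above does not prescribe a particular matching algorithm, so the sketch below shows only one simple possibility: resampling the tracked path and a stored signature path to a fixed length and comparing them point by point. A practical system might instead use dynamic time warping or a trained classifier; the function names and threshold are assumptions.

```python
import math

def resample(path, n=32):
    """Pick n roughly evenly spaced samples from a path of (x, y, z) points."""
    if len(path) < 2:
        return list(path) * n
    idx = [round(i * (len(path) - 1) / (n - 1)) for i in range(n)]
    return [path[i] for i in idx]

def matches_signature(path, signature_path, threshold=0.03):
    """Return True if the mean point-to-point distance between the resampled
    paths falls below the (assumed) threshold, in metres."""
    if not path or not signature_path:
        return False
    a, b = resample(path), resample(signature_path)
    mean_dist = sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)
    return mean_dist <= threshold
```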
As discussed above, the capture device 122 can collect 3D information associated with the working volume, for example, 30, 40, 50, or 60 times per second, and the above described loop through the various modules can be processed for each frame (e.g., each image). In some implementations, the hardware and/or software of the gesture classification module 130 can be configured to actively monitor for a gesture interaction (e.g., actively scan or sample), or can be configured to passively detect a gesture interaction. For example, the capture device 122 can be configured to periodically capture, generate, and process images to continuously monitor for an interaction (e.g., a hand signal) with respect to the computing device 120 that could be a gesture interaction.
In some implementations, the computing device 120 can include a special classifier module (not shown) that is separate from the gesture classification module 130 and that can be used to trigger the gesture cursor control mode of operation. For example, a special classifier module can receive imaging data from the capture device 122 and identify and compare a gesture provided by a user to a stored gesture signature. In such an implementation, the special classifier module compares the imaging information directly with stored gesture signature images.
The computing device 220 also includes a virtual user input system (also referred to herein as “system”) that includes a capture device 222 embedded within a top bezel portion 243 of the computing device 220. The capture device 222 can be, for example, a 3D camera or other device configured to provide 3D information as described above for computing device 120. The capture device 222 is shown embedded in a top left corner of the bezel portion 243, but as discussed above, the capture device 222 can alternatively be disposed at a different location along the top bezel portion 243 or along a bottom bezel portion 245 of the computing device 220.
Although not shown in
As shown in
The 3D information collected by the capture device 222 can be used to, for example, identify hand and/or finger motions of a user, for example, gesture inputs or interactions by the user as described above for capture device 122. The 3D information can be used by the gesture tracking module and the gesture classification module to identify a gesture input or interaction by a user of the computing device 220, and determine if the gesture input matches a gesture signature predefined and stored within the memory of the computing device 220.
The computing device 220 can provide the user with two modes of interaction while the user's hands remain within the working volume 238. Specifically, as discussed above for computing device 120, the computing device 220 can be switched or changed between a text based control mode of operation and a gesture cursor control mode of operation.
In this implementation, when the user desires to perform a mousing function, the user can perform or provide a gesture interaction or input to trigger the computing device 220 to change to the gesture cursor control mode of operation. For example, as shown in
As discussed above, when the computing device 220 is in the gesture cursor control mode of operation the user can manipulate and position a cursor 248, shown in
The user can also perform various functions, such as, for example, select, drag, and drop functions while in the gesture cursor control mode. For example, the user can move the finger tip F to a surface of the keyboard portion 240 (as shown in
In alternative implementations, rather than a special actuation key (e.g., 244) to trigger the change to the text based control mode of operation, a gesture interaction by the user can be used. The gesture interaction can be the same as or different than the gesture designated to trigger the gesture cursor control mode of operation. In some alternative implementations, the computing device 220 can use one or more special actuation key(s) to trigger both the text based control mode of operation and the gesture cursor control mode of operation.
At 352, based on the detected gesture interaction of the user, a gesture cursor control mode can be triggered within the computing device such that the user can manipulate a cursor within a display portion (e.g., 242) of the computing device by moving a selected portion of the hand of the user (e.g., finger) within the working volume. For example, as described herein, the gesture classification module can compare the gesture interaction of the user to stored gesture signatures and if the gesture interaction matches a stored gesture signature configured to trigger a change to the gesture cursor control mode of operation, the gesture cursor control mode is triggered.
At 354, a location of the portion of a hand of the user (e.g., a finger tip) relative to the surface, such as the keyboard portion of the computing device, can be identified within the working volume. For example, the gesture tracking module can identify and track a position of the portion of the hand of the user based on the 3D information associated with the working volume provided by the capture device. At 356, a cursor can be positioned within the display portion of the computing device at a location within the display portion corresponding to the identified location of the portion of the hand of the user within the working volume. In other words, the gesture tracking module can map the location of the portion of the hand of the user to the display portion to provide absolute positioning of the cursor within the display portion.
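Tying the steps together, the brief sketch below reuses the illustrative helpers from the earlier sketches (ModeController, FingertipTrajectory, matches_signature, toggle_mode, and map_fingertip_to_display); all of these names, and the cursor-positioning stub, are assumptions for illustration only.

```python
controller = ModeController()
trajectory = FingertipTrajectory(frame_rate=30.0)

def move_cursor_to(px, py):
    """Placeholder for an OS- or toolkit-specific cursor positioning call."""
    pass

def on_new_fingertip_sample(x, y, z):
    trajectory.add(x, y, z)

    # Step 352: a matching mode-change gesture triggers the cursor control mode.
    if matches_signature(trajectory.points, toggle_mode.path):
        controller.toggle()

    # Steps 354/356: locate the fingertip and position the cursor absolutely.
    if controller.mode == GESTURE_MODE:
        px, py = map_fingertip_to_display(x, y, z)
        move_cursor_to(px, py)
```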
At 454, an input can be received based on a selection by the user of a predefined portion of the surface (e.g., the keyboard portion of the computing device). For example, a selected key of the keyboard portion can be designated as a special actuation key (e.g., 244) configured to trigger the text based (e.g., keyboard) control mode of operation of the computing device. At 456, based on the input received, the gesture cursor control mode is terminated and the text based control mode of the computing device is triggered such that the user can type or enter text, etc., using the keyboard portion of the computing device.
Implementations of the various techniques described herein may be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. Implementations may be implemented as a computer program product, i.e., a computer program tangibly embodied in an information carrier, such as a machine-readable storage device (computer-readable medium), for processing by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers. A computer program, such as the computer program(s) described above, can be written in any form of programming language, including compiled or interpreted languages, and can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be processed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
Method steps may be performed by one or more programmable processors executing a computer program to perform functions by operating on input data and generating output. Method steps also may be performed by, and an apparatus may be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Processors suitable for the processing of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. Elements of a computer may include at least one processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer also may include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory may be supplemented by, or incorporated in, special purpose logic circuitry.
To provide for interaction with a user, implementations may be implemented on a computer having a display device, e.g., a cathode ray tube (CRT) or liquid crystal display (LCD) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
Implementations may be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation, or any combination of such back-end, middleware, or front-end components. Components may be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (LAN) and a wide area network (WAN), e.g., the Internet.
While certain features of the described implementations have been illustrated and described herein, many modifications, substitutions, changes and equivalents will now occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the scope of the implementations. It should be understood that they have been presented by way of example only, not limitation, and various changes in form and details may be made. Any portion of the apparatus and/or methods described herein may be combined in any combination, except mutually exclusive combinations. The implementations described herein can include various combinations and/or sub-combinations of the functions, components and/or features of the different implementations described.
This application claims priority under 35 U.S.C. §119 to Provisional Patent Application Ser. No. 61/598,598, entitled “VIRTUAL TOUCH USER INTERFACE SYSTEM AND METHODS” filed on Feb. 14, 2012. The subject matter of this earlier filed application is hereby incorporated by reference.