Electronic devices such as smart phones, tablet computers, gaming devices, etc. often include two-dimensional graphical user interfaces (“GUIs”) that provide for interaction and control of the device. A user controls the device by performing gestures or movements on or about icons or “keys” represented on the GUI or moving the device in space (e.g., shaking or turning the device).
Some electronic devices also provide for three-dimensional (“3D”) displays. A 3D display provides an illusion of visual depth to a user for 3D content displayed or rendered by the 3D display. Although 3D displays are well-known, user interfaces for 3D displays are primitive and often require interaction with conventional input devices, such as touch screens, to control a 3D enabled display device.
Accordingly, there is a need in the art for improving 3D user interface control.
Embodiments of the present invention provide a 3D user interface processing system for a device that may include at least one sensor, a 3D display, and a controller. The controller may include a memory, which may store instructional information, and a processor. The processor may be configured to receive sensor data from the sensor(s) and to interpret the sensor data according to the instructional information. The processor may generate one or more user interface commands and transmit the command(s) to the 3D display. The processor may also generate host commands to control and/or execute applications within a host system of the device.
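The following sketch is only an illustration of the processing loop summarized above, assuming hypothetical SensorSample and UICommand structures and a made-up distance threshold; it is not the claimed implementation.

```python
from dataclasses import dataclass

@dataclass
class SensorSample:
    x: float
    y: float
    z: float          # distance from the display surface
    timestamp: float

@dataclass
class UICommand:
    name: str
    payload: dict

def process_sample(sample: SensorSample, instructional_info: dict) -> UICommand:
    """Interpret one sensor sample against stored instructional information."""
    # Trivial illustrative rule: anything closer than a stored threshold is a "select".
    threshold = instructional_info.get("select_distance", 1.0)
    if sample.z < threshold:
        return UICommand("select", {"x": sample.x, "y": sample.y})
    return UICommand("hover", {"x": sample.x, "y": sample.y, "z": sample.z})

print(process_sample(SensorSample(0.2, 0.3, 0.5, 0.0), {"select_distance": 1.0}))
```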
The UI controller 110 may include a processor 112 and a memory 114. The processor 112 may control the operations of the UI controller 110 according to instructions stored in the memory 114. The memory 114 may store gesture definition libraries 114.1, UI maps 114.2, and command definition libraries 114.3, which may enable user control or manipulation of workspaces for the 3D display 130.1, and/or control functions or applications of the display device 100. The memory 114 may be provided as a non-volatile memory, a volatile memory such as random access memory (RAM), or a combination thereof.
The UI controller 110 may be coupled to the host system 170 of the device. The UI controller 110 may receive instructions from the host system 170. The host system 170 may include an operating system (“OS”) 172 and application(s) 174.1-174.N that are executed by the host system 170. The host system 170 may include program instructions to govern operations of the device 100 and manage device resources on behalf of various applications. The host system 170 may, for example, manage content of the 3D display 130.1. In an embodiment, the UI controller 110 may be integrated into the host system 170.
The UI controller 110 may manage user interaction for the device 100 based on the gesture definitions 114.1, UI maps 114.2, and command definitions 114.3 stored in the memory 114. User interaction with the display device 100 may be detected using various UI sensors. As illustrated in
The optical sensor 140 may detect user interactions within 3D workspaces of the device 100, which are discussed in more detail in
The signals may indicate a location of light impingement on a surface of each light sensor. The optical sensor 140 may digitize and decode the signals from the sensor(s) to calculate a three-dimensional position (i.e., using X, Y, Z coordinates) of the object. Calculation of X, Y, Z coordinates of an object in a field of view is described in the following U.S. patent applications, the contents of which are incorporated herein: U.S. patent application Ser. No. 12/327,511, entitled “Method of Locating an Object in 3D,” filed on Dec. 3, 2008; U.S. patent application Ser. No. 12/499,335, entitled “Method of Locating an Object in 3D,” filed on Jul. 8, 2009; U.S. patent application Ser. No. 12/499,369, entitled “Method of Locating an Object in 3D,” filed on Jul. 8, 2009, issued as U.S. Pat. No. 8,072,614 on Dec. 6, 2011; U.S. patent application Ser. No. 12/499,384, entitled “Method of Locating an Object in 3D,” filed on Jul. 8, 2009; U.S. patent application Ser. No. 12/499,414, entitled “Method of Locating an Object in 3D,” filed on Jul. 8, 2009; U.S. patent application Ser. No. 12/435,499, entitled “Optical Distance Measurement by Triangulation of an Active Transponder,” filed on May 5, 2009; U.S. patent application Ser. No. 12/789,190, entitled “Position Measurement Systems Using Position Sensitive Detectors,” filed on May 27, 2010; U.S. patent application Ser. No. 12/783,673, entitled “Multiuse Optical Sensor,” filed on May 20, 2010.
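The cited applications describe their own coordinate-calculation techniques; as a generic illustration only, the sketch below shows how two impingement positions, together with an assumed focal length and baseline, could yield X, Y, Z coordinates by triangulation.

```python
def triangulate(x_left: float, y_left: float, x_right: float,
                f: float = 4.0, b: float = 50.0) -> tuple:
    """Estimate X, Y, Z (in the units of b) from impingement points on two sensors."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("object must produce a positive disparity")
    z = f * b / disparity   # depth from similar triangles
    x = x_left * z / f      # back-project using the left sensor's coordinates
    y = y_left * z / f
    return x, y, z

print(triangulate(x_left=2.0, y_left=1.0, x_right=1.5))
```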
The touch sensor 150 may detect user touches on or near the touch screen 160. The touch sensor 150 may detect the location (or locations, for multi-touch user interactions) and time of the user touch. The touch sensor 150 may be complementary to the type of touch screen 160. For example, if the touch screen 160 is provided as a capacitive touch screen, the touch sensor 150 may be provided as a capacitive sensor (or a grid of capacitive sensors) detecting changes in respective capacitive fields. Further, the touch sensor 150 may be programmed to differentiate between desired user touch events and false positives caused by objects larger than a typical user interaction instrument (e.g., finger, pen, stylus). Other types of sensors may include, but are not limited to, resistive sensors, inductive sensors, etc.
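A minimal sketch of such size-based filtering is shown below; the Contact fields and the 12 mm threshold are assumptions chosen for illustration rather than values taken from the embodiments.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    x: float
    y: float
    t: float            # timestamp in seconds
    diameter_mm: float  # estimated size of the contact patch

MAX_TOUCH_DIAMETER_MM = 12.0  # larger contacts are treated as false positives

def valid_touches(contacts: list) -> list:
    """Keep only contacts small enough to be a finger, pen, or stylus."""
    return [c for c in contacts if c.diameter_mm <= MAX_TOUCH_DIAMETER_MM]

taps = [Contact(10, 20, 0.01, 8.0), Contact(40, 55, 0.02, 30.0)]  # second is palm-sized
print(valid_touches(taps))
```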
The UI controller 110 may receive the sensor(s) results and may generate user interface commands to control or manipulate content rendered on the 3D display 130.1, via the display driver 120.1. The UI controller 110 may also generate host commands based on the sensor(s) results to control and/or execute applications in the host system 170, which may also alter content rendered on the 3D display 130.1.
In an embodiment, the UI controller 110 may also be coupled to a haptics driver 120.2, which may drive one or more haptics actuator(s) 130.2. In such an embodiment, the UI controller 110 may generate haptics commands to control the haptics actuator(s) 130.2 from the sensor(s) results. Haptics refers to the sense of touch; in electronic devices, haptics relates to providing sensory feedback or “haptics effects” to a user. The haptics actuator(s) 130.2 may be embodied as piezoelectric elements, linear resonant actuators (“LRAs”), eccentric rotating mass actuators (“ERMs”), and/or other known actuator types. The haptics driver 120.2 may transmit drive signals to the haptics actuator(s) 130.2, causing them to vibrate according to the drive signal properties. The vibrations may be felt by the user, providing various vibro-tactile sensory feedback stimuli.
For a 3D enabled display device, such as device 100, haptics effects may be generated for the device itself, meaning a user may feel the haptics effects with the hand opposite the hand interacting with a 3D workspace. For example, selecting an interactive element in a 3D workspace with a finger of the user's right hand may generate a haptics effect felt by the user's left hand, which may indicate selection of the element. Similar haptics effects may be generated for other interactions; for example, scrolling a 3D workspace left to right may generate a haptics effect that provides a sensation of movement from left to right.
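One possible way to express such interaction-to-effect mappings is sketched below; the effect names, durations, and amplitude envelopes are illustrative assumptions and not a prescribed haptics vocabulary.

```python
# Each entry maps a recognized interaction to drive parameters; an actual haptics
# driver would convert these into actuator-specific drive signals.
HAPTIC_EFFECTS = {
    "select":        {"duration_ms": 20,  "amplitude": [1.0]},            # short click
    "scroll_l_to_r": {"duration_ms": 120, "amplitude": [0.2, 0.6, 1.0]},  # rising sweep
    "scroll_r_to_l": {"duration_ms": 120, "amplitude": [1.0, 0.6, 0.2]},  # falling sweep
}

def haptics_command(interaction: str) -> dict:
    """Return drive parameters for a recognized interaction, or a silent effect."""
    return HAPTIC_EFFECTS.get(interaction, {"duration_ms": 0, "amplitude": []})

print(haptics_command("scroll_l_to_r"))
```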
The processor 220 may control the operations of the UI controller 200 according to instructions saved in the memory 230. The memory 230 may be provided as a non-volatile memory, a volatile memory such as random access memory (“RAM”), or a combination thereof. The processor 220 may include a gesture classification module 222, a UI search module 224, and a command search module 226. The memory 230 may include gesture definition data 232, UI map data 234, and command definition data 236. The data may be stored as look-up-tables (“LUTs”).
For example, the gesture definition data 232 may include a LUT with possible input value(s) and corresponding gesture(s). The UI map data 234 may include a LUT with possible input value(s) and corresponding icon(s). Furthermore, the command definition data 236 may include a LUT with possible gesture and icon values and corresponding user interface commands or host commands. In an embodiment, data (e.g., gesture data, UI map data, and/or command data) may be written into the memory 230 by the host system 250 or may be pre-programmed.
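As a rough illustration, the three LUTs could be represented as simple dictionaries; the keys, value formats, and example entries below are invented for the sketch and do not reflect an actual stored layout.

```python
GESTURE_DEFINITIONS = {                # input pattern -> gesture name
    "single_point_dwell": "select",
    "lateral_sweep":      "swipe_left_to_right",
}
UI_MAP = {                             # (layer, x band, y band) -> icon or zone
    (2, 0, 0): "mail_icon",
    (2, 1, 0): "phone_icon",
    (2, 2, 0): "active_zone",
}
COMMAND_DEFINITIONS = {                # (gesture, target) -> (route, command)
    ("select", "mail_icon"):                ("host",    "launch_mail"),
    ("swipe_left_to_right", "active_zone"): ("display", "scroll_layer_right"),
}

gesture = GESTURE_DEFINITIONS["lateral_sweep"]
target = UI_MAP[(2, 2, 0)]
print(COMMAND_DEFINITIONS[(gesture, target)])   # -> ('display', 'scroll_layer_right')
```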
The gesture classification module 222 may receive the input signals from the input driver(s) 210 and may decode a gesture(s) from the input signals based on the gesture definition data 232. For example, the gesture classification module 222 may compare the input signals to stored input values in the gesture definition data 232 and may match the input signals to a corresponding stored gesture value. The gesture may represent a user action within a 3D workspace or on the touch screen as indicated by the input signals. The gesture classification module 222 may also calculate location data (e.g., X, Y, Z coordinates) from the input signals.
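The matching scheme itself is not prescribed here; one plausible sketch, assuming a nearest-direction comparison against a small set of motion templates, is shown below.

```python
import math

GESTURE_TEMPLATES = {                        # unit direction of motion per gesture
    "select":              (0.0, 0.0, -1.0), # motion mostly toward the display
    "swipe_left_to_right": (1.0, 0.0, 0.0),
    "swipe_right_to_left": (-1.0, 0.0, 0.0),
}

def classify(dx: float, dy: float, dz: float) -> str:
    """Return the template whose direction is closest to the observed motion."""
    best, best_dist = None, float("inf")
    for name, template in GESTURE_TEMPLATES.items():
        dist = math.dist((dx, dy, dz), template)
        if dist < best_dist:
            best, best_dist = name, dist
    return best

print(classify(0.9, 0.1, 0.05))   # -> swipe_left_to_right
```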
The UI search module 224 may receive the gesture and/or location data from the gesture classification module 222. The UI search module 224 may calculate a UI interaction such as an icon selection or workspace manipulation (scroll, zoom, etc.) based on the UI map data 234. For example, the UI search module 224 may compare the gesture and/or location data to stored map values in the UI map data 234 and may match the gestures and/or locations to corresponding 2D or 3D UI interactions.
The command search module 226 may receive the calculated gesture and UI interaction data, and may generate one or more commands (i.e., display commands, host commands, haptics commands, etc.) based on the command definition data 236. For example, the command search module 226 may compare the calculated gesture and UI interaction data to one or more corresponding command(s). The commands may be received by the output driver(s) 240, which, in turn, may control various output devices (i.e., 3D display, haptics device, etc.). For example, the commands may control or manipulate 3D content rendered in a 3D workspace of a 3D display (scrolling icons, zooming content for an application, etc.). In various embodiments, data from each of the modules 222-226 may also be communicated to the host system 250 to control OS functions or applications running in the host system 250. For example, a user may swipe a finger from left to right in a 3D workspace of a document reader application to turn pages of a document being viewed with the application.
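A compact sketch of the final look-up and dispatch step follows; the command table and the driver callables are hypothetical stand-ins for the output driver(s) 240 and host system 250.

```python
COMMANDS = {                                # (gesture, target) -> (route, command)
    ("select", "mail_icon"):                ("host",    "launch_mail"),
    ("swipe_left_to_right", "active_zone"): ("display", "scroll_layer_right"),
}

def dispatch(gesture: str, target: str, drivers: dict) -> None:
    """Look up the (gesture, target) pair and forward the command to its driver."""
    route, command = COMMANDS.get((gesture, target), (None, None))
    if route is not None:
        drivers[route](command)

drivers = {
    "display": lambda cmd: print("display driver:", cmd),
    "host":    lambda cmd: print("host system:", cmd),
}
dispatch("swipe_left_to_right", "active_zone", drivers)
```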
As discussed, a 3D display may create the illusion of 3D depth for content displayed or rendered on a 3D enabled display device. The content may include GUI elements such as 3D icons, buttons, etc. or 3D application content such as a map, a movie, a drawing canvas, etc. The content may be rendered in 3D workspaces or “UI layers” above the 3D display. The detection layers 1-N as described in
In an embodiment, the display device 410 may render content for a 3D workspace as the workspace may be “activated” by a user for viewing content for the workspace. As illustrated in
In an embodiment, a display device may “push” an activated 3D workspace to an outward-most viewing perspective (away from the display device) and maximize the viewing depth of the workspace's rendered content to a predetermined viewing depth.
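As a hedged illustration, a Z band per layer and a single "push to maximum depth" rule could look like the sketch below; the millimeter values are assumptions, not parameters from the embodiments.

```python
LAYER_DEPTH_MM = {1: (0, 20), 2: (20, 40), 3: (40, 60)}  # detection band per UI layer
MAX_VIEWING_DEPTH_MM = 60                                # predetermined viewing depth

def layer_for(z_mm: float):
    """Return the UI layer whose detection band contains the finger's Z position."""
    for layer, (near, far) in LAYER_DEPTH_MM.items():
        if near <= z_mm < far:
            return layer
    return None

def activate(layer: int, render_depth_mm: dict) -> dict:
    """Push the activated layer's rendering to the predetermined maximum depth."""
    updated = dict(render_depth_mm)
    updated[layer] = MAX_VIEWING_DEPTH_MM
    return updated

print(layer_for(35.0))                         # -> 2
print(activate(2, {1: 20, 2: 40, 3: 55}))      # layer 2 rendered at maximum depth
```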
As discussed, a user may also interact with interactive elements (e.g., 3D icons) of a 3D workspace.
The 3D workspaces may include other areas that do not include interactive elements.
In other embodiments, areas that do not include interactive elements may be designated as “active zones,” wherein user interaction within an active zone may control or manipulate rendering of a 3D workspace(s) or functions of the device 510. An active zone for UI Layer 2 is shown as the open area behind the 3D icons. A user may move their finger from left-to-right in UI Layer 2 to cause the 3D icons in that layer to scroll from left to right. In other examples, active zones may provide for activating workspaces, zooming, performing gesture commands, etc.
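One way to separate icon hits from active-zone interaction within a layer is sketched below; the icon rectangles and the scroll-travel threshold are illustrative assumptions.

```python
ICONS = {"mail": (0, 0, 30, 30), "phone": (40, 0, 70, 30)}  # x0, y0, x1, y1 per icon

def hit_test(x: float, y: float) -> str:
    """Return the icon under the finger, or 'active_zone' for open areas."""
    for name, (x0, y0, x1, y1) in ICONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return "active_zone"

def zone_gesture(x_start: float, x_end: float, min_travel: float = 25.0):
    """Interpret a sufficient left-to-right sweep inside the active zone as a scroll."""
    if x_end - x_start >= min_travel:
        return "scroll_layer_right"
    return None

print(hit_test(50, 10))       # -> phone
print(zone_gesture(10, 60))   # -> scroll_layer_right
```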
As discussed, a UI controller (e.g., UI controller 200 of
FIGS. 6(a)-(b) illustrate 3D workspaces 600 for a 3D enabled display device 610 according to an embodiment of the present invention. The device 610 may include a 3D mapping application to display a 3D map. The device 610 may render the 3D map in a 3D workspace, shown as “UI Layer 2.” A UI controller (i.e., UI controller 200 of
In an embodiment, a user may also select items in an application (e.g., buildings rendered in a 3D map of a mapping application) by performing a snap touch to activate content related to the items.
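The excerpt does not define the snap-touch motion itself, so the sketch below assumes a quick dip toward the display and back over an item; the depth and timing thresholds are hypothetical.

```python
def is_snap_touch(z_samples: list, dt_s: float,
                  depth_mm: float = 15.0, max_time_s: float = 0.3) -> bool:
    """Detect a fast dip toward the display and back within a short time window."""
    if len(z_samples) < 3 or (len(z_samples) - 1) * dt_s > max_time_s:
        return False
    dip = max(z_samples) - min(z_samples)
    returned = abs(z_samples[-1] - z_samples[0]) < depth_mm / 3
    return dip >= depth_mm and returned

print(is_snap_touch([40, 22, 41], dt_s=0.1))   # -> True
```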
In various embodiments, a UI controller (i.e., UI controller 200 of
Several embodiments of the present invention are specifically illustrated and described herein. However, it will be appreciated that modifications and variations of the present invention are covered by the above teachings. In other instances, well-known operations, components and circuits have not been described in detail so as not to obscure the embodiments. It can be appreciated that the specific structural and functional details disclosed herein may be representative and do not necessarily limit the scope of the embodiments.
Those skilled in the art may appreciate from the foregoing description that the present invention may be implemented in a variety of forms, and that the various embodiments may be implemented alone or in combination. Therefore, while the embodiments of the present invention have been described in connection with particular examples thereof, the true scope of the embodiments and/or methods of the present invention should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and following claims.
Various embodiments may be implemented using hardware elements, software elements, or a combination of both. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (“ASIC”), programmable logic devices (“PLD”), digital signal processors (“DSP”), field programmable gate arrays (“FPGA”), logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. Examples of software may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (“API”), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints.
Some embodiments may be implemented, for example, using a non-transitory computer-readable medium or article which may store an instruction or a set of instructions that, if executed by a machine, may cause the machine to perform a method and/or operations in accordance with the embodiments. Such a machine may include, for example, any suitable processing platform, computing platform, computing device, processing device, computing system, processing system, computer, processor, or the like, and may be implemented using any suitable combination of hardware and/or software. The computer-readable medium or article may include, for example, any suitable type of memory unit, memory device, memory article, memory medium, storage device, storage article, storage medium and/or storage unit, for example, memory, removable or non-removable media, erasable or non-erasable media, writeable or re-writeable media, digital or analog media, hard disk, floppy disk, Compact Disc Read Only Memory (CD-ROM), Compact Disc Recordable (CD-R), Compact Disc Rewriteable (CD-RW), optical disk, magnetic media, magneto-optical media, removable memory cards or disks, various types of Digital Versatile Disc (DVD), a tape, a cassette, or the like. The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, encrypted code, and the like, implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language.
This application claims priority to provisional U.S. Patent Application Ser. No. 61/470,764, entitled “Touch Screen and Haptic Control,” filed on Apr. 1, 2011, the content of which is incorporated herein in its entirety.