3D USER INTERFACE CONTROL

Abstract
Techniques to provide a three-dimensional (“3D”) user interface (“UI”) processing system for a device that may include at least one sensor, a 3D display, and a controller. The controller may include a memory, which may store instructional information, and a processor. The processor may be configured to receive sensor data from the sensor(s) and to interpret sensor data according to the instructional information. The processor may generate a user interface command(s) and transmit the command(s) to the 3D display to control and/or manipulate the 3D display. The processor may also generate host commands to control and/or execute applications within a host system of the device.
Description
BACKGROUND

Electronic devices such as smart phones, tablet computers, gaming devices, etc. often include two-dimensional graphical user interfaces (“GUIs”) that provide for interaction with and control of the device. A user controls the device by performing gestures or movements on or about icons or “keys” represented on the GUI, or by moving the device in space (e.g., shaking or turning the device).


Some electronic devices also provide for three-dimensional (“3D”) displays. A 3D display provides an illusion of visual depth to a user for 3D content displayed or rendered by the 3D display. Although 3D displays are well-known, user interfaces for 3D displays are primitive and often require interaction with conventional input devices, such as touch screens, to control a 3D enabled display device.


Accordingly, there is a need in the art for improving 3D user interface control.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram of a display device according to an embodiment of the present invention.



FIG. 2 is a block diagram of a user interface (“UI”) controller according to an embodiment of the present invention.



FIG. 3 is a model of detection layers for a display device according to an embodiment of the present invention.



FIGS. 4-8 illustrate exemplary 3D workspaces and UI controls for use with embodiments of the present invention.





DETAILED DESCRIPTION

Embodiments of the present invention provide a 3D user interface processing system for a device that may include at least one sensor, a 3D display, and a controller. The controller may include a memory, which may store instructional information, and a processor. The processor may be configured to receive sensor data from the sensor(s) and to interpret sensor data according to the instructional information. The processor may generate a user interface command(s) and transmit the command(s) to the 3D display. The processor may also generate host commands to control and/or execute applications within a host system of the device.



FIG. 1 is a simplified block diagram of a display device 100 according to an embodiment of the present invention. The device 100 may include a UI controller 110, a display driver 120.1, a 3D display 130.1, an optical sensor 140, a touch sensor 150, a touch screen 160, and a host system 170. The device 100 may be embodied as a consumer electronic device such as a cell phone, PDA, tablet computer, gaming device, television, etc. The 3D display 130.1 may be embodied as a stereoscopic display or an auto-stereoscopic display.


The UI controller 110 may include a processor 112 and a memory 114. The processor 112 may control the operations of the UI controller 110 according to instructions stored in the memory 114. The memory 114 may store gesture definition libraries 114.1, UI maps 114.2, and command definition libraries 114.3, which may enable user control or manipulation of workspaces for the 3D display 130.1, and/or control functions or applications of the display device 100. The memory 114 may be provided as a non-volatile memory, a volatile memory such as random access memory (RAM), or a combination thereof.


The UI controller 110 may be coupled to the host system 170 of the device. The UI controller 110 may receive instructions from the host system 170. The host system 170 may include an operating system (“OS”) 172 and application(s) 174.1-174.N that are executed by the host system 170. The host system 170 may include program instructions to govern operations of the device 100 and manage device resources on behalf of various applications. The host system 170 may, for example, manage content of the 3D display 130.1. In an embodiment, the UI controller 110 may be integrated into the host system 170.


The UI controller 110 may manage user interaction for the device 100 based on the gesture definitions 114.1, UI maps 114.2, and command definitions 114.3 stored in the memory 114. User interaction with the display device 100 may be detected using various UI sensors. As illustrated in FIG. 1, the UI controller 110 may be coupled to UI sensor(s) such as the optical sensor 140 and the touch sensor 150 that may measure different user interactions with the 3D display 130.1 and/or the touch screen 160. The touch screen 160 may be overlaid on the face of a display, which may be provided as a backlit LCD display with an LCD matrix, lenticular lenses, polarizers, etc.


The optical sensor 140 may detect user interactions within 3D workspaces of the device 100, which are discussed in more detail in FIG. 4. The optical sensor 140 may detect the location (or locations for multi-point user interactions) and the time of movement of a user object (e.g., finger, stylus, pen, etc.) as it hovers or moves within a 3D workspace(s). The optical sensor 140 may include one or more light emitters (not shown), such as a light emitting diode (“LED”) or other similar device, and one or more light sensors (not shown). During operation of the display device 100, the light emitters may emit light into the 3D workspace(s). An object or multiple objects in the field of view of the emitters may reflect light back into the light sensors, which may generate electrical impulses/signals representing the intensity of light incident thereon.


The signals may indicate a location of light impingement on a surface of each light sensor. The optical sensor 140 may digitize and decode the signals from the sensor(s) to calculate a three-dimensional position (i.e., using X, Y, Z coordinates) of the object. Calculation of X, Y, Z coordinates of an object in a field of view is described in the following U.S. patent applications, the contents of which are incorporated herein: U.S. patent application Ser. No. 12/327,511, entitled “Method of Locating an Object in 3D,” filed on Dec. 3, 2008; U.S. patent application Ser. No. 12/499,335, entitled “Method of Locating an Object in 3D,” filed on Jul. 8, 2009; U.S. patent application Ser. No. 12/499,369, entitled “Method of Locating an Object in 3D,” filed on Jul. 8, 2009, issued as U.S. Pat. No. 8,072,614 on Dec. 6, 2011; U.S. patent application Ser. No. 12/499,384, entitled “Method of Locating an Object in 3D,” filed on Jul. 8, 2009; U.S. patent application Ser. No. 12/499,414, entitled “Method of Locating an Object in 3D,” filed on Jul. 8, 2009; U.S. patent application Ser. No. 12/435,499, entitled “Optical Distance Measurement by Triangulation of an Active Transponder,” filed on May 5, 2009; U.S. patent application Ser. No. 12/789,190, entitled “Position Measurement Systems Using Position Sensitive Detectors,” filed on May 27, 2010; U.S. patent application Ser. No. 12/783,673, entitled “Multiuse Optical Sensor,” filed on May 20, 2010.
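
The incorporated applications describe the actual position calculations; as a purely illustrative aid, the sketch below shows one generic way that impingement centroids reported by two laterally offset light sensors could be triangulated into X, Y, Z coordinates. The pinhole geometry, function name, and numeric values are assumptions for illustration only and are not drawn from the incorporated applications.

```python
# Illustrative sketch only: stereo-style triangulation of an object position
# from the light-impingement centroids reported by two laterally offset
# optical sensors. The geometry (pinhole model, shared focal length, baseline
# along X) is an assumption, not the method of the incorporated applications.

def triangulate(centroid_left, centroid_right, baseline_mm, focal_mm):
    """centroid_* are (x, y) impingement points on each sensor, in mm,
    measured from each sensor's optical axis."""
    xl, yl = centroid_left
    xr, yr = centroid_right
    disparity = xl - xr
    if abs(disparity) < 1e-6:
        return None                            # object too far / no depth information
    z = focal_mm * baseline_mm / disparity     # height above the display
    x = z * xl / focal_mm                      # lateral position
    y = z * (yl + yr) / (2.0 * focal_mm)       # vertical position (averaged)
    return (x, y, z)

# Example: two sensors 40 mm apart, 2 mm effective focal length
print(triangulate((0.35, 0.10), (0.15, 0.10), baseline_mm=40.0, focal_mm=2.0))
```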


The touch sensor 150 may detect user touches on or near the touch screen 160. The touch sensor 150 may detect the location (or locations for multi-touch user interactions) and time of the user touch. The touch sensor 150 may be complementary to the type of touch screen 160. For example, if the touch screen 160 is provided as a capacitive touch screen, the touch sensor 150 may be provided as a capacitive sensor (or a grid of capacitive sensors) detecting changes in respective capacitive fields. Further, the touch sensor 150 may be programmed to differentiate between desired user touch events and false positives caused by objects larger than a typical user interaction instrument (e.g., finger, pen, stylus). Other types of sensors may include, but are not limited to, resistive sensors, inductive sensors, etc.
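
A minimal sketch of the false-positive rejection described above, assuming the touch sensor reports a contact area for each touch; the field names and area threshold are hypothetical.

```python
# Sketch (assumed field names/threshold): reject touch events whose contact
# area is larger than a typical fingertip or stylus tip, treating them as
# false positives (e.g., a palm resting on the screen).

MAX_CONTACT_AREA_MM2 = 150.0   # assumed upper bound for a fingertip

def filter_touches(raw_touches):
    """raw_touches: list of dicts like {'x': .., 'y': .., 'area_mm2': .., 't': ..}."""
    return [t for t in raw_touches if t["area_mm2"] <= MAX_CONTACT_AREA_MM2]

touches = [
    {"x": 12.0, "y": 48.0, "area_mm2": 60.0,  "t": 0.010},   # fingertip: keep
    {"x": 30.0, "y": 10.0, "area_mm2": 900.0, "t": 0.012},   # palm: reject
]
print(filter_touches(touches))
```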


The UI controller 110 may receive the sensor(s) results and may generate user interface commands to control or manipulate content rendered on the 3D display 130.1, via the display driver 120.1. The UI controller 110 may also generate host commands based on the sensor(s) results to control and/or execute applications in the host system 170, which may also alter content rendered on the 3D display 130.1.


In an embodiment, the UI controller 110 may also be coupled to a haptics driver 120.2, which may drive one or more haptics actuator(s) 130.2. In such an embodiment, the UI controller 110 may generate haptics commands to control the haptics actuator(s) 130.2 from the sensor(s) results. Haptics refers to the sense of touch. In electronic devices, haptics relates to providing sensory feedback or “haptics effects” to a user. The haptics actuator(s) 130.2 may be embodied as piezoelectric elements, linear resonant actuators (“LRAs”), eccentric rotating mass actuators (“ERMs”), and/or other known actuator types. The haptics driver 120.2 may transmit drive signals to the haptics actuator(s) 130.2, causing them to vibrate according to the drive signal properties. The vibrations may be felt by the user, providing various vibro-tactile sensory feedback stimuli.
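
As an illustration of the drive-signal properties mentioned above, the sketch below generates a short sine burst such as might be sent to an LRA near its resonant frequency; the frequency, duration, and sample rate are assumed values, not parameters of the haptics driver 120.2.

```python
# Sketch of a drive signal a haptics driver might send to an LRA: a short
# sine burst at the actuator's resonant frequency. Frequency, duration, and
# amplitude are assumed values for illustration.
import math

def lra_burst(duration_s=0.03, freq_hz=175.0, amplitude=1.0, sample_rate=8000):
    n = int(duration_s * sample_rate)
    return [amplitude * math.sin(2.0 * math.pi * freq_hz * i / sample_rate)
            for i in range(n)]

samples = lra_burst()          # ~30 ms "click" effect
print(len(samples), round(samples[10], 3))
```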


For a 3D enabled display device, such as device 100, haptics effects may be generated in the device itself, meaning a user may feel the haptics effects with the hand opposite the hand interacting with a 3D workspace (e.g., the hand holding the device). For example, selecting an interactive element in a 3D workspace with a finger of the user's right hand may generate a haptics effect felt by the user's left hand, which may indicate selection of the element. Similar haptics effects may be generated for other interactions; for example, scrolling a 3D workspace from left to right may generate a haptics effect that provides a sensation of movement from left to right.



FIG. 2 is a functional block diagram of a UI controller 200 according to an embodiment of the present invention. The UI controller 200 may include input driver(s) 210, a processor 220, a memory 230, and output driver(s) 240. The input driver(s) 210 may receive sensor inputs (e.g., from optical sensors, touch sensors, etc.) and may generate a corresponding input signal. The sensor inputs may be coupled to the input driver(s) 210 via a serial interface such as a high-speed I2C interface. The input driver(s) 210 may also control coupled sensor operations such as when to power on, read data, etc.


The processor 220 may control the operations of the UI controller 200 according to instructions saved in the memory 230. The memory 230 may be provided as a non-volatile memory, a volatile memory such as random access memory (“RAM”), or a combination thereof. The processor 220 may include a gesture classification module 222, a UI search module 224, and a command search module 226. The memory 230 may include gesture definition data 232, UI map data 234, and command definition data 236. The data may be stored as look-up-tables (“LUTs”).


For example, the gesture definition data 232 may include a LUT with possible input value(s) and corresponding gesture(s). The UI map data 234 may include a LUT with possible input value(s) and corresponding icon(s). Furthermore, the command definition data 236 may include a LUT with possible gesture and icon values and corresponding user interface commands or host commands. In an embodiment, data (e.g., gesture data, UI map data, and/or command data) may be written into the memory 230 by the host system 250 or may be pre-programmed.
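
A sketch of how the three LUTs might be laid out, using Python dictionaries as a stand-in; all keys, value names, and coordinate ranges are assumptions for illustration rather than the stored format of the memory 230.

```python
# Assumed layout of the three look-up tables; the actual stored format is an
# implementation detail of the UI controller.

GESTURE_DEFINITIONS = {            # quantized input signature -> gesture name
    "swipe_x_pos":  "SWIPE_RIGHT",
    "double_tap":   "DOUBLE_TAP",
    "z_snap":       "SNAP_TOUCH",
    "two_pt_apart": "SPREAD",
    "two_pt_close": "PINCH",
}

UI_MAP = {                         # (layer, x-range, y-range) -> UI element
    ("UI_LAYER_1", (0, 40),  (0, 40)): "ICON_MAIL",
    ("UI_LAYER_1", (50, 90), (0, 40)): "ICON_MAPS",
    ("UI_LAYER_2", None,     None):    "ACTIVE_ZONE",
}

COMMAND_DEFINITIONS = {            # (gesture, element) -> (output path, command)
    ("SNAP_TOUCH",  "ICON_MAIL"):   ("HOST",    "LAUNCH_MAIL"),
    ("SWIPE_RIGHT", "ACTIVE_ZONE"): ("DISPLAY", "SCROLL_ICONS_RIGHT"),
    ("DOUBLE_TAP",  "ACTIVE_ZONE"): ("DISPLAY", "ACTIVATE_LAYER"),
}
```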


The gesture classification module 222 may receive the input signals from the input driver(s) 210 and may decode a gesture(s) from the input signals based on the gesture definition data 232. For example, the gesture classification module 222 may compare the input signals to stored input values in the gesture definition data 232 and may match the input signals to a corresponding stored gesture value. The gesture may represent a user action within a 3D workspace or on the touch screen, as indicated by the input signals. The gesture classification module 222 may also calculate location data (e.g., X, Y, Z coordinates) from the input signals.
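
A minimal sketch of the gesture classification step, assuming the input signals have already been quantized into a signature string; the signature and gesture names are hypothetical.

```python
# Sketch: look a quantized input signature up in a gesture-definition LUT and
# report the gesture together with the computed location. Names are assumed.

def classify_gesture(signature, xyz, gesture_definitions):
    """Return (gesture, location); gesture is None if no definition matches."""
    return gesture_definitions.get(signature), xyz

gesture_lut = {"z_snap": "SNAP_TOUCH", "swipe_x_pos": "SWIPE_RIGHT"}
print(classify_gesture("z_snap", (70.0, 20.0, 35.0), gesture_lut))
# -> ('SNAP_TOUCH', (70.0, 20.0, 35.0))
```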


The UI search module 224 may receive the gesture and/or location data from the gesture classification module 222. The UI search module 224 may calculate a UI interaction, such as an icon selection or workspace manipulation (scroll, zoom, etc.), based on the UI map data 234. For example, the UI search module 224 may compare the gesture and/or location data to stored map values in the UI map data 234 and may match the gestures and/or locations to corresponding 2D or 3D UI interactions.
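
A sketch of the UI search step as a hit test of the reported location against a UI map LUT; the layer names, coordinate ranges, and element names are assumptions.

```python
# Sketch: hit-test an (x, y) location in a UI layer against an assumed UI map.

def ui_search(layer, x, y, ui_map):
    for (map_layer, x_range, y_range), element in ui_map.items():
        if map_layer != layer:
            continue
        if x_range is None or (x_range[0] <= x <= x_range[1]
                               and y_range[0] <= y <= y_range[1]):
            return element
    return None    # no interactive element at this location (a dead zone)

ui_map = {
    ("UI_LAYER_1", (0, 40), (0, 40)):  "ICON_MAIL",
    ("UI_LAYER_1", (50, 90), (0, 40)): "ICON_MAPS",
    ("UI_LAYER_2", None, None):        "ACTIVE_ZONE",
}
print(ui_search("UI_LAYER_1", 70.0, 20.0, ui_map))   # -> ICON_MAPS
```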


The command search module 226 may receive the calculated gesture and UI interaction data, and may generate one or more commands (e.g., display commands, host commands, haptics commands, etc.) based on the command definition data 236. For example, the command search module 226 may compare the calculated gesture and UI interaction data to one or more corresponding command(s). The commands may be received by the output driver(s) 240, which, in turn, may control various output devices (e.g., the 3D display, a haptics device, etc.). For example, the commands may control or manipulate 3D content rendered in a 3D workspace of a 3D display (scrolling icons, zooming content for an application, etc.). In various embodiments, data from each of the modules 222-226 may also be communicated to the host system 250 to control OS functions or applications running in the host system 250. For example, swiping a finger from left to right in a 3D workspace of a document reader application may turn pages of a document being viewed with the application.
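
A sketch of the command search and dispatch step, assuming each LUT key pairs a gesture with a UI element and each value names an output path; the command names and the print stand-in for the output driver(s) 240 are hypothetical.

```python
# Sketch: correlate a (gesture, element) pair with an assumed command LUT and
# hand the result to the matching output path (display, host, or haptics).

def command_search(gesture, element, command_definitions):
    return command_definitions.get((gesture, element))

def dispatch(command):
    if command is None:
        return                                   # dead zone / unknown gesture
    target, name = command
    print(f"sending {name!r} to {target} path")  # stand-in for the output drivers

command_lut = {("SWIPE_RIGHT", "ACTIVE_ZONE"): ("DISPLAY", "SCROLL_ICONS_RIGHT")}
dispatch(command_search("SWIPE_RIGHT", "ACTIVE_ZONE", command_lut))
# -> sending 'SCROLL_ICONS_RIGHT' to DISPLAY path
```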



FIG. 3 illustrates a model 300 of user input detection areas, labeled “detection layer 1”-“detection layer N,” for a 3D enabled display device 310 according to an embodiment of the present invention. The detection layers 1-N may be associated with interactive spatial volumes for detecting user interactions with the device 310. As illustrated, the spatial volumes for the detection layers 1-N may emanate from the device 310 at varying angles, as provided by optical sensors (not shown) for the device 310. The varying angles may provide for detecting user interactions not only directly above the device 310, but also around the perimeter of the device 310. Although the detection layers 1-N are shown as flat spatial volumes, in practice, they may be curved radial spatial volumes.
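
A sketch of how a measured height above the display could be assigned to a detection layer; the layer boundaries are assumed values, and a real implementation would account for the angled, curved volumes described above.

```python
# Sketch (assumed layer spacing): assign a measured height above the display
# to one of N detection layers by comparing Z against per-layer lower edges.

LAYER_BOUNDARIES_MM = [0.0, 25.0, 50.0, 75.0]   # assumed lower edges of layers 1..N

def detection_layer(z_mm, boundaries=LAYER_BOUNDARIES_MM):
    layer = None
    for index, lower_edge in enumerate(boundaries, start=1):
        if z_mm >= lower_edge:
            layer = index       # highest layer whose lower edge is below the object
    return layer

print(detection_layer(35.0))    # -> 2
```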


As discussed, a 3D display may create the illusion of 3D depth for content displayed or rendered on a 3D enabled display device. The content may include GUI elements such as 3D icons, buttons, etc. or 3D application content such as a map, a movie, a drawing canvas, etc. The content may be rendered in 3D workspaces or “UI layers” above the 3D display. The detection layers 1-N as described in FIG. 3 may be calibrated to overlap with the UI layers to detect user interactions with the UI layers. A user may interact within the 3D workspaces to manipulate GUI elements rendered in a workspace (e.g., scrolling, moving, or selecting GUI elements) or to control applications rendered in a workspace. FIGS. 4(a)-(d) illustrate 3D workspaces 400 for a 3D enabled display device 410 according to an embodiment of the present invention. As illustrated in FIG. 4(a), the 3D workspaces may be rendered as one or more 3D workspaces, shown here as “UI Layer 1” and “UI Layer 2.” The 3D workspaces may extend outward, away from the display device 410.


In an embodiment, the display device 410 may render content for a 3D workspace when the workspace is “activated” by a user for viewing. As illustrated in FIG. 4(b), a user may activate or “maximize” a 3D workspace by performing a double-tap gesture in the desired workspace. In FIG. 4(b), UI Layer 2 is shown as a “current” working layer for the user, while UI Layer 1 is shown as a desired “next” working layer that the user may wish to maximize. The user may perform the double-tap gesture in UI Layer 1 to cause the display device 410 to render the content associated with UI Layer 1.


In an embodiment, a display device may “push” an activated 3D workspace to an outward-most viewing perspective (away from the display device) and maximize the viewing depth of the workspace's rendered content to a predetermined viewing depth.


As discussed, a user may also interact with interactive elements (e.g., 3D icons) of a 3D workspace. FIG. 4(c) illustrates a plurality of 3D icons rendered in UI Layer 1 for the device 410. A user may perform a snap touch within a 3D icon rendered in UI Layer 1 (moving a finger quickly forward and back, resembling the touch of an element on a touch screen) to select the corresponding icon.
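
A sketch of how a snap touch might be recognized from the Z trajectory of a tracked fingertip, assuming a quick dip and return within an icon's volume; the depth and timing thresholds are assumptions.

```python
# Sketch (assumed thresholds): detect a "snap touch" as a quick dip and
# return in the Z coordinate of a tracked fingertip within an icon's volume.

def is_snap_touch(z_samples, dip_mm=10.0, max_samples=12):
    """z_samples: recent Z readings (mm above the display), newest last."""
    if len(z_samples) > max_samples:              # too slow to be a snap
        return False
    start, lowest, end = z_samples[0], min(z_samples), z_samples[-1]
    return (start - lowest) >= dip_mm and (end - lowest) >= dip_mm

print(is_snap_touch([40.0, 33.0, 26.0, 25.0, 31.0, 39.0]))   # -> True
```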


The 3D workspaces may include other areas that do not include interactive elements. FIG. 5 illustrates a 3D workspace 500 for a 3D enabled display device 510 according to an embodiment of the present invention. For example, 3D icons may be spaced apart from each other by predetermined distances. The separation distance may be an inactive area between the 3D icons. Further, other areas of the workspaces may be unoccupied by content. In various embodiments, these areas may be designated as “dead zones,” wherein user interaction with the dead zones does not control or manipulate the workspace/device. As illustrated in FIG. 5, the device 510 may render a plurality of 3D icons in a 3D workspace, labeled “UI Layer 2.” Dead zones for UI Layer 2 are indicated as the space between the 3D icons.


In other embodiments, areas that do not include interactive elements may be designated as “active zones,” wherein user interaction within an active zone may control or manipulate rendering of a 3D workspace(s) or functions of the device 510. An active zone for UI Layer 2 is shown as the open area behind the 3D icons. A user may move a finger from left to right in UI Layer 2 to cause the 3D icons in that layer to scroll from left to right. In other examples, active zones may provide for activating workspaces, zooming, performing gesture commands, etc.
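
A sketch tying the zone model together, assuming the hit test has already labeled the interaction location as an element, an active zone, or a dead zone (None); the element and command names are hypothetical.

```python
# Sketch: interactions in a dead zone are ignored, while a left-to-right swipe
# in an assumed ACTIVE_ZONE scrolls the layer's icons and a snap touch on an
# element selects it.

def handle_interaction(gesture, element):
    if element is None:
        return None                              # dead zone: do nothing
    if element == "ACTIVE_ZONE" and gesture == "SWIPE_RIGHT":
        return ("DISPLAY", "SCROLL_ICONS_RIGHT")
    if gesture == "SNAP_TOUCH":
        return ("HOST", f"SELECT_{element}")
    return None

print(handle_interaction("SWIPE_RIGHT", "ACTIVE_ZONE"))   # scroll command
print(handle_interaction("SWIPE_RIGHT", None))            # dead zone -> None
```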


As discussed, a UI controller (e.g., UI controller 200 of FIG. 2) may also provide for 3D UI controls for user gestures performed within a 3D workspace to manipulate content rendered in the workspace. The UI controller may include gesture definitions and UI maps which may enable control and/or interaction with 3D content associated with various applications rendered on a 3D display.



FIGS. 6(a)-(b) illustrate 3D workspaces 600 for a 3D enabled display device 610 according to an embodiment of the present invention. The device 610 may include a 3D mapping application to display a 3D map. The device 610 may render the 3D map in a 3D workspace, shown as “UI Layer 2.” A UI controller (e.g., UI controller 200 of FIG. 2) may enable user control of the 3D map through gestures, selections, etc. to manipulate views of the map or select items in the map. As illustrated in FIG. 6(a), the user may perform a “spreading” gesture to “zoom out” the 3D map. As illustrated in FIG. 6(b), the user may perform a “pinching” gesture to “zoom in” the 3D map.
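
A sketch of how a zoom factor for the 3D map could be derived from the change in separation between two tracked fingertips, matching the spread/zoom-out and pinch/zoom-in behavior of FIGS. 6(a)-(b); the scaling rule and coordinates are assumptions.

```python
# Sketch (assumed scaling rule): derive a zoom factor from the change in
# separation between two tracked fingertips; growing separation ("spread")
# zooms out and shrinking separation ("pinch") zooms in.

def zoom_factor(p1_start, p2_start, p1_end, p2_end):
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 + (a[2] - b[2]) ** 2) ** 0.5
    start, end = dist(p1_start, p2_start), dist(p1_end, p2_end)
    return start / end if end else 1.0   # >1 zooms in (pinch), <1 zooms out (spread)

print(zoom_factor((0, 0, 30), (20, 0, 30), (0, 0, 30), (60, 0, 30)))  # spread -> ~0.33
```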


In an embodiment, a user may also select items in an application (e.g., buildings rendered in a 3D map of a mapping application) by performing a snap touch to activate content related to the items.



FIG. 7 illustrates another 3D workspace 700 for a 3D enabled display device 710 according to an embodiment of the present invention. The device 710 may include a 3D keyboard application to display a 3D keyboard. The device 710 may render the 3D keyboard in 3D workspaces, shown as “UI Layer 1” and “UI Layer 2.” The 3D keyboard may be rendered in UI Layer 2, while 3D icons for text input commands such as “Caps Lock,” “Delete,” or “Space” may be rendered in UI Layer 1. A user may input text by swiping a finger through 3D letter icons of the 3D keyboard or by “tapping” the letter icons in UI Layer 2. The user may select the text input commands by performing a snap touch on corresponding 3D icons for the commands.
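
A sketch of how a swipe path through the 3D letter icons of UI Layer 2 might be converted into text, emitting each letter the first time the path enters that icon's footprint; the key layout and bounds are assumed values.

```python
# Sketch (assumed key layout): convert a fingertip path swiped through 3D
# letter icons into a character sequence, emitting each letter the first time
# the path enters its icon's footprint.

KEY_BOUNDS = {                      # assumed (x_min, x_max, y_min, y_max) per key
    "H": (0, 20, 0, 20),
    "I": (25, 45, 0, 20),
}

def swipe_to_text(path_xy, key_bounds=KEY_BOUNDS):
    text, last = [], None
    for x, y in path_xy:
        for key, (x0, x1, y0, y1) in key_bounds.items():
            if x0 <= x <= x1 and y0 <= y <= y1 and key != last:
                text.append(key)
                last = key
    return "".join(text)

print(swipe_to_text([(5, 10), (15, 10), (30, 10), (40, 10)]))   # -> "HI"
```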


In various embodiments, a UI controller (e.g., UI controller 200 of FIG. 2) may also include gesture definitions associated with 3D UI controls to control operations associated with host system functions of a 3D enabled display device. FIG. 8 illustrates another 3D workspace 800 for a 3D enabled display device 810 according to an embodiment of the present invention. As illustrated in FIG. 8, the device 810 may provide for user inputs through gestures performed in a 3D workspace, shown as “UI Layer 1.” For example, a user may perform an ‘L’ gesture in UI Layer 1 to lock the device 810. In an embodiment, a user may also perform predetermined gesture commands for inputting various words or phrases.


Several embodiments of the present invention are specifically illustrated and described herein. However, it will be appreciated that modifications and variations of the present invention are covered by the above teachings. In other instances, well-known operations, components and circuits have not been described in detail so as not to obscure the embodiments. It can be appreciated that the specific structural and functional details disclosed herein may be representative and do not necessarily limit the scope of the embodiments.


Those skilled in the art may appreciate from the foregoing description that the present invention may be implemented in a variety of forms, and that the various embodiments may be implemented alone or in combination. Therefore, while the embodiments of the present invention have been described in connection with particular examples thereof, the true scope of the embodiments and/or methods of the present invention should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and following claims.


Various embodiments may be implemented using hardware elements, software elements, or a combination of both. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (“ASIC”), programmable logic devices (“PLD”), digital signal processors (“DSP”), field programmable gate array (“FPGA”), logic gates, registers, semiconductor device, chips, microchips, chip sets, and so forth. Examples of software may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (“API”), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints.


Some embodiments may be implemented, for example, using a non-transitory computer-readable medium or article which may store an instruction or a set of instructions that, if executed by a machine, may cause the machine to perform a method and/or operations in accordance with the embodiments. Such a machine may include, for example, any suitable processing platform, computing platform, computing device, processing device, computing system, processing system, computer, processor, or the like, and may be implemented using any suitable combination of hardware and/or software. The computer-readable medium or article may include, for example, any suitable type of memory unit, memory device, memory article, memory medium, storage device, storage article, storage medium and/or storage unit, for example, memory, removable or non-removable media, erasable or non-erasable media, writeable or re-writeable media, digital or analog media, hard disk, floppy disk, Compact Disc Read Only Memory (CD-ROM), Compact Disc Recordable (CD-R), Compact Disc Rewriteable (CD-RW), optical disk, magnetic media, magneto-optical media, removable memory cards or disks, various types of Digital Versatile Disc (DVD), a tape, a cassette, or the like. The instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, encrypted code, and the like, implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language.

Claims
  • 1. A three-dimensional (“3D”) user interface (“UI”) processing system for a device, comprising: a 3D display device to display 3D images; at least one sensor to detect a user interaction event with the 3D display device; and a controller, including a memory to store instructional information related to interaction events; and a processor to receive sensor data from the at least one sensor and to interpret the sensor data according to the instructional information.
  • 2. The system of claim 1, wherein the at least one sensor is an optical sensor.
  • 3. The system of claim 1, wherein the at least one sensor is a touch screen sensor.
  • 4. The system of claim 3, wherein the touch screen sensor is capacitive.
  • 5. The system of claim 3, wherein the touch screen sensor is resistive.
  • 6. The system of claim 1, further comprising a haptics output device.
  • 7. The system of claim 1, wherein the instructional information includes gesture definitions, UI maps, and command definitions, and wherein to interpret the sensor data includes calculating a gesture from the sensor data based on the gesture definitions, and calculating a user interaction from the sensor data based on a UI map.
  • 8. The system of claim 7, wherein the calculated gesture and user interaction are reported to a host system.
  • 9. The system of claim 7, further comprising the processor to correlate the calculated gesture and user interaction to a UI command, to generate the UI command, and to transmit the UI command to the 3D display device.
  • 10. The system of claim 9, wherein the UI map includes data for 3D interactive elements where the UI command is generated if the user interaction is with a 3D interactive element.
  • 11. The system of claim 7, further comprising the processor to correlate the calculated gesture and user interaction to a host command, to generate the host command, and to transmit the host command to a host system.
  • 12. The system of claim 7, further comprising a haptics output device and the processor to correlate the calculated gesture and user interaction to a haptics command, to generate the haptics command, and to transmit the haptics command to the haptics output device.
  • 13. The system of claim 7, wherein the UI map includes an active zone and a dead zone wherein the UI command is generated if the user interaction is with the active zone and no UI command is generated if the user interaction is with a dead zone.
  • 14. The system of claim 13, further comprising a host system and if the user interaction is with the active zone, the processor to correlate the calculated gesture and user interaction to a host command, to generate the host command, and to transmit the host command to the host system.
  • 15. The system of claim 13, further comprising a haptics output device and if the user interaction is with the active zone, the processor to correlate the calculated gesture and user interaction to a haptics command, to generate the haptics command, and to transmit the haptics command to the haptics output device.
  • 16. An electronic device, comprising: a display; an input device to capture operator commands; and a processing system to interpret data from the input device and to display workspaces on the display, wherein the workspaces are linked to each other by a navigation model having at least one dimension laterally with respect to a face of the display and another dimension having depth with respect to the face of the display, and when the processing system recognizes input data corresponding to a navigation command, the processing system identifies a new workspace according to the navigation model and the navigation command and causes the new workspace to be displayed.
  • 17. The device of claim 16, wherein the input device is at least one optical sensor.
  • 18. The device of claim 16, wherein the workspaces include 3D interactive elements where the navigation command is generated if the user interaction is with a 3D interactive element.
  • 19. An electronic device, comprising: a stereoscopic display to simulate a three-dimensional (“3D”) image; an input device to capture operator commands; and a processing system to interpret data from the input device and to display a 3D workspace on the display, wherein the 3D workspace includes a plurality of interactive elements distributed across at least two 3D workspace layers at different depths, and the processing system interprets input data corresponding to operator interaction with the interactive elements displayed in the workspace.
  • 20. The device of claim 19, wherein the input device is at least one optical sensor.
  • 21. The device of claim 19, wherein the processing system includes gesture definitions, workspace maps, and command definitions, and wherein to interpret data from the input device includes calculating a gesture from the sensor data based on the gesture definitions, calculating a user interaction with the interactive elements from the sensor data based on a workspace map, and correlating the calculated gesture and user interaction to a UI command.
  • 22. A method of controlling a three-dimensional (“3D”) user interface, comprising: receiving a sensor input based on a user interaction with the 3D user interface; processing the sensor input; generating a user interface command based on the processed sensor input and stored instructions; and controlling operation of the 3D user interface based on the user interface command.
  • 23. The method of claim 22, wherein the processing includes determining a gesture, location, and corresponding interaction event of the user interaction based on stored gesture definitions and user interface maps.
  • 24. The method of claim 23, further comprising generating a host command and controlling operation of a host system based on the host command.
  • 25. The method of claim 23, further comprising generating a haptics command and controlling a haptics output device based on the haptics command.
  • 26. The method of claim 23, wherein user interaction with the 3D user interface includes interaction with 3D elements rendered in the 3D user interface.
RELATED APPLICATIONS

This application claims priority to provisional U.S. Patent Application Ser. No. 61/470,764, entitled “Touch Screen and Haptic Control,” filed on Apr. 1, 2011, the content of which is incorporated herein in its entirety.

Provisional Applications (1)
Number       Date       Country
61/470,764   Apr. 2011  US