The present disclosure relates to methods and systems for controlling a head-up display and, more particularly, to systems and related processes for interaction mechanisms between a user input device and a head-up display, optionally located in a vehicle or user-wearable device.
While head-up displays have been used in automotive applications for many years, they typically create a single plane of visual information at a fixed perceived depth ahead of the user, for example, as a holographic image in the windscreen.
Accordingly, a typical head-up display (e.g., a multiplanar display, holographic display, or a semi-translucent display) arranges visual information on a single plane, enabling the user to see the visual information in the same field of view as their current action. The head-up display of the present disclosure comprises a plurality of graphical elements stacked at respective perceived depths (from a user's perspective), providing an additional degree of freedom in layout. When combined with the disclosed user interface navigation methods, the present disclosure enables the user to navigate through depth planes in a head-up display, easily indicate a choice to the apparatus, and see which choice is currently selected without being distracted from the display. Such displays may be deployed in fixed, wearable, or mobile devices, or in automotive applications. If a head-up display for a user is paired with a scroll wheel (e.g., a scroll wheel on a watch, on a mobile device, on a remote control, on a ring, on a fob, or on a steering wheel), then the scroll wheel can be used in a natural-feeling way to navigate among and select items such as streaming sources, songs, text messages, alerts/notifications, weather forecasts, and other items stacked in depth.
Demonstrations of head-up displays in automotive applications have typically been driving-focused, for example placing alphanumeric information on a plane in the driver's field of view and locating navigation arrows or hazard alerts on the road farther ahead. However, it is desirable to use head-up displays for non-driving-related user interfaces as well, for example selecting songs, streaming content, radio stations, or climate control, since users would not need to take their eyes off the road to look at a dashboard screen. In addition, the multiple depth planes can be used for efficient navigation among choices to find a selection.
In view of the foregoing, the present disclosure provides systems and related methods that provide a head-up display (e.g., a multiplanar display, holographic display, or a translucent display) that shows a set of selectable items (e.g., songs, items of streaming content, messages, vehicle information, real-world objects, or the like), each selectable item located on a separate depth plane, and a user input device (e.g., a scroll wheel, stepped input mechanism, or touchpad) that is used to navigate through the set and select a desired item.
In a first approach, there is provided a method for providing a user interface. A plurality of graphical elements is generated for display on the head-up display. For example, a graphical element may comprise a plurality of pixels arrayed in a two-dimensional matrix, displaying information for the user, that is generated on a plane of the user's field of view. Each plane of the display is at a different perceived depth to a user, sometimes referred to as a depth plane. For example, each graphical element is arranged such that, from a user's perspective, each graphical element, or group of graphical elements, has a perceived depth that is different from the next. Accordingly, each graphical element is generated at one of a range of perceived depths on the head-up display. An indication that one of the graphical elements is currently selected is displayed. For example, a plane can be highlighted with a marker; a change to, for example, color, contrast, brightness, or the like; or a non-static visual indication, such as blinking, flashing, or the like. A user input device, such as a wheel or the like, in communication with the head-up display, generates an interaction signal when operated by the user, which is received by the head-up display device (more particularly, by a transceiver or controller of the head-up display device). The received user interface navigation signal progresses the selection through the graphical elements in order of depth in response to user actuation of a navigation control. An action associated with the currently selected graphical element is performed in response to user actuation of an activation control of the user input device.
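By way of illustration only, the following minimal Python sketch shows one possible way the selection could be progressed through depth-ordered elements and activated. The class and field names (e.g., `GraphicalElement`, `HeadUpMenu`, `depth_m`) are hypothetical and not part of the disclosure; this is a sketch of the navigation behavior, not a required implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class GraphicalElement:
    """One graphical element (or group of elements) rendered on its own depth plane."""
    label: str
    depth_m: float                   # perceived depth from the user (illustrative units)
    on_activate: Callable[[], None]  # action performed when this element is activated

@dataclass
class HeadUpMenu:
    elements: List[GraphicalElement]
    selected: int = 0                # index of the currently highlighted element

    def handle_navigation(self, steps: int) -> None:
        """Progress the selection through the elements in order of perceived depth.
        `steps` is the signed number of detents reported by the user input device
        (positive = deeper, negative = nearer)."""
        ordered = sorted(range(len(self.elements)), key=lambda i: self.elements[i].depth_m)
        position = ordered.index(self.selected)
        position = (position + steps) % len(ordered)   # wrap around the stack of planes
        self.selected = ordered[position]

    def handle_activation(self) -> None:
        """Perform the action associated with the currently selected element."""
        self.elements[self.selected].on_activate()

# Example usage: three planes, two detent clicks deeper, then activate.
menu = HeadUpMenu([
    GraphicalElement("Weather", 2.0, lambda: print("show weather details")),
    GraphicalElement("Navigation", 3.0, lambda: print("open navigation submenu")),
    GraphicalElement("Media", 4.0, lambda: print("open media submenu")),
])
menu.handle_navigation(+2)
menu.handle_activation()   # -> "open media submenu"
```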
In some examples, a current context of the nearest graphical element is determined.
In some examples, the method further comprises selecting a first plane comprising at least one graphical element to be designated the position of a selectable item. In some examples, the method further comprises receiving a selection signal, from the user input device, of a graphical element on said designated plane. For example, the user may wish to interact with a particular graphical element (or group of graphical elements) and can do so by cycling through (e.g., rotating through) the graphical elements until the desired graphical element is in the interactive, or selectable item, position. The user can then interact with the graphical element in this position (e.g., on a first given plane). In some examples, the method further comprises, in response to receiving the selection signal, entering a sub-menu of the graphical element. For example, the currently selected graphical element may be a music element that, upon selection, allows the user to select a particular artist or song.
In some examples, the number of graphical elements to be generated is greater than the number of displayable graphical elements on the head-up display, and rotating the selection through the graphical elements' planes comprises selecting a subset of the plurality of graphical elements to be displayed on the head-up display. For example, if there are five graphical elements to be displayed on separate depth planes, each comprising a different context and information to be displayed to the user, but the head-up display only permits four planes to be viewed at a time, one plane will not be displayed. However, when navigating, a subset (e.g., four) of the total planes (e.g., five) is displayed to the user; upon interaction with the user input device, a second subset is displayed to the user. Accordingly, in some examples, the method further comprises receiving a second user interface navigation signal, which selects a second subset of the plurality of graphical elements to be displayed on the head-up display.
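By way of illustration only, the following sketch shows one possible sliding-window approach for choosing which subset of planes to render when more elements exist than the display can show at once. The function name and windowing policy are assumptions, not a required implementation.

```python
from typing import List, Sequence

def visible_window(elements: Sequence[str], selected_index: int, max_planes: int) -> List[str]:
    """Return the subset of elements to render when more elements exist than the
    head-up display can show at once; the window slides so the currently selected
    element stays in view."""
    if len(elements) <= max_planes:
        return list(elements)
    start = min(max(selected_index - max_planes + 1, 0), len(elements) - max_planes)
    return list(elements[start:start + max_planes])

planes = ["weather", "navigation", "media", "climate", "messages"]   # five elements
print(visible_window(planes, selected_index=0, max_planes=4))  # first subset of four planes
print(visible_window(planes, selected_index=4, max_planes=4))  # second subset after navigating deeper
```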
In some examples, the head-up display is a holographic display. In some examples, the head-up display device is installed in a vehicle. For example, graphical images may be dynamically registered, and dynamically updated, upon a windscreen of a subject vehicle to represent essential vehicle information. In some examples, the head-up display device is a user-wearable device.
In some examples, the user input device is a stepped control device, wherein each step corresponds to a movement in depth of the head-up display. For example, the user input device may be a manual controller supported on a center console or on a steering wheel and configured to change the information displayed on the head-up display in response to a manual input from a user; rotation of the input device past stepped control detents generates a signal for the head-up display. In some examples, the user input device is one of a wheel, a physical button, a switch, a touchpad, a direct-drive motor, or a trigger.
In some examples, the user input device comprises software-controllable detents, wherein each detent corresponds to a movement in depth of the head-up display. For example, the user input device may be a manual controller supported on a center console or on a steering wheel and configured to change the information displayed on the head-up display in response to a manual input from a user; rotation of the input device past software-controllable detents generates a signal for the head-up display.
In some examples, the user input device comprises a controllable haptic actuator, wherein each haptic pulse corresponds to navigating in depth of the head-up display. For example, a haptic actuator may be incorporated for creating selective resistance to rotation of the user input device about the scroll axis. The haptic actuator can take any of the known forms and be structured according to any of the known techniques for providing haptic feedback effects to the user input device.
Accordingly, and in some examples, the method further comprises providing haptic feedback in response to the received user interface navigation signal between each plane of the head-up display. In some examples, the haptic feedback is provided by at least one of vibration, force feedback, air vortex rings, or ultrasound. The majority of vibration-based haptics use a type of eccentric rotating mass actuator, consisting of an unbalanced weight attached to a motor shaft. Force feedback devices typically use motors to manipulate the movement of an item held by the user. Air vortex rings are commonly donut-shaped air pockets made up of concentrated gusts of air. Focused ultrasound beams are often used to create a localized sense of pressure on a finger without touching any physical object.
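By way of illustration only, the sketch below shows one way a haptic pulse could be issued for each detent crossed as the selection moves plane by plane, reusing the hypothetical `HeadUpMenu` from the earlier sketch. The `pulse` callable stands in for whatever actuator driver the input device actually exposes and is an assumption.

```python
class HapticNavigator:
    """Couples plane-by-plane navigation with one haptic pulse per detent crossed,
    so the user can count planes by feel; `pulse` stands in for the actuator driver."""

    def __init__(self, menu, pulse):
        self.menu = menu      # e.g., the HeadUpMenu sketched earlier
        self.pulse = pulse    # zero-argument callable that fires one haptic pulse

    def navigate(self, steps: int) -> None:
        direction = 1 if steps > 0 else -1
        for _ in range(abs(steps)):
            self.menu.handle_navigation(direction)
            self.pulse()      # one pulse per plane crossed

# navigator = HapticNavigator(menu, pulse=lambda: print("buzz"))
# navigator.navigate(-2)     # two pulses while moving two planes nearer
```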
In some examples, each graphical element, or group of elements, of the display has a different context. For example, the head-up display can utilize information inputs from a plurality of sensors and data modules to monitor vehicle operation, the vehicle's operational environment, infotainment systems, and/or navigation systems; each of these inputs can be represented by a graphical element, and therefore each element can have a different context.
In some examples, the method further comprises calculating (i.e., determining) a priority score of a graphical element, or group of graphical elements, based on a current action of the user. For example, determining the priority score may be further based on: actively extracting input from the user, interpreting the user's intention, resolving ambiguity between competing interpretations, requesting and receiving clarifying information if necessary, and performing (i.e., initiating) actions based on the distinguished intent. Further, determining the priority score may be carried out, or assisted, by an intelligent automated assistant configured to carry out any or all of the aforementioned actions. In addition, in some examples, the method further comprises ordering each graphical element, or group of graphical elements, according to the determined priority scores. In some examples, the method further comprises arranging each graphical element (or group of graphical elements) of the display at a perceived depth based on a monotonic scale of the priority scores.
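By way of illustration only, the following sketch shows one way priority-ordered elements could be assigned perceived depths on a monotonic scale. The scoring function is assumed to be supplied by the context or assistant logic described above, and the depth constants are arbitrary.

```python
from typing import Callable, List

def arrange_by_priority(elements: List["GraphicalElement"],
                        score: Callable[["GraphicalElement"], float],
                        near_depth: float = 2.0,
                        step: float = 1.0) -> List["GraphicalElement"]:
    """Order elements by a context-dependent priority score and assign perceived
    depths on a monotonic scale: the highest-scoring element is placed nearest
    to the user, and each lower-scoring element one step deeper."""
    ranked = sorted(elements, key=score, reverse=True)
    for rank, element in enumerate(ranked):
        element.depth_m = near_depth + rank * step   # monotonic (here linear) in rank
    return ranked
```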
In some examples, the method further comprises retrieving, from a storage device, a haptic feedback profile of the user. In some examples, the method further comprises displaying, based on the context of the first plane, a haptic feedback control user interface. In addition, the method may further comprise updating the feedback profile of the user based on a selection from the control user interface and the context of the first plane. In some examples, the feedback profile comprises a user preference for at least one of an intensity parameter, a density parameter, or a sharpness parameter. In some examples, the method further comprises adjusting at least one of an intensity parameter, a density parameter, or a sharpness parameter. For example, the user may reduce the intensity of a haptic feedback actuator within the user input device. In response to the user adjusting such a parameter, this information can be used to update the user's feedback profile.
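By way of illustration only, the following sketch shows one way a per-user haptic feedback profile with intensity, density, and sharpness parameters could be updated from a control user interface and persisted. The file-based storage, field names, and value ranges are assumptions.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class HapticProfile:
    """Per-user haptic feedback preferences."""
    intensity: float = 0.7   # 0..1 strength of each pulse
    density: float = 0.5     # 0..1 number of pulses per unit of travel
    sharpness: float = 0.5   # 0..1 attack/decay of each pulse

def update_profile(profile: HapticProfile, path: str, **changes: float) -> HapticProfile:
    """Apply adjustments made in the haptic control user interface and persist them."""
    for name, value in changes.items():
        setattr(profile, name, max(0.0, min(1.0, value)))   # clamp to a sensible range
    with open(path, "w") as handle:
        json.dump(asdict(profile), handle)
    return profile

# e.g., the user turns the pulse intensity down from the control user interface:
# profile = update_profile(HapticProfile(), "driver_profile.json", intensity=0.4)
```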
In some examples, the stepped user input device comprises a stepped control device, wherein each detent corresponds to a movement in depth of the head-up display.
In some examples, the stepped user input device comprises software-controllable detents, wherein each detent corresponds to a movement in depth of the head-up display.
In some examples, the user input device further comprises a controllable haptic actuator, wherein each haptic pulse corresponds to navigating in depth of the head-up display.
In some examples, the controllable haptic actuator is configured to provide at least one of: vibration, force feedback, air vortex rings, or ultrasound.
In some examples, the stepped user input device is one of: a scroll wheel, a physical button, a switch, a touchpad, a direct-drive motor, or a trigger.
In some examples, the head-up display is configured for a vehicle, or configured to be installed in a vehicle. In some examples, the head-up display is configured for a user-wearable device, or configured to be installed in a user-wearable device. In some examples, the head-up display is replaced by a non-mobile or fixed display device, or configured to be installed in a fixed display device; a fixed display device is a display device that is neither mobile nor wearable. For example, it may be integrated within the dash or console of a vehicle (e.g., installed in the vehicle in a permanent or semi-permanent manner); however, it is not necessarily limited to a vehicular display. The disclosure herein is compatible with a number of different sorts of display and does not require a head-up configuration; for example, the display does not have to be "see-through".
In another approach, there is provided a non-transitory computer-readable medium, having instructions recorded thereon for controlling a head-up display. When executed, the instructions cause a method to be carried out, the method (and therefore the instructions) comprising: generating, on a head-up display device, a head-up display including a plurality of graphical elements, each graphical element being generated at one of a range of depths on the head-up display; displaying an indication that one of the graphical elements is currently selected; receiving a user interface navigation signal, from a user input device, to rotate the selection through the graphical elements in order of depth in response to user actuation of a navigation control; and performing an action associated with the currently selected graphical element in response to user actuation of the user input device.
In another approach, there is provided a device for providing a user interface, comprising a control module and a transceiver module configured to generate, on a head-up display device, a head-up display including a plurality of graphical elements, each graphical element being generated at one of a range of depths on the head-up display; display an indication that one of the graphical elements is currently selected; receive a user interface navigation signal, from a user input device, to rotate the selection through the graphical elements in order of depth in response to user actuation of a navigation control; and perform an action associated with the currently selected graphical element in response to user actuation of the user input device.
In another approach there is provided a system for controlling a head-up display, the system comprising: means for generating, on a head-up display device, a head-up display including a plurality of graphical elements, each graphical element being generated at one of a range of depths on the head-up display; means for displaying an indication that one of the graphical elements is currently selected; means for receiving a user interface navigation signal, from a user input device, to rotate the selection through the graphical elements in order of depth in response to user actuation of a navigation control; and means for performing an action associated with the currently selected graphical element in response to user actuation of the user input device.
In another approach there is provided an apparatus for providing a user interface, the apparatus comprising a display device arranged to display an image including a plurality of graphical elements displayed at different apparent depths to the user, the image including a visual indication that one of the graphical elements is currently selected; a receiver for receiving a user interface navigation signal from a user interface navigation control; a display controller arranged in operation to update the image to move the visual indication to graphical elements at an adjacent apparent depth in response to the receipt of the user interface navigation signal; a receiver for receiving an activation signal from an activation control; and a transmitter arranged in operation to respond to the activation signal by transmitting a control command which depends upon which graphical element is currently selected.
In another approach there is provided a user interface for an apparatus, the interface comprising: a head-up display in which different planes are displayed at different apparent depths to a user; a stepped user input device, to receive an input from the user; wherein the head-up display is arranged to highlight a currently selected depth plane; and wherein the stepped user input device is in communication with the head-up display and arranged to step through the depth planes as the user moves the input device through the steps.
In another approach, there is provided a method of providing a user interface, the method comprising: generating, on a display device, a display including a plurality of graphical elements, each graphical element being displayed at one of a plurality of perceived depths on the display; displaying an indication that one of the perceived depths is currently selected; receiving a user interface navigation signal, from a user input device, to progress the selection through the perceived depths in order of depth in response to user actuation of a navigation control; and performing an action associated with the currently selected depth plane in response to user actuation of an activation control of the user input device. For example, the display may be a 3D augmented reality display, or a multiplanar 3D display. For example, the present disclosure would equally apply to devices comprising multiplanar displays, such as 3D displays on a smartwatch or the like.
Accordingly, presented herein are methods, systems, and apparatus for controlling a display (e.g., a multiplanar display, 3D AR display, or head-up display) and, more particularly, systems and related processes for interaction mechanisms between a user input device and said display.
The above and other objects and advantages of the disclosure will be apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings, in which:
In more detail,
In the embodiment of
With regard to
As shown in
With regard to
By way of example, referring to
With regard to
In any combination of the aforementioned navigation, selection or highlighting action, it is intended that a user interacts with a user input device 110.
A haptic actuator (not shown) can be particularly effective in assisting a user 720 to navigate the various planes of a multiplanar head-up display, with minimal distraction to the task or action they are currently undertaking, if any. For example, in the case of driving, a user 720 can interact with an input device on a center console or on a steering wheel and navigate through the menus and planes of the head-up display, all without taking their eyes off the road. The haptic actuator is controllable to apply various resistive sequences to the user input device 710, when interacted with by a user 720, to help the user 720 intuitively navigate through the plurality of graphical elements on the head-up display. A software-based user input device, with software-controllable detents, gives the impression of the mechanical attributes common in typical control wheels on various devices.
Through repetitive use of the head-up display and user input device, a user will come to associate a given haptic feedback pattern with a particular selection, so that navigation through the various screens and presentations can be accomplished entirely through touch and feel. The haptic actuator located within the user input device can be further associated with an audible system, which provides sounds, such as chimes, music, or recorded messages, as the various selections and screens are navigated through the user input device 710.
By way of example,
For example,
In some examples, the haptic feedback profile can be stored in storage. Each user of the device comprising the head-up display can configure their own haptic feedback preferences and store them in a user profile. In this way, a user of the head-up display, for example, a driver, can see upon glancing at the foremost plane of the head-up display, or even know from memory, that movement from a first plane to a second plane is two "clicks" away in the clockwise direction; the user input device can then be rotated, without looking, through two clicks of the felt detents (whether software-controlled or mechanically fixed), allowing the driver to confidently operate, select, and interface with the head-up display, knowing that their selection has been made without the need to take their eyes from the road.
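By way of illustration only, the following sketch shows one way the number of detent "clicks" (and their direction) between the current plane and a target plane could be computed; the signed shortest-direction convention and function name are assumptions.

```python
def clicks_to_plane(current: int, target: int, num_planes: int) -> int:
    """Signed number of detent 'clicks' needed to move the selection from `current`
    to `target`, taking the shorter way around the stack of planes
    (positive = clockwise/deeper, negative = counter-clockwise/nearer)."""
    forward = (target - current) % num_planes
    backward = forward - num_planes
    return forward if forward <= -backward else backward

# With four planes: moving from plane 0 to plane 2 is two clicks clockwise,
# while moving from plane 0 to plane 3 is one click counter-clockwise.
print(clicks_to_plane(0, 2, 4))   # 2
print(clicks_to_plane(0, 3, 4))   # -1
```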
To convert the path drawn by the user's input into a navigation amount that can be used to navigate through the plurality of graphical elements 100, a length of the path may be computed. In one example, the navigation amount is the overall length of the path. When the path is a 2-dimensional path, a 2-dimensional grid may be used to compute the length of the path. Using a 2-dimensional path, the path may include at least one loop 1008 to increase its length (e.g., a finger going 3 cm to the right then 1 cm to the left means the path has a length of 4 cm). Therefore, the user need not use a whole dimension of the touchpad to input the navigation amount, but can easily do so on a small, localized portion of the user input device. In another example, the navigation amount is the length of the path going in one direction, such as left or right (e.g., a finger going 3 cm to the right then 1 cm to the left means the path has a length of 2 cm to the right, not a length of 4 cm). This allows movement forward (i.e., advancing through the graphical elements) and backward (i.e., reversing back through the graphical elements) based on the direction of the path. In this case, a loop would have a null (or close to null) effect on the length of the path. In one embodiment, the navigation amount is the length of a 1-dimensional projection 1010 of the 2-dimensional path (e.g., a projection orthogonal to the scroll bar 900 or parallel with the scroll bar 900). In one example, the navigation amount is the length of the projection of the path based on the direction of the path, such as left or right (e.g., a projection, of a path, going 3 cm to the right then 1 cm to the left means the projection of the path has a length of 2 cm to the right).
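By way of illustration only, the following sketch reproduces the two path measures described above (overall path length versus signed directional projection) for the 3 cm right / 1 cm left example; the point-list representation of the touchpad path is an assumption.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def path_length(points: List[Point]) -> float:
    """Overall length of the 2-D path traced on the touchpad (loops add length)."""
    return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

def net_projection(points: List[Point], axis: int = 0) -> float:
    """Signed displacement of the path projected onto one axis, so movement to the
    left cancels movement to the right (a loop contributes roughly nothing)."""
    return points[-1][axis] - points[0][axis]

# The example from the text: a finger goes 3 cm to the right, then 1 cm to the left.
path = [(0.0, 0.0), (3.0, 0.0), (2.0, 0.0)]
print(path_length(path))     # 4.0 -> navigation amount based on the overall length
print(net_projection(path))  # 2.0 -> navigation amount based on the direction of the path
```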
In some examples, loop 1008 may indicate that the user wishes to select a more precise navigation interval or enter a submenu (e.g., a selection). Put another way, loop 1008 activates a slower scrubbing speed when navigating, for example, a submenu, allowing the user to make a more granular selection of their intended navigation interval. In some examples, the substantially linear sections of the path correlate the distance of the path with one scaling parameter and the substantially circular (e.g., loop 1008) sections of the path correlate the distance of the path with a second scaling parameter.
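By way of illustration only, the following sketch shows one way substantially linear and loop-like sections of the path could be weighted with different scaling parameters, giving slower, more granular scrubbing inside a loop; the segment representation and threshold values are assumptions.

```python
from typing import List, Tuple

def navigation_amount(segments: List[Tuple[float, float]],
                      coarse: float = 1.0,
                      fine: float = 0.2,
                      loop_threshold_deg: float = 30.0) -> float:
    """Weight each path segment differently: substantially linear segments use a
    coarse scaling parameter, while segments whose heading keeps turning (i.e.,
    part of a loop) use a finer one, giving slower, more granular scrubbing.
    `segments` is a list of (length_cm, heading_change_deg) pairs."""
    total = 0.0
    for length_cm, turn_deg in segments:
        scale = fine if abs(turn_deg) > loop_threshold_deg else coarse
        total += length_cm * scale
    return total

# Two straight centimetres followed by a tight loop of three short, sharply turning segments:
print(navigation_amount([(2.0, 0.0), (0.5, 90.0), (0.5, 90.0), (0.5, 90.0)]))  # 2.0 + 0.3
```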
With reference to
With reference to
With reference to
With regard to
Likewise, in some examples, the method further comprises calculating (i.e., determining) a priority score of each plane of the display based on a current action of the user. In addition, in some examples, the method further comprises ordering each plane of the display according to the determined priority scores. In some examples, the method further comprises arranging each plane of the display at a perceived depth based on a monotonic scale of the priority scores. For example, the plane with the highest priority score is arranged closer to the user, as indicated by the closest marker 902 abutting stop 910. Moreover, the plane with the lowest priority score is arranged farthest from the user, as indicated by the rearmost marker 908 abutting stop 910. Further, as shown in
The examples given with reference to
The vehicle 1600 includes a steering wheel and a central column, wherein the user input device 710 may be disposed. The vehicle may comprise an information system for the vehicle, which may operate in addition to, or in lieu of, other instruments and control features in the vehicle.
The vehicle may also comprise a computer for handling informational data, including vehicle data. The computer also includes other necessary electronic components known to those skilled in the art, such as a memory, a hard drive, communication interfaces, a power supply/converter, digital and analog converters, etc. The computer is connected to vehicle systems that provide the vehicle data corresponding to the operation of the vehicle and associated vehicle systems. Examples of these vehicle systems include, but are not limited to, an engine controller, a climate control system, an integrated cellular phone system, a sound system (radio), a global positioning system (GPS) receiver, and a video entertainment center (such as a DVD player). Examples of vehicle data provided by the vehicle systems include, but are not limited to, vehicle speed, engine RPM, engine oil pressure, engine coolant temperature, battery voltage, vehicle maintenance reminders, climate control system settings, outside temperature, radio settings, integrated cellular phone settings, compass headings, video images, sound files, digital radio broadcasts, state of charge of both high and low voltage batteries (e.g., 48V hybrid battery, 12V infotainment battery, etc.), and navigational information. All of the foregoing informational data, vehicle data, and vehicle systems may have a corresponding graphical element that may be represented on the head-up display.
The informational data handled by the computer can also include external data from a network external to the vehicle. In this case, an external wireless interface would be operatively connected to the computer to communicate with the network for sending and receiving external data. External data may include, but is not limited to, internet web pages, email, and navigational information.
The head-up display device 1625 emits light that enters the user's eye by reflecting off the windscreen of the vehicle 1600. This gives a holographic image in the windscreen that the user can see. The head-up display device is configured to provide a perceived depth of the plurality of graphical elements 100 from the user's 720 perspective.
For example, plane 1710 is a weather plane, as indicated by weather icon 112. The weather plane 1710 contains a plurality of displayable data 1714A-C comprising, for example, wind, precipitation, and temperature data. The second plane 1720 is a navigation plane, as indicated by the navigation icon 122. The navigation plane 1720 contains a plurality of displayable data 1724A-C comprising, for example, speed limit information, navigation instructions, and the estimated time of arrival. The third plane 1730 is a vehicle information plane, as indicated by vehicle information icon 132. The vehicle information plane 1730 contains a plurality of displayable data 1734A-C comprising, for example, a settings submenu, a communication submenu, and volume control. Accordingly, user 720 can quickly see at a glance a plurality of information relating to many vehicle systems. In some examples, the displayable data is only present on the foremost plane, and only the icons are displayed on the other planes, to prevent a cluttered head-up display and avoid detracting from the user's action, for example driving.
A second plane, such as plane 1720, may be configured to represent non-essential vehicle information in a fixed location upon the head-up display of the vehicle 1600. For example, the second plane 1720 may display the time and the ambient temperature. The first plane, for example plane 1710, can then be configured to display more important information, relative to the user's current need, such as engine speed and vehicle speed.
In some examples, each of the elements of the plurality of graphical elements 100, and the planes themselves, have a configurable location, which can be saved as a preferred location in the head-up display. The preferred location may be based on a preferred gaze location, wherein the preferred gaze location corresponds to the center of the road. Hence, the dynamically registered preferred location is displayed in an area on the windscreen head-up display 850 such that it is at a location of lesser interest than the preferred gaze location at which the operator is currently gazing, or should be gazing (e.g., the center of the road), while minimizing the head movement and eye saccades required for an operator of the vehicle to view the vehicle information contained in the first graphic 910. Accordingly, the dynamically registered preferred location can be displayed at the preferred gaze location or offset from the preferred gaze location.
At step 2002, the head-up display device generates a head-up display including a plurality of graphical elements. For example, a graphical element may comprise a plurality of pixels arrayed in a two-dimensional matrix, displaying information for the user, that is generated on a plane of the user's field of view. Each plane of the display is at a different perceived depth to a user, sometimes referred to as a depth plane. For example, each graphical element is arranged such that, from a user's perspective, each graphical element, or group of graphical elements, has a perceived depth that is different from the next. Accordingly, each graphical element is generated at one of a range of depths on the head-up display.
At step 2004, the head-up display device displays an indication that one of the graphical elements is currently selected. For example, a plane can be highlighted with a marker; a change to, for example, color, contrast, brightness, or the like; or a non-static visual indication, such as blinking, flashing, or the like.
At step 2006, the head-up display device receives a user interface navigation signal to rotate the selection through the graphical elements in order of depth in response to user actuation of a navigation control. At step 2008, the head-up display device performs an action associated with the currently selected graphical element in response to user actuation of the user input device.
At step 2102, the head-up display device detects a current action of the user. At step 2104, the head-up display device determines a priority score of each graphical element based on the current action of the user. For example, determining (or calculating) the priority score may be further based on: actively extracting input from the user, interpreting the user's intention, resolving ambiguity between competing interpretations, requesting and receiving clarifying information if necessary, and performing (i.e., initiating) actions based on the distinguished intent. Further, determining the priority score may be carried out, or assisted, by an intelligent automated assistant configured to carry out any or all of the aforementioned actions.
At step 2106, the head-up display device orders each graphical element according to the determined priority scores. At step 2108, the head-up display device arranges each plane of the display on a monotonic scale (e.g., a logarithmic scale) of the priority scores. At step 2110, a waiting period may be initiated before process 2100 reverts to step 2102. If the waiting period is not initiated, process 2100 may revert to step 2102 immediately, or process 2100 may end.
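By way of illustration only, the following sketch shows one possible logarithmic mapping from priority rank to perceived depth, consistent with the monotonic scale described at step 2108; the constants are arbitrary.

```python
import math

def depth_for_rank(rank: int, near_depth: float = 2.0, scale: float = 1.5) -> float:
    """Map priority rank (0 = highest priority) to a perceived depth on a monotonic
    logarithmic scale: higher-priority planes sit nearer to the user, and
    lower-priority planes are spaced progressively further away."""
    return near_depth + scale * math.log1p(rank)

# rank 0 -> 2.00 m, rank 1 -> ~3.04 m, rank 2 -> ~3.65 m, rank 3 -> ~4.08 m
```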
At step 2202, the head-up display device provides haptic feedback in response to the received user interface navigation signal between each plane of the head-up display. In some examples, the haptic feedback is provided by at least one of vibration, force feedback, air vortex rings, or ultrasound. The majority of vibration-based haptics use a type of eccentric rotating mass actuator, consisting of an unbalanced weight attached to a motor shaft. Force feedback devices typically use motors to manipulate the movement of an item held by the user. Air vortex rings are commonly donut-shaped air pockets made up of concentrated gusts of air. Focused ultrasound beams are often used to create a localized sense of pressure on a finger without touching any physical object.
At step 2204, the head-up display device retrieves a haptic feedback profile of the user. At step 2206, the head-up display device displays a haptic feedback control user interface. At step 2208, the head-up display device updates the feedback profile of the user based on a selection from the control user interface and the context of the first plane.
At step 2302, the head-up display device adjusts an intensity parameter. At step 2304, the head-up display device adjusts a density parameter. At step 2306, the head-up display device adjusts a sharpness parameter.
User device 2402 may include a head-up display 2412 and a speaker 2414 to display content visually and audibly. In addition, to interact with a user, user device 2402 includes a user interface 2416 (which may be used to interact with the plurality of graphical elements 100 disclosed herein). The user interface 2416 may include a scroll wheel, a physical button, a switch, a touchpad, a direct-drive motor, or a trigger. The user interface 2416 is connected to the I/O path 2406 and the control circuitry 2404.
Control circuitry 2404 may be based on any suitable processing circuitry such as processing circuitry 2408. As referred to herein, processing circuitry should be understood to mean circuitry based on one or more microprocessors, microcontrollers, digital signal processors, programmable logic devices, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), etc., and may include a multi-core processor (e.g., dual-core, quad-core, hexa-core, or any suitable number of cores), a cloud-based compute unit, or even a supercomputer. In some examples, processing circuitry may be distributed across multiple separate processors or processing units, for example, multiple of the same type of processing unit (e.g., two Intel Core i7 processors) or multiple different processors (e.g., an Intel Core i5 processor and an Intel Core i9 processor).
A memory may be an electronic storage device provided as storage 2410, which is part of control circuitry 2404. Storage 2410 may store instructions that, when executed by processing circuitry 2408, perform the processes described herein. As referred to herein, the phrase “electronic storage device” or “storage device” should be understood to mean any device for storing electronic data, computer software, or firmware, such as random-access memory, read-only memory, hard drives, solid-state devices, quantum storage devices, or any other suitable fixed or removable storage devices, and/or any combination of the same. Nonvolatile memory may also be used (e.g., to launch a boot-up routine and other instructions). The user device 2402 may be a smartphone, a tablet, an e-reader, a laptop, a smart TV, etc.
Computing configuration 2400 may also include a communication network 2418 and a server device 2420. The user device 2402 may be coupled to the communication network 2418 to communicate with the server device 2420. The communication network 2418 may be one or more networks including the Internet, a mobile phone network, mobile voice or data network (e.g., a BLUETOOTH, Wi-Fi, WiMAX, Zigbee, GSM, UMTS, CDMA, TDMA, 3G, 4G, 4G LTE, 5G, or other wireless transmissions as described by the relevant 802.11 wireless communication protocols), mesh network, peer-to-peer network, cable network, or other types of communication network or combinations of communication networks.
In some examples, server device 2420 may include control circuitry 2422 and an input/output (I/O) path 2424. Control circuitry 2422 may include processing circuitry 2426 and storage 2428, which may be similar to those already discussed in relation to the user device 2402. Server device 2420 may be a content provider for the user device 2402, providing, for example, plane data or information, plane configuration, user haptic feedback profile data, etc.
In some embodiments, the content navigation system comprises the user device 2402, whether the content is being streamed from the server or being retrieved from the storage 2410. Alternatively, the content navigation system is distributed over the user device 2402 and the server device 2420.
In some examples, the user-controlled system device 2520 is coupled to the system controller 2530 and, in some examples, the head-up display 2510 (not shown). In some examples, the user-controlled system device 2520 is adapted to receive a user interface navigation signal, from a user input device, to progress the selection through the graphical elements in order of depth in response to user actuation of a navigation control.
In some examples, the system controller 2530 is communicatively coupled to the head-up display 2510 and the user-controlled system device 2520. In some examples, the system controller 2530 is configured to perform an action associated with the currently selected graphical element in response to user actuation of an activation control of the user input device or the user-controlled system device 2520. In some examples, the system controller 2530 instructs the head-up display to display a plurality of graphical elements, each graphical element being displayed at one of a range of perceived depths on the head-up display, and display an indication that one of the graphical elements is currently selected. In some examples, the system controller 2530 is configured to progress the selection through the graphical elements in order of depth in response to user actuation of a navigation control.
In some examples, the head-up display apparatus may further comprise a transceiver module (not shown) which communicates with a user input device, such as user input device 710 of
The systems and processes discussed above are intended to be illustrative and not limiting. One skilled in the art would appreciate that the actions of the processes discussed herein may be omitted, modified, combined, and/or rearranged, and any additional actions may be performed without departing from the scope of the invention. More generally, the above disclosure is meant to be exemplary and not limiting. Only the claims that follow are meant to set bounds as to what the present disclosure includes. Furthermore, it should be noted that the features and limitations described in any one embodiment may be applied to any other embodiment herein, and flowcharts or examples relating to one embodiment may be combined with any other embodiment appropriately, done in different orders, or done in parallel. In addition, the systems and methods described herein may be performed in real-time. It should also be noted that the systems and/or methods described above may be applied to, or used in accordance with, other systems and/or methods. In this specification, the following terms may be understood given the below explanations:
All of the features disclosed in this specification (including any accompanying claims, abstract, and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive.
Each feature disclosed in this specification (including any accompanying claims, abstract, and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.
The invention is not restricted to the details of any foregoing embodiments. The invention extends to any novel one, or any novel combination, of the features disclosed in this specification (including any accompanying claims, abstract, and drawings), or to any novel one, or any novel combination, of the steps of any method or process so disclosed. The claims should not be construed to cover merely the foregoing embodiments, but also any embodiments which fall within the scope of the claims.
Throughout the description and claims of this specification, the words “comprise” and “contain” and variations of them mean “including but not limited to”, and they are not intended to (and do not) exclude other moieties, additives, components, integers or steps. Throughout the description and claims of this specification, the singular encompasses the plural unless the context otherwise requires. In particular, where the indefinite article is used, the specification is to be understood as contemplating plurality as well as singularity, unless the context requires otherwise.
The reader's attention is directed to all papers and documents which are filed concurrently with or previous to this specification in connection with this application and which are open to public inspection with this specification, and the contents of all such papers and documents are incorporated herein by reference.