The present invention relates generally to a system and method of controlling auxiliary vehicle functions and, more particularly, to a system and method of displaying contextual control images indicating respective actions to be performed in response to a user input received in a respective context of one or more auxiliary vehicle functions.
Current vehicle designs enable vehicles to perform a variety of auxiliary vehicle functions, such as functions that do not directly control actuation of the vehicle for movement. Such auxiliary vehicle functions include audio playback, accepting or declining a phone call, receiving and transmitting audio during a phone call, presenting and dismissing indications of alerts and/or warnings, repositioning one or more components (e.g., seats, mirrors, camera-emulated mirrors, steering wheel, etc.) of the vehicle, and/or activating a driver assistance function (e.g., adaptive cruise control and other autonomous or semi-autonomous driving modes). In some embodiments, one or more of these auxiliary functions are controlled by the user via buttons, switches, soft buttons displayed on a display of the vehicle, and/or voice controls.
Some auxiliary functions are controlled using buttons or switches mounted to a steering wheel of the vehicle. Although buttons and switches mounted to the steering wheel can be convenient to the driver while operating the vehicle, there is a limited amount of space for buttons and switches on one steering wheel, thereby limiting the number of functions that can be controlled by buttons and switches on the steering wheel.
In some embodiments, a vehicle includes one or more touch pads on the vehicle steering wheel.
In accordance with one embodiment, in response to receiving a user input at one of the touch pads, the vehicle performs a corresponding action depending on the region of the touch pad in which the input was received and the context in which the vehicle is currently operating. For example, in one configuration (which may be designated as a default configuration), one of the touch pads optionally controls audio content playing on a speaker of the vehicle and one of the touch pads optionally activates a driver assistance mode. When a phone call (or other form of communication) is received at a mobile phone (or other communication device) in communication with the vehicle, the user is able to enter an input at one of the touch pads for accepting or rejecting the phone call. During a communication session, such as a phone call or other type of real-time session (including video sessions), the user is able to enter an input at one of the touch pads for volume up, volume down, mute, or end call. In response to the presentation of a vehicle warning, the user is able to enter an input at one of the touch pads for dismissing the vehicle warning. While navigating a vehicle settings menu, the user can use the touch pad to scroll the menu, return to a higher level of the menu hierarchy, and make a selection. When adjusting a vehicle setting (e.g., the position of a component of the vehicle, such as the steering wheel), the user is able to use the touch pad to control the position of the component and confirm the new position. While operating in a driver assistance mode, the user is able to enter an input to control the parameters of the mode, such as increasing or decreasing following distance or increasing or decreasing maximum speed while operating the vehicle with adaptive cruise control.
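The context-dependent behavior described above can be sketched as a lookup from (operating context, touch pad region) to a named action. This is a minimal illustration only; the context names, region names, and action strings below are hypothetical and do not correspond to any specific embodiment.

```python
from enum import Enum, auto

class Context(Enum):
    # Hypothetical operating contexts of the vehicle
    DEFAULT = auto()
    INCOMING_CALL = auto()
    ACTIVE_CALL = auto()
    VEHICLE_WARNING = auto()

class Region(Enum):
    # Hypothetical input regions of one touch pad
    LEFT = auto()
    RIGHT = auto()
    TOP = auto()
    BOTTOM = auto()
    CENTER = auto()

# One physical touch pad controls different auxiliary functions
# depending on the vehicle's current operating context.
ACTION_TABLE = {
    (Context.DEFAULT, Region.TOP): "volume_up",
    (Context.DEFAULT, Region.BOTTOM): "volume_down",
    (Context.INCOMING_CALL, Region.RIGHT): "accept_call",
    (Context.INCOMING_CALL, Region.LEFT): "decline_call",
    (Context.ACTIVE_CALL, Region.TOP): "volume_up",
    (Context.ACTIVE_CALL, Region.BOTTOM): "volume_down",
    (Context.ACTIVE_CALL, Region.LEFT): "mute",
    (Context.ACTIVE_CALL, Region.RIGHT): "end_call",
    (Context.VEHICLE_WARNING, Region.CENTER): "dismiss_warning",
}

def resolve_action(context, region):
    """Return the action for a touch input, or None if the input is not
    valid in the current context."""
    return ACTION_TABLE.get((context, region))
```

In this sketch, an input that has no meaning in the current context simply resolves to no action, which matches the notion of a "valid user input" used later in the description.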
In accordance with another embodiment, when the vehicle is in a context in which a non-default input can be entered at the touch pads, the vehicle displays, on a display screen of the vehicle (e.g., the HUD), a contextual control image indicating which actions will be performed in response to various inputs entered at one or more of the touch pads on the steering wheel. In this way, the steering wheel can accept inputs for a variety of auxiliary functions with a reduced number of buttons.
In the following description, references are made to the accompanying drawings that form a part hereof, and in which it is shown by way of illustration specific examples that can be practiced. It is to be understood that other examples can be used and structural changes can be made without departing from the scope of the disclosed examples. Further, in the context of this disclosure, “autonomous driving” (or the like) can refer to autonomous driving, partially autonomous driving, and/or driver assistance systems.
Vehicle control system 100 further includes an on-board computer 110 that is coupled to the cameras 106, sensors 107, GNSS receiver 108, map information interface 105, and communication system 150 and that is capable of receiving outputs from the sensors 107, the GNSS receiver 108, map information interface 105, and communication system 150. The on-board computer 110 is capable of transmitting information to the AR driving glasses to cause the AR driving glasses to display one or more images, generate one or more tactile alerts, change lens tint, and/or change lens focus. Additional functions of the AR glasses controlled by the on-board computer 110 are possible and are contemplated as within the scope of this disclosure. On-board computer 110 includes one or more of storage 112, memory 116, and a processor 114. Processor 114 can perform the methods described below with reference to
In some embodiments, the vehicle control system 100 is connected to (e.g., via controller 120) one or more actuator systems 130 in the vehicle and one or more indicator systems 140 in the vehicle. The one or more actuator systems 130 can include, but are not limited to, a motor 131 or engine 132, battery system 133, transmission gearing 134, suspension setup 135, brakes 136, steering system 137 and door system 138. The vehicle control system 100 controls, via controller 120, one or more of these actuator systems 130 during vehicle operation; for example, to control the vehicle during fully or partially autonomous driving operations, using the motor 131 or engine 132, battery system 133, transmission gearing 134, suspension setup 135, brakes 136 and/or steering system 137, etc. Actuator systems 130 can also include sensors that send dead reckoning information (e.g., steering information, speed information, etc.) to on-board computer 110 (e.g., via controller 120) to determine the vehicle's location and orientation. The one or more indicator systems 140 can include, but are not limited to, one or more speakers 141 in the vehicle (e.g., as part of an entertainment system in the vehicle), one or more lights 142 in or on the vehicle, one or more displays 143 in the vehicle (e.g., as part of a control or entertainment system in the vehicle) and one or more tactile actuators 144 in the vehicle (e.g., as part of a steering wheel or seat in the vehicle). The vehicle control system 100 controls, via controller 120, one or more of these indicator systems 140 to provide visual and/or audio indications, such as an indication that a driver will need to take control of the vehicle, for example.
In some embodiments, the first touch pad 210 and the second touch pad 220 comprise touch sensors (e.g., capacitive touch sensors, resistive touch sensors, piezoelectric touch sensors), buttons, or other suitable mechanisms for detecting user input at each of a plurality of input regions 211-219 and 221-229. Rather than assigning a specific function to each input region 211-219 and 221-229 of the touch pads 210 and 220, in some embodiments, the operation associated with each input region 211-219 and 221-229 changes depending on the operation context of the vehicle.
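Before an input can be mapped to an operation, the touch sensor's raw contact point must be resolved to one of the plurality of input regions. The sketch below assumes, purely for illustration, that the nine regions of a touch pad are arranged as a 3x3 grid over normalized coordinates; the actual layout of input regions 211-219 and 221-229 could differ.

```python
def classify_region(x, y):
    """Map a normalized touch coordinate (0.0-1.0 on each axis, origin at
    the top-left) to one of nine input regions laid out as a 3x3 grid.
    The grid layout and region names are assumptions for illustration."""
    col = min(int(x * 3), 2)   # 0 = left column, 1 = center, 2 = right
    row = min(int(y * 3), 2)   # 0 = top row, 1 = middle, 2 = bottom
    names = [
        ["top_left", "top", "top_right"],
        ["left", "center", "right"],
        ["bottom_left", "bottom", "bottom_right"],
    ]
    return names[row][col]
```

The `min(..., 2)` clamp keeps a touch at the exact far edge (coordinate 1.0) inside the last row or column rather than indexing out of range.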
For example, when an incoming call is received at a mobile phone in communication with the vehicle (e.g., via communication system 150), inputs received at touch pad 210 or 220 perform operations related to the incoming phone call (e.g., accept the call or decline the call), as will be described in more detail below with reference to
In some embodiments, touch pad 210 or 220 is able to operate to control content displayed on the HUD 251. For example, in response to receiving a vehicle alert (e.g., an indication of a system malfunction or required maintenance), the HUD 251 optionally displays a visual indication of the alert and a contextual control image indicating that one of the touch pads 210 or 220 is configured to accept an input to dismiss the alert, as will be described below with reference to
In some embodiments, touch pads 210 and 220 are associated with default operations when the vehicle is not performing an operation—such as receiving or participating in a phone call, receiving an alert, navigating a menu, repositioning one or more components, or other situations that use one or both touch pads 210 and 220 to receive user input—that causes the touch pads 210 and 220 to be reconfigured. Touch pad 210 optionally controls media content (e.g., accepts inputs for increasing the volume, decreasing the volume, skipping backwards, or skipping ahead) playing on a speaker (e.g., speaker 141) and/or controls a voice input user interface (e.g., a digital assistant) of the vehicle by default, as will be described in more detail below with reference to
If the vehicle 200 does not detect 406 a valid user input (e.g., an input at an input region of the touch pad 210 associated with one of the operations related to the incoming phone call) within some amount of time (e.g., a predetermined amount of time, such as 2 seconds or 5 seconds), or when the incoming call ceases (e.g., the call goes to voicemail or the caller hangs up), the vehicle 200 ceases 408 to display the indication of the incoming call and the contextual control image.
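The dismissal behavior above can be modeled as a small state holder that hides the incoming-call prompt when either the timeout elapses or the call ends. This is a sketch under stated assumptions: the class name, the 5-second timeout value, and the use of a monotonic clock are all hypothetical.

```python
import time

CALL_PROMPT_TIMEOUT_S = 5.0  # assumed value; the text gives 2 s and 5 s as examples

class CallPrompt:
    """Tracks whether the incoming-call indication and its contextual
    control image should still be displayed."""

    def __init__(self, now=None):
        # time.monotonic() is immune to wall-clock adjustments,
        # which matters for timeout logic.
        self.shown_at = now if now is not None else time.monotonic()
        self.call_active = True  # set False when the call goes to voicemail

    def should_display(self, now=None):
        now = now if now is not None else time.monotonic()
        timed_out = (now - self.shown_at) >= CALL_PROMPT_TIMEOUT_S
        # Cease display when the caller hangs up / the call goes to
        # voicemail, or when the timeout elapses with no valid input.
        return self.call_active and not timed_out
```

The optional `now` parameter makes the timeout logic testable without real waiting; a production implementation would likely be event-driven instead of polled.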
If the vehicle 200 detects 410 a user input for declining the call (e.g., an input at the left input region 217 of the touch pad 210, as shown in
If the vehicle 200 detects 412 a user input for accepting the call (e.g., an input at the right input region 213 of the touch pad 210, as shown in
As shown in
As shown in
Vehicle control menu 710 optionally includes a selected item 712 (e.g., “Steering wheel”). While vehicle control menu 710 is displayed on the HUD 251, touch pad 210 is optionally configured to accept user input for navigating the menu. For example, an operation for selecting a menu item is optionally associated with a right input region 213 of the touch pad 210, an operation for moving backwards in the menu hierarchy is optionally associated with the left input region 217 of the touch pad 210, an operation for scrolling down is optionally associated with the bottom region 215 of the touch pad, and an operation for scrolling up is optionally associated with the top region 219 of the touch pad 210. Other operations and configurations are possible. As shown in
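The menu navigation just described (scroll with the top/bottom regions, select with the right region, move back in the hierarchy with the left region) can be sketched as a stack of nested menus. The menu contents and class name below are hypothetical illustrations, not the actual vehicle control menu 710.

```python
class MenuNavigator:
    """Minimal sketch of touch-pad menu navigation: top/bottom regions
    scroll, the right region selects, the left region moves back up the
    menu hierarchy."""

    def __init__(self, root):
        self.stack = [root]   # path of nested submenus entered so far
        self.index = 0        # highlighted item in the current menu

    def current_menu(self):
        return self.stack[-1]

    def scroll(self, delta):
        # Bottom region scrolls down (delta=+1), top region up (delta=-1).
        items = list(self.current_menu())
        self.index = max(0, min(self.index + delta, len(items) - 1))

    def select(self):
        items = list(self.current_menu())
        value = self.current_menu()[items[self.index]]
        if isinstance(value, dict):   # submenu: descend one level
            self.stack.append(value)
            self.index = 0
            return None
        return value                  # leaf: the selected setting

    def back(self):
        if len(self.stack) > 1:       # left region: up one level
            self.stack.pop()
            self.index = 0
```

A nested dict stands in for the menu hierarchy here; selecting a dict descends into it, while selecting a leaf returns the chosen setting for the vehicle to act on.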
Other operations and configurations are possible. For example, other vehicle 200 components, such as one or more mirrors or camera-emulated mirrors (e.g., side mirrors or side cameras, rear-view mirror or camera), one or more seats (e.g., the driver's seat, the passenger's seat, or one or more back seats), and other components can be adjusted in a similar manner using the touch pad 210. Other settings, such as display (e.g., HUD 251, instrument cluster 230, etc.) brightness, climate control, clock, and audio playback (e.g., volume, balance, fade, bass, etc.), can also be adjusted by navigating the vehicle control menu 710 and operating the touch pad 210.
In some embodiments, after a predetermined amount of time has passed since the last user input at touch pad 220, the vehicle ceases to display the contextual control image 1160 while remaining in the adaptive cruise control mode, as shown in
If the vehicle 200 does not detect 1206 a valid user input (e.g., an input at an input region of the touch pad 220 associated with one of the operations related to the driver assistance operations) within some amount of time (e.g., 2 seconds, 5 seconds, etc.), the vehicle 200 ceases 1208 to display the contextual control image 1150.
If the vehicle 200 detects 1206 a valid user input, the vehicle 200 determines which input region of the touch pad 220 detected the user input and whether the input matches a predetermined characteristic (e.g., a press, a force press, etc.), and performs the operation in accordance with the input region where the input was detected and the characteristic of the input. For example, if the vehicle detects a press at the center input region 221 of the touch pad 220, adaptive cruise control is optionally activated, and if the vehicle detects a force press at the center input region 221 of the touch pad 220, AutoDrive is optionally activated. After activating the respective driver assistance function, the vehicle 200 displays 1210, on the HUD, a contextual control image 1160, as shown in
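Dispatching on the characteristic of the input, as in the press/force-press example above, can be sketched as follows. The force threshold and action names are assumptions for illustration; a real system might classify a force press from contact area or capacitance profiles rather than a single force value.

```python
FORCE_PRESS_THRESHOLD_N = 4.0  # assumed threshold, in newtons

def classify_press(force_n):
    """Distinguish a regular press from a force press by measured force."""
    return "force_press" if force_n >= FORCE_PRESS_THRESHOLD_N else "press"

# Center-region dispatch mirroring the example in the text: a press
# activates adaptive cruise control, while a force press activates the
# more autonomous "AutoDrive" mode.
CENTER_ACTIONS = {
    "press": "adaptive_cruise_control",
    "force_press": "auto_drive",
}

def center_region_action(force_n):
    """Resolve an input at the center region to a driver assistance
    function based on the input's characteristic."""
    return CENTER_ACTIONS[classify_press(force_n)]
```

The same pattern extends to other characteristics the description mentions (e.g., taps versus presses) by widening the classification and the dispatch table.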
In some embodiments, when a user input at touch pad 210 or 220 is received while the contextual control image is not displayed, the vehicle 200 optionally displays the contextual control image without performing the associated operation. If the vehicle 200 detects a user input while the contextual control image is displayed, the vehicle optionally performs the associated operation. In other words, in some embodiments, the user enters a first user input at the touch pad 210 or 220 to present the contextual control image without performing the associated operation, and then causes the vehicle 200 to perform the operation with a further user input received at the touch pad 210 or 220 while the contextual control image is displayed. This manner of forgoing performing operations in response to the touch pad 210 or 220 unless the contextual control image is displayed prevents accidental inputs from triggering unintended operations.
In some embodiments, the vehicle 200 performs the associated operation in response to an input at the touch pad 210 or 220 even when no contextual control image is being displayed. This manner of performing the operation in response to the touch pad 210 or 220 in the absence of the contextual control image reduces the number of user inputs required to perform an operation using the touch pad 210 or 220.
In some embodiments, the user is able to select a vehicle setting to control whether or not the vehicle 200 performs an operation associated with the touch pad 210 or 220 when a user input is received at the touch pad 210 or 220 while the contextual control image is not displayed. These settings can optionally be set differently for each type of vehicle operation that can be controlled at touch pad 210 or 220. For example, the user may prefer to require two inputs to activate adaptive cruise control, but may wish to be able to control the media player with one input.
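The per-operation setting described above can be sketched as a gate that consumes the first input to reveal the contextual control image only for contexts configured to require it. The context keys, setting names, and class name are hypothetical.

```python
# Hypothetical per-context setting: True means the first input only reveals
# the contextual control image, and a second input performs the operation.
REQUIRE_PREVIEW = {
    "driver_assistance": True,   # e.g., two inputs to activate adaptive cruise control
    "media": False,              # control the media player with one input
}

class TouchPadGate:
    """Decides whether a touch pad input performs its operation or merely
    displays the contextual control image first."""

    def __init__(self, settings=None):
        self.settings = settings if settings is not None else REQUIRE_PREVIEW
        self.image_visible = False

    def handle_input(self, context, action):
        """Return the action to perform, or None if this input only caused
        the contextual control image to be displayed."""
        if self.settings.get(context, False) and not self.image_visible:
            self.image_visible = True   # first input: just show the image
            return None
        self.image_visible = True
        return action
```

A fuller implementation would also clear `image_visible` on the display timeout discussed earlier, so that a stale image does not leave the gate open indefinitely.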
Therefore, according to the above, some embodiments of the disclosure are related to a vehicle comprising: a first touch pad comprising a plurality of input regions; a heads-up display (HUD), the HUD comprising a projector and a windshield of the vehicle; one or more processors operatively coupled to the first touch pad and the HUD; a memory including instructions, which when executed by the one or more processors, cause the one or more processors to perform a method comprising: receiving an indication of a first user input at a first input region of the first touch pad; in response to receiving the indication of the first user input, performing a first operation in accordance with the first input region of the first touch pad; performing a second operation; in response to the second operation, displaying, on the HUD, a first contextual control image, the first contextual control image comprising a representation of the first touch pad and an image indicating a third operation associated with the first input region of the first touch pad, the third operation different from the first operation; receiving an indication of a second user input at the first input region of the first touch pad; and in response to receiving the indication of the second user input, performing the third operation in accordance with the first input region of the first touch pad.
Additionally or alternatively, in some embodiments, the method further comprises: receiving an indication of an incoming phone call, in response to receiving the indication of the incoming phone call, displaying, on the HUD, a second contextual control image comprising an image indicating an answering operation and an image indicating a declining operation, wherein the image indicating a respective one of the answering operation and the declining operation is visually associated with a respective input region of the first touch pad, receiving an indication of a third user input at the respective input region of the first touch pad, and in response to the third user input, performing the respective one of the answering operation and the declining operation in accordance with the respective input region of the first touch pad. Additionally or alternatively, in some embodiments, the method further comprises: during a phone call of a phone operatively coupled to the one or more processors: displaying, on the HUD, a second contextual control image comprising an image indicating a volume up operation, an image indicating a volume down operation, an image indicating a mute call operation, and an image indicating an end call operation, wherein the image indicating a respective one of the volume up operation, the volume down operation, the mute call operation, and the end call operation is visually associated with a respective input region of the first touch pad, receiving an indication of a third user input at the respective input region of the first touch pad, and in response to the third user input, performing the respective one of the volume up operation, the volume down operation, the mute call operation, and the end call operation in accordance with the respective input region of the first touch pad. 
Additionally or alternatively, in some embodiments, the method further comprises: while playing audio content on a speaker operatively coupled to the one or more processors: receiving an indication of a third user input at a respective input region of the first touch pad, and in response to the third user input, performing a respective operation associated with the respective input region of the first touch pad, the respective operation being one of a volume up operation, a volume down operation, a skip ahead operation, and a skip backwards operation, wherein the HUD does not display an image indicating the volume up operation, an image indicating the volume down operation, an image indicating the skip ahead operation, or an image indicating the skip backwards operation while the indication of the third user input is received. Additionally or alternatively, in some embodiments, the method further comprises: displaying, on the HUD, a plurality of menu items, each menu item associated with a setting of the vehicle, while displaying the plurality of menu items, receiving an indication of a third user input at a respective input region of the first touch pad, and in response to the third user input, performing a respective operation associated with the respective input region of the first touch pad, the respective operation being one of a scrolling operation, a navigate backwards operation, and a selection operation.
Additionally or alternatively, in some embodiments, the method further comprises: receiving a third user input for adjusting a position of a component of the vehicle, the component being one of a steering wheel, a mirror, and a seat, in response to receiving the third user input, displaying, on the HUD, a second contextual control image comprising an image indicating a first adjustment operation on the component of the vehicle, an image indicating a second adjustment operation on the component of the vehicle, and an image indicating a confirm operation, wherein the image indicating a respective one of the first adjustment operation, the second adjustment operation, and the confirm operation is visually associated with a respective input region of the first touch pad, receiving an indication of a fourth user input at the respective input region of the first touch pad, and in response to the fourth user input, performing the respective one of the first adjustment operation, the second adjustment operation, and the confirm operation in accordance with the respective input region of the first touch pad.
Additionally or alternatively, in some embodiments, the vehicle further comprises a second display, and the method further comprises: receiving an indication of a vehicle warning, in response to receiving the indication of the vehicle warning: displaying, on the second display, a first visual indication of the vehicle warning, and concurrently displaying, on the HUD, a second visual indication of the vehicle warning and a second contextual control image comprising an image indicating a dismiss operation, wherein the image indicating the dismiss operation is visually associated with a respective input region of the first touch pad, receiving an indication of a third user input at the respective input region of the first touch pad, and in response to the third user input, ceasing to display the second visual indication of the vehicle warning and the second contextual control image without ceasing to display the first indication of the vehicle warning. Additionally or alternatively, in some embodiments, the vehicle further comprises a second touch pad comprising a plurality of input regions, wherein the method further comprises: receiving an indication of a third user input at an input region of the second touch pad, in response to receiving the indication of the third user input, displaying, on the HUD, a second contextual control image comprising a representation of the second touch pad and an image indicating a first driver assistance function and an image indicating a second driver assistance function, wherein a respective one of the first driver assistance function and the second driver assistance function is associated with a first characteristic of an input at the input region of the second touch pad, receiving an indication of a fourth user input at the input region of the second touch pad, the fourth user input having the first characteristic, and in response to receiving the fourth user input, performing the respective one of the first driver assistance 
function and the second driver assistance function in accordance with the first characteristic of the fourth user input. Additionally or alternatively, in some embodiments, in response to receiving the fourth user input: the vehicle enters an adaptive cruise control driving mode in accordance with the first characteristic of the fourth user input, and the method further comprises: displaying, on the HUD, a third contextual control image comprising a representation of the second touch pad and an image indicating an increase speed operation, an image indicating a decrease speed operation, an image indicating an increase following distance operation, and an image indicating a decrease following distance operation, wherein the image indicating a respective one of the increase speed operation, the decrease speed operation, the increase following distance operation, and the decrease following distance operation is visually associated with a respective input region of the second touch pad, receiving an indication of a fifth user input at the respective input region of the second touch pad, and in response to the fifth user input, performing the respective one of the increase speed operation, the decrease speed operation, the increase following distance operation, and the decrease following distance operation in accordance with the respective input region of the second touch pad.
Therefore, according to the above, some embodiments of the disclosure are related to a vehicle comprising: a first touch pad comprising a plurality of input regions; a heads-up display (HUD), the HUD comprising a projector and a windshield of the vehicle; one or more processors operatively coupled to the first touch pad and the HUD; a memory including instructions, which when executed by the one or more processors, cause the one or more processors to perform a method comprising: receiving an indication of a first user input at a first input region of the first touch pad; in response to receiving the indication of the first user input, performing a first operation in accordance with the first input region of the first touch pad; performing a second operation; in response to the second operation, displaying, on the HUD, a first contextual control image, the first contextual control image comprising a representation of the first touch pad and an image indicating a third operation associated with the first input region of the first touch pad, the third operation different from the first operation; receiving an indication of a second user input at the first input region of the first touch pad; and in response to receiving the indication of the second user input, performing the third operation in accordance with the input region of the first touch pad. Additionally or alternatively, in some embodiments, the second operation comprises receiving an indication of an incoming phone call, the first contextual control image comprises an image indicating an answering operation and an image indicating a declining operation, wherein the image indicating a respective one of the answering operation and the declining operation is visually associated with the first input region of the first touch pad, and the first operation is different than receiving the indication of the incoming phone call. 
Additionally or alternatively, in some embodiments, the second operation comprises executing a phone call of a phone operatively coupled to the one or more processors, during the phone call, the first contextual control image comprises an image indicating a volume up operation, an image indicating a volume down operation, an image indicating a mute call operation, and an image indicating an end call operation, wherein the image indicating a respective one of the volume up operation, the volume down operation, the mute call operation, and the end call operation is visually associated with the first input region of the first touch pad, and the first operation is different than executing the phone call. Additionally or alternatively, in some embodiments, the first operation comprises one of a skip ahead operation of audio content playing on a speaker operatively coupled to the one or more processors, a skip backwards operation of the audio content playing on the speaker, a volume up operation, and a volume down operation, and the HUD does not display an image indicating the volume up operation, an image indicating the volume down operation, an image indicating the skip ahead operation, or an image indicating the skip backwards operation while the indication of the first user input is received. Additionally or alternatively, in some embodiments, the second operation comprises displaying, on the HUD, a plurality of menu items, each menu item associated with a setting of the vehicle, the first contextual control image comprises an image indicating a scrolling operation, an image indicating a navigate backwards operation, and an image indicating a selection operation, wherein the image indicating a respective one of the scrolling operation, the navigate backwards operation, and the selection operation is visually associated with the first input region of the first touch pad, and the first operation is different than displaying, on the HUD, the plurality of menu items. 
Additionally or alternatively, in some embodiments, the second operation comprises adjusting a position of a component of the vehicle, the component being one of a steering wheel, a mirror, and a seat, the first contextual control image comprises an image indicating a first adjustment operation on the component of the vehicle, an image indicating a second adjustment operation on the component of the vehicle, and an image indicating a confirm operation, wherein the image indicating a respective one of the first adjustment operation, the second adjustment operation, and the confirm operation is visually associated with a respective input region of the first touch pad, and the first operation is different than adjusting the position of the component of the vehicle. Additionally or alternatively, in some embodiments, the vehicle further comprises a second display, wherein: the second operation comprises receiving an indication of a vehicle warning, in response to receiving the indication of the vehicle warning the vehicle displays, on the second display, a first visual indication of the vehicle warning, in response to receiving the indication of the vehicle warning, the vehicle concurrently displays, on the HUD, a second visual indication of the vehicle warning and the first contextual control image, the first contextual control image comprises an image indicating a dismiss operation, wherein the image indicating the dismiss operation is visually associated with the first input region of the first touch pad, in response to the second user input, the vehicle ceases to display the second visual indication of the vehicle warning and the first contextual control image while continuing to display the first visual indication of the vehicle warning, and the first operation is different than receiving the indication of the vehicle warning. 
Additionally or alternatively, in some embodiments, the vehicle further comprises a second touch pad comprising a plurality of input regions, wherein the method further comprises: receiving an indication of a third user input at an input region of the second touch pad, in response to receiving the indication of the third user input, displaying, on the HUD, a second contextual control image comprising a representation of the second touch pad and an image indicating a first driver assistance function and an image indicating a second driver assistance function, wherein a respective one of the first driver assistance function and the second driver assistance function is associated with a first characteristic of an input at the input region of the second touch pad, receiving an indication of a fourth user input at the input region of the second touch pad, the fourth user input having the first characteristic, and in response to receiving the fourth user input, performing the respective one of the first driver assistance function and the second driver assistance function in accordance with the first characteristic of the fourth user input. 
Additionally or alternatively, in some embodiments, in response to receiving the fourth user input: the vehicle enters an adaptive cruise control driving mode in accordance with the first characteristic of the fourth user input, and the method further comprises: displaying, on the HUD, a third contextual control image comprising a representation of the second touch pad and an image indicating an increase speed operation, an image indicating a decrease speed operation, an image indicating an alter following distance operation, an image indicating an initiate third driver assistance function operation, and an image indicating a cease adaptive cruise control driving mode operation, wherein the image indicating a respective one of the increase speed operation, the decrease speed operation, the alter following distance operation, the initiate third driver assistance function operation, and the cease adaptive cruise control driving mode operation is visually associated with a respective input region of the second touch pad, receiving an indication of a fifth user input at the respective input region of the second touch pad, and in response to the fifth user input, performing the respective one of the increase speed operation, the decrease speed operation, the alter following distance operation, the initiate third driver assistance function operation, and the cease adaptive cruise control driving mode operation in accordance with the respective input region of the second touch pad.
Some embodiments of the disclosure are related to a non-transitory computer-readable medium including instructions, which when executed by one or more processors, cause the one or more processors to perform a method comprising: receiving an indication of a first user input at an input region of a first touch pad; in response to receiving the indication of the first user input, performing a first operation in accordance with the input region of the first touch pad; performing a second operation; in response to the second operation, displaying, on a heads-up display (HUD), a first contextual control image, the first contextual control image comprising a representation of the first touch pad and an image indicating a third operation associated with the input region of the first touch pad, the third operation different from the first operation; receiving an indication of a second user input at the input region of the first touch pad; and in response to receiving the indication of the second user input, performing the third operation in accordance with the input region of the first touch pad. Additionally or alternatively, in some embodiments, the second operation comprises receiving an indication of an incoming phone call, the first contextual control image comprises an image indicating an answering operation and an image indicating a declining operation, wherein the image indicating a respective one of the answering operation and the declining operation is visually associated with the input region of the first touch pad, and the first operation is different than receiving the indication of the incoming phone call.
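The context-dependent remapping described above, in which the same input region triggers a different operation after a second operation (such as an incoming phone call) changes the active context, might be sketched as follows. This is a minimal illustration only; the class, context names, region names, and operation names are all assumptions, not part of the disclosure:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of context-dependent touch-pad mapping: the same
# input region performs a first operation in the default context and a
# different (third) operation after the context changes. All names are
# illustrative assumptions.


@dataclass
class TouchPadController:
    # Maps context name -> {input region: operation name}
    context_maps: dict = field(default_factory=dict)
    active_context: str = "default"

    def set_context(self, context: str) -> None:
        """Entered when a 'second operation' (e.g., an incoming call)
        occurs; the HUD would redraw the contextual control image here."""
        self.active_context = context

    def handle_input(self, region: str) -> str:
        """Return the operation for the region in the active context."""
        mapping = self.context_maps.get(self.active_context, {})
        return mapping.get(region, "no-op")


controller = TouchPadController(context_maps={
    "default": {"upper-left": "next_track"},
    "incoming_call": {"upper-left": "answer", "upper-right": "decline"},
})

print(controller.handle_input("upper-left"))   # default context: first operation
controller.set_context("incoming_call")        # second operation occurs
print(controller.handle_input("upper-left"))   # same region, third operation
```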
Additionally or alternatively, in some embodiments, the second operation comprises executing a phone call of a phone operatively coupled to the one or more processors, during the phone call, the first contextual control image comprises an image indicating a volume up operation, an image indicating a volume down operation, an image indicating a mute call operation, and an image indicating an end call operation, wherein the image indicating a respective one of the volume up operation, the volume down operation, the mute call operation, and the end call operation is visually associated with the input region of the first touch pad, and the first operation is different than executing the phone call. Additionally or alternatively, in some embodiments, the vehicle further comprises a second display, wherein: the second operation comprises receiving an indication of a vehicle warning, in response to receiving the indication of the vehicle warning, the vehicle displays, on the second display, a first visual indication of the vehicle warning, in response to receiving the indication of the vehicle warning, the vehicle concurrently displays, on the HUD, a second visual indication of the vehicle warning and the first contextual control image, the first contextual control image comprises an image indicating a dismiss operation, wherein the image indicating the dismiss operation is visually associated with the input region of the first touch pad, in response to the second user input, the vehicle ceases to display the second visual indication of the vehicle warning and the first contextual control image while continuing to display the first visual indication of the vehicle warning, and the first operation is different than receiving the indication of the vehicle warning.
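The vehicle-warning behavior described above, in which dismissing from the HUD leaves the warning visible on the second display, might be sketched as follows. Display and state names are illustrative assumptions, not part of the disclosure:

```python
# Hypothetical sketch: a vehicle warning is shown on both a second
# display (first visual indication) and the HUD (second visual
# indication, with a contextual control image offering a dismiss
# operation). Dismissing removes only the HUD copy; the second-display
# copy persists. All names here are illustrative assumptions.

hud = {"warning": None, "contextual_image": None}
second_display = {"warning": None}


def show_warning(warning: str) -> None:
    second_display["warning"] = warning    # first visual indication
    hud["warning"] = warning               # second visual indication
    hud["contextual_image"] = "dismiss"    # contextual control image


def dismiss_from_hud() -> None:
    """Performed in response to input at the region associated with the
    dismiss image; the second display keeps showing the warning."""
    hud["warning"] = None
    hud["contextual_image"] = None


show_warning("low tire pressure")
dismiss_from_hud()
print(second_display["warning"])  # warning persists on the second display
print(hud["warning"])             # None: HUD indication was dismissed
```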
Additionally or alternatively, in some embodiments, the vehicle further comprises a second touch pad comprising a plurality of input regions, wherein the method further comprises: receiving an indication of a third user input at an input region of the second touch pad, in response to receiving the indication of the third user input, displaying, on the HUD, a second contextual control image comprising a representation of the second touch pad and an image indicating a first driver assistance function and an image indicating a second driver assistance function, wherein a respective one of the first driver assistance function and the second driver assistance function is associated with a first characteristic of an input at the input region of the second touch pad, receiving an indication of a fourth user input at the input region of the second touch pad, the fourth user input having the first characteristic, and in response to receiving the fourth user input, performing the respective one of the first driver assistance function and the second driver assistance function in accordance with the first characteristic of the fourth user input.
Some embodiments of the disclosure are related to a method comprising receiving an indication of a first user input at an input region of a first touch pad; in response to receiving the indication of the first user input, performing a first operation in accordance with the input region of the first touch pad; performing a second operation; in response to the second operation, displaying, on a heads-up display (HUD), a first contextual control image, the first contextual control image comprising a representation of the first touch pad and an image indicating a third operation associated with the input region of the first touch pad, the third operation different from the first operation; receiving an indication of a second user input at the input region of the first touch pad; and in response to receiving the indication of the second user input, performing the third operation in accordance with the input region of the first touch pad. Additionally or alternatively, in some embodiments, the second operation comprises receiving an indication of an incoming phone call, the first contextual control image comprises an image indicating an answering operation and an image indicating a declining operation, wherein the image indicating a respective one of the answering operation and the declining operation is visually associated with the input region of the first touch pad, and the first operation is different than receiving the indication of the incoming phone call.
Additionally or alternatively, in some embodiments, the second operation comprises executing a phone call of a phone operatively coupled to the one or more processors, during the phone call, the first contextual control image comprises an image indicating a volume up operation, an image indicating a volume down operation, an image indicating a mute call operation, and an image indicating an end call operation, wherein the image indicating a respective one of the volume up operation, the volume down operation, the mute call operation, and the end call operation is visually associated with the input region of the first touch pad, and the first operation is different than executing the phone call. Additionally or alternatively, in some embodiments, the second operation comprises receiving an indication of a vehicle warning, in response to receiving the indication of the vehicle warning, the vehicle displays, on a second display of the vehicle, a first visual indication of the vehicle warning, in response to receiving the indication of the vehicle warning, the vehicle concurrently displays, on the HUD, a second visual indication of the vehicle warning and the first contextual control image, the first contextual control image comprises an image indicating a dismiss operation, wherein the image indicating the dismiss operation is visually associated with the input region of the first touch pad, in response to the second user input, the vehicle ceases to display the second visual indication of the vehicle warning and the first contextual control image while continuing to display the first visual indication of the vehicle warning, and the first operation is different than receiving the indication of the vehicle warning.
Additionally or alternatively, in some embodiments, the vehicle further comprises a second touch pad comprising a plurality of input regions, wherein the method further comprises: receiving an indication of a third user input at an input region of the second touch pad, in response to receiving the indication of the third user input, displaying, on the HUD, a second contextual control image comprising a representation of the second touch pad and an image indicating a first driver assistance function and an image indicating a second driver assistance function, wherein a respective one of the first driver assistance function and the second driver assistance function is associated with a first characteristic of an input at the input region of the second touch pad, receiving an indication of a fourth user input at the input region of the second touch pad, the fourth user input having the first characteristic, and in response to receiving the fourth user input, performing the respective one of the first driver assistance function and the second driver assistance function in accordance with the first characteristic of the fourth user input.
Some embodiments of the disclosure are related to means for receiving an indication of a first user input at an input region of a first touch pad; means for performing a first operation in accordance with the input region of the first touch pad in response to receiving the indication of the first user input; means for performing a second operation; means for displaying, on a heads-up display (HUD), in response to the second operation, a first contextual control image, the first contextual control image comprising a representation of the first touch pad and an image indicating a third operation associated with the input region of the first touch pad, the third operation different from the first operation; means for receiving an indication of a second user input at the input region of the first touch pad; and means for performing the third operation in accordance with the input region of the first touch pad in response to receiving the indication of the second user input.
Although examples have been fully described with reference to the accompanying drawings, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of examples of this disclosure as defined by the appended claims.