This application generally relates to various functionality associated with interactions between a portable device and a vehicle head unit.
Today, many car manufacturers offer various components in the head unit of a vehicle, such as displays, speakers, microphones, hardware input controls, etc. Some head units also support short-range communications with external devices such as smartphones, for example. However, head units generally support only very limited communication schemes, such as a direct connection between the head unit and a smartphone via a Bluetooth® link.
Moreover, a modern automotive user interface (UI) in the head unit of the vehicle can include hardware buttons, speakers, a microphone, and a screen to display warnings, car status updates, navigation directions, digital maps, etc. As more and more functions become accessible via the head unit of a car, developers of new functionality face the challenge of providing the corresponding controls in a safe and intuitive manner. In general, hardware buttons on a head unit are small, and operating these buttons can be distracting for the driver. On the other hand, when the head unit includes a touchscreen, large software buttons take up valuable screen real estate (while small software buttons are difficult to operate for the same reason as small hardware buttons).
Furthermore, many navigation systems operating in portable devices or in vehicle head units provide navigation directions, and some of these systems generate audio announcements based on these directions. In general, the existing navigation systems generate directions and audio announcements based only on the navigation route. The directions thus include the same level of detail when the driver is close to her home as when the driver is in an unfamiliar area. Some drivers find excessively detailed directions so annoying when they are familiar with the area that they turn off navigation, or voice assistance from navigation, for at least a portion of the route. As a result, they may miss advice on optimal routes (which depend on current traffic), estimates of arrival time, and other useful information. Moreover, drivers who are listening to music or the news in the car also may be annoyed by long announcements even when they are unfamiliar with the area and a long announcement otherwise seems warranted.
Generally speaking, a “primary” portable device such as a smartphone receives data, which can include turn-by-turn directions, audio packets, map images, etc., from another, “secondary” portable device via a short-range communications link and provides the received data to the head unit of a vehicle. The primary portable device also can forward data from the head unit to the secondary device. In this manner, the primary portable device provides a communication link between the head unit and the secondary device, which for various reasons (e.g., security restrictions, protocol incompatibility, an exceeded limit on the number of simultaneous connections) may not be able to establish a direct connection with the head unit.
In some embodiments, an automotive gesture-based UI implemented in one of the portable devices or the vehicle head unit advances an ordered or otherwise structured set of items through a viewport by a certain number in response to a “flick” or “swipe” gesture, regardless of how quickly or slowly the driver performed the gesture. For example, to allow the user to step through a list of items when only a subset of N items fit on the screen at any one time, the UI initially displays items I1, I2, . . . IN and advances the list to display items IN+1, IN+2, . . . I2N in response to a flick gesture of any velocity. Thus, the driver need not worry about flicking too fast so as to advance the list too far, or too slowly so as to not advance the list far enough and still see most of the same items on the screen. Depending on the implementation, the items can be informational cards corresponding to search results, automatic suggestions for a certain category (e.g., gas stations within a fifteen-mile radius), map tiles that make up a digital map image, etc.
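The velocity-independent pagination described above can be illustrated with a minimal sketch; the Paginator class, its method names, and the page size are assumptions made purely for illustration and do not correspond to any particular implementation of the portable devices or the head unit.

```python
# Illustrative sketch of velocity-independent "flick" pagination.
# The Paginator class and its names are hypothetical.

class Paginator:
    def __init__(self, items, page_size):
        self.items = items          # ordered set of items I1, I2, ... IM
        self.page_size = page_size  # N items fit in the viewport at a time
        self.offset = 0             # index of the first visible item

    def visible(self):
        """Return the subset of items currently shown in the viewport."""
        return self.items[self.offset:self.offset + self.page_size]

    def on_flick(self, direction, velocity):
        """Advance by exactly one page regardless of gesture velocity."""
        del velocity  # intentionally ignored: a fast or slow flick behaves the same
        step = self.page_size if direction == "forward" else -self.page_size
        max_offset = max(0, len(self.items) - self.page_size)
        self.offset = min(max(self.offset + step, 0), max_offset)
        return self.visible()


# Example: three items fit on the screen at a time.
pager = Paginator(["I1", "I2", "I3", "I4", "I5", "I6"], page_size=3)
print(pager.visible())                      # ['I1', 'I2', 'I3']
print(pager.on_flick("forward", 950.0))     # ['I4', 'I5', 'I6']  (fast flick)
pager.offset = 0
print(pager.on_flick("forward", 40.0))      # ['I4', 'I5', 'I6']  (slow flick, same result)
```

In both calls the gesture velocity is discarded, so the list advances by exactly one viewport-sized page and no item is skipped.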
In an embodiment, a navigation system is included in one of the portable devices and/or the vehicle head unit. To effectively provide navigation directions to a driver, the navigation system implemented in the portable device and/or the vehicle head unit dynamically varies the length of an individual audio instruction in view of one or more of such factors as the user's familiarity with the route, the current level of audio in the vehicle, and the current state of the vehicle (e.g., moving, stationary, showing a turn signal). The navigation system in some implementations also varies intervals between successive instructions, based on these factors. For example, when the driver is familiar with a section of the route, the navigation system may forego an audio instruction or provide a shorter audio instruction. On the other hand, when the driver is not familiar with the section of the route, the system may provide a longer audio instruction. Further, if the portable device or the head unit is currently playing music, the navigation system can reduce the duration of the audio instruction by controlling the level of detail to minimize inconvenience for the driver and the passengers.
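The selection of a level of detail from these factors can be sketched as a small decision function; the function name, the state strings, and the familiarity threshold below are illustrative assumptions rather than the disclosed implementation.

```python
# Hypothetical sketch of selecting the level of detail for an audio instruction
# from the driver's familiarity, the current audio state, and the vehicle state.

def select_level_of_detail(familiarity_score, head_unit_state, vehicle_state,
                           familiar_threshold=0.8):
    """Return 'omit', 'short', or 'detailed' for the next audio instruction."""
    if familiarity_score >= familiar_threshold and vehicle_state == "turn_signal_on":
        return "omit"        # driver clearly knows the maneuver already
    if familiarity_score >= familiar_threshold:
        return "short"       # familiar segment: a brief reminder is enough
    if head_unit_state == "audio_playback" and vehicle_state == "moving":
        return "short"       # unfamiliar, but avoid interrupting music for long
    return "detailed"        # unfamiliar segment and no competing audio


print(select_level_of_detail(0.9, "idle", "moving"))            # short
print(select_level_of_detail(0.2, "idle", "stationary"))        # detailed
print(select_level_of_detail(0.2, "audio_playback", "moving"))  # short
```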
An example embodiment of the techniques of the present disclosure is a method for efficiently providing audio navigation instructions to a head unit of a vehicle. The method includes determining, by one or more computing devices, a current operational state of the head unit. The method further includes determining, by the one or more computing devices, a certain maneuver in a navigation route which a driver of the vehicle is following. Still further, the method includes generating, by the one or more computing devices, an audio instruction that describes the maneuver and causing the audio instruction to be provided to the head unit via a communication link. Generating the audio instruction includes selecting a level of detail of the audio instruction based at least in part on (i) the driver's familiarity with a segment of the navigation route at which the maneuver occurs and (ii) the current operational state of the head unit.
Another embodiment of these techniques is a portable computing device including one or more processors, an interface to communicate with a head unit of a vehicle, and a non-transitory computer-readable memory storing instructions. When executed on the one or more processors, the instructions cause the portable computing device to: obtain a plurality of navigation directions for navigating a driver of the vehicle to a certain destination along a navigation route, where each of the plurality of navigation directions describes a respective maneuver. The instructions further cause the portable device to determine, via the interface, an operational state of at least one of the head unit or the vehicle and, for a selected navigation direction, determine a level of familiarity of a user of the portable device with a segment of the navigation route at which the corresponding maneuver occurs, and generate an audio instruction for the selected navigation direction. To generate the audio instruction, the instructions cause the portable device to determine a level of detail of the audio instruction based at least on the determined operational state and the determined level of familiarity with the segment.
Yet another embodiment of these techniques is a computing system including a navigation service module, a register storing a current operational state of a head unit of a vehicle, a familiarity scoring engine, and a speech generation system. The navigation service module is configured to generate navigation directions for navigating a driver of the vehicle to a certain destination along a navigation route, wherein each of the navigation directions describes a respective maneuver. The familiarity scoring engine is configured to generate, for a selected one of the navigation directions, a familiarity metric indicative of estimated familiarity of the driver with a segment of the route at which a corresponding maneuver occurs. The speech generation system is configured to (i) receive the familiarity metric and the current operational state of the head unit from the register to determine a level of detail of an audio instruction, and (ii) generate an audio instruction for the maneuver having the determined level of detail.
In another example implementation, a method for providing sets of items via an automotive user interface (UI) configured to receive gesture-based user input includes receiving an ordered set of items. The method also includes causing a first subset of the items to be displayed via the automotive UI along a certain axis, detecting that a gesture having a motion component along the axis was applied to the automotive UI, and in response to the gesture, causing a second subset of the items to be displayed via the automotive UI, so that each of the first subset and the second subset includes multiple items, and where the second subset is made up of N items that immediately follow the items in the first subset. According to this method, positioning of the second subset on the automotive UI is independent of a velocity of the motion component of the gesture.
Yet another embodiment of these techniques is a portable computing device including one or more processors, a short-range communication interface to couple the portable computing device to a head unit of a vehicle to receive input from, and provide output to, an automotive user interface (UI) implemented in the head unit, and a non-transitory computer-readable memory storing thereon instructions. These instructions are configured to execute on the one or more processors to (i) receive an ordered plurality of items I1, I2, . . . IM, (ii) provide an initial subset of N successive items I1, I2, . . . IN to the head unit for display via the automotive UI, (iii) receive an indication of a flick gesture detected via the automotive UI, and (iv) in response to the received indication, provide to the head unit a new subset of N successive items I1+O, I2+O, . . . IN+O which are offset from the initial subset by a certain fixed number O independently of a velocity of the flick gesture.
Additionally, another embodiment is a system for providing output in response to user gestures in an automotive environment. The system includes one or more processors, a user interface (UI) communicatively coupled to the one or more processors and configured to display content to a driver of a vehicle and receive gesture-based input from the driver, and a non-transitory computer-readable memory storing thereon instructions. When executed on the one or more processors, the instructions cause the one or more processors to (i) display, via the user interface, a first subset of an ordered plurality of items along an axis, (ii) detect, via the user interface, a gesture having a motion component directed along the axis, (iii) in response to the gesture, select a second subset of the ordered plurality of items for display via the user interface independently of a velocity of the motion component, where each of the first subset and the second subset includes multiple items, and where the second subset includes items that immediately follow the items in the first subset, and (iv) display the second subset via the user interface.
Moreover, another embodiment of these techniques is a method for enabling data exchange between portable devices and external output devices executed by one or more processors. The method includes establishing a first short-range communication link between a first portable user device and a head unit of a vehicle, establishing a second short-range communication link between the first portable user device and a second portable user device, such that the second short-range communication link is a wireless link, and causing the first portable user device to (i) receive data from the second portable device via the second short-range communication link and (ii) transmit the data to the head unit via the first short-range communication link.
Another example embodiment of these techniques is a portable computing device including one or more processors, an interface configured to communicatively couple the portable computing device to a head unit of a vehicle and a proximate portable computing device via a first communication link and a second communication link, respectively, and a non-transitory computer-readable memory storing instructions. When executed on the one or more processors, the instructions cause the portable computing device to receive data from the proximate portable computing device via the second communication link and forward the received data to the head unit via the first communication link.
Yet another example embodiment of these techniques is a portable computing device, including one or more processors, a device interface configured to communicatively couple the portable computing device to proximate computing devices, and a non-transitory computer-readable memory storing instructions. When executed on the one or more processors, the instructions cause the portable computing device to detect a proximate portable computing device that has access to a resource on a head unit of a vehicle, where the resource includes at least one of an audio output device or a display device, establish a communication link to the proximate portable computing device via the device interface, and transmit data to the head unit of the vehicle via the communication link.
A portable device (e.g., a smartphone) directly connected to a head unit of a vehicle provides a user interface function for configuring the portable device as an access point via which other portable devices can communicate with the head unit. For convenience, the portable device directly connected to the head unit is referred to below as the primary device, and portable devices that connect to the head unit via the primary device are referred to as secondary devices. In a sense, the primary device operates as a master device and the secondary devices operate as slave devices.
In an example implementation, the primary device advertises an available resource of the head unit, such as a speaker, a screen, a physical control input, etc. If a candidate secondary device is within a certain range of the primary device, a user interface element, such as a speaker icon, appears on the screen of the candidate secondary device. The user of the candidate secondary device can then request a communication link with the master device via the user interface of the candidate secondary device. The master device can accept or reject the request from the candidate secondary device to establish a connection between the two devices.
After a connection is established, the secondary device can transmit data, such as audio packets, images representing digital maps, etc., to the primary device for forwarding to the head unit. Further, the primary device can forward commands or events entered via the head unit (e.g., “volume up”) to the secondary device. In this manner, the primary device can establish a bidirectional communication link between the secondary device and the head unit.
Further, the primary device in some cases can allow multiple secondary devices to communicate with the head unit, even when the head unit supports only one communication link with a portable device at a time. Thus, one secondary device can provide an audio stream to the head unit via the primary device, and another secondary device can provide navigation instructions and map images to the head unit. The primary device can be configured to implement the desired access policy for communicating with the head unit.
In an example scenario, the primary device is a smartphone connected to the head unit via a Universal Serial Bus (USB) cable. The passenger wishes to transmit turn-by-turn navigation directions to the head unit from his smartphone to take advantage of the display built into the head unit and the powerful speakers. The driver configures her smartphone to allow discovery of her smartphone by her passenger's smartphone. The passenger then operates his smartphone to locate the driver's smartphone, request and, with the driver's permission, establish a short-range smartphone-to-smartphone communication link, so that the driver's smartphone operates as the primary device and the passenger's smartphone operates as the secondary device. The passenger then launches the navigation application on his smartphone, and the driver's smartphone forwards data packets from the passenger's smartphone to the head unit.
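The relay role the driver's smartphone plays in this scenario can be sketched in a few lines; the in-memory queues below merely stand in for the actual USB and Bluetooth links, and every name is an illustrative assumption rather than a specified interface.

```python
# Toy sketch of the primary device relaying data between a secondary device
# and the head unit. The in-memory queues stand in for the actual USB and
# Bluetooth transports; all names are illustrative.

from queue import Queue

secondary_to_primary = Queue()   # e.g., Bluetooth link from the passenger's phone
primary_to_head_unit = Queue()   # e.g., USB link to the head unit
head_unit_to_primary = Queue()   # commands or events entered via the head unit
primary_to_secondary = Queue()   # forwarded back to the passenger's phone


def relay_once():
    """Forward one pending message in each direction, if any."""
    if not secondary_to_primary.empty():
        primary_to_head_unit.put(secondary_to_primary.get())
    if not head_unit_to_primary.empty():
        primary_to_secondary.put(head_unit_to_primary.get())


# The passenger's phone sends a navigation frame; the head unit sends "volume up".
secondary_to_primary.put({"type": "nav_directions", "payload": "turn-by-turn data"})
head_unit_to_primary.put({"type": "command", "payload": "volume up"})
relay_once()
print(primary_to_head_unit.get())   # navigation data reaches the head unit
print(primary_to_secondary.get())   # the head-unit command reaches the secondary device
```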
Moreover, at least some of the techniques of this disclosure for processing gesture input in automotive UI can be implemented in an environment which includes a portable device and a vehicle with a head unit. In this example implementation, the portable device provides interactive map and navigation data to the head unit equipped with a touchscreen. The head unit detects the driver's gesture-based input applied to the touchscreen and provides an indication of the detected input to the portable device, which updates the display of the map and navigation data via the touchscreen in accordance with the detected input. More particularly, in response to detecting a flick gesture, the portable device advances an ordered set of items by a certain number regardless of the speed of the flick gesture. In this manner, the portable device eliminates a high-cognitive-load task and allows the driver of the vehicle to more safely paginate through lists or arrays of items with minimal distractions, and without inadvertently missing information due to excessive velocity of the gesture.
For clarity, at least some of the examples below focus on implementations in which a portable device implements gesture processing functionality but a structured set of items is displayed, and gesture input is received, via a touchscreen embedded in the head unit of a car. However, in another embodiment, the head unit receives as well as processes gesture-based input without relying on the portable device 10 or other external devices. In yet another embodiment, the user applies a flick gesture directly to the portable device, and the portable device adjusts the display of a structured set of items in response to the flick gesture without exporting the display to the head unit. More generally, the techniques of this disclosure can be implemented in one or several devices temporarily or permanently disposed inside a vehicle.
Further, although gesture-based input in the examples below is discussed with reference to touchscreen input, in general the techniques of this disclosure need not be limited to two-dimensional surface gestures. Gesture input in other implementations can include three-dimensional (3D) gestures, such as trajectories of the portable device in a 3D space that fit certain patterns (e.g., the driver making a flicking motion forward or backward while the portable device is in her hand). In these implementations, the display of a structured set of items provided via the head unit and/or the portable device can advance by a certain number of items in response to such a 3D gesture regardless of how quickly or slowly the driver flicked the portable device. Further, 3D gestures in some implementations can be detected via video cameras and/or other sensors and processed in accordance with computer vision techniques.
In another embodiment, the techniques for dynamically varying the length of an audio navigation instruction (as well as the length of an interval between two successive audio instructions) during a navigation session can be implemented in a portable device, a head unit of a car, one or several network servers, or a system that includes several of these devices. However, for clarity, at least some of the examples below focus primarily on an embodiment in which a navigation application executes on a portable user device, generates audio navigation instructions (for simplicity, “audio instructions”) using navigation data and familiarity score signals received from one or several network servers, and provides instructions to a head unit of a car.
Referring to
In operation, the portable device 10 obtains navigation data to navigate the driver from point A to point B in the form of a sequence of instructions or maneuvers. As discussed in more detail below, the portable device 10 can receive navigation data via a communication network from a navigation service or can generate navigation data locally, depending on the implementation. Based on such factors as the driver's familiarity with the route, the current level of audio in the vehicle 12, and the current state of the vehicle 12, the portable device 10 generates audio instructions at varying levels of detail. For example, the portable device 10 can shorten or even omit certain audio instructions upon determining, with a certain degree of confidence, that the driver is very familiar with the route. As another example, the portable device can omit an audio instruction to turn left if the head unit 14 reports that the driver already activated the left turn signal.
Besides generating condensed audio instructions describing maneuvers or omitting audio instructions, the portable device 10 in some cases can adjust the intervals between audio instructions. For example, if the portable device 10 determines that descriptions of several maneuvers can be combined to direct the driver to “Highway 94,” and that the driver is familiar with the relevant portion of this highway, the portable device 10 can combine the several descriptions to form a single audio instruction such as, “Start out going East and turn right onto Highway 94.”
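One way to picture this combining step is the following sketch, which collapses runs of consecutive familiar maneuvers into a single instruction; the input data and the simple joining rule are assumptions made for the example, not the disclosed algorithm.

```python
# Illustrative sketch of collapsing several familiar, consecutive maneuver
# descriptions into a single audio instruction.

def combine_familiar(maneuvers):
    """Merge runs of consecutive familiar maneuvers into one instruction."""
    combined, run = [], []
    for text, familiar in maneuvers:
        if familiar:
            run.append(text)
            continue
        if run:
            combined.append(" and ".join(run))
            run = []
        combined.append(text)
    if run:
        combined.append(" and ".join(run))
    return combined


maneuvers = [
    ("Start out going East", True),
    ("turn right onto Highway 94", True),
    ("In 600 meters, turn right onto Central Street", False),
]
print(combine_familiar(maneuvers))
# ['Start out going East and turn right onto Highway 94',
#  'In 600 meters, turn right onto Central Street']
```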
The embodiments of these techniques may require that, in order for the portable device 10 to use information related to the driver's familiarity with the route and other information specific to the driver, he or she select certain settings and/or install certain applications.
The head unit 14 can include a display 18 for presenting navigation information such as a digital map. The display 18 in some implementations is a touchscreen and includes a software keyboard for entering text input, which may include the name or address of a destination, point of origin, etc. Hardware input controls 20 and 22 on the head unit 14 and the steering wheel, respectively, can be used for entering alphanumeric characters or to perform other functions for requesting navigation directions. The head unit 14 also can include audio input and output components such as a microphone 24 and speakers 26, for example. The speakers 26 can be used to play the audio instructions sent from the portable device 10.
Referring to
In operation, the secondary device 11 transmits data to the primary device 10, which in turn provides the transmitted data to the head unit 14. The transmitted data in the example of
The head unit 14 can include hardware input controls such as buttons, knobs, etc. These controls can be disposed on the head unit 14 or elsewhere in the vehicle 12. For example, the vehicle 12 in
The vehicle 12 also can include an audio input component such as a microphone 24 and an audio output component such as speakers 26, for example. Similar to the hardware controls 20 and 22, the microphone 24 and speakers 26 can be disposed directly on the head unit 14 or elsewhere in the vehicle 12.
Referring to
The head unit 14 can include hardware input controls such as buttons, knobs, etc. These controls can be disposed on the head unit 14 or elsewhere in the vehicle 12. For example, the vehicle 12 in
Further, the vehicle 12 can include audio input and output components such as a microphone 24 and speakers 26, for example. Similar to the hardware controls 20 and 22, the microphone 24 and speakers 26 can be disposed directly on the head unit 14 or elsewhere in the vehicle 12.
Although the touchscreen 18 in the of
In an example scenario, the portable device 10 executes a mapping and navigation software module which provides a digital map partitioned into several map “tiles” to the head unit 14. Each map tile can be an image in a bitmap format, for example. The head unit 14 receives the map tiles, assembles these map tiles into a map image, and displays the map image on the touchscreen 18. For additional clarity,
When a user (typically, the driver of the vehicle 12) puts his finger on the touchscreen 18 and flicks the map image to the right, for example, the head unit 14 reports the flick gesture to the portable device 10. In response, the portable device 10 provides new map tiles to the head unit 14 for display. More specifically, the portable device 10 can advance the array of map tiles so that, regardless of how quickly or slowly the driver flicked the map image, the head unit 14 now displays tiles adjacent to the ones previously displayed on the head unit 14. This and other implementations are discussed in more detail with reference to
A first example implementation of the portable device 10 and the head unit 14 is discussed next with reference to
The set of sensors 28 can include, for example, a global positioning system (GPS) module to determine the current position of the vehicle in which the head unit 14 is installed, an inertial measurement unit (IMU) to measure the speed, acceleration, and current orientation of the vehicle, a device to determine whether or not the turn signal has been pushed up or down, etc. Although
A short-range communication unit 30B allows the head unit 14 to communicate with the portable device 10. The short-range communication unit 30B may support wired or wireless communications, such as USB, Bluetooth, Wi-Fi Direct, Near Field Communication (NFC), etc.
The processor 25 can operate to format messages transmitted between the head unit 14 and the portable device 10, process data from the sensors 28 and the audio input 24, display map images via the display 18, play audio instructions via the audio output, etc.
The portable device 10 can include a short-range communication unit 30A for communicating with the head unit 14. Similar to the unit 30B, the short-range communication unit 30A can support one or more communication schemes such as USB, Bluetooth, Wi-Fi Direct, etc. The portable device 10 can include audio input and output components such as a microphone 32 and speakers 33. Additionally, the portable device 10 includes one or more processors or CPUs 34, a GPS module 36, a memory 38, and a cellular communication unit 50 to transmit and receive data via a 3G cellular network, a 4G cellular network, or any other suitable network. The portable device 10 can also include additional sensors (e.g., an accelerometer, a gyrometer) or, conversely, the portable device 10 can rely on sensor data supplied by the head unit 14. In one implementation, to improve accuracy during real-time navigation, the portable device 10 relies on the positioning data supplied by the head unit 14 rather than on the output of the GPS module 36.
The memory 38 can store, for example, contacts 40 and other personal data of the driver. As illustrated in
The software components 42, 44, and 48 can include compiled instructions and/or instructions in any suitable programming language interpretable at runtime. In any case, the software components 42, 44, and 48 execute on the one or more processors 34. In one implementation, the navigation service application 48 is provided as a service on the operating system 42 or otherwise as a native component. In another implementation, the navigation service application 48 is an application compatible with the operating system 42 but provided separately from the operating system 42, possibly by a different software provider.
The navigation API 46 generally can be provided in different versions for different respective operating systems. For example, the maker of the portable device 10 can provide a Software Development Kit (SDK) including the navigation API 46 for the Android™ platform, another SDK for the iOS™ platform, etc.
An example implementation of the primary device 10, secondary device 11 and head unit 14 is discussed with reference to
The set of sensors 28 can include, for example, a global positioning system (GPS) module to determine the current position of the vehicle in which the head unit 14 is installed, an inertial measurement unit (IMU) to measure the speed, acceleration, and current orientation of the vehicle, a barometer to determine the altitude of the vehicle, etc. Although
Depending on the implementation, the processor 25 can be a general-purpose processor that executes instructions stored on a computer-readable memory (not shown) or an application-specific integrated circuit (ASIC) that implements the functionality of the head unit 14. In any case, the processor 25 can operate to format messages from the head unit 14 to the primary device 10, receive and process messages from the primary device 10, display map images via the display 18, play back audio messages via the audio output 26, etc.
With continued reference to
One or several short-range communication units 30A allow the primary device 10 to communicate with the head unit 14 as well as with the secondary device 11. The short-range communication unit 30A may support wired or wireless communications, such as USB, Bluetooth, Wi-Fi Direct, Near Field Communication (NFC), etc. In some scenarios, the primary device 10 establishes different types of connections with the head unit 14 and the secondary device 11. For example, the primary device 10 can communicate with the head unit 14 via a USB connection and with the secondary device 11 via a Bluetooth connection.
The memory 38 can store, for example, contacts 40 and other personal data of the user. As illustrated in
An authorization module 55 in some implementations includes the same software instructions as the authorization module 45. In other implementations, the authorization modules 45 and 55 implement the same set of functions, but include different instructions for different platforms. Example functionality of the authorization modules 45 and 55 is discussed in more detail below. Although the secondary device 11 is depicted for simplicity as having only an authorization module 55, it will be understood that the secondary device 11 can have the same or similar architecture as the primary device 10. Furthermore, although only one secondary device 11 is depicted, the described system can implement more than one secondary device.
A third example implementation of the portable device 10 and head unit 14 is briefly considered with reference to
The set of sensors 28 can include, for example, a global positioning system (GPS) module to determine the current position of the vehicle in which the head unit 14 is installed, an inertial measurement unit (IMU) to measure the speed, acceleration, and current orientation of the vehicle, a barometer to determine the altitude of the vehicle, etc. Although
Depending on the implementation, the processor 25 can be a general-purpose processor that executes instructions stored on the computer-readable memory 27 or an application-specific integrated circuit (ASIC) that implements the functionality of the head unit 14. In any case, the processor 25 can operate to format messages from the head unit 14 to the portable device 10, receive and process messages from the portable device 10, display map images via the display 18, play back audio messages via the audio output 26, etc.
The portable device 10 can include one or more short-range communication units 30A for communicating with the head unit 14. Similar to the short-range communication unit 30B, the short-range communication unit 30A can support one or more short-range communication schemes. The portable device 10 also can include one or more processors or CPUs 34, a GPS module 36, a memory 38, and a cellular communication unit 50 to transmit and receive data via a 3G cellular network, a 4G cellular network, or any other suitable network. The portable device 10 also can include additional components such as an audio input device 32, an audio output device 33, a touchscreen 31 or other user interface components, etc.
The memory 38 can store, for example, contacts 40 and other personal data of the user. As illustrated in
In one implementation, the navigation service application 48 is provided as a service on the operating system 42 or otherwise as a native component. In another implementation, the navigation service application 48 is an application compatible with the operating system 42 but provided separately from the operating system 42, possibly by a different software provider. Further, in some implementations, the functionality of the navigation service application 48 is implemented as a software component that operates in another software application (e.g., a web browser).
The memory 38 also can store a navigation API 46 to allow other software applications executing on the portable device 10 to access the functionality of the navigation service application 48. For example, a manufacturer of the car head unit 14 can develop an application that runs on the OS 42 and invokes the navigation API 46 to obtain navigation data, map data, etc.
In general, the software components 46 and 48 can include compiled instructions and/or instructions in any suitable programming language interpretable at runtime. In any case, the software components 46 and 48 execute on the one or more processors 34.
As illustrated in
The portable device 10 has access to a wide area communication network 52 such as the Internet via a long-range wireless communication link (e.g., a cellular link). Referring back to
With reference to
More generally, the portable device 10 can communicate with any number of suitable servers. For example, in another embodiment, the navigation server 54 provides directions and other navigation data while a separate map server provides map data (e.g., in a vector graphics format), a traffic data server provides traffic updates along the route, a weather data server provides weather data and/or alerts, an audio generation server may generate audio navigation instructions, etc.
According to an example scenario, a driver requests navigation information by pressing appropriate buttons on the head unit of the vehicle and entering a destination. The head unit provides the request to the portable device, which in turn requests navigation data from a navigation server. Referring collectively to
In other embodiments, the portable device 10 may generate a video (which can include static imagery or a video stream) of map data, for example, and transmit the video to the head unit 14. The head unit 14 may then receive touch events from the user on the display 18. In such an embodiment, the head unit 14 does not interpret the touch events and instead transmits the touch events in a “raw” format. For example, the user may tap a section of the display 18 corresponding to a point of interest to select a destination or the user may perform a series of swipe gestures to toggle through previous destinations stored on the portable device 10. The “raw” touch events may be transmitted to the portable device 10 which interprets the “raw” touch events to determine the requested navigation information from the user. For example, the portable device 10 may generate a video which includes a map of Sydney, Australia, and may transmit the video to the head unit 14. The user may then tap the upper right corner of the display 18 corresponding to the Sydney Opera House. As a result, the head unit 14 may transmit the “raw” touch event (e.g., a tap of the upper right corner of the display) to the portable device 10, and the portable device may determine that the user requested navigation directions to the Sydney Opera House based on the “raw” touch event.
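The interpretation of such a “raw” touch event can be pictured as a simple hit test on the portable device against the regions of the video frame it generated; the coordinate values, region table, and function name below are hypothetical and purely illustrative.

```python
# Hypothetical sketch of interpreting a "raw" touch event forwarded by the
# head unit: the portable device hit-tests the tap against the on-screen
# regions of points of interest in the video frame it generated.

POI_REGIONS = {
    # name: (x_min, y_min, x_max, y_max) in head-unit display pixels
    "Sydney Opera House": (600, 0, 800, 150),
    "Harbour Bridge":     (400, 0, 590, 120),
}


def interpret_raw_tap(x, y):
    """Return the point of interest whose on-screen region contains the tap."""
    for name, (x0, y0, x1, y1) in POI_REGIONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None


# The head unit reports a tap near the upper right corner of its 800x480 display.
destination = interpret_raw_tap(720, 60)
print(destination)  # Sydney Opera House -> request navigation directions to it
```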
It will be understood that in other implementations, the driver or a passenger can provide the destination (and, if desired, the source when different from the current location) via the audio input 32 of the portable device 10 or the audio input 24 of the head unit 14. Further, the navigation service 48 in some implementations can determine directions for a route using data stored in the portable device 10.
The primary device 10 and secondary device 11 in this implementation have access to a wide-area communication network 52 such as the Internet via long-range wireless communication links (e.g., a cellular link). Referring back to
To again consider an example scenario with reference to
For further clarity, an example message sequence diagram 400 corresponding to this scenario is depicted in
As illustrated in
The authorization server 59 receives the message (402) advertising the resource and stores some or all of the identifier of the primary device 10, indications of the available resource(s), as well as the location of the primary device 10 (event 404). The secondary device 11 transmits a request for available head unit resources to the authorization server 59 (event 406). The authorization server 59 receives the request along with a device identifier of the secondary device 11 and the location of the secondary device 11. The authorization server 59 determines whether there is a primary device advertising available head unit resources within a certain range of the secondary device 11. In the illustrated scenario, the authorization server 59 determines that the primary device 10 is advertising an available head unit resource within the relevant range, and transmits a response 408 to the secondary device 11. The response 408 can indicate the available resource and the device identifier of the primary device 10.
In response to receiving the response 408 from the authorization server 59, the secondary device 11 in this example activates a UI element on the screen (event 410). For example, if the available resource advertised is a speaker, an interactive speaker icon may appear on the display of the secondary device 11. The passenger can select the speaker icon to choose to stream music from the secondary device 11 to the head unit 14 via the primary device 10.
In some embodiments, the primary device 10 also locally advertises the available resource to portable devices within a certain distance. Similarly, the secondary device 11 may attempt to discover primary devices within a proximate distance. In these embodiments, the secondary device 11 receives the transmission of the advertised available resource of the head unit 14 and transmits the device identifiers of the primary device 10 and the secondary device 11 to the authorization server 59. Turning briefly to
Referring again to the message sequence diagram of
With continued reference to the example scenario of
The driver then indicates that she gives permission to establish a connection between the primary device 10 and the secondary device 11 (event 418). The primary device in response to the event 418 transmits an authorization permission message 420 to the authorization server 59. The authorization server 59 receives the authorization permission 420 and determines connection parameters (event 422), which may include an indication of a type of connection to be established between the devices 10 and 11 (e.g., Bluetooth, Wi-Fi Direct, infrared), a time interval during which the connection must be established, etc. The authorization server 59 transmits the connection parameters to the primary device 10 and the secondary device 11 (event 426).
The primary device 10 receives the connection parameters and establishes a connection with the secondary device 11 (event 428). Once the connection is established, the secondary device 11 can transmit data to the head unit 14 via the primary device 10. In some implementations, the authorization is symmetric, so that if the primary device 10 becomes a secondary device at a later time, the devices 10 and 11 can exchange data without further authorization.
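The message sequence above can be condensed into a short server-side sketch; the AuthorizationServer class, its pairing range, the scalar location value, and the returned connection parameters are illustrative assumptions and not a specification of the actual authorization server 59.

```python
# Condensed sketch of the authorization message sequence (events 402-428).

class AuthorizationServer:
    def __init__(self, pairing_range_m=10.0):
        self.pairing_range_m = pairing_range_m
        self.advertisements = {}   # primary device id -> (resource, location)

    def advertise(self, primary_id, resource, location):
        self.advertisements[primary_id] = (resource, location)     # events 402-404

    def find_resources(self, secondary_location):                  # events 406-408
        return [
            {"primary_id": pid, "resource": res}
            for pid, (res, loc) in self.advertisements.items()
            if abs(loc - secondary_location) <= self.pairing_range_m
        ]

    def authorize(self, primary_accepts):                          # events 412-426
        if not primary_accepts:
            return None
        return {"link_type": "Bluetooth", "window_s": 30}          # connection parameters


server = AuthorizationServer()
server.advertise("driver-phone", "speaker", location=0.0)
matches = server.find_resources(secondary_location=2.0)
print(matches)                      # the speaker icon can now appear on the secondary device
params = server.authorize(primary_accepts=True)
print(params)                       # both devices receive the same connection parameters
```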
With reference to
Similar to the examples above, the terms “user” and “driver” are used interchangeably, but it will be understood that navigation audio instructions can be generated, and personalized, for a passenger of the car if the passenger's portable device is used for navigation, for example.
The system of
As illustrated in
The familiarity scoring engine 62 uses the descriptions of maneuvers and the user-specific data to generate a familiarity score for each maneuver. For example, if the maneuver is reflected in the user's past driving data, and if it is also determined the user is close to home (e.g., within 2 miles), the familiarity score may be very high. In some implementations, if the familiarity score is above a certain threshold, the familiarity scoring engine 62 generates a “familiar” signal indicating that the user is familiar with the maneuver, and otherwise generates a “not familiar” signal indicating that the user is not familiar with the maneuver. In other implementations, the familiarity scoring engine 62 may send the “raw” familiarity score directly to the speech generation system 44.
In some cases, the familiarity scoring engine 62 can receive a signal indicative of whether the driver owns or is renting the vehicle. For example, referring back to
If the vehicle is a rental, the familiarity scoring engine 62 in some cases may categorize a location as being unfamiliar to the user. In other words, the familiarity scoring engine 62 can use this determination as one of several signals when determining whether a “familiar” or “not familiar” signal should be generated.
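A minimal sketch of such a scoring step, combining a few of the signals mentioned above, is shown below; the weights and the 0.7 threshold are assumptions chosen for the example and do not reflect the actual familiarity scoring engine 62.

```python
# Illustrative sketch of a familiarity signal built from past driving data,
# proximity to home, and whether the vehicle is a rental.

def familiarity_signal(maneuver_in_history, miles_from_home, vehicle_is_rental,
                       threshold=0.7):
    score = 0.0
    if maneuver_in_history:
        score += 0.6                       # maneuver appears in past driving data
    if miles_from_home <= 2.0:
        score += 0.4                       # close to home
    if vehicle_is_rental:
        score -= 0.3                       # a rental suggests an unfamiliar area
    score = min(max(score, 0.0), 1.0)
    return ("familiar" if score >= threshold else "not familiar"), score


print(familiarity_signal(True, 1.5, False))   # ('familiar', 1.0)
print(familiarity_signal(False, 40.0, True))  # ('not familiar', 0.0)
```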
In addition to “familiar” and “not familiar” signals for various maneuvers, the speech generation system 44 also can receive an indication of the current state of the head unit from a register 74 and an indication of the current state of the vehicle from a register 76, at the time each audio instruction is generated. The state of the head unit stored in the register 74 can be “audio playback” if the speakers of the head unit are playing music, for example. If there is no audio currently coming from the head unit, the state may be “idle.” In addition, there may be separate states depending on the volume of the audio playback, such as “audio high” or “audio low.” In some implementations, depending on the volume of the audio playback, the instruction may be played at a higher or lower volume. For example, if the head unit is in the state “audio low,” the speech generation system 44 may generate audio instructions at a lower volume to decrease driver distraction. In the example scenario of
Referring back to
In the example scenario of
For maneuver 2, the familiarity scoring engine 62 also generates a “not familiar” signal 66. However, the state of the vehicle head unit at this time is “audio playback,” and the state of the vehicle is “vehicle moving.” In this instance, the speech generation system 44 determines the user does not have time for a lengthy instruction because the vehicle is moving, and the user is listening to music and probably does not want to be interrupted. Consequently, the speech generation system 44 generates a short audio instruction 82 which omits some of the text from the full-length description of maneuver 2.
In general, instructions can be shortened in any suitable manner, which may be language-specific. In an example implementation, the speech generation system 44 shortens audio instructions when appropriate by removing non-essential information, such as an indication of distance between the current location of the vehicle and the location of the upcoming maneuver or an indication of the road type following the proper name of the road (“Main” instead of “Main Street”). For example, a detailed audio instruction describing maneuver 2 may be “In 600 meters, turn right onto Central Street,” and the speech generation system 44 can output “Turn right onto Central” as the short audio instruction 82.
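A small sketch of this shortening rule, dropping the distance prefix and the road-type suffix as in the “Turn right onto Central” example, is shown below; the regular expressions are illustrative, English-specific assumptions rather than the disclosed method.

```python
# Hypothetical sketch of shortening a detailed instruction by removing the
# distance prefix and the road-type suffix.

import re

ROAD_TYPES = r"(Street|Avenue|Road|Boulevard|Drive|Lane)"

def shorten_instruction(detailed):
    text = re.sub(r"^In \d+ (meters|feet|miles?), ", "", detailed)  # drop distance prefix
    text = re.sub(rf"\s{ROAD_TYPES}\b", "", text)                   # drop road-type suffix
    return text[0].upper() + text[1:]

print(shorten_instruction("In 600 meters, turn right onto Central Street"))
# Turn right onto Central
```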
For maneuver 3, the familiarity scoring engine 62 generates a “familiar” signal 68. For example, maneuver 3 may be a part of one of the user's preferred routes, as indicated in the user profile. While the vehicle head unit is in the “idle” state, the speech generation system 44 generates a short audio instruction 84 because of the user's familiarity and because the vehicle is moving. However, before generating the audio instruction, the speech generation system 44 also examines the next maneuver to determine whether both maneuvers are “familiar” to the user and, as such, can be combined into one shortened audio instruction describing both maneuvers.
Further, the familiarity scoring engine 62 generates a “familiar” signal 70 for maneuver 4. The speech generation system 44 then generates a short audio instruction 86 describing maneuver 4 and reduces the interval between the instructions 84 and 86 to zero. In other words, the speech generation system 44 combines the short instructions 84 and 86 into a single instruction. For example, a combined audio instruction 84,86 can be “Turn right onto Elm Street and merge onto Highway 34 in 500 meters.” The speech generation system 44 then may continue to look ahead to further maneuvers to potentially combine even more instructions, until there is a maneuver for which the familiarity scoring engine 62 generates a “not familiar” signal.
With continued reference to
Now referring to
Each of the items A-I can be an informational card that describes a point of interest that matches certain criteria, for example. As a more specific example, the driver may have requested that coffee shops along a path to a selected destination be displayed. Each of items A-I accordingly can include an address of the coffee shop, a photograph of the coffee shop, hours of operation, etc. The navigation service application 48 can receive data describing items A-I and organize the data into an ordered list, so that item B follows item A, item C follows item B, etc.
The pagination gesture controller 49 can update the display of subsets of the items A-I in response to gesture-based input received via the touchscreen 18. More particularly, the pagination gesture controller 49 updates display layout 102 to display layout 104 in response to a flick or swipe gesture 110, and then updates the display layout 104 to display layout 106 in response to a subsequent flick gesture 112. The swipe gestures 110 and 112 are applied in approximately the same horizontal direction, but the velocity of the swipe gesture 110 is substantially higher than the velocity of the swipe gesture 112, as represented in
In the initial display layout 102, a set of displayed items 120 includes items A, B, and C. When the user applies the relatively quick flick gesture 110, the pagination gesture controller 49 determines the direction of the gesture 110 and advances the list to display a new set 130 including items D, E, F. The user then applies the relatively slow flick gesture 112, and the pagination gesture controller 49 advances the list to display a new set 140 including items G, H, and I. Thus, in both instances the pagination gesture controller 49 ensures that a new set of items is displayed in response to a flick gesture, and that no item is skipped over when transitioning to a new set, regardless of how quick the particular instance of the flick gesture is.
The pagination gesture controller 49 in this example determines how far the list should progress in response to a flick gesture further in view of the size of the touchscreen 18 or of the viewable area currently available on the touchscreen 18. Similarly, if the user applies the flick gesture to the touchscreen on the portable device 10, the pagination gesture controller 49 can determine how many items can be displayed at a time in view of the dimensions of the touchscreen of the portable device 10. Thus, the pagination gesture controller 49 can traverse the same set of items A-I by displaying only pairs of items in response to successive flick gestures: (item A, item B) followed by (item C, item D), followed by (item E, item F), etc.
In the example of
Referring now to
The initial display layout 202 includes an array of map tiles 220, which includes a first row of tiles 1-A, 1-B, and 1-C, a second row of tiles 2-A, 2-B, and 2-C, etc. In response to the relatively slow flick gesture 210, the pagination gesture controller 49 displays a new array of map tiles 230, which shares only column C (i.e., map tiles 1-C, 2-C, . . . 5-C) with the array of map tiles 220 and includes new columns D and E. Further, in response to the relatively quick flick gesture 212, the pagination gesture controller 49 displays a new array of map tiles 240, which shares only column E (i.e., map tiles 1-E, 2-E, . . . 5-E) with the array of map tiles 230 and includes new columns F and G.
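The column-wise advance of the tile grid described above can be sketched as follows; the viewport width of three columns and the one-column overlap follow the example, while the function names and tile labels are illustrative assumptions.

```python
# Sketch of advancing a two-dimensional map-tile grid by whole columns in
# response to a horizontal flick, independent of gesture velocity.

def visible_columns(all_columns, first, viewport_cols=3):
    return all_columns[first:first + viewport_cols]

def on_horizontal_flick(all_columns, first, viewport_cols=3, overlap=1):
    """Shift the viewport right by (viewport_cols - overlap) columns."""
    step = viewport_cols - overlap
    return min(first + step, len(all_columns) - viewport_cols)

columns = ["A", "B", "C", "D", "E", "F", "G"]
first = 0
print(visible_columns(columns, first))          # ['A', 'B', 'C']
first = on_horizontal_flick(columns, first)     # slow flick
print(visible_columns(columns, first))          # ['C', 'D', 'E'] (shares column C)
first = on_horizontal_flick(columns, first)     # quick flick, same advance
print(visible_columns(columns, first))          # ['E', 'F', 'G'] (shares column E)
```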
Similar to the scenario of
If desired, each column of map tiles in
Now referring to
The method begins at block 702, where a description of a set of maneuvers is received. Depending on the implementation, the description can be received from another device (e.g., a navigation server accessible via a communication network) or from another software component operating in the same device. The description of maneuvers can be provided in any suitable format, including an alphanumeric string in which descriptions of individual maneuvers are separated by a semicolon.
At block 704, a subset of maneuvers received at block 702 is selected. The subset in many cases includes as little as a single maneuver. However, the subset can include multiple maneuvers when the corresponding audio instructions are combined. Also, the user's familiarity with the route segment(s) corresponding to the maneuver(s) in the subset is determined at block 704, using the techniques discussed above or other suitable techniques.
At blocks 706 and 708, the state of the vehicle head unit and the state of the vehicle, respectively, are determined. Next, the method determines whether an audio instruction is needed at block 710 using the results of the determinations at blocks 704, 706, and 708. As discussed above, an audio instruction sometimes can be omitted. If no audio instruction is needed, the flow proceeds to block 716 to determine whether another subset of maneuvers should be considered.
Otherwise, if it is determined that an audio navigation instruction is needed, the flow proceeds to block 712, where the duration of one or more audio instructions in the subset is determined. The method also can determine at block 712 whether the next maneuver should be considered as part of the subset, or whether there should be an interval between the audio instructions about the one or more maneuvers in the subset and the audio instruction related to the subsequent maneuver.
The method then proceeds to block 714 to generate the audio instruction for each maneuver or combination of maneuvers. At block 716, it is determined whether every maneuver has been considered as part of one of the subsets, and the method terminates if there are no maneuvers left. Otherwise, the flow proceeds back to block 704 to select the next subset of maneuvers.
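The overall flow of blocks 702-716 can be condensed into the following sketch; the helper predicate, the state strings, and the single-maneuver subsets are placeholders assumed for illustration rather than the claimed method.

```python
# Compact sketch of the flow of blocks 702-716: iterate over maneuvers,
# decide whether an audio instruction is needed, and pick its duration.

def generate_audio_instructions(maneuvers, is_familiar, head_unit_state, vehicle_state):
    instructions = []
    for maneuver in maneuvers:                        # blocks 704, 716
        familiar = is_familiar(maneuver)              # block 704
        # blocks 706, 708: head_unit_state and vehicle_state supplied by the caller
        if familiar and vehicle_state == "turn_signal_on":
            continue                                  # block 710: no instruction needed
        if familiar or head_unit_state == "audio_playback":
            duration = "short"                        # block 712
        else:
            duration = "detailed"
        instructions.append((maneuver, duration))     # block 714
    return instructions


route = ["turn left onto Main", "merge onto Highway 94"]
print(generate_audio_instructions(
    route, is_familiar=lambda m: "Highway 94" in m,
    head_unit_state="idle", vehicle_state="moving"))
# [('turn left onto Main', 'detailed'), ('merge onto Highway 94', 'short')]
```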
Now referring to
The method begins at block 802, where a communication link is established between the head unit and the primary device. In a typical scenario, the communication link is a short-range communication link, such as a USB connection, a Bluetooth wireless connection, etc. Next, at block 804, it is determined whether the primary device is advertising an available resource of the head unit. For example, the advertised resource of the head unit may be a display, a speaker, a hardware input control, etc.
At block 806, it is determined whether the primary device accepts the communication link with the secondary device. In a typical scenario, the driver submits an input via the primary device to accept the communication link. At block 808, a communication link is established between the primary device and the secondary device, and the method 800 completes after block 810.
Referring to
The method begins at block 902, where a candidate primary device advertises an available resource of a head unit. At block 904, the candidate primary device receives an authorization request from the authorization server. In a typical scenario, the authorization request includes the device identifier and/or an additional descriptor of the device requesting authorization to connect. The driver may submit a user input using the primary device to accept the authorization request. In some embodiments, the primary device advertises an available resource of the head unit via a social networking service.
At block 906, the candidate primary device confirms the authorization permission request by transmitting the authorization request to the authorization server. At block 908, the candidate primary device receives connection parameters from the authorization server. Next, at block 910, the candidate primary device uses the connection parameters to establish a connection with a secondary device, and begins to operate as the primary device. Once the connection is established, at block 912 the primary device can transfer data between the head unit and the secondary device. Depending on the implementation, the transfer is unidirectional (e.g., from the secondary device to the head unit) or bidirectional (e.g., from the secondary device to the head unit as well as from the head unit to the secondary device). Further, the primary device in some embodiments receives status updates, user commands, etc. from the head unit and generates messages for the secondary device according to a communication scheme defined between the primary and secondary devices. In other words, the primary device can implement robust functionality to support communications between the secondary device and the head unit, if desired. The method completes after block 912.
Now referring to
The method begins at block 1002, where the secondary device detects a proximate device with an available resource of a head unit. In a typical scenario, the secondary device transmits a request to the authorization server requesting available resources within a proximate distance. The authorization server responds to the request by providing the secondary device with the device identifier(s) of devices within a proximate distance that are advertising available resources.
At block 1004, the secondary device transmits an authorization request to the authorization server including the device identifier of the primary device to which the secondary device is requesting permission to connect. Next, at block 1006, the secondary device receives connection parameters from the authorization server and establishes a connection with the primary device. At block 1008, the secondary device may exchange data with the head unit of the vehicle via the primary device. The method completes after block 1008.
Now referring to
The method begins at block 1102, where a message from a candidate primary device advertising an available resource of a head unit is received. In one implementation, the authorization server stores the device identifier of the candidate primary device as well as a descriptor of the resource being advertised. After a candidate secondary device “discovers” the candidate primary device using short-range communications or via a network server, an authorization request from the candidate secondary device is received at block 1104. The authorization request can include the device identifier of the candidate primary device to which the candidate secondary device is requesting permission to connect.
Next, at block 1106, the device identifier and available resource(s) of the proximate candidate primary device are transmitted to the candidate secondary device. At block 1108, an authorization permission message is received from the candidate primary device. For example, the user of the candidate primary device can accept the connection via the user interface. At block 1110, connection parameters are determined and, at block 1112, the connection parameters are transmitted to the primary and secondary devices. The method completes after block 1112.
At block 1202, an ordered set of items is received. As discussed above, the ordered set can be organized along a single dimension (e.g., a list of search results arranged in the order of relevance), two dimensions (e.g., an array of map tiles arranged into a grid), or a higher number of dimensions. Each item can include graphics content, text content, etc.
At block 1204, a first subset of the items is displayed via the automotive UI, along at least one axis. For example, items A-I in
A gesture with a motion component along the at least one axis is received at block 1206. The gesture can be a flick gesture applied horizontally, vertically, diagonally, etc. Further, the gesture can have motion parameters in two dimensions or three dimensions. More particularly, the gesture can be detected via a touchscreen or in a 3D space in an automotive environment.
Next, at block 1208, a new subset of the items is selected for display independently of the velocity of the gesture. The new subset can be made up of the items that immediately follow the previously displayed items. Depending on the implementation, the new subset can have some overlap or no overlap with the previously displayed subset. The method completes after block 1208.
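A minimal Kotlin sketch of blocks 1202-1208 follows; the PagedViewport class, its field names, and the nine-item page size are assumptions chosen only to illustrate that the list advances by a fixed page regardless of how fast or slow the flick was.

```kotlin
// Illustrative sketch only; class and parameter names are assumptions.
class PagedViewport<T>(private val items: List<T>, private val pageSize: Int) {
    private var start = 0

    // Block 1204: the subset of items currently visible in the viewport.
    fun visibleItems(): List<T> =
        items.subList(start, minOf(start + pageSize, items.size))

    // Blocks 1206-1208: a flick advances (or rewinds) by exactly one page;
    // the gesture's velocity is deliberately ignored.
    fun onFlick(forward: Boolean, velocity: Float) {
        val step = if (forward) pageSize else -pageSize
        start = (start + step).coerceIn(0, maxOf(0, items.size - pageSize))
    }
}

fun main() {
    val viewport = PagedViewport(('A'..'R').toList(), pageSize = 9)
    println(viewport.visibleItems())                    // [A, B, C, D, E, F, G, H, I]
    viewport.onFlick(forward = true, velocity = 0.1f)   // slow flick
    println(viewport.visibleItems())                    // [J, K, L, M, N, O, P, Q, R]
    viewport.onFlick(forward = true, velocity = 9.9f)   // fast flick; already at the last page
    println(viewport.visibleItems())                    // still [J, K, L, M, N, O, P, Q, R]
}
```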
The following additional considerations apply to the foregoing discussion. Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter of the present disclosure.
Additionally, certain embodiments are described herein as including logic or a number of components, modules, or mechanisms. Modules may constitute either software modules (e.g., code embodied on a machine-readable medium or in a transmission signal, wherein the code is executed by a processor) or hardware modules. A hardware module is a tangible unit capable of performing certain operations and may be configured or arranged in a certain manner. In example embodiments, one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware modules of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware module that operates to perform certain operations as described herein.
In various embodiments, a hardware module may be implemented mechanically or electronically. For example, a hardware module may comprise dedicated circuitry or logic that is permanently configured (e.g., as a special-purpose processor, such as a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)) to perform certain operations. A hardware module may also comprise programmable logic or circuitry (e.g., as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software to perform certain operations. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations.
Accordingly, the term “hardware module” should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. As used herein “hardware-implemented module” refers to a hardware module. Considering embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where the hardware modules comprise a general-purpose processor configured using software, the general-purpose processor may be configured as respective different hardware modules at different times. Software may accordingly configure a processor, for example, to constitute a particular hardware module at one instance of time and to constitute a different hardware module at a different instance of time.
Hardware modules can provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules may be regarded as being communicatively coupled. Where multiple of such hardware modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) that connect the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware module may then, at a later time, access the memory device to retrieve and process the stored output. Hardware modules may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information).
The methods 700, 800, 900, 1000, 1100, and 1200 may include one or more function blocks, modules, individual functions or routines in the form of tangible computer-executable instructions that are stored in a non-transitory computer-readable storage medium and executed using a processor of a computing device (e.g., a server, a personal computer, a smart phone, a portable device, a ‘secondary’ portable device, a vehicle head unit, a tablet computer, a head mounted display, a smart watch, a mobile computing device, or other personal computing device, as described herein). The methods 700, 800, 900, 1000, 1100, and 1200 may be included as part of any backend server (e.g., a navigation server, a familiarity scoring server, an authorization server, or any other type of server computing device, as described herein), portable device modules, or vehicle head unit modules of an automotive environment, for example, or as part of a module that is external to such an environment. Though the figures may be described with reference to the other figures for ease of explanation, the methods 700, 800, 900, 1000, 1100, and 1200 can be utilized with other objects and user interfaces. Furthermore, although the explanation above describes steps of the methods 700, 800, 900, 1000, 1100, and 1200 being performed by specific devices (such as a portable device 10, a secondary device 11, and a vehicle head unit 14), this is done for illustration purposes only. The blocks of the methods 700, 800, 900, 1000, 1100, and 1200 may be performed by one or more devices or other parts of the automotive environment.
The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions. The modules referred to herein may, in some example embodiments, comprise processor-implemented modules.
Similarly, the methods or routines described herein may be at least partially processor-implemented. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented hardware modules. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processor or processors may be located in a single location (e.g., within a home environment, an office environment or as a server farm), while in other embodiments the processors may be distributed across a number of locations.
The one or more processors may also operate to support performance of the relevant operations in a cloud computing environment or as a software as a service (SaaS). For example, as indicated above, at least some of the operations may be performed by a group of computers (as examples of machines including processors), these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., application programming interfaces (APIs)).
The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the one or more processors or processor-implemented modules may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the one or more processors or processor-implemented modules may be distributed across a number of geographic locations.
Still further, the figures depict some embodiments of the automotive environment for purposes of illustration only. One skilled in the art will readily recognize from the foregoing discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described herein.
Upon reading this disclosure, those of skill in the art will appreciate still additional alternative structural and functional designs for the automotive environment through the disclosed principles herein. Thus, while particular embodiments and applications have been illustrated and described, it is to be understood that the disclosed embodiments are not limited to the precise construction and components disclosed herein. Various modifications, changes and variations, which will be apparent to those skilled in the art, may be made in the arrangement, operation and details of the method and apparatus disclosed herein without departing from the spirit and scope defined in the appended claims.
This application claims priority to and the benefit of the filing date of the following applications: U.S. Provisional Patent Application No. 61/923,484 entitled “Gesture Controls in Automotive User Interface,” filed on Jan. 3, 2014; U.S. Provisional Patent Application No. 61/923,882 entitled “Adaptive Speech Prompts For Automotive Navigation,” filed on Jan. 6, 2014; and U.S. Provisional Patent Application No. 61/924,418 entitled “Connecting Multiple Portable Devices to the Head Unit of a Vehicle,” filed on Jan. 7, 2014, the entire disclosure of each of which is hereby expressly incorporated by reference.
Number | Date | Country
---|---|---
61/923,484 | Jan. 3, 2014 | US
61/923,882 | Jan. 6, 2014 | US
61/924,418 | Jan. 7, 2014 | US