The illustrative embodiments generally relate to a system and method for interrupting an instructional prompt to signal upcoming input over a wireless communication link.
Modern technology and cost-saving methodologies have led to the implementation of numerous electronic menus, removing a live operator from a call and replacing the operator with an input-driven menu. Many people have experienced this when calling, for example, a cable company, a credit card company, a phone company, etc. Even when calling a company to purchase a product, callers often encounter electronic menus.
Electronic menus can also be used to provide a range of informational services. For example, a company called TELLME provides a call-in service where the caller can obtain information ranging from weather to news to sports score updates.
Electronic menus used to be primarily dual-tone multi-frequency (DTMF) tone controlled. That is, a user was prompted to enter the number 1, 2, 3, etc., and entering a specific number produced a DTMF tone that corresponded to a particular menu choice.
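By way of non-limiting example, each DTMF key combines one low-frequency row tone with one high-frequency column tone from a standard grid. The following Python sketch generates a key's tone as the sum of its two sinusoids; the sample rate, amplitude, and duration are arbitrary illustrative choices.

```python
import math

# Standard DTMF grid: (row tone, column tone) in Hz for each keypad key.
DTMF = {
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
    "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
}

def dtmf_samples(key, duration_s=0.2, sample_rate=8000):
    """Generate one DTMF tone as the sum of its two sinusoids."""
    low, high = DTMF[key]
    return [
        0.5 * math.sin(2 * math.pi * low * i / sample_rate)
        + 0.5 * math.sin(2 * math.pi * high * i / sample_rate)
        for i in range(int(duration_s * sample_rate))
    ]
```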
As voice recognition technology has improved, some electronic menus have replaced the tone controls with voice controls. That is, instead of entering 1, the user says "one". Voice-based menus can also allow the user to ask for specific "named" options. For example, a user could call a bank and, instead of entering 2 for checking information, the user could, when prompted, say "checking information."
Hybrid menus also exist, allowing a choice between numeric entry using a keypad and voice-based entry. These might be desirable when, for example, a user is inputting a secure number in a public place and does not wish to announce a social security number to bystanders.
Oftentimes, it is also possible to interrupt the menu by pushing an input button or speaking a command early. Menus designed to allow interruption output information and listen for input simultaneously.
In one illustrative embodiment, a vehicle communication system includes a computer processor in communication with a wireless transceiver capable of communicating with a wireless communication device located remotely from the processor.
The system also includes at least one output controllable by the processor. As one non-limiting example, this output could be the vehicle's speakers. Also included in the system is at least one input control in communication with the processor. In this illustrative embodiment, the input is a touch-controlled input, such as a steering-wheel-mounted button, although the input control could be any suitable input.
The system also comprises a microphone in communication with the processor. This microphone can be used to enter, for example, verbal commands.
In this illustrative embodiment, the processor may connect to a remote network through the wireless communication device. The remote network can be a network providing user services, and the processor may further provide playback of a voice-controllable menu, retrieved from the remote network, through the output.
When the user desires to respond to the voice-controllable menu, to input a menu selection, for example, the user may activate the first input control, and the processor may detect activation of the input control. At this point, the processor may also cease playback of the menu and begin detection for a microphone input.
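A minimal sketch of this interrupt behavior follows; the `player`, `button`, and `mic` objects and their methods are hypothetical stand-ins for the output, the input control, and the microphone, not any particular API.

```python
import threading

class PromptSession:
    """Minimal barge-in sketch: play a menu prompt until the user
    activates the input control, then switch to the microphone."""

    def __init__(self, player, button, mic):
        self.player = player   # hypothetical vehicle output (e.g., speakers)
        self.button = button   # hypothetical touch-controlled input
        self.mic = mic         # hypothetical in-vehicle microphone
        self.interrupted = threading.Event()

    def run(self, menu_prompt):
        # Watch the input control in the background while the prompt plays.
        self.button.on_press(self.interrupted.set)
        self.player.play(menu_prompt)
        while self.player.is_playing() and not self.interrupted.is_set():
            self.interrupted.wait(0.05)   # poll without busy-waiting
        self.player.stop()                # cease playback of the menu
        return self.mic.listen()          # begin detection for microphone input
```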
In a second illustrative embodiment, an automated menu system includes persistent and/or non-persistent memory, and a predetermined audio menu providing selectable menu options is stored in at least one of the persistent and non-persistent memories. The system further includes a processor, in communication with the memory, operable to instruct delivery of the predetermined menu over a communication link. This delivery can be to, for example, a vehicle-based communication system.
At some point, a user may desire to input a verbal command, and activate an input signaling this desire. Accordingly, the processor may detect an interrupt instruction (such as may be provided upon input activation) received over the communication link.
Once the interrupt instruction is detected, the processor may cease delivery of the predetermined menu and begin receiving a menu option selection over the communication link.
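On the menu-system side, the same behavior can be sketched as follows, assuming a hypothetical `link` object for the communication link and using the "*" DTMF tone (described below) as the interrupt instruction.

```python
INTERRUPT_TONE = "*"  # DTMF tone assumed to signal the interrupt instruction

def deliver_menu(link, menu_frames):
    """Stream the predetermined audio menu until an interrupt instruction
    arrives over the communication link; `link` is a hypothetical object."""
    for frame in menu_frames:
        if link.pending_dtmf() == INTERRUPT_TONE:
            break                       # cease delivery of the menu
        link.send_audio(frame)
    return link.receive_selection()     # begin receiving the menu option
```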
In yet another illustrative embodiment, a method of processing a voice interactive session includes providing instructions or information to be output. This could be, for example, a voice-selectable menu. While the information is provided, a first input may be detected.
In this illustrative embodiment, detection of the first input signals a desire to input a verbal command, so provision of the instructions or information ceases upon detection of the input. At the same time, listening for a second input begins. Typically, the second input will correspond to a menu option desired for selection.
Other aspects and characteristics of the illustrative embodiments will become apparent from the following detailed description of exemplary embodiments, when read in view of the accompanying drawings.
The present invention is described herein in the context of particular exemplary illustrative embodiments. However, it will be recognized by those of ordinary skill that modifications, extensions and changes to the disclosed exemplary illustrative embodiments may be made without departing from the true scope and spirit of the instant invention. In short, the following descriptions are provided by way of example only, and the present invention is not limited to the particular illustrative embodiments disclosed herein.
In the illustrative embodiment 1 shown in FIG. 1, a vehicle-based computing system includes a processor 3 that controls at least a portion of the system's operation.
The processor is also provided with a number of different inputs allowing the user to interface with the processor. In this illustrative embodiment, a microphone 29, an auxiliary input 25 (for input 33), a USB input 23, a GPS input 24 and a BLUETOOTH input 15 are all provided. An input selector 51 is also provided, to allow a user to swap between various inputs. Input to both the microphone and the auxiliary connector is converted from analog to digital by a converter 27 before being passed to the processor.
Outputs of the system can include, but are not limited to, a visual display 4 and a speaker 13 or stereo system output. The speaker is connected to an amplifier 11 and receives its signal from the processor 3 through a digital-to-analog converter 9. Output can also be made to a remote BLUETOOTH device such as PND 54 or a USB device such as vehicle navigation device 60 along the bi-directional data streams shown at 19 and 21, respectively.
In one illustrative embodiment, the system 1 uses the BLUETOOTH transceiver 15 to communicate 17 with a user's nomadic device 53 (e.g., cell phone, smart phone, PDA, etc.). The nomadic device can then be used to communicate 59 with a network 61 outside the vehicle 31 through, for example, communication 55 with a cellular tower 57.
Pairing a nomadic device 53 and the BLUETOOTH transceiver 15 can be instructed through a button 52 or similar input, telling the CPU that the onboard BLUETOOTH transceiver will be paired with a BLUETOOTH transceiver in a nomadic device.
Data may be communicated between CPU 3 and network 61 utilizing, for example, a data-plan, data over voice, or DTMF tones associated with nomadic device 53. Alternatively, it may be desirable to include an onboard modem 63 in order to transfer data between CPU 3 and network 61 over the voice band. In one illustrative embodiment, the processor is provided with an operating system including an API to communicate with modem application software. The modem application software may access an embedded module or firmware on the BLUETOOTH transceiver to complete wireless communication with a remote BLUETOOTH transceiver (such as that found in a nomadic device). In another embodiment, nomadic device 53 includes a modem for voice band or broadband data communication. In the data-over-voice embodiment, a technique known as frequency division multiplexing may be implemented so that the owner of the nomadic device can talk over the device while data is being transferred. At other times, when the owner is not using the device, the data transfer can use the whole bandwidth (300 Hz to 3.4 kHz in one example).
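As a rough, non-limiting illustration of such frequency division multiplexing, the following sketch low-pass filters speech and overlays a binary FSK data carrier above it within the same channel; the cutoff frequency, carrier tones, and baud rate are all assumed values chosen for illustration.

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 8000                     # telephony sample rate (assumed)
VOICE_CUTOFF = 2400.0         # assumed split point within the 300 Hz-3.4 kHz band
MARK, SPACE = 2800.0, 3100.0  # assumed FSK data tones above the voice sub-band

def multiplex(voice, bits, baud=100):
    """Share one voice channel: speech below the cutoff, FSK data above it."""
    b, a = butter(6, VOICE_CUTOFF / (FS / 2), btype="low")
    speech = lfilter(b, a, voice)
    samples_per_bit = FS // baud
    freqs = np.repeat([MARK if bit else SPACE for bit in bits], samples_per_bit)
    t = np.arange(len(freqs)) / FS
    data = 0.2 * np.sin(2 * np.pi * freqs * t)
    n = min(len(speech), len(data))   # overlay the two signals on one channel
    return speech[:n] + data[:n]
```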
If the user has a data-plan associated with the nomadic device, it is possible that the data-plan allows for broadband transmission, and the system could use a much wider bandwidth (speeding up data transfer). In still another embodiment, nomadic device 53 is replaced with a cellular communication device (not shown) that is affixed to vehicle 31.
In one embodiment, incoming data can be passed through the nomadic device via a data-over-voice or data-plan, through the onboard BLUETOOTH transceiver and into the vehicle's internal processor 3. In the case of certain temporary data, for example, the data can be stored on the HDD or other storage media 7 until such time as the data is no longer needed.
Additional sources that may interface with the vehicle include a personal navigation device 54, having, for example, a USB connection 56 and/or an antenna 58; or a vehicle navigation device 60, having a USB 62 or other connection, an onboard GPS device 24, or remote navigation system (not shown) having connectivity to network 61.
Further, the CPU could be in communication with a variety of other auxiliary devices 65. These devices can be connected through a wireless 67 or wired 69 connection. Also, or alternatively, the CPU could be connected to a vehicle-based wireless router 73, using, for example, a WiFi 71 transceiver. This could allow the CPU to connect to remote networks within range of the local router 73.
The network passes commands from the vehicle to various remote applications. One example of a remote application is TELLME, which may be included on a voice application server 207. TELLME is an exemplary voice-controlled application providing news, weather, stock updates, sports updates, etc. Information flows between applications such as TELLME and the nomadic device 53 located in the vehicle.
In this illustrative embodiment, the system waits until it detects that a voice button has been pressed 301. One example of detection is based on a DTMF tone. In this illustrative embodiment, the DTMF tone for the "*" key, a sum of two sinusoids at 941 Hz and 1209 Hz, is sent when the voice button is pushed. Any DTMF tone could be used, however, or any other suitable method of detecting button input. The voice button has more than one function in this illustrative embodiment: it at least signals the onset of a voice session and signals an interrupt to a played-back set of instructions. Once the voice button has been pressed, the voice session begins listening for user instruction 303.
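On the detecting side, the power of each of the two component frequencies can be measured with, for example, the Goertzel algorithm; the sketch below flags a frame as the "*" key when both the 941 Hz and 1209 Hz components are strong. The detection threshold is an assumed, illustrative value.

```python
import math

def goertzel_power(frame, sample_rate, target_hz):
    """Relative power of target_hz in one audio frame (Goertzel algorithm)."""
    n = len(frame)
    k = int(0.5 + n * target_hz / sample_rate)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s1 = s2 = 0.0
    for x in frame:
        s1, s2 = x + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def is_star_key(frame, sample_rate=8000):
    """True if the frame contains the '*' pair (941 Hz row, 1209 Hz column).
    The 0.1 factor is an assumed, illustrative detection threshold."""
    energy = sum(x * x for x in frame) or 1.0
    n = len(frame)
    return (goertzel_power(frame, sample_rate, 941.0) > 0.1 * n * energy and
            goertzel_power(frame, sample_rate, 1209.0) > 0.1 * n * energy)
```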
In this illustrative embodiment, the voice session's beginning corresponds to a connection to TELLME, although any voice interactive application could be accessed. The system checks to see if the voice button has been pressed again 305. If the voice button is pressed a second time, the system begins listening for a command without providing instructions. This allows sophisticated users to enter a command immediately, without having to wait for familiar menus to be played back.
If the voice button is not pressed again, the system begins instruction playback 307. The instructions, for example, tell the user what menu options are available. Once the instructions have been provided, the system listens for input 309. As long as a timeout 311 has not occurred, the system checks to see if the input is valid 317. If the input is valid, the system initiates the input command 319.
If the input is not recognized, the system notifies the user that a match was not found 315 and returns to listening for input. If the timeout occurs, the system reminds the user to provide input and returns to listening for input.
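Taken together, the flow described above might be sketched as a simple session loop; `play` and `listen` are hypothetical playback and speech-recognition callables, and the prompt wording and five-second timeout are illustrative assumptions.

```python
def voice_session(play, listen, commands, timeout_s=5.0):
    """Sketch of the illustrated flow: instruction playback, listening,
    timeout reminder, no-match notification, and command initiation."""
    play("Main menu. Say weather, news, or sports.")   # instruction playback 307
    while True:
        heard = listen(timeout=timeout_s)              # listen for input 309
        if heard is None:                              # timeout 311
            play("Please say a menu option.")          # remind the user
        elif heard in commands:                        # input is valid 317
            return commands[heard]()                   # initiate the command 319
        else:
            play("Sorry, no match was found.")         # notify 315 and re-listen
```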
While the invention has been described in connection with what are presently considered to be the most practical and preferred embodiments, it is to be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.