The present invention generally relates to electronic devices having man-machine interfaces (MMIs), and more particularly to systems and methods for generating and providing verbal feedback messages to users of such devices in response to user interaction with the MMIs.
Head-worn electronic devices, such as headsets, are used in a variety of applications, including listening to music and communications. Modern head-worn electronic devices are versatile and often offer various functions. For example, some state-of-the-art Bluetooth enabled headsets provide users the ability to both listen to music (e.g., from a Bluetooth-enabled MP3 player) and to engage in hands-free communications with others (e.g., by using a Bluetooth-connected cellular telephone).
A typical, modern head-worn electronic device includes a variety of switches, buttons and other controls (e.g., mute on/off, volume up/down, track forward/back, channel up/down controls), which allow the user to control the device's operation. These switches, buttons and other controls are collectively referred to in the art as a man-machine interface, or “MMI.”
One problem with head-worn electronic devices equipped with MMIs is that the user cannot see the MMI while the device is being worn, which makes interacting with the MMI difficult and cumbersome. In an attempt to address this problem, some prior art approaches provide audible feedback in the form of beeps or tones that are presented to the user in response to a command applied to the MMI. The beeps or tones are used to convey various messages to the user. For example, depending on the type of device and interaction involved, the beeps or tones may convey an acknowledgement that a command has been received and accepted by the device, an acknowledgement that a command has been received but rejected by the device, or merely an indication that a certain control of the MMI is currently being manipulated.
Unfortunately, beeps and tones can be confusing to users. It is not uncommon for a user to confuse one MMI feedback signal with another, particularly when the beeps or tones of different MMI feedback responses are not easily distinguishable. This confusion can lead to uncertainty as to whether a commanded function or operation has been performed properly, or has even been performed at all. The confusion is compounded for untrained users, to whom the beeps or tones may have no meaning whatsoever.
Prior art approaches also use beeps or tones in an attempt to provide users with information relating to various monitored device states. For example, beeps or tones may be used to inform the user that the device's battery is low or that the device is out of range of an access point, base station or Bluetooth-coupled device. Unfortunately, just as with MMI feedback, using beeps or tones to report device state information can be confusing to users.
Given the foregoing drawbacks, problems and limitations of the prior art, it would be desirable to have systems and methods that generate and provide unambiguous and easily ascertainable MMI feedback and device state information to users of head-worn electronic devices.
Systems and methods for generating and providing verbal feedback messages and device state information to users of MMI-enabled head-worn electronic devices are disclosed. An exemplary head-worn electronic device includes an MMI and an acoustic verbal message generator that is configured to provide verbal acoustic messages to a wearer of the head-worn electronic device in response to the wearer's interaction with the MMI. Because the feedback is delivered as spoken words rather than beeps or tones, the confusion associated with the prior art approaches is avoided.
In accordance with one aspect of the invention, a head-worn electronic device includes one or more detectors or sensors coupled to a microprocessor-based subsystem. The one or more detectors or sensors are configured to detect or sense event signals corresponding to monitored device states and/or commands applied to the MMI by the device user. The detected event signals are used by the microprocessor-based subsystem to generate and provide the verbal feedback and/or device state information to the user.
In accordance with another aspect of the invention, the microprocessor-based subsystem includes a memory device configured to store a plurality of verbal messages corresponding to the various MMI commands and/or information relating to the monitored device states. The verbal messages may be stored in more than one natural language (e.g., English, Chinese, French, Spanish, Korean, Japanese, etc.), with a first set of verbal messages stored according to a first natural language, a second set of verbal messages stored according to a second natural language, and so on. The language of choice can be selected by the user during initialization of the device and can be reset in a reconfiguration process. Although not required, the various sets of verbal messages in different languages can be configured to share the same data structure or memory space, so that a particular message entry can be conveniently accessed irrespective of the selected language.
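By way of illustration and not limitation, the following C sketch shows one possible way such a shared data structure could be organized, with every natural language occupying a parallel table indexed by the same message identifiers. All of the type and identifier names (verbal_msg_id_t, language_t, msg_tables, and so on) are hypothetical and are not taken from the embodiments described herein.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical message identifiers shared by every language table. */
typedef enum {
    MSG_MUTE_ON = 0,
    MSG_MUTE_OFF,
    MSG_VOLUME_UP,
    MSG_VOLUME_DOWN,
    MSG_BATTERY_LOW,
    MSG_OUT_OF_RANGE,
    MSG_COUNT                  /* number of message entries per language */
} verbal_msg_id_t;

/* Hypothetical language identifiers. */
typedef enum {
    LANG_ENGLISH = 0,
    LANG_CHINESE,
    LANG_FRENCH,
    LANG_SPANISH,
    LANG_COUNT
} language_t;

/* One stored (possibly encoded or compressed) verbal message. */
typedef struct {
    const uint8_t *data;       /* pointer into the message memory space */
    size_t         length;     /* stored length in bytes                */
} verbal_msg_t;

/* Placeholder payload; a real device would store recorded speech. */
static const uint8_t placeholder[] = { 0x00 };

/* Every language table shares the same layout, so the same message
 * identifier selects the corresponding entry irrespective of the
 * selected language. */
static const verbal_msg_t msg_tables[LANG_COUNT][MSG_COUNT] = {
    [LANG_ENGLISH][MSG_MUTE_ON] = { placeholder, sizeof placeholder },
    /* ... remaining entries are filled in the same way ... */
};

static const verbal_msg_t *lookup_message(language_t lang, verbal_msg_id_t id)
{
    return &msg_tables[lang][id];
}
```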
Further features and advantages of the present invention, as well as the structure and operation of the above-summarized and other exemplary embodiments of the invention, are described in detail below with reference to the accompanying drawings, in which like reference numbers indicate identical or functionally similar elements.
Referring to
The head-worn electronic device 102 may comprise, for example, music headphones, a communications headset, or a head-worn cellular telephone. While the term “headset” has various definitions and connotations, for the purposes of this disclosure the term refers to either a single headphone (i.e., a monaural headset) or a pair of headphones (i.e., a binaural headset), which may or may not include, depending on the application and/or user preference, a microphone that enables two-way communication.
The head-worn electronic device 102 is configured to receive audio data signals (e.g., voice data signals) from an audio source 120 and/or transmit audio data signals to an audio sink 122, via a wireless link 116. The audio data signals can be encoded and/or compressed, similar to the verbal feedback messages described below. The audio source 120 may comprise any device that is capable of transmitting wired or wireless signals containing audio information to the head-worn electronic device 102. Similarly, the audio sink 122 may comprise any device that is capable of receiving wired or wireless signals containing audio information from the head-worn electronic device 102. The wireless link 116 may comprise a Digital Enhanced Cordless Telecommunications (DECT) link, a DECT 6.0 link, a Bluetooth wireless link, a Wi-Fi (IEEE 802.11) wireless link, a Wi-Max (IEEE 802.16) link, a cellular communications wireless link, or other wireless communications link (e.g., infra-red, ultrasonic, magnetic-induction-based, etc.). While a wireless head-worn device is shown as being coupled to the audio source 120 and audio sink 122 via a wireless link 116, a wired head-worn device may alternatively be used, in which case electrical wires would be connected between the head-worn electronic device 102 and the audio source 120 and audio sink 122.
The head-worn electronic device 102 also includes an MMI comprised of switches, buttons and/or other controls.
According to an aspect of the invention, verbalized feedback information (e.g., a verbal acknowledgment message, verbal prompt, a verbal message indicating the wearer's interaction with the MMI, etc.) is fed back to the user in response to the user's interaction with the controls of the MMI. According to another aspect of the invention, verbal messages informing of a change in device state are provided to the user. Changes in device states may include, for example, low battery, out-of-range of an audio source or audio sink, wireless link signal strength low, etc. The device states are detected and monitored by one or more detectors or sensors.
According to another aspect of the invention, the head-worn electronic device 102 includes a verbal message generation program module and an associated microprocessor-based (or microcontroller-based) subsystem comprising a microprocessor (e.g., an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or system on a chip (SoC)) and a memory device (e.g., an erasable programmable read only memory (EPROM), an electrically erasable programmable read only memory (EEPROM), a read only memory (ROM), or a random access memory (RAM)). As explained in more detail below, the microprocessor is configured to execute instructions provided by the verbal message generation program to generate verbal feedback messages in response to MMI commands entered by the user and/or to provide verbal device state information messages reporting changes in monitored device states.
The wireless receiver 302 is configured to receive audio data signals from an audio source 120 over a wireless link 116. The modulated RF signals are demodulated and directed to the decoder 346 via the microprocessor core 312. The decoder 346 decodes and/or decompresses the received audio data signals 310 into audio signals, which are provided to the acoustic transducer 350 to generate verbal acoustic messages for the user.
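By way of example only, a minimal C sketch of this receive path is given below. The helper routines (rf_demodulate(), decoder_decode(), transducer_play()) are hypothetical stand-ins for the wireless receiver 302, the decoder 346 and the acoustic transducer 350, and their signatures are assumptions rather than part of the disclosed embodiments.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical platform routines; signatures are illustrative only. */
size_t rf_demodulate(uint8_t *frame, size_t max_len);            /* receiver 302   */
size_t decoder_decode(const uint8_t *in, size_t in_len,
                      int16_t *pcm, size_t max_samples);          /* decoder 346    */
void   transducer_play(const int16_t *pcm, size_t n_samples);     /* transducer 350 */

/* One pass of the receive path: demodulate a frame, decode or
 * decompress it into PCM audio samples, and present the audio to
 * the wearer. */
static void receive_path_tick(void)
{
    uint8_t frame[256];
    int16_t pcm[512];

    size_t frame_len = rf_demodulate(frame, sizeof frame);
    if (frame_len == 0)
        return;                                   /* nothing received */

    size_t n_samples = decoder_decode(frame, frame_len, pcm, 512);
    transducer_play(pcm, n_samples);
}
```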
The memory device 320 of the microprocessor-based subsystem is coupled to the microprocessor core 312 via a memory I/O bus (e.g., memory address input bus 322 and memory data output bus 328). It is configured to provide memory space for data tables 324, program memory 326 for the verbal message generation program module, and the verbal messages. While only a single memory device 320 is shown as providing these functions, a plurality of memory devices can alternatively be used. Further, the verbal messages may be encoded and/or compressed before being stored in the memory device 320. Any number of encoding and/or compression schemes can be used. For example, an ADPCM decoder or a CVSD decoder may be used, as shown in
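For purposes of illustration only, one hypothetical descriptor for a stored verbal message is sketched below in C. The field and type names are assumptions, and the set of codecs shown simply mirrors the ADPCM and CVSD examples mentioned above.

```c
#include <stdint.h>

/* Hypothetical encoding schemes for stored verbal messages. */
typedef enum {
    CODEC_NONE = 0,    /* stored as raw PCM; no decoding required      */
    CODEC_ADPCM,       /* adaptive differential pulse-code modulation  */
    CODEC_CVSD         /* continuously variable slope delta modulation */
} msg_codec_t;

/* Hypothetical descriptor for one entry in a verbal message data table. */
typedef struct {
    uint32_t    offset;   /* byte offset into the message memory space */
    uint32_t    length;   /* stored (encoded) length in bytes          */
    msg_codec_t codec;    /* which decoder, if any, must be applied    */
} msg_descriptor_t;
```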
The plurality of verbal messages may be configured as a verbal message data table 330-1 in the memory device 320, as illustrated in
The detectors or sensors 315 are configured to detect and receive event signals produced by MMI commands 308 entered by the user, as well as by changes to monitored device states. The microprocessor core 312 is configured to receive the event signals and, under the direction of the verbal message generation program, is operable to determine, access and retrieve the appropriate verbal messages stored in the verbal message data table 330-1 corresponding to the detected event signals. The retrieved verbal messages are decoded by the decoder 346 (if necessary) and directed to the acoustic transducer 350, which generates verbal acoustic messages for the user to hear.
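As a non-limiting illustration, the mapping from detected event signals to entries in the verbal message data table 330-1 could be expressed as a simple lookup table; the event names and indices below are hypothetical.

```c
#include <stdint.h>

/* Hypothetical event codes for MMI commands and monitored device states. */
typedef enum {
    EVT_MUTE_ON = 0,
    EVT_MUTE_OFF,
    EVT_VOLUME_UP,
    EVT_VOLUME_DOWN,
    EVT_BATTERY_LOW,
    EVT_OUT_OF_RANGE,
    EVT_COUNT
} event_t;

/* Map each detected event signal to the index of its verbal message in
 * the verbal message data table 330-1. Because every language table
 * shares the same layout, the same index is valid for each table. */
static const uint8_t event_to_msg_index[EVT_COUNT] = {
    [EVT_MUTE_ON]      = 0,
    [EVT_MUTE_OFF]     = 1,
    [EVT_VOLUME_UP]    = 2,
    [EVT_VOLUME_DOWN]  = 3,
    [EVT_BATTERY_LOW]  = 4,
    [EVT_OUT_OF_RANGE] = 5,
};
```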
According to one aspect of the invention, the verbal messages are stored in multiple different languages (e.g., English, Chinese, Spanish, French, German, Korean, Japanese, etc.), as indicated by the additional verbal message data tables 330-2, . . . , 330-N (where N is an integer greater than or equal to one) in
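By way of example and not limitation, if the language tables 330-1 through 330-N are laid out contiguously with identical sizes, the memory address of a message entry can be computed from the language selection and the message index alone. The base address, table size and entry size in the following sketch are hypothetical values, not taken from the embodiments above.

```c
#include <stdint.h>

#define TABLE_BASE_ADDR   0x0000u  /* hypothetical start of table 330-1       */
#define TABLE_SIZE_BYTES  4096u    /* hypothetical size reserved per language */
#define ENTRY_SIZE_BYTES  8u       /* hypothetical size of one message entry  */

/* Resolve a (language, message) pair to an address in the memory
 * device 320; the same message index works for every language table. */
static uint32_t message_entry_address(uint8_t language_index, uint8_t msg_index)
{
    return TABLE_BASE_ADDR
         + (uint32_t)language_index * TABLE_SIZE_BYTES  /* select table 330-(language_index + 1) */
         + (uint32_t)msg_index * ENTRY_SIZE_BYTES;      /* select the entry within that table    */
}
```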
A data sink 314 and an audio data switch 318 are also included in the headset 30 in
Referring now to
A first step 402 in the process 40 involves an initialization procedure in which the user 110 selects, from a plurality of available natural languages, the natural language in which the verbal messages are to be verbalized. After the initialization step 402 is completed, the process 40 holds in an idle state 404, waiting for an event signal that triggers verbal message generation.
Once an event signal is received at step 406, indicating an MMI command or a change in device state, the verbal message generation process commences. An event signal can be triggered automatically according to a predetermined update schedule, manually (e.g., by the user 110), by a detected MMI command entered by the user 110 (e.g., mute on/off, volume up/down), or by a detected change in a monitored device state of the headset 102 (e.g., the headset 102 coming within range or going out of range of the audio source 120 or audio sink 122, low battery, etc.).
In response to a detected event signal at step 406, the memory address of the appropriate verbal message stored in the verbal message data table is determined at step 408. Once the unique memory address is determined, the verbal message is accessed and retrieved at step 410. Next, at decision 412 it is determined whether the retrieved verbal message is in an encoded data format. If “yes”, at step 414 the retrieved verbal message is decoded and/or decompressed accordingly, and the process 40 then moves to decision 416. If “no” at decision 412, the process 40 proceeds directly to decision 416 without any decoding or decompressing.
At decision 416, the verbal message playback mode is checked to determine whether the verbal messages are to be played back exclusively, rather than mixed with the audio data signals received from the audio source 120. If “yes”, at step 418 the receive path that directs the received audio data signals to the acoustic transducer or speaker 350 is temporarily disabled or blocked, and the process 40 moves to step 420. If “no”, the process 40 moves directly to step 420, in which the retrieved verbal message corresponding to the detected event signal is converted to a verbal acoustic message that is verbalized to the user 110 by the acoustic transducer 350 (e.g., a speaker). Finally, the process 40 returns to the idle state 404, awaiting a subsequent event signal.
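A condensed C sketch of steps 408 through 420 is given below, by way of example only. The helper routines (lookup_message_for_event(), decode_message(), exclusive_playback_mode(), disable_receive_path(), enable_receive_path(), transducer_play()) are hypothetical stand-ins for the memory device 320, the decoder 346, the receive path and the acoustic transducer 350; re-enabling the receive path after playback reflects the "temporarily disabled" behavior described above.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical helpers; signatures are illustrative only. */
typedef struct { const uint8_t *data; size_t len; bool encoded; } msg_t;

msg_t  lookup_message_for_event(int event_code);                  /* steps 408-410 */
size_t decode_message(const msg_t *m, int16_t *pcm, size_t max);  /* step 414      */
bool   exclusive_playback_mode(void);                             /* decision 416  */
void   disable_receive_path(void);                                /* step 418      */
void   enable_receive_path(void);
void   transducer_play(const int16_t *pcm, size_t n_samples);     /* step 420      */

static void handle_event(int event_code)
{
    int16_t pcm[1024];
    size_t  n_samples;

    /* Steps 408-410: determine the memory address of the appropriate
     * verbal message and retrieve it. */
    msg_t m = lookup_message_for_event(event_code);

    /* Decision 412 / step 414: decode or decompress only if the stored
     * message is in an encoded data format. */
    if (m.encoded) {
        n_samples = decode_message(&m, pcm, 1024);
    } else {
        n_samples = m.len / sizeof(int16_t);
        if (n_samples > 1024)
            n_samples = 1024;
        memcpy(pcm, m.data, n_samples * sizeof(int16_t));
    }

    /* Decision 416 / step 418: if the message is to be played back
     * exclusively, temporarily block the receive path. */
    bool exclusive = exclusive_playback_mode();
    if (exclusive)
        disable_receive_path();

    /* Step 420: verbalize the message through the acoustic transducer. */
    transducer_play(pcm, n_samples);

    if (exclusive)
        enable_receive_path();
}
```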
Although the present invention has been described with reference to specific embodiments thereof, these embodiments are merely illustrative of, and not restrictive of, the present invention. Various modifications or changes to the specifically disclosed exemplary embodiments will suggest themselves to persons skilled in the art. For example, whereas the head-worn electronic device has been shown and described as a headset comprising a binaural headphone having a headset top that fits over a user's head, other headset types including, without limitation, monaural, earbud-type, canal-phone-type, etc. may also be used. Depending on the application, the various types of headsets may or may not include a microphone for providing two-way communications. Moreover, while some of the exemplary embodiments have been described in the context of a headset, those of ordinary skill in the art will readily appreciate and understand that the methods, systems and apparatus of the invention may be adapted or modified to work with other types of head-worn electronic devices. In summary, the scope of the invention should not be restricted to the specific exemplary embodiments disclosed herein, and all modifications that are readily suggested to those of ordinary skill in the art should be included within the spirit and purview of this application and the scope of the appended claims.