This invention relates generally to portable or mobile computer terminals and more specifically to mobile terminals having speech functionality for executing, directing, and assisting tasks using voice or speech.
Wearable, mobile and/or portable computers or terminals are used for a wide variety of tasks. Such mobile computers allow the workers or users using or wearing them (“users”) to maintain mobility, while providing those users with desirable computing and data-processing functions. Furthermore, such mobile computers often provide a communication link to a larger, more centralized computer system that directs the activities of the user and processes any user inputs, such as collected data. One example of a specific use for a wearable/mobile/portable computer is a voice-directed system that involves speech and speech recognition for interfacing with a user to direct the user's tasks and collect data gathered during task execution.
An overall integrated system may utilize a central computer system that runs a variety of programs, such as a program for directing or assisting a user in their day-to-day tasks. A plurality of mobile computers is employed by the users of the system to communicate (usually in a wireless fashion) with the central system. The users perform manual tasks according to voice instructions and information they receive through the mobile computers, via the central system. The mobile computer terminals also allow the users to interface with the central computer system using voice, such as to respond to inquiries, obtain information, confirm the completion of certain tasks, or enter data.
In one embodiment, mobile computers having voice or speech capabilities are in the form of separate, wearable units. The computer is worn on the body of a user, such as around the waist, and a headset device connects to the mobile computer, such as with a cord or cable, or possibly in a wireless fashion. In another embodiment, the mobile computer might be implemented directly in the headset. In either case, the headset has a microphone for capturing the voice of the user for voice data entry and commands. The headset also includes one or more ear speakers, both for confirming the spoken words of the user and for playing voice instructions and other audio that are generated or synthesized by the mobile computer. Through the headset, the workers are able to receive voice instructions and questions about their tasks, receive information about their tasks, ask and answer questions, report the progress of their tasks, and report working conditions, for example.
The mobile speech computers provide significant efficiency in the performance of the workers' tasks. Specifically, using such mobile computers, the work is done virtually hands-free, without equipment to juggle or paperwork to carry around. However, while existing speech systems provide hands-free operation, they also have various drawbacks associated with their configuration.
One drawback of current systems involves the controls on the mobile computer. There are generally three ways to operate the controls: stopping the task and looking at the controls, feeling around the controls for textures or features, or simply operating the controls to see what happens. For example, in a speech system, a typical adjustment for a worker may be adjusting the volume of the associated headset using volume control buttons as the worker moves from one location in a warehouse to another. To look at the controls, the worker may have to shift the mobile computer, which may be worn about waist level on a belt or other article of clothing, and divert their eyes from the task at hand to look at the control buttons. In the case of a headset computer, the worker may actually have to take the headset off.
Alternatively, feeling around for certain shapes or textures requires knowledge of the terminal. More experienced workers familiar with the mobile computer may be able to select the proper control button by counting the buttons from left to right. This method, though, requires familiarity with the mobile computer as well as feeling around on the device to find a reference button. Finally, the option of experimentally trying buttons to see what happens is not a particularly desirable tactic.
Accordingly, there is a need, unmet by current mobile devices such as mobile computers, to address the issues noted above. There is a particularly unmet need in the area of controls for mobile devices used for eyes-free, speech-directed work.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and, together with a general description of the invention given above and the Detailed Description given below, serve to explain the invention.
The invention addresses the problems with the prior art by providing an aural feedback apparatus for the user controls of a device. In one embodiment, a device, such as a portable computer, has a processing unit with a processor, and controls disposed on a surface of the device, which are operable for controlling the processing unit and the operation of the device. The controls include at least one button or other control mechanism with capacitive sensing. The button is operable to generate a first indication, such as an audible indication, to a user when touched and is further operable to interact with the processing unit to execute a function when depressed or otherwise engaged. The device may be in the form of a mobile, portable or wearable computer that is configured for wireless communications to communicate with a central processor. The invention is not limited thereto, however, and might be used on other devices that utilize headsets or speakers worn by a user. Therefore, while the disclosed exemplary embodiments illustrate the invention in a mobile or wearable computer device, the invention is not so limited. User controls that are configured with capacitive sensing and an audible indication facilitate easy identification of the control buttons without a user having to look directly at the controls. Likewise, easy identification allows the user to concentrate on the task at hand rather than on the controls, thus increasing efficiency and safety.
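By way of illustration only, and not limitation, the two-stage interaction described above (an audible indication upon touch, and execution of a control function only upon an actual press) might be organized in device firmware along the lines of the following sketch. The states, routine names, and structure shown are assumptions made purely for exposition and do not describe any particular embodiment.

```c
/* Non-limiting illustrative sketch only: the states and routines below are
 * assumptions for exposition, not part of any particular embodiment. */
#include <stdbool.h>

typedef enum { CONTROL_IDLE, CONTROL_TOUCHED, CONTROL_PRESSED } control_state;

extern void emit_audible_indication(int button);  /* hypothetical: first indication on touch */
extern void execute_control_function(int button); /* hypothetical: function run on press     */

/* Two-stage handling: touching a button identifies it audibly;
 * only depressing the button actually executes its function. */
control_state update_control(control_state state, int button,
                             bool touched, bool pressed)
{
    if (pressed && state != CONTROL_PRESSED) {
        execute_control_function(button);
        return CONTROL_PRESSED;
    }
    if (touched && state == CONTROL_IDLE) {
        emit_audible_indication(button);
        return CONTROL_TOUCHED;
    }
    if (!touched && !pressed) {
        return CONTROL_IDLE;
    }
    return state;
}
```

Under this sketch, the audible indication is produced once per touch, so the user can locate a control without inadvertently triggering it.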
Turning now to the drawings, wherein like numbers denote like parts throughout the several views,
Instructions that the worker 12 receives from the mobile computer through the headset 14 may be related to any number of voice-directed work tasks. The mobile computer 16 has a control panel 20 with control buttons 18 disposed on the surface with which the worker 12 may control functions of the mobile computer 16. These adjustments may include increasing or decreasing the volume, turning the device on and off, or replaying or pausing an instruction to the worker 12, for example. Other control functions might also be provided. Although
Referring now to
In accordance with one aspect of the invention, in addition to the visual indications on the buttons 18 of the control panel 20, the control buttons 18 are also equipped with capacitive sensing. Capacitive sensing may be used to indicate that a particular button 18 has been touched by a user 12, such as by the finger 12′ of the user. Once the mobile computer senses that a particular control button 18 has been touched, it provides audible or aural feedback to the user, such as a sound or speech, through the speaker(s) of the headset 14.
When the control circuitry 36 detects the capacitive change caused by the disruption of the electric field 34, the control circuitry 36 sends an electrical signal to the processor 38 indicating which of the buttons 18 has been touched. The processor 38 may then use the sound circuit 40 to generate a particular sound or tone that is unique to that button 18. The tone or sound is electrically transmitted to a speaker 42, such as a speaker in the headset 14, which then produces an audible indication 44 of the tone for the user 12. In accordance with one aspect of the invention, each button may have its own unique sound or tone associated therewith. From the audible indication or feedback 44, the user 12 may readily identify, by ear, which of the buttons 18 they have touched. Once the user 12 has determined that they have found the correct control button 18, the user 12 may then depress the button or otherwise activate the control device, which causes the processor 38 to operate the mobile computer or other device 16 to perform the control function associated with that button 18. Therefore, the control circuitry 36 is configured, and is appropriately coupled with the control buttons 18, so that the control circuitry will know when a user is touching a button but not pressing it, and when a user is pressing the button to activate a function.
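As a further non-limiting illustration of the signal flow just described, a per-button tone table and separate touch and press handlers might resemble the following sketch. The button identifiers, tone frequencies, and routine names here are assumptions for exposition only, not features of any particular embodiment.

```c
/* Non-limiting illustrative sketch: hypothetical per-button tones and
 * handlers, assuming the control circuitry reports touch events and press
 * events separately for each button. */
#include <stdint.h>

enum button_id { BTN_VOL_UP, BTN_VOL_DOWN, BTN_PAUSE, BTN_REPLAY, BTN_COUNT };

/* Each button is given its own tone so the user can identify it by ear. */
static const uint16_t button_tone_hz[BTN_COUNT] = {
    [BTN_VOL_UP]   = 880,
    [BTN_VOL_DOWN] = 660,
    [BTN_PAUSE]    = 440,
    [BTN_REPLAY]   = 330,
};

extern void sound_circuit_play_tone(uint16_t hz);        /* hypothetical: drives the headset speaker */
extern void perform_control_function(enum button_id id); /* hypothetical: e.g. change the volume     */

/* Called when the control circuitry reports a capacitive change (touch only). */
void on_button_touched(enum button_id id)
{
    sound_circuit_play_tone(button_tone_hz[id]);   /* audible indication of which button was touched */
}

/* Called only when the button is physically depressed. */
void on_button_pressed(enum button_id id)
{
    perform_control_function(id);                  /* execute the associated control function */
}
```

Keeping the touch path and the press path separate in this manner reflects the distinction drawn above between identifying a control and activating it.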
In other embodiments, the audible indication 44 sent to the speaker 42 may be replaced by speech. An alternate embodiment utilizing speech may be seen in
The speech patterns selectable by the processor 38 may be in the form of a pre-recorded voice that is stored in the mobile computer 16. In other embodiments, the speech patterns may be generated as a synthesized voice from data that may also be stored in the mobile computer 16. The speech output through the speaker 42 may indicate the function of the button, for example, by the phrases “volume up,” “volume down,” “pause,” or “replay.”
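Again purely by way of illustration, substituting speech for the tone feedback might be sketched as follows, assuming hypothetical pre-recorded clips stored in the mobile computer 16 and reusing the same hypothetical button identifiers as in the previous sketch. The data layout and playback routine shown are assumptions for exposition only.

```c
/* Non-limiting illustrative sketch: hypothetical pre-recorded speech clips,
 * one per button, played in place of a tone when a button is touched. */
#include <stddef.h>
#include <stdint.h>

enum button_id { BTN_VOL_UP, BTN_VOL_DOWN, BTN_PAUSE, BTN_REPLAY, BTN_COUNT };

typedef struct {
    const int16_t *samples;   /* stored PCM for a phrase such as "volume up" */
    size_t         length;    /* number of samples in the clip               */
} speech_clip;

/* One stored phrase per button, announcing that button's function. */
extern const speech_clip button_phrase[BTN_COUNT];

extern void speaker_play_pcm(const int16_t *samples, size_t length); /* hypothetical playback routine */

/* On touch, speak the control's name rather than playing a tone. */
void announce_button(enum button_id id)
{
    const speech_clip *clip = &button_phrase[id];
    speaker_play_pcm(clip->samples, clip->length);
}
```

A synthesized-voice embodiment could populate the same per-button table at run time rather than storing fixed recordings, leaving the touch-feedback path otherwise unchanged.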
Referring now to
While the embodiments above have been illustrated using a capacitive sensing method in which touch is detected by generating an electric field and sensing disturbances in that field, a person skilled in the art will recognize that any method of capacitive sensing may be utilized in place of the electric fields in the embodiments shown. Other embodiments may utilize capacitive matrix sensing, as well as other techniques, and still be within the scope of the invention. Similarly, the audio CODEC module for the speech synthesis may be replaced by any other module suitable for changing electrical signals into speech which may then be sent to a speaker, as would be apparent to one skilled in the art.
While the present invention has been illustrated by the description of the embodiments thereof, and while the embodiments have been described in considerable detail, it is not the intention of the applicant to restrict or in any way limit the scope of the appended claims to such detail. Additional advantages and modifications will readily appear to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details, representative apparatus and method, and illustrative examples shown and described. Accordingly, departures may be made from such details without departing from the spirit or scope of applicant's general inventive concept.