The present invention relates to a computer interface. More particularly, the present invention relates to a system and method for interfacing with a computer by way of audio communications. Even more particularly, the present invention relates to a voice recognition system and method for receiving audio input, a module for interacting with computer applications and a voice synthesis module for transmitting audio output.
The public is increasingly using computers to store and access information that affects their daily lives. Personal information such as appointments, tasks and contacts, as well as enterprise data such as data in spreadsheets, databases, word processing documents and the like are all types of information that are particularly amenable to storage in a computer because of the ease of updating, organizing, and accessing such information. In addition, computers are able to remotely access time-sensitive information, such as stock quotes, weather reports and so forth, on or near a real-time basis from the Internet or another network. To perform all of the tasks required of them, computers have become quite sophisticated and computationally powerful. Thus, while a user has access to his or her computer—in other words, while the user is at home or at the office—the user is able to easily access such computational power to perform a desired task.
In many situations, however, a user will require access to such information while traveling or while simply away from his or her computer. Unfortunately, the full computing power of a computer is, for the most part, immobile. For example, a desktop computer is designed to be placed at a fixed location, and is, therefore, unsuitable for mobile applications. Laptop computers are much more transportable than desktop computers, and have comparable computing power, but are costly and still fairly cumbersome. In addition, wireless Internet connectivity is expensive and still not widely available, and a cellular phone connection for such a laptop is slow by current Internet standards. In addition, having remote Internet connectivity is duplicative of the Internet connectivity a user may have at his or her home or office, with attendant duplication of costs.
Conventionally, a personal digital assistant (“PDA”) can be used to access a user's information. Such a PDA can connect intermittently with a computer through a cradle or IR beam and thereby upload or download information with the computer. Some PDAs can access the information through a wireless connection, or may double as a cellular phone. However, PDAs have numerous shortcomings. For example, PDAs are expensive, often duplicate some of the computing power that already exists in the user's computer, sometimes require a subscription to an expensive service, often require synchronization with a base station or personal computer, are difficult to use—both in terms of learning to use a PDA and in terms of a PDA's small screen and input devices requiring two-handed use—and have limited functionality as compared to a user's computer. As the amount of mobile computing power is increased, the expense and complexity of PDAs increases as well. In addition, because a conventional PDA stores the user's information on-board, a PDA carries with it the risk of data loss through theft or loss of the PDA.
As the size, cost and portability of cellular phones has improved, the use of cellular phones has become almost universal. Some conventional cellular phones have limited voice activation capability to perform simple tasks using audio commands such as calling a specified person. Similarly, some automobiles and advanced cellular phones can recognize sounds in the context of receiving simple commands. In such conventional systems, the software involved simply identifies a known command (i.e., sound) which causes the desired function, such as calling a desired person, to be performed. In other words, a conventional system matches a sound to a desired function, without determining the meaning of the word(s) spoken. Similarly, conventional software applications exist that permit an email message to be spoken to a user by way of a cellular phone. In such an application, the cellular phone simply relays a command to the software, which then plays the message.
Conventional software that is capable of recognizing speech is either server-based or primarily for a user that is co-located with the computer. For example, voice recognition systems for call centers need to be run on powerful servers due to the systems' large size and complexity. Such systems are large and complex in part because they need to be able to recognize speech from speakers having a variety of accents and speech patterns. Such systems, despite their complex nature, are still typically limited to menu-driven responses. In other words, a caller to a typical voice recognition software package must proceed through one or more layers of a menu to get to the desired functions, rather than being able to simply speak the desired request and have the system recognize the request. Conventional speech recognition software that is designed to run on a personal computer is primarily directed to dictation, and such software is further limited to being used while the user is in front of the computer and to accessing simple menu items that are determined by the software. Thus, conventional speech recognition software merely serves to act as a replacement for or a supplement to typical input devices, such as a keyboard or mouse.
Furthermore, conventional PDAs, cellular phones and laptop computers have the shortcoming that each is largely unable to perform the other's functions. Advanced wireless devices combine the functionality of PDAs and cellular phones, but are very expensive. Thus, a user either has to purchase a device capable of performing the functions of a PDA, cellular phone, and possibly even a laptop—at great expense—or the user will more likely purchase an individual cellular phone, a PDA, and/or a laptop.
Accordingly, what is needed is a portable means for communicating with a computer. More particularly, what is needed is a system and method for verbally communicating with a computer to obtain information by way of an inexpensive, portable device, such as a cellular phone. Even more particularly, what is needed is a system and method of operatively interconnecting multiple computing programs operating on a computer to provide an integrated system for sending commands to and receiving information from a remote computer.
In light of the foregoing limitations and drawbacks, a method and system for interacting with data stored on a computer is provided. In the method, a communications connection between a computer and a user is established by way of a remote communications device. A spoken utterance or audio signal is received from the user by way of the remote communications device. The utterance or signal is processed to determine a desired function, and the desired function is performed with respect to the data stored on the computer in accordance with the spoken utterance.
In the system, a communications channel enables communication between the computer and a remote communications device, and the channel may be initiated by either the computer or the remote communications device. A voice recognition component receives an audio input and converts the input to textual form. A text-to-voice component converts textual data to spoken form, and a file interface component interacts with a file having the data stored therein. An interface program receives an audio input by way of the communications channel, causes the voice recognition component to convert the input to textual form so that a desired function can be determined, causes the file interface to interact with the file according to the desired function, and causes the text-to-voice component to provide a result or confirmation of the desired action in spoken form to the remote communications device, and/or causes the desired action to be performed.
The foregoing summary, as well as the following detailed description of preferred embodiments, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the invention, there is shown in the drawings exemplary embodiments of the invention; however, the invention is not limited to the specific methods and instrumentalities disclosed. In the drawings:
FIG. 1 is a diagram of an exemplary computer in which aspects of the present invention may be implemented;
FIGS. 2A-C are diagrams of exemplary computer configurations in which aspects of the present invention may be implemented;
FIG. 3 is a diagram of exemplary software and hardware components of a computer in accordance with an embodiment of the present invention;
FIGS. 4A-C are flowcharts of an exemplary method of a user-initiated transaction in accordance with an embodiment of the present invention;
FIG. 5 is a flowchart of an exemplary method of a computer-initiated transaction in accordance with an embodiment of the present invention;
FIGS. 6A-F are screenshots illustrating an exemplary interface program in accordance with an embodiment of the present invention; and
FIGS. 7A-B are screenshots illustrating an exemplary spreadsheet in accordance with an embodiment of the present invention.
A system and method for operatively connecting a remote communications device with a computer by way of audio commands is described herein. In one embodiment of the present invention, a remote communications device such as, for example, a cellular phone, wireless transceiver, microphone, wired telephone or the like is used to transmit an audio or spoken command to a user's computer. In another embodiment, the user's computer initiates a spoken announcement or the like to the user by way of the same remote communications device. An interface program running on the user's computer operatively interconnects, for example, speech recognition software to recognize the user's spoken utterance, text-to-speech software to communicate with the user, appointment and/or email software, spreadsheets, databases, the Internet or other network and/or the like. The interface program also can interface with computer I/O ports to communicate with external electronic devices such as actuators, sensors, fax machines, telephone devices, stereos, appliances, and the like. It will be appreciated that in such a manner an embodiment of the present invention enables a user to use a portable communications device to communicate with his or her computer from any location.
For example, in one embodiment, a user may operate a cellular phone to call his or her computer. Upon establishing communications, the user may request any type of information the software component is configured to access. In another embodiment, the computer may contact the user by way of such cellular phone to, for example, notify the user of an appointment or the like. It will also be appreciated that the cellular phone need not perform any voice recognition or contain any of the user information that the user wishes to access. In fact, a conventional, “off-the-shelf” cellular phone or the like may be used with a computer running software according to one embodiment of the present invention. As a result, an embodiment of the present invention enables a user to use the extensive computing power of his or her computer from any location, and by using any of a wide variety of communications devices.
An example of such a computer, in accordance with one embodiment, is illustrated below in connection with FIG. 1.
Turning now to FIG. 1, an exemplary computer 100 in which aspects of the present invention may be implemented is illustrated.
In an embodiment, computer 100 is also operatively connected to a network 120 such as, for example, the Internet, an intranet or the like. Computer 100 further comprises a processor 112 for data processing, memory 110 for storing data, and input/output (I/O) 114 for communicating with the network 120 and/or another communications medium such as a telephone line or the like. It will be appreciated that processor 112 of computer 100 may be a single processor, or may be a plurality of interconnected processors. Memory 110 may be, for example, RAM, ROM, a hard drive, CD-ROM, USB storage device, or the like, or any combination of such types of memory. In addition, memory 110 may be located internal or external to computer 100. I/O 114 may be any hardware and/or software component that permits a user or external device to communicate with computer 100. The I/O 114 may be a plurality of devices located internally and/or externally.
Turning now to FIGS. 2A-C, diagrams of exemplary computer configurations in which aspects of the present invention may be implemented are illustrated. In FIG. 2A, a local telephone 206 is connected directly to the computer 100.
In FIG. 2B, a remote telephone 204 or a cellular phone 208 communicates with the computer 100 by way of a telephone network.
For example, in one embodiment, a user may call a telephone number corresponding to local telephone 206 by way of remote telephone 204 or cellular phone 208. In such an embodiment, computer 100 monitors all incoming calls for a predetermined signal or the like, and upon detecting such signal, the computer 100 forwards such information from the call to the interface program or other software component. In such a manner, computer 100 may, upon connecting to the call, receive a spoken command or request from the user and issue a response. Conversely, the computer 100 may initiate a conversation with the user by calling the user at either remote telephone 204 or cellular phone 208. As may be appreciated, computer 100 may have telephone-dialing capabilities, or may use local telephone 206, if present, to accomplish the same function.
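By way of illustration only, the following Python sketch shows one way such call screening might be performed. The IncomingCall class and the passcode value are hypothetical stand-ins for the telephony interface and the predetermined signal actually employed, and the sketch is not drawn from the embodiments described herein.

```python
# Minimal sketch of call screening, assuming a hypothetical telephony layer.
# The IncomingCall class stands in for whatever telephony hardware/software
# actually answers the line; only the screening logic is illustrated.

from dataclasses import dataclass

EXPECTED_PASSCODE = "4321"          # predetermined signal the computer listens for


@dataclass
class IncomingCall:
    caller_id: str
    dtmf_digits: str                # digits keyed by the caller after connecting


def screen_call(call: IncomingCall) -> bool:
    """Return True if the call carries the predetermined signal and should be
    forwarded to the interface program; otherwise the call is ignored."""
    return call.dtmf_digits.startswith(EXPECTED_PASSCODE)


if __name__ == "__main__":
    accepted = IncomingCall(caller_id="555-0100", dtmf_digits="4321")
    rejected = IncomingCall(caller_id="555-0199", dtmf_digits="0000")
    print(screen_call(accepted))    # True  -> hand audio to the interface program
    print(screen_call(rejected))    # False -> treat as an ordinary call
```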
It will be appreciated that a telephone 204-208 may be any type of instrument for reproducing sounds at a distance in which sound is converted into electrical impulses (in either analog or digital format) and transmitted either by way of wire or wirelessly by, for example, a cellular network or the like. As may be appreciated, an embodiment's use of a telephone for remotely accessing a computer 100 ensures relatively low cost and ready availability of handsets for the user. In addition, any type or number of peripherals may be employed in connection with a telephone, and any such type of peripheral is equally consistent with an embodiment of the present invention. In addition, any type of filtering or noise cancellation hardware or software may be used—either at a telephone such as telephones 204-208 or at the computer 100—so as to increase the signal strength and/or clarity of the signal received from such telephone 204-208.
Local telephone 206 may, for example, be a corded or cordless telephone for use at a location remote from the computer 100 while remaining in a household environment. In an alternate embodiment such as, for example, in an office environment, multi-line and/or long-range cordless telephone(s) may be used in connection with the present invention. It will be appreciated that while an embodiment of the present invention is described herein in the context of a single user operating a single telephone 204-208, any number of users and telephones 204-208 may be used, and any such number is consistent with an embodiment of the present invention. As mentioned previously, local telephone 206 may also be a cellular telephone or other device capable of communicating via a cellular telephone network.
Devices such as pagers, push-to-talk radios, and the like may be connected to computer 100 in addition to or in place of telephones 204-208. As will be appreciated, all or most of the user's information is stored in computer 100. Therefore, if a remote communications device such as, for example, one of telephones 204-208 is lost, the user can quickly and inexpensively replace the device without any loss of data.
Turning now to FIG. 2C, the computer 100 is operatively connected to a network 120, by way of which a remote communications device may communicate with the computer 100.
An example of how such telephone communication may be configured is by way of a Voice Over Internet Protocol (VoIP) connection. In such an embodiment, any remote phone may be able to dial the computer 100 directly, and connect to the interface program by way of an aspect of network 120. Such an interface program is discussed in greater detail below in connection with FIG. 3.
Thus, several exemplary configurations of a user computer 100 in which aspects of the present invention may be implemented are presented. As may be appreciated, any manner of operatively connecting a user to a computer 100, whereby the user may verbally communicate with such computer 100, is equally consistent with an embodiment of the present invention.
As may also be appreciated, therefore, any means for remotely communicating with computer 100 is equally consistent with an embodiment of the present invention. Additional equipment may be necessary for such computer 100 to effectively communicate with such remote communications device, depending on the type of communications medium employed. For example, the input to a speech recognition engine generally is received from a standard input such as a microphone. Similarly, the output from a text-to-speech engine generally is sent to a standard output device such as a speaker. In the same manner, a communications device, such as a cellular telephone, may be capable of receiving input from a (headset) microphone and transmitting output to a (headset) speaker. Accordingly, an embodiment of the present invention provides connections between the speech engines and a communications device directly connected to the computer (e.g., telephone 206 as shown in FIG. 2A) or located remotely from the computer (e.g., remote telephone 204 or cellular phone 208 as shown in FIGS. 2B-C).
In a basic embodiment, such transference is accomplished between a telephone 206 that is external to the computer and the speech engines by using patch-cords (as in FIG. 2A).
Another embodiment of such signal transference and conditioning involves “softphone” software, operating at the computer 100 in conjunction with the interface program. Such software facilitates VoIP telephonic communication, placing and receiving telephone calls on a computer 100 using the Session Initiation Protocol (SIP) standard or other protocols such as H.323. One example of such software is X-PRO, which is manufactured by Xten Networks, Inc., of Burnaby, British Columbia, Canada. Softphone software generally sends telephonic sound to a user by way of local speakers or a headset, and generally receives telephone voice by way of a local microphone. Often the particular audio devices to be used by the softphone software can be selected as a user setting, as sometimes a computer 100 has multiple audio devices available. As noted above, text-to-speech software generally sends sound (output) to its local user by way of local speakers or a headset, and speech recognition software generally receives voice (input) by way of a local microphone.
Accordingly, the softphone software must be linked by an embodiment of the present invention to the text-to-speech software and the speech recognition software. Such a linkage may be accomplished in any number of ways and involving either hardware or software, or a combination thereof. In one embodiment, a hardware audio device is assigned to each application, and then the appropriate output ports and input ports are linked using patch cables. Such an arrangement permits audio to flow from the softphone to the speech recognition software, and from the text-to-speech software to the softphone software. As may be appreciated, such an arrangement entails connecting speaker output ports to microphone input ports and therefore in one embodiment impedance-matching in the patch cables is used to mitigate sound distortion.
Another embodiment uses special software to link the audio signals between applications. An example of such software is Virtual Audio Cable (software written by Eugene V. Muzychenko), which emulates audio cables entirely in software, so that different software programs that send and receive audio signals can be readily connected. In such an embodiment, a pair of Virtual Audio Cables are configured to permit audio to flow from the softphone to the speech recognition software, and from the text-to-speech software to the softphone software. In yet another embodiment, the softphone software, the text-to-speech software and the speech recognition software are modified or otherwise integrated so the requirement for an external audio transference device is obviated entirely.
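By way of illustration only, the following Python sketch shows how audio devices corresponding to such virtual cables might be located by name so that the softphone, speech recognition and text-to-speech audio paths can be linked. The device names “CABLE Input” and “CABLE Output” are assumptions that depend on the particular virtual-cable driver installed; the sounddevice package is used here only to enumerate devices.

```python
# Sketch of selecting virtual audio devices by name so that softphone output
# feeds the speech recognizer and text-to-speech output feeds the softphone.
# Device names such as "CABLE Input" / "CABLE Output" are assumptions that
# depend on the virtual-cable driver actually installed.

import sounddevice as sd


def find_device(name_fragment: str, kind: str) -> int | None:
    """Return the index of the first device whose name contains name_fragment
    and which has channels of the requested kind ('input' or 'output')."""
    key = "max_input_channels" if kind == "input" else "max_output_channels"
    for index, device in enumerate(sd.query_devices()):
        if name_fragment.lower() in device["name"].lower() and device[key] > 0:
            return index
    return None


if __name__ == "__main__":
    # The recognizer should record from the cable the softphone plays into,
    # and the text-to-speech engine should play into the cable the softphone
    # records from.
    recognizer_input = find_device("CABLE Output", "input")
    tts_output = find_device("CABLE Input", "output")
    print("speech recognition input device:", recognizer_input)
    print("text-to-speech output device:   ", tts_output)
```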
Turning now to FIG. 3, a block diagram of exemplary hardware and software components of a computer 100 in which aspects of the present invention may be implemented is illustrated.
It will be appreciated that each software and/or hardware component illustrated in FIG. 3 is exemplary, and that any arrangement of software and/or hardware capable of performing the functions described herein may be employed in an embodiment of the present invention.
A telephony input 302 is any type of component that permits a user to communicate by way of spoken utterances or audio commands (e.g., DTMF signals) with the computer 100 via, for example, input devices as discussed above in connection with FIGS. 2A-C. Likewise, a telephony output 304 is provided for outputting electrical signals as sound for a user to hear. It will be appreciated that both telephony input 302 and telephony output 304 may be adapted for other purposes such as, for example, receiving and transmitting signals to a telephone or to network 120, including having the functionality necessary to establish a connection by way of such telephone or network 120. Telephony input 302 and output 304 may be hardware internal or external to the computer 100, or software, such as a softphone application and an associated network interface card.
Also provided is voice recognition software 310 which, as the name implies, is adapted to accept an electronic signal—such as a signal received by telephony input 302—wherein the signal represents a spoken utterance by a user, and to decipher such utterance. Voice recognition software 310 may be, for example, any type of specialized or off-the-shelf voice recognition software. Such recognition software 310 may include user training for better-optimized speech recognition. In addition, a text-to-speech engine 315 for communicating with a user is illustrated. Such text-to-speech engine 315, in an embodiment, generates spoken statements from electronic data, which are then transmitted to the user. In an embodiment as illustrated in FIG. 3, a natural language processing module 325 may also be provided to assist in determining the meaning of a recognized utterance.
User data 320 comprises any kind of information that is stored on or accessible to computer 100 and that may be accessed and used in accordance with an embodiment of the present invention. For example, a personal information data file 322 may be any type of computer file that contains any type of information. Email, appointment files, personal information and the like are examples of the type of information that is stored in a personal information database. Additionally, such a personal information data file 322 may be a type of file such as, for example, a spreadsheet, database, document file, email data, and so forth. Furthermore, such a data file 322 (as well as data file 324, below) may be able to perform tasks at the user's direction such as, for example, open a garage door, print a document, send a fax, send an e-mail, turn on and/or control a household appliance, record or play a television or radio program, interface with communications devices and/or systems, and so forth. Such functionality may be included in the data file 322-324, or may be accessible to such data file 322-324 by way of, for example, telephony input 302 and output 304, Input/Output 350, and/or the like. It will be appreciated that the interface program 300 may be able to carry out such tasks using components, such as those discussed above, that are internal to the computer 100, or the program 300 may interface—using telephony input 302 and output 304, Input/Output 350, and/or the like—with devices external to the computer 100.
An additional file that may be accessed by computer 100 on behalf of a user is a network-based data file 324. Such a data file 324 contains macros, XML tags, or other functionality that accesses a network 120, such as the Internet, to obtain up-to-date information for the user. Such information may be, for example, stock prices, weather reports, news, and the like. Another example of such a data file 324 will be discussed below in the context of an Internet-enabled spreadsheet in FIGS. 7A-B. As will be appreciated, the term user data 320 as used herein refers to any type of data file including the data files 322 and/or 324. A data file interface 335 is provided to permit the interface program 300 to access the user data 320. As may be appreciated, there may be a single data file interface 335, or a plurality of interfaces 335 which may interface only with specific files or file types. Also, in one embodiment, a system clock 340 is provided for enabling the interface program 300 to determine time and date information. In addition, in an embodiment an Input/Output 350 is provided for interfacing with external devices, components, and the like. For example, Input/Output 350 may comprise one or more of a printer port, serial port, USB port, and/or the like.
Operatively connected (as indicated by the dotted lines) to the aforementioned hardware and software components is the interface program 300. Details of an exemplary user interface associated with such interface program 300 are discussed below in connection with FIGS. 6A-F. However, the interface program 300 itself is either a stand-alone program, or a software component that orchestrates the performance of tasks in accordance with an embodiment of the present invention. For example, the interface program 300 controls the other software components, and also controls what user data 320 is open and what “grammars” (expected phrases to be uttered by a user) are listened for.
It will be appreciated that the interface program 300 need not itself contain the user data 320 in which the user is interested. In such a manner, the interface program 300 remains a relatively small and efficient program that can be modified and updated independently of any user data 320 or other software components as discussed above. In addition, such a modular configuration enables the interface program 300 to be used in any computer 100 that is running any type of software components. As a result, compatibility concerns are alleviated. Furthermore, it will be appreciated that the interface program's 300 use of components and programs that are designed to operate on a computer 100, such as a personal computer, enables sophisticated voice recognition to occur in a non-server computing environment. Accordingly, the interface program 300 interfaces with programs that are designed to run on a computer 100—as opposed to a server—and are familiar to a computer 100 user. For example, such programs may be preexisting software applications that are part of, or accessible to, an operating system of computer 100. As may be appreciated, such programs may also be stand-alone applications, hardware interfaces, and/or the like.
It will also be appreciated that the modular nature of an embodiment of the present invention allows for the use of virtually any voice recognition software 310. However, the large variances in human speech patterns and dialects limits the accuracy of any such recognition software 310. Thus, in one embodiment, the accuracy of such software 310 is improved by limiting the context of the spoken material the software 310 is recognizing. For example, if the software 310 is limited to recognizing words from a particular subject area, the software 310 is more likely to correctly recognize an utterance—that may sound similar to any number of unrelated words—as a word that is related to the desired subject area. Therefore, in one embodiment, the user data 320 that is accessed by the interface program 300 is configured and organized in such a manner as to perform such context limiting. Such configuration can be done in the user data 320 itself, rather than requiring a change to the interface program 300 or other software components as illustrated in
For example, a spreadsheet application such as Microsoft® Excel or the like provides a means for storing and accessing data in a manner suitable for use with the interface program 300. Script files, alarm files, look-up files, command files, solver files and the like are all types of spreadsheet files that are available for use in an embodiment of the present invention. The use of a spreadsheet in connection with an embodiment of the present invention will be discussed in detail in connection with FIGS. 7A-B.
A script file is a spreadsheet that provides for a spoken dialogue between a user and a computer 100. For example, in one embodiment, one or more columns (or rows) of a spreadsheet represent a grammar that may be spoken by a user—and therefore will be recognized by the interface program 300—and one or more columns (or rows) of the spreadsheet represent the computer's 100 response. Thus, if a user says, for example, “hello,” the computer 100 may say “hi” or “good morning” or the like. Such a script file thereby enables a more user-friendly interaction with a computer 100.
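By way of illustration only, the following Python sketch shows a minimal script-file dialogue in which one column holds expected phrases (grammars) and a second column holds the computer's replies. A small CSV table stands in for the spreadsheet, and speak() is a placeholder for the text-to-speech engine 315.

```python
# Minimal sketch of a "script file" dialogue: one column holds phrases the
# user may speak (grammars) and a second column holds the computer's reply.
# A CSV table stands in for the spreadsheet described above; speak() is a
# placeholder for the text-to-speech engine.

import csv
import io

SCRIPT_CSV = """grammar,response
hello,hi there
good morning,good morning to you
goodbye,talk to you later
"""


def load_script(text: str) -> dict[str, str]:
    """Build a mapping of expected utterance -> spoken reply."""
    reader = csv.DictReader(io.StringIO(text))
    return {row["grammar"].strip().lower(): row["response"] for row in reader}


def speak(text: str) -> None:
    print("computer says:", text)          # stand-in for text-to-speech output


if __name__ == "__main__":
    script = load_script(SCRIPT_CSV)
    recognized = "hello"                    # stand-in for recognized speech
    speak(script.get(recognized, "I did not understand that"))
```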
An alarm file, in one embodiment, has entries in one or more columns (or rows) of a spreadsheet that correspond to a desired function. For example, an entry in the spreadsheet may correspond to a reminder, set for a particular date and/or time, for the user to take medication, attend a meeting, etc. Thus, the interface program 300 interfaces with a component such as the telephony output 304 to contact the user and inform him or her of the reminder. Thus, it will be appreciated that an alarm file is, in some embodiments, always active because it must be running to generate an action upon a predetermined condition.
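By way of illustration only, the following Python sketch shows a minimal always-active alarm loop in which reminder rows are polled and, when one comes due, a placeholder contact_user() function stands in for contacting the user through the telephony output 304.

```python
# Minimal sketch of an "alarm file": reminder rows are kept in memory and
# polled; when one comes due, contact_user() stands in for placing an
# outbound call through the telephony output.

import datetime as dt
import time

# (due time, message) rows, as they might be read from an alarm spreadsheet
ALARMS = [
    (dt.datetime.now() + dt.timedelta(seconds=2), "take medication"),
    (dt.datetime.now() + dt.timedelta(seconds=4), "attend the 3 o'clock meeting"),
]


def contact_user(message: str) -> None:
    print("calling user with reminder:", message)   # placeholder for an outbound call


def monitor_alarms(alarms: list[tuple[dt.datetime, str]]) -> None:
    """Poll the alarm list and fire each reminder once it comes due."""
    pending = sorted(alarms)
    while pending:
        due, message = pending[0]
        if dt.datetime.now() >= due:
            contact_user(message)
            pending.pop(0)
        else:
            time.sleep(0.5)


if __name__ == "__main__":
    monitor_alarms(ALARMS)
```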
A look-up file, in one embodiment, is a spreadsheet that contains information or is cross-referenced to information. In one embodiment, the information is contained entirely within the look-up file, while in other embodiments the look-up file references information from data sources outside of the look-up file. For example, spreadsheets may contain cells that reference data that is available on the Internet (using, for example, “smart tags” or the like), and that can be “refreshed” at a predetermined interval to ensure the information is up-to-date. Therefore, a look-up file may be used to find information for a user such as, for example, stock quotes, sports scores, weather conditions and the like. It will be appreciated that such information may be stored locally or remote to computer 100.
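By way of illustration only, the following Python sketch shows a minimal look-up table whose records may be refreshed from a network source. The URL and the JSON field name are assumptions and do not refer to any actual service.

```python
# Sketch of a "look-up file": a table of records that can either hold values
# locally or refresh them from a network source. The URL format and JSON
# shape below are assumptions, not a real quote service.

import json
import urllib.request

QUOTE_URL = "https://example.invalid/quote?symbol={symbol}"   # hypothetical endpoint

# locally stored look-up data, as it might be read from a spreadsheet
lookup = {"XYZ": 12.34, "ABC": 56.78}


def refresh(symbol: str) -> None:
    """Refresh one record from the network; keep the cached value on failure."""
    try:
        with urllib.request.urlopen(QUOTE_URL.format(symbol=symbol), timeout=5) as resp:
            lookup[symbol] = float(json.load(resp)["last"])   # assumed field name
    except (OSError, ValueError, KeyError):
        pass                                # offline or unexpected reply: keep cached value


if __name__ == "__main__":
    refresh("XYZ")
    print("the last price for XYZ was", lookup["XYZ"])
```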
A command file, in one embodiment, is a spreadsheet that allows a user to input commands to the computer 100 and to cause the interface program 300 to interface with an appropriate component to carry out the command. For example, the user may wish to hear a song, and therefore the interface program 300 interfaces with a music program to play the song. A solver file, in one embodiment, allows a user to solve mathematical and other analytical problems by verbally querying the computer 100.
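By way of illustration only, the following Python sketch shows how a simple solver-style query might be parsed and answered; the accepted phrasing is illustrative only.

```python
# Sketch of a simple "solver" interaction: a spoken arithmetic question is
# parsed with a regular expression and answered. The phrasing accepted here
# is illustrative only.

import re

PATTERN = re.compile(
    r"what(?:'s| is) (\d+(?:\.\d+)?) (plus|minus|times) (\d+(?:\.\d+)?)")

OPERATIONS = {
    "plus": lambda a, b: a + b,
    "minus": lambda a, b: a - b,
    "times": lambda a, b: a * b,
}


def solve(utterance: str) -> str:
    match = PATTERN.search(utterance.lower())
    if not match:
        return "I could not solve that"
    a, op, b = float(match.group(1)), match.group(2), float(match.group(3))
    return f"{match.group(1)} {op} {match.group(3)} is {OPERATIONS[op](a, b)}"


if __name__ == "__main__":
    print(solve("What is 12 plus 30?"))     # -> 12 plus 30 is 42.0
```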
In each type of file, the data contained therein is organized in a series of rows and/or columns, which include “grammars” or links to grammars which the voice recognition software 310 must recognize to be able to determine the data to which the user is referring. As noted above, an exemplary spreadsheet used by an embodiment of the present invention is discussed below in connection with FIGS. 7A-B.
As noted above, a script file represents a simple application of spreadsheet technology that may be leveraged by the interface program 300 to provide a user with the desired information or to perform the desired task. It will be appreciated that, depending on the particular voice recognition software 310 being used in an embodiment, the syntax of such scripts affects what such software is listening for in terms of a spoken utterance from a user. As will be discussed below in connection with FIGS. 7A-B, such syntax may include flexible grammars that accommodate natural variations in a user's phrasing.
An embodiment is configured so as to only open, for example, a lookup file when requested by a user. In such a manner, the number of grammars that the computer 100 must potentially decipher is reduced, thereby increasing the speed and reliability of any such voice recognition. In addition, such a configuration also frees up computer 100 resources for other activities. If a user desires to open such a file, the user may issue a verbal command such as, for example, “look up stock prices” or the like. The computer 100 then determines which data file 322-324, or the like, corresponds to the spoken utterance and opens it. The computer 100 then informs the user, by way of a verbal cue, that the data is now accessible.
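By way of illustration only, the following Python sketch shows on-demand loading of a look-up file in response to a spoken “look up …” command, so that only the opened file's grammars are active. The file names and grammars are illustrative.

```python
# Sketch of opening a data file only when requested, so that only the opened
# file's grammars are active. File names and grammars here are illustrative.

AVAILABLE_FILES = {
    "stock prices": {"what is the price of xyz": "the last price for XYZ was 12.34"},
    "weather": {"what is the forecast": "sunny and 72 degrees"},
}

active_grammars: dict[str, str] = {}        # grammars currently being listened for


def speak(text: str) -> None:
    print("computer says:", text)            # placeholder for text-to-speech


def handle_utterance(utterance: str) -> None:
    utterance = utterance.lower().strip("?")
    if utterance.startswith("look up "):
        name = utterance[len("look up "):]
        if name in AVAILABLE_FILES:
            active_grammars.clear()
            active_grammars.update(AVAILABLE_FILES[name])
            speak(f"{name} are now available")
        else:
            speak(f"I do not have a file for {name}")
    elif utterance in active_grammars:
        speak(active_grammars[utterance])
    else:
        speak("please open a file first or rephrase the request")


if __name__ == "__main__":
    handle_utterance("look up stock prices")
    handle_utterance("What is the price of XYZ?")
```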
In an alternate embodiment, the user would not complete the spreadsheets or the like using the standard spreadsheet technology. Instead, a wizard, API or the like may be used to fill, for example, a standard template file. In another embodiment, the speech recognition technology discussed above may be used to fill in such a template file instead of using a keyboard 104 or the like. In yet another embodiment, the interface program 300 may prompt the user with a series of spoken questions, to which the user speaks his or her answers. In such a manner, the computer 100 may ask more detailed questions, create or modify user data 320, and so forth. Furthermore, in yet another embodiment, a wizard converts an existing spreadsheet, or one downloaded from the Internet or the like, into a format that is accessible and understandable to the interface program 300.
Therefore, in such an exemplary configuration as illustrated in FIG. 3, the interface program 300 operatively interconnects the telephony input 302 and output 304, the voice recognition software 310, the text-to-speech engine 315 and the data file interface 335 so as to provide the user with remote, verbal access to the user data 320.
Turning now to FIGS. 4A-C, flowcharts of an exemplary method of a user-initiated transaction in accordance with an embodiment of the present invention are shown. As was noted in the discussion of alarm scripts in connection with FIG. 3, a transaction may also be initiated by the computer 100; such a computer-initiated transaction is discussed below in connection with FIG. 5.
At step 405, a user establishes communications with the computer 100. Such an establishment may take place, for example, by the user calling the computer 100 by way of a cellular phone 208 as discussed above in connection with FIGS. 2B-C. It will be appreciated that such an establishment may also have intermediate steps that may, for example, establish a security clearance to access the user data 320 or the like. At optional step 410, a “spoken” prompt is provided to the user. Such a prompt may simply be to indicate to the user that the computer 100 is ready to listen for a spoken utterance, or such prompt may comprise other information such as a date and time, or the like.
At step 415, a user request is received by way of, for example, the telephony input 302 or the like. At step 420, the user request is parsed and/or analyzed to determine the content of the request. Such parsing and/or analyzing is performed by, for example, the voice recognition module 310 and/or the natural language processing module 325. At step 425, the desired function corresponding to the user's request is determined. It will be appreciated that steps 410-425 may be repeated as many times as necessary for, for example, voice recognition software 310 to recognize the user's request. Such repetition may be necessary, for example, when the communications channel by which the user is communicating with the computer 100 is of poor quality, the user is speaking unclearly, or for any other reason.
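By way of illustration only, the following Python sketch shows the receive, parse and determine loop of steps 415-425, including repetition when an utterance is not recognized. The canned utterances and the recognize() function are placeholders for the telephony input 302 and the voice recognition software 310.

```python
# Sketch of the receive / parse / determine loop (steps 415-425), with
# placeholders for the telephony input and the voice recognition component.
# The canned utterances below stand in for audio arriving over the phone line.

from collections.abc import Iterator


def telephony_input() -> Iterator[str]:
    """Stand-in for audio received over the communications channel."""
    yield "mumble mumble"                   # poor-quality audio, not recognized
    yield "look up stock prices"


def recognize(audio: str) -> str | None:
    """Stand-in for the voice recognition component; None means no match."""
    known_phrases = {"look up stock prices", "what time is it", "goodbye"}
    return audio if audio in known_phrases else None


def determine_function(utterance: str) -> str:
    """Map a recognized utterance onto a desired function (step 425)."""
    if utterance.startswith("look up"):
        return "open_lookup_file"
    if utterance == "what time is it":
        return "report_time"
    return "disconnect"


if __name__ == "__main__":
    print("computer says: ready")            # spoken prompt (step 410)
    for audio in telephony_input():          # step 415: receive the request
        utterance = recognize(audio)         # step 420: parse / analyze
        if utterance is None:
            print("computer says: please repeat that")   # repeat steps 410-425
            continue
        print("desired function:", determine_function(utterance))
        break
```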
If the determination of step 425 is that the user is requesting existing information or for computer 100 to perform an action, the method proceeds to step 430 of FIG. 4B.
Thus, and turning now to FIG. 4B, at step 430 the desired function is performed with respect to the user data 320, and a result or confirmation of the desired function is provided to the user in spoken form by way of the text-to-speech engine 315 and the telephony output 304.
It will be appreciated that step 425 could result in a determination that the user is requesting that a particular action be performed. For example, the user may wish to initiate a phone call. In such an embodiment, the interface program 300 directs Session Initiation Protocol (SIP) softphone software, by way of telephony input and output 302 and 304, Input/Output 350, and/or the like (not shown in FIG. 3), to place the requested call.
When placing a call in such an embodiment, the program 300 initiates, for example, a conference call utilizing the SIP phone, such that the user and one or more other users are connected together on the same line and, in addition, have the ability to verbally issue commands and request information from the program. Specific grammars would enable the program to “listen” quietly to the conversation among the users until the program 300 is specifically requested to provide information and/or perform a particular activity. Alternatively, the program 300 “disconnects” from the user once the program has initiated the call to another user or a conference call among multiple users.
As discussed above in connection with FIG. 3, an alarm file or other user data 320 may require the computer 100 to initiate contact with the user. Turning now to FIG. 5, a flowchart of an exemplary method of such a computer-initiated transaction in accordance with an embodiment of the present invention is shown.
In contrast to the method described above in connection with FIGS. 4A-C, the method of FIG. 5 is initiated by the computer 100 rather than by the user. At step 500, the interface program 300 monitors the user data 320 for an action item such as, for example, an alarm or reminder and, upon detecting such an action item, establishes communications with the user by way of a remote communications device.
At step 515, a spoken prompt is issued to the user. For example, upon the user answering his or her cellular phone 208, the interface program 300 causes the text-to-speech engine 315 to generate a statement regarding the action item. It will be appreciated that other, non-action-item-related statements may also be spoken to the user at such time such as, for example, security checks, predetermined pleasantries, and the like. At step 520, the user response is received, and at step 525, the response is parsed and/or analyzed as discussed above in connection with FIGS. 4A-B. At step 530, a determination is made as to whether further action is required, based on the spoken utterance. If so, the method returns to step 515. If no further action is required, at optional step 535 the interface program 300 makes any adjustments that need to be made to user data 320 to complete the user's request such as, for example, causing the data file interface 335 to save changes or settings, set an alarm, and the like. The interface program 300 then returns to step 500 to continue monitoring the user data 320. It will be appreciated that the user may disconnect from the computer 100, or may remain connected to perform other tasks. In fact, the user may then, for example, issue instructions that are handled according to the method discussed above in connection with FIGS. 4A-C.
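By way of illustration only, the following Python sketch shows a computer-initiated exchange corresponding generally to steps 500-535: a due action item is announced, user responses are handled, and the user data is adjusted before monitoring resumes. The canned responses are placeholders for recognized speech.

```python
# Sketch of a computer-initiated transaction: the program finds a due action
# item, contacts the user, speaks a prompt, and loops until no further action
# is required. The canned responses stand in for recognized user speech.

action_item = {"message": "you have a 3 o'clock meeting", "snoozed": False}
user_responses = iter(["snooze it", "that is all"])     # stand-in for recognition


def speak(text: str) -> None:
    print("computer says:", text)                       # placeholder text-to-speech


def needs_further_action(response: str) -> bool:
    """Step 530: decide whether the dialogue should continue."""
    return response not in {"that is all", "goodbye", "thanks"}


if __name__ == "__main__":
    # step 500/510: an action item came due and communications were established
    speak(action_item["message"])                        # step 515: spoken prompt
    for response in user_responses:                      # step 520: user response
        print("user says:", response)
        if response == "snooze it":
            action_item["snoozed"] = True                # step 535: adjust user data
            speak("I will remind you again in ten minutes")
        if not needs_further_action(response):           # step 530
            speak("goodbye")
            break
```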
Thus, it will be appreciated that interface program 300 is capable of both initiating and receiving contact from a user with respect to user data 320 stored on or accessible to computer 100. It will also be appreciated that interface program 300, in some embodiments, runs without being seen by the user, as the user accesses computer 100 remotely. However, the user may have to configure or modify interface program 300 so as to have such program 300 operate according to the user's preferences. Accordingly, FIGS. 6A-F are screenshots illustrating an exemplary user interface 600 of such interface program 300 in accordance with an embodiment of the present invention. As noted above, one of skill in the art should be familiar with the programming and configuration of user interfaces for display on a display device of a computer 100, and therefore the details of such configurations are omitted herein for clarity.
FIGS. 6A-F illustrate various screens of the exemplary user interface 600 of the interface program 300.
Turning now to FIG. 7A, a screenshot illustrating an exemplary spreadsheet 700 in accordance with an embodiment of the present invention is shown.
However, in one embodiment, the audio input to and output from the computer 100 is located in the first and second rows, respectively, of sheet 716 in each column. In such an embodiment, the computer 100 may be programmed to detect the entire question, or just key words or the like. The computer 100 thus responds with the predetermined answer, as shown in the second row. It will be appreciated that in one embodiment the answer restates the question in some form so as to avoid confusing the user, and to let the user know that the computer 100 has interpreted the user's question accurately.
It will be appreciated that a user may program such spreadsheets 700 with customized information, so the user will have a spreadsheet 700 that contains whatever information the user desires, in any desired format. In addition, the use of spreadsheets permits the user to, for example, download such spreadsheets 700 from a network 120, the Internet or the like. It will also be appreciated that the full functionality of such a spreadsheet 700 program (including web queries, smart tags and the like) may be used to provide the user with a flexible means for storing and accessing data that is independent of both the interface program 300 and the remote communications device being used. As will be appreciated, the exemplary stock quote spreadsheet 700 of
It will be appreciated that such phrases 712, in one embodiment, contain multiple possible grammars for requesting the same information. In such a manner, the user does not have to remember the exact syntax for the desired query, which is of particular importance in embodiments where the user is located remotely from the computer 100. Therefore, a request having a slight variation in the spoken syntax can still be recognized by the computer 100.
As an example, an inflexible grammar for requesting the current price of a particular stock may only return a response if the spoken utterance is exactly: “what is the current price of [record]?” In contrast, a flexible grammar can contain a plurality of grammatically-equivalent phrases that a user might use when speaking to the computer 100 such as, for example, “what is,” “what's,” “what was,” the “last price,” “current price,” “price,” of/for [record] and the like. Accordingly, a user who says, “what's the price for [record]?” will get the same response as a user who says, “what was the last price of [record]?” It will be appreciated that in one embodiment such flexibility is provided by way of logical symbols and the like, but any such method of providing a flexible grammar is equally consistent with an embodiment of the present invention. As can be seen in the second row of the spreadsheet 700, an answer to the question posed above would be “the last price for [record] was [price].”
In one embodiment, the interface program 300, by way of the data file interface 335, interfaces with a spreadsheet, such as a Microsoft® Excel spreadsheet, in such a manner that a user can readily access data in a logical and yet personalized manner. The data file interface 335 looks for input grammar in, for example, row 1 of sheet 2, output grammar in row 2 of sheet 2 and record labels in column 1 of sheet 2. When a user asks the interface program 300 to look up a file, the data file interface 335 opens the spreadsheet and goes to sheet 2. The interface program 300 then generates all of the possible input grammars (i.e., every question in row 1, in every form with respect to flexible grammars), each combined with every record. For example, in the above example the flexible grammar is “what is,” “what's,” “what was,” the “last price,” “current price,” “price,” of/for [record]. Such a grammar would generate three separate grammars for “what is,” “what's” and “what was.” This would be multiplied by three grammars for “last price,” “current price” and “price,” and by two more grammars for “of” or “for,” and then would be multiplied again by the number of stocks (records) in the sheet.
The interface program, in such an embodiment, is then programmed to respond with the text-to-speech output grammar corresponding to the identified input grammar. The output grammar is generally a combination of the “output grammar” found in row 2, with the record label that is part of the input grammar, and the data “element” that is found in the cell that correlates with the column of the input grammar and the input record. The interface program 300 then sends the text-to-speech output to the selected output communications device. This format allows the user to readily program input and output grammars that are useful and personal.
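By way of illustration only, the following Python sketch shows the grammar generation and response construction just described: a flexible input grammar is expanded over every record (the multiplication of alternatives noted above), and the spoken reply is composed from the output grammar, the record label and the corresponding data element. Plain Python lists and dictionaries stand in for the spreadsheet's rows and columns.

```python
# Sketch of generating input grammars and composing output grammar, following
# the sheet layout described above (row 1: input grammar parts, row 2: output
# grammar, column 1: record labels). Plain Python data stands in for an
# actual spreadsheet; itertools.product performs the multiplication of
# grammar alternatives described in the text.

from itertools import product

# flexible input grammar: alternatives for each slot, with {record} filled per row
INPUT_GRAMMAR = [
    ["what is", "what's", "what was"],
    ["the last price", "the current price", "the price"],
    ["of", "for"],
    ["{record}"],
]
OUTPUT_GRAMMAR = "the last price for {record} was {element}"   # row 2 of the sheet

# record label -> data element, as read from column 1 and the data cells
RECORDS = {"XYZ Corporation": "12.34", "ABC Industries": "56.78"}


def expand_grammars() -> dict[str, str]:
    """Map every recognizable input phrase to its spoken response."""
    phrases = {}
    for record, element in RECORDS.items():
        for parts in product(*INPUT_GRAMMAR):
            phrase = " ".join(parts).format(record=record).lower()
            phrases[phrase] = OUTPUT_GRAMMAR.format(record=record, element=element)
    return phrases


if __name__ == "__main__":
    grammars = expand_grammars()
    print(len(grammars), "input grammars generated")   # 3 * 3 * 2 forms * 2 records = 36
    print(grammars["what's the price for xyz corporation"])
```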
It will also be appreciated that in some embodiments or contexts, a flexible grammar may not be appropriate, and in still other embodiments the grammar of the computer's 100 spoken text may be flexible as well. In such a manner, the computer 100 has a more “natural” feel for the user, as the computer 100 varies its text in a more realistic way. Such variance may be accomplished, for example, by way of a random selection of one of a plurality of equivalent grammars, or according to the particular user, time of day, and/or the like.
It will also be appreciated that a spreadsheet 700 may contain macros for performing certain tasks. For example, an entry in a spreadsheet may be configured to respond to the command “call Joe Smith” by looking up a phone number associated with a “Joe Smith” entry in the same or a different spreadsheet, or even in a separate application such as Microsoft® Outlook® or another email program. The interface program 300 may then access a component for dialing a phone number, and the phone number would then be dialed and the call connected to the user. Any such functionality can be used in accordance with an embodiment of the present invention. For example, in the spreadsheet 700 of FIG. 7B, one or more entries may be associated with such a macro.
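By way of illustration only, the following Python sketch shows how a spoken “call …” command might be resolved against a contact table and handed to a dialing component. The dial() function and the contact entries are hypothetical placeholders.

```python
# Sketch of a "call <name>" command: the name is looked up in a contact table
# (as might be stored in another spreadsheet or an email program's address
# book) and handed to a dialer. dial() is a placeholder for whatever
# telephony component actually places the call.

CONTACTS = {"joe smith": "555-0142", "jane doe": "555-0163"}


def dial(number: str) -> None:
    print("dialing", number)                 # placeholder for the dialing component


def handle_command(utterance: str) -> str:
    utterance = utterance.lower().strip()
    if utterance.startswith("call "):
        name = utterance[len("call "):]
        if name in CONTACTS:
            dial(CONTACTS[name])
            return f"connecting you to {name}"
        return f"I do not have a number for {name}"
    return "I did not understand that command"


if __name__ == "__main__":
    print("computer says:", handle_command("call Joe Smith"))
```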
Referring now to FIG. 7B, a further screenshot of the exemplary spreadsheet 700 is shown.
Thus, a method and system for operatively connecting a computer to a remote communications device by way of verbal commands has been provided. While the present invention has been described in connection with the exemplary embodiments of the various figures, it is to be understood that other similar embodiments may be used or modifications and additions may be made to the described embodiment for performing the same function of the present invention without deviating therefrom. For example, one skilled in the art will recognize that the present invention as described in the present application may apply to any configuration of communications devices or software applications. Therefore, the present invention should not be limited to any single embodiment, but rather should be construed in breadth and scope in accordance with the appended claims.
This application claims the benefit of U.S. Application No. 60/415,311, filed Oct. 1, 2002, titled “A System and Method for Wireless Audio Communication with a Computer,” and U.S. Application No. 60/457,732, filed Mar. 25, 2003, also titled “A System and Method for Wireless Audio Communication with a Computer,” the disclosures of which are hereby incorporated by reference in their entirety.
PCT Filing: PCT/US03/31193, filed Oct. 1, 2003 (WO); 371(c) date Mar. 29, 2005.
Related Provisional Applications: No. 60/415,311, filed Oct. 2002 (US); No. 60/457,732, filed Mar. 2003 (US).