The following patent applications, which are assigned to the assignee of the present invention and filed concurrently herewith, cover subject matter related to the subject matter of the present invention: “SPEECH COMMAND INPUT RECOGNITION SYSTEM FOR INTERACTIVE COMPUTER DISPLAY WITH MEANS FOR CONCURRENT AND MODELESS DISTINGUISHING BETWEEN SPEECH COMMANDS AND SPEECH QUERIES FOR LOCATING COMMANDS”, Scott A. Morgan et al. Ser. No. 09/213,858; “SPEECH COMMAND INPUT RECOGNITION SYSTEM FOR INTERACTIVE COMPUTER DISPLAY WITH TERM WEIGHTING MEANS USED IN INTERPRETING POTENTIAL COMMANDS FROM RELEVANT SPEECH TERMS”, Scott A. Morgan et al. Ser. No. 09/213,845; “SPEECH COMMAND INPUT RECOGNITION SYSTEM FOR INTERACTIVE COMPUTER DISPLAY WITH INTERPRETATION OF ANCILLARY RELEVANT SPEECH QUERY TERMS INTO COMMANDS”, Scott A. Morgan et al. Ser. No. 09/213,856; and “METHOD AND APPARATUS FOR PRESENTING PROXIMAL FEEDBACK IN VOICE COMMAND SYSTEMS”, Alan R. Tannenbaum Ser. No. 09/213,857.
The present invention relates to interactive computer controlled display systems with speech command input and more particularly to such systems which present display feedback to the interactive users.
The decade of the 1990s has been marked by a technological revolution driven by the convergence of the data processing industry with the consumer electronics industry. This advance has been further accelerated by the extensive consumer and business involvement in the Internet over the past few years. As a result of these changes, it seems as if virtually all aspects of human endeavor in the industrialized world require human/computer interfaces. There is a need to make computer-directed activities accessible to people who, until a few years ago, were computer illiterate or, at best, computer indifferent.
Thus, there is continuing demand for computer and network interfaces that improve the ease with which interactive users access functions and data. With desktop-like interfaces including windows and icons, as well as three-dimensional virtual reality simulating interfaces, the computer industry has been working hard to fulfill such needs by making human/computer interfaces ever closer to real-world, i.e. human/human, interfaces. In such an environment it would be expected that speaking to the computer in natural language would be a very natural way of interfacing with the computer, even for novice users. Despite the potential advantages of speech recognition computer interfaces, this technology has been relatively slow in gaining extensive user acceptance.
Speech recognition technology has been available for over twenty years, but only recently has it begun to find commercial acceptance, particularly with speech dictation or “speech to text” systems, such as those marketed by International Business Machines Corporation (IBM) and Kurzweil Corporation. That aspect of the technology is now expected to undergo accelerated development until it holds a substantial niche in the word processing market. On the other hand, a more universal application of speech recognition input to computers, which still lags expectations in user acceptance, is command and control technology wherein, for example, a user may navigate through a computer system's graphical user interface (GUI) by speaking the commands customarily found in the system's menus, text, icons, labels, buttons, etc.
Many of the deficiencies in speech recognition, in both word processing and command technologies, stem from inherent voice recognition errors due in part to the state of the technology and in part to the variability of user speech patterns and the user's ability to remember the specific commands necessary to initiate actions. As a result, most current voice recognition systems provide some form of visual feedback which permits the user to confirm that the computer understands his speech utterances. In word processing, such visual feedback is inherent in the process, since its purpose is to translate from the spoken to the visual. That may be one of the reasons that the word processing applications of speech recognition have progressed at a faster pace. In any event, in all voice recognition systems with visual feedback, at some stage the interactive user is required to make some manual input, e.g. through a mouse or a keyboard. The need for such manual operations still gets in the way of interactive users who, because of a lack of computer skills or for other reasons, wish to relate to the computer system in a fully voice-activated or conversational manner.
The present invention provides a solution for users of voice recognition systems who still need visual feedback in order to confirm the accuracy of spoken commands but need to operate in a “hands-off” mode with respect to computer input. In an interactive computer controlled display system with speech command input recognition, the present invention provides a system for confirming the recognition of a command by first predetermining a plurality of speech commands for respectively designating each of a corresponding plurality of system actions and providing means for detecting such speech commands. There also are means responsive to a detected speech command for displaying said command for a predetermined time period, during which time the user may give a spoken command to stop the system action designated by said displayed command. In the event that said system action is not stopped during said predetermined time period, the system action designated by said displayed command will be executed. The user need not wait for the expiration of the time period: if he notes that the displayed command is the right one, he has speech command means for executing the system action designated by said displayed command prior to the expiration of said time period. This may be as simple as just repeating the displayed command.
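The timed confirmation described above can be sketched as a simple loop. This is a minimal illustration with hypothetical names and a pre-collected sequence of utterances standing in for live speech input; the described system would of course listen continuously during the display window.

```python
# Sketch of the confirmation window: a recognized command is displayed for a
# fixed period during which the user may say "stop" to retract it or repeat
# the command to execute it early; otherwise it executes when the window
# expires. Names and the tick-based timing model are illustrative assumptions.

from typing import Iterable, Optional

def confirm_command(command: str,
                    utterances: Iterable[Optional[str]],
                    window_ticks: int = 3) -> str:
    """Return 'stopped' or 'executed'. Each item of `utterances` is the
    utterance heard during one tick of the display window, or None for
    silence."""
    ticks = 0
    for heard in utterances:
        if heard == "stop":
            return "stopped"      # user vocally retracts the displayed command
        if heard == command:
            return "executed"     # repeating the command confirms it early
        ticks += 1
        if ticks >= window_ticks:
            break                 # window expired without retraction
    return "executed"
```

For example, `confirm_command("open file", [None, "stop"])` cancels the action, while silence for the whole window lets it proceed.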
The present invention will be better understood and its numerous objects and advantages will become more apparent to those skilled in the art by reference to the following drawings, in conjunction with the accompanying specification, in which:
Referring to
A central processing unit (CPU) 10, such as any PC microprocessor in a PC available from IBM or Dell Corp., is provided and interconnected to various other components by system bus 12. An operating system 41 runs on CPU 10, provides control and is used to coordinate the function of the various components of
A read only memory (ROM) 16 is connected to CPU 10 via bus 12 and includes the basic input/output system (BIOS) that controls the basic computer functions. Random access memory (RAM) 14, I/O adapter 18 and communications adapter 34 are also interconnected to system bus 12. It should be noted that software components, including operating system 41 and application 40, are loaded into RAM 14, which is the computer system's main memory. I/O adapter 18 may be a small computer system interface (SCSI) adapter that communicates with the disk storage device 20, i.e. a hard drive. Communications adapter 34 interconnects bus 12 with an outside network, enabling the data processing system to communicate with other such systems over a local area network (LAN) or wide area network (WAN), which includes, of course, the Internet. I/O devices are also connected to system bus 12 via user interface adapter 22 and display adapter 36. Keyboard 24 and mouse 26 are both interconnected to bus 12 through user interface adapter 22. Manual I/O devices, such as the keyboard and the mouse, are shown primarily because they may be used for ancillary I/O functions not related to the present invention, which uses primarily spoken commands. Audio output is provided by speaker 28, and speech input is made through input device 27, diagrammatically depicted as a microphone, which accesses the system through an appropriate interface adapter 22. The speech input and recognition will be subsequently described in greater detail, particularly with respect to
Voice or speech input is applied through microphone 27, which represents a speech input device. Since the art of speech terminology and speech command recognition is an old and well-developed one, we will not go into the hardware and system details of a typical system which may be used to implement the present invention. It should be clear to those skilled in the art that the systems and hardware in any of the following patents may be used: U.S. Pat. No. 5,671,328; U.S. Pat. No. 5,133,111; U.S. Pat. No. 5,222,146; U.S. Pat. No. 5,664,061; U.S. Pat. No. 5,553,121; and U.S. Pat. No. 5,157,384. The speech input to the system could be actual commands, which the system will recognize, and/or speech terminology, which the user addresses to the computer so that the computer may propose appropriate relevant commands through feedback. The input speech goes through a recognition process which seeks a comparison to a stored set of commands. If a command is identified, the actual command is displayed first and subsequently carried out after a set time period during which the command may be vocally retracted.
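The recognition step described above, in which input speech is compared against a stored set of commands, can be illustrated with a minimal sketch. The command vocabulary and function name here are hypothetical; a real recognizer would of course operate on acoustic input rather than text.

```python
# Illustrative comparison of a (text-form) utterance against a stored command
# set: an exact match after normalization is returned for display and later
# timed execution, otherwise None. The vocabulary below is an assumption for
# the example only.

STORED_COMMANDS = {"open file", "close window", "print document"}

def recognize(utterance: str, commands=STORED_COMMANDS):
    """Return the matched stored command, or None if no command is identified."""
    normalized = " ".join(utterance.lower().split())  # normalize case/spacing
    return normalized if normalized in commands else None
```

A match such as `recognize("Open File")` would then be handed to the confirmation display, while unmatched speech yields no command.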
Now with respect to
The initial display screen of
Now with reference to
With this set up, the running of the process will now be described with respect to
One of the preferred implementations of the present invention is as an application program 40 made up of programming steps or instructions resident in RAM 14,
Although certain preferred embodiments have been shown and described, it will be understood that many changes and modifications may be made therein without departing from the scope and intent of the appended claims.
Number | Name | Date | Kind |
---|---|---|---|
4726065 | Froessl | Feb 1988 | A |
4766529 | Nakano et al. | Aug 1988 | A |
5027406 | Roberts et al. | Jun 1991 | A |
5068900 | Searcy et al. | Nov 1991 | A |
5133011 | McKiel, Jr. | Jul 1992 | A |
5157384 | Greanias et al. | Oct 1992 | A |
5222146 | Bahl et al. | Jun 1993 | A |
5231670 | Goldhor et al. | Jul 1993 | A |
5305244 | Newman et al. | Apr 1994 | A |
5386494 | White | Jan 1995 | A |
5408582 | Colier | Apr 1995 | A |
5428707 | Gould et al. | Jun 1995 | A |
5465317 | Epstein | Nov 1995 | A |
5500920 | Kupiec | Mar 1996 | A |
5526407 | Russell et al. | Jun 1996 | A |
5553121 | Martin et al. | Sep 1996 | A |
5602963 | Bissonnette et al. | Feb 1997 | A |
5604840 | Asai et al. | Feb 1997 | A |
5632002 | Hashimoto et al. | May 1997 | A |
5638486 | Wang et al. | Jun 1997 | A |
5664061 | Andreshak et al. | Sep 1997 | A |
5671328 | Fitzpatrick et al. | Sep 1997 | A |
5698834 | Worthington et al. | Dec 1997 | A |
5706399 | Bareis | Jan 1998 | A |
5729659 | Potter | Mar 1998 | A |
5890122 | Van Kleeck et al. | Mar 1999 | A |
6073097 | Gould et al. | Jun 2000 | A |