This application is related to U.S. patent application Ser. No. 10/154,147, entitled “Talking Ebook”, filed on May 22, 2002, U.S. patent application Ser. No. 10/146,406, entitled “Voice Command and Voice Recognition for Hand-Held Devices”, filed on May 15, 2002, and U.S. patent application Ser. No. 10/135,151, entitled “Mixing Music and Text-To-Speech (TTS) for Hand-Held Devices”, filed on Apr. 23, 2002, which are commonly assigned and concurrently filed herewith, and the disclosures of which are incorporated herein by reference.
1. Field of the Invention
The present invention generally relates to hand-held devices and, more particularly, to text-to-speech (TTS) for hand-held devices.
2. Background of the Invention
An electronic book (also referred to as an “Ebook”) is an electronic version of a traditional print book (or other printed material such as, for example, a magazine, newspaper, and so forth) that can be read by using a personal computer or by using an Ebook reader. Unlike PCs or hand-held computers, Ebook readers deliver a reading experience comparable to traditional paper books, while adding powerful electronic features for note taking, fast navigation, and key word searches. However, such actions, irrespective of whether they are performed on a PC, hand-held computer, or Ebook reader, generally require the user to read the text from a display. Thus, the use of an Ebook generally requires the user to focus his or her visual attention on a display to read the text content (e.g., book, magazine, newspaper, and so forth) of the Ebook. Moreover, the use of any hand-held device requires the user to focus his or her visual attention on a display for one purpose or another.
Accordingly, it would be desirable and highly advantageous to have a hand-held device such as, for example, an Ebook, that allows a user to assimilate content without having to look at a display.
The problems stated above, as well as other related problems of the prior art, are solved by the present invention, a hand-held device having text-to-speech (TTS) capabilities.
According to an aspect of the present invention, there is provided an Ebook. The Ebook comprises a memory device, a text-to-speech (TTS) module, and at least one speaker. The memory device stores files. The files include text. The TTS module synthesizes speech corresponding to the text. The at least one speaker outputs the speech.
According to another aspect of the present invention, there is provided a method for using an Ebook. At least one file is stored in the Ebook. The at least one file includes text. Speech corresponding to the text is synthesized and output from the Ebook.
These and other aspects, features and advantages of the present invention will become apparent from the following detailed description of preferred embodiments, which is to be read in connection with the accompanying drawings.
The present invention is directed to a hand-held device having text-to-speech (TTS) capabilities and to a method for using a hand-held device having text-to-speech (TTS) capabilities. It is to be appreciated that the present invention is directed to any type of hand-held device including, but not limited to, electronic books (Ebooks), personal digital assistants (PDAs), and so forth. However, for the purposes of describing the present invention, the following description will be provided with respect to Ebooks.
It is to be understood that the present invention may be implemented in various forms of hardware, software, firmware, special purpose processors, or a combination thereof. Preferably, the present invention is implemented as a combination of hardware and software. Moreover, the software is preferably implemented as an application program tangibly embodied on a program storage device. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units (CPU), a random access memory (RAM), and input/output (I/O) interface(s). The computer platform also includes an operating system and microinstruction code. The various processes and functions described herein may either be part of the microinstruction code or part of the application program (or a combination thereof) which is executed via the operating system. In addition, various other peripheral devices may be connected to the computer platform such as an additional data storage device and a printing device.
It is to be further understood that, because some of the constituent system components and method steps depicted in the accompanying Figures are preferably implemented in software, the actual connections between the system components (or the process steps) may differ depending upon the manner in which the present invention is programmed. Given the teachings herein, one of ordinary skill in the related art will be able to contemplate these and similar implementations or configurations of the present invention.
A display device 116 is operatively coupled to system bus 104 by display adapter 110. A disk storage device (e.g., a magnetic or optical disk storage device) 118 is operatively coupled to system bus 104 by I/O adapter 112.
A mouse 120 and keyboard 122 are operatively coupled to system bus 104 by user interface adapter 114. The mouse 120 and keyboard 122 are used to input and output information to and from system 100.
The computer system 100 further includes a text-to-speech (TTS) module 194 and a speaker 196.
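By way of illustration only, the following Python sketch shows one way the memory device, TTS module, and speaker just described might be composed in software. The class and method names (TTSModule, Speaker, Ebook, store, read_aloud) are hypothetical assumptions and are not drawn from the figures or reference numerals above.

```python
from dataclasses import dataclass, field

# Minimal sketch of the apparatus described above: a memory device that
# stores text files, a TTS module that synthesizes speech, and a speaker
# that outputs it. All names are hypothetical placeholders.

class TTSModule:
    def synthesize(self, text: str) -> bytes:
        # A real device would invoke a speech synthesizer here; this stub
        # merely returns the text encoded as bytes to stand in for audio.
        return text.encode("utf-8")

class Speaker:
    def play(self, audio: bytes) -> None:
        print(f"[speaker] playing {len(audio)} bytes of audio")

@dataclass
class Ebook:
    files: dict = field(default_factory=dict)          # the memory device
    tts: TTSModule = field(default_factory=TTSModule)
    speaker: Speaker = field(default_factory=Speaker)

    def store(self, name: str, text: str) -> None:
        self.files[name] = text                        # store the file

    def read_aloud(self, name: str) -> None:
        audio = self.tts.synthesize(self.files[name])  # synthesize speech
        self.speaker.play(audio)                       # output the speech
```

A caller might then, for example, invoke ebook.store("story", "Once upon a time ...") followed by ebook.read_aloud("story").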
One or more files (hereinafter “file”) is input into the Ebook (step 310). The file includes, at the least, text. The file may be provided via a memory device (e.g., floppy disk, compact disk, flash memory, and so forth), downloaded from the Internet, and so forth. The file may be an Ebook application file, an e-mail file, a Web page, a word processor document, and so forth. The file is then stored in the Ebook (step 320).
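As a further illustration of steps 310 and 320, the sketch below accepts a file from a local path, extracts its text (handling only the plain-text and simple Web-page cases), and stores it in the hypothetical Ebook object from the previous sketch; the function names are assumptions.

```python
import pathlib
from html.parser import HTMLParser

# Sketch of steps 310 and 320: accept a file and store its text in the
# Ebook. Only plain text and a very simple Web-page case are shown.

class _TextOnly(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

def input_file(ebook, path: str) -> None:
    raw = pathlib.Path(path).read_text(encoding="utf-8", errors="replace")
    if path.lower().endswith((".htm", ".html")):   # Web page: strip markup
        parser = _TextOnly()
        parser.feed(raw)
        text = " ".join(parser.chunks)
    else:                                          # treat as plain text
        text = raw
    ebook.store(pathlib.Path(path).name, text)     # step 320: store the file
```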
Optionally, at step 325, a choice is provided to a user of the Ebook to select from among a strictly visual mode where the text is displayed on the display, a strictly audio mode where the text is synthesized by the TTS module and output by the speaker, and a combined visual-audio mode where the text is displayed on the display and simultaneously synthesized by the TTS module and output by the speaker (260, 270).
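The following sketch illustrates how the optional mode selection of step 325 might be handled, again assuming the hypothetical Ebook object sketched earlier; the PlaybackMode names and the present function are assumptions.

```python
from enum import Enum, auto

# Sketch of the optional mode selection of step 325.

class PlaybackMode(Enum):
    VISUAL = auto()          # display the text only
    AUDIO = auto()           # synthesize and speak the text only
    VISUAL_AUDIO = auto()    # display and speak simultaneously

def present(ebook, name: str, mode: PlaybackMode) -> None:
    text = ebook.files[name]
    if mode in (PlaybackMode.VISUAL, PlaybackMode.VISUAL_AUDIO):
        print(text)                                      # show on the display
    if mode in (PlaybackMode.AUDIO, PlaybackMode.VISUAL_AUDIO):
        ebook.speaker.play(ebook.tts.synthesize(text))   # speak via TTS
```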
One or more commands are received by the Ebook (step 330). Preferably, the commands correspond to a playback of the file. The commands may include, for example: a command to begin synthesizing speech corresponding to the text included in the file so that the text is reproduced audibly; a command to end the synthesis; a command to preset a start-up time and/or an end time for the speech synthesis; a command to select/change a voice(s) used in the speech synthesis; a command to select/change the speed of the synthesized speech; a command corresponding to navigation through the file (e.g., to skip one or more pages, sections, chapters, and so forth); and so forth.
With respect to the selection of different voices, many different types of voices may be used in the synthesis of speech such as, for example, a man's voice, a woman's voice, an adolescent's voice, or even a funny sounding voice (e.g., chipmunk, etc.). Moreover, different voices may be used in a single playback of a single file. The selection of a particular voice may be made based on, for example, the preference of the user, the different application parameters/circumstances, and/or on a random basis.
Further, it is to be appreciated that some of the commands received at step 330 may not correspond to the playback of the text file. For example, if other functions are integrated with the Ebook such as, for example, a calendar function with a daily reminder schedule, then information relating to the calendar function (or any other function) may be received by the Ebook.
The commands are then acted upon to control operations of the Ebook having TTS capabilities (step 340). Step 340 may include the step of synthesizing speech corresponding to the text and/or displaying the text (step 340a). It is to be appreciated that step 340 may include acting upon any type of command received at step 330 including those in support of synthesizing the speech corresponding to the text and/or displaying the text, as well as other functions that may be integrated into the Ebook.
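The sketch below shows one possible dispatch of the commands of step 330 onto the actions of step 340, assuming a hypothetical player object; the command strings and the player's methods are illustrative assumptions only, not part of the disclosure.

```python
# Sketch of steps 330 and 340: map received commands onto actions of a
# hypothetical player object.

def make_command_table(player):
    return {
        "start":     lambda: player.start(),              # begin speech synthesis
        "stop":      lambda: player.stop(),               # end speech synthesis
        "schedule":  lambda start, end: player.schedule(start, end),
        "set_voice": lambda voice: player.set_voice(voice),
        "set_speed": lambda rate: player.set_speed(rate),
        "skip":      lambda pages=1: player.skip(pages),  # navigate through the file
    }

def act_on(table, command, *args, **kwargs):
    try:
        action = table[command]
    except KeyError:
        raise ValueError(f"unknown command: {command}") from None
    return action(*args, **kwargs)                        # step 340: act on the command
```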
First and second inputs are received specifying a start time and an end time for a playback of a file on the Ebook (step 410). A third input is received specifying the actual file to be played back (step 420). A fourth input is received specifying a voice for the playback (step 430). It is to be appreciated that steps 420 and 430 may be performed randomly by the Ebook, upon simply receiving the first and second inputs. Alternatively, all (or some combination amounting to less than all) of the inputs may be user provided.
Playback is commenced at the selected start time, including synthesizing speech corresponding to the file so that the text file is audibly reproduced (step 440). Optionally, the text included in the file may be displayed concurrently with the outputting of the synthesized speech. After a random or a pre-specified time period has elapsed, but before the selected end time, the playback volume and/or the speech speed are/is decreased (step 450). Step 450 may be repeated a pre-specified or random number of times so as to gradually decrease the volume and/or speech speed in increments. The reduced playback volume and/or speech speed are intended to render a listener drowsy. The playback is terminated at the specified end time (step 460).
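For illustration, the sketch below reduces steps 410 through 460 to a simple loop that begins playback, lowers the volume and speech speed in increments, and stops at the end of the chosen duration; the player object and its attributes are assumptions, and scheduling is simplified to sleeps.

```python
import time

# Sketch of steps 410-460 (the "fall asleep" use): play from the start
# time and step the volume and speed down until the end time.

def drowsy_playback(player, duration_s: float, steps: int = 5,
                    min_volume: float = 0.1, min_speed: float = 0.5) -> None:
    player.start()                                 # step 440: begin playback
    interval = duration_s / (steps + 1)            # pre-specified; could be random
    for _ in range(steps):
        time.sleep(interval)
        player.volume = max(min_volume, player.volume * 0.7)  # step 450: quieter
        player.speed = max(min_speed, player.speed * 0.9)     # and slower
    time.sleep(interval)
    player.stop()                                  # step 460: end time reached
```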
A first input is received specifying a start time for a playback of a file on the Ebook (step 510). A second input is received specifying the actual file to be played back (step 520). A third input is received specifying a voice for the playback (step 530). It is to be appreciated that steps 520 and 530 may be performed randomly by the Ebook, upon simply receiving the first input. Alternatively, all (or some combination amounting to less than all) of the inputs may be user provided.
Playback is commenced at the selected start time, including synthesizing speech corresponding to the text file so that the text file is audibly reproduced (step 540). Optionally, the text included in the file may be displayed concurrently with the outputting of the synthesized speech. After a random or pre-specified time period has elapsed, the playback volume and/or the speech speed are/is increased (step 550). Step 550 may be repeated so as to incrementally increase the playback volume and/or the speech speed at predefined or random intervals until a stop playback input has been received. The playback is terminated when the stop playback input has been received (step 560).
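Similarly, the sketch below illustrates steps 510 through 560: playback begins at the start time, and the volume and speech speed are raised at intervals until a stop input is signaled; the player object and the threading-based stop event are assumptions.

```python
import threading

# Sketch of steps 510-560 (the "wake up" use): play from the start time
# and raise the volume and speed at intervals until a stop input arrives.

def wake_up_playback(player, stop_event: threading.Event,
                     interval_s: float = 30.0,
                     max_volume: float = 1.0, max_speed: float = 2.0) -> None:
    player.start()                                 # step 540: begin playback
    while not stop_event.wait(interval_s):         # returns True once stop is requested
        player.volume = min(max_volume, player.volume * 1.3)  # step 550: louder
        player.speed = min(max_speed, player.speed * 1.1)     # and faster
    player.stop()                                  # step 560: stop input received
```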
Thus, the present invention advantageously allows the use of an Ebook with TTS for applications where reading is not convenient or desirable. For example, the present invention may be used to read while driving, for audibly reading stories to children, for a daily schedule reminder, and so forth. Given the teachings of the present invention provided herein, one of ordinary skill in the related art will contemplate these and various other scenarios in which the present invention may be advantageously employed while maintaining the spirit and scope of the present invention.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the present invention is not limited to those precise embodiments, and that various other changes and modifications may be effected therein by one skilled in the art without departing from the scope or spirit of the invention. All such changes and modifications are intended to be included within the scope of the invention as defined by the appended claims.