1. Technical Field
The embodiments herein generally relate to toys, and more specifically to an interactive toy that is programmed to talk and to respond to speech emitted by a nearby device or toy.
2. Description of the Related Art
Generally, toys are considered objects for play and entertainment. They provide entertainment not only to children but also to pets such as dogs and cats. Recently, toys have taken on a new dimension and serve people for a variety of purposes. Toys and other devices such as robots are also currently used to provide education, to impart training to individuals, and to improve the language skills of individuals. Children use toys and play with devices to discover their identity, help their bodies grow strong, learn cause and effect, explore relationships, and practice skills. These toys and other devices are also used interactively by adults and pets to reduce boredom and solitude. Currently available toys tend to have a limited capability for interacting with a user. The toys react mostly based on a manual input by a user. In other words, toys tend to interact passively rather than actively and dynamically. Moreover, toys emit speech or sound based on some physical stimuli and are generally made to emit some stored text, but they do not provide an intelligent conversation with a user. Furthermore, toys are not generally programmed with a script generated by a user, with content created by a wide variety of third-party content providers, or with downloaded content.
Accordingly, there is a need to develop a programmable, interactive talking toy device that is programmed to respond with and emit text generated by a user, text created by a third-party service provider, or a script downloaded from the Internet or a server, in order to interact dynamically and intelligently with the responses made by a nearby device or user.
In view of the foregoing, the embodiments herein provide an interactive device that can be programmed with a variety of scripted conversations provided by a user or by a third-party content provider, which can be downloaded to the device from a server. Additionally, the embodiments herein provide an interactive talking environment for a device with respect to another adjacent device or with a user. Furthermore, the embodiments herein provide a talking device with recorded or synthesized speech to output pre-programmed statements upon activation by a user. Also, the embodiments herein provide a talking device that can be programmed with a script that may be modified by a user or with a script downloaded from a remote server computer. Moreover, the embodiments herein provide a plurality of interactive devices that can interact with one another dynamically.
The embodiments herein further provide a plurality of talking devices in which scripted speech is output in response to a speech output from an adjacent device when one device is activated by a user. Additionally, the embodiments herein provide a device that can be programmed by a user through a personal computer, mobile telephone, or television to provide a desired conversation script. Furthermore, the embodiments herein provide an interactive programmable device in which a user can upload a generated conversation script to a remote server computer for sharing with other users. Additionally, the embodiments herein provide an interactive programmable device in which a user can download a script generated by others and program the downloaded script into a pair of talking devices. Moreover, the embodiments herein provide an interactive programmable device in which the script of one device becomes an input variable for the script on the adjacent device.
More particularly, the embodiments herein provide an interactive programmable device that has a memory unit adapted to store data modules that can be synthesized into speech, and a microprocessor-based speech module that is connected to the memory and to a transceiver. The transceiver receives identification data and status data from an adjacent device. A remote server computer is operatively connected to the programmable device through a wireless communication system and is provided with a database to store digital data modules and scripts that are either input by a user or downloaded from a third-party content provider. Software is operated on the remote server computer to provide the third-party content and the scripts. The interactive programmable device receives the digital data modules and scripts from the server computer through the wireless communication system and stores the received digital data modules and scripts in the memory. A software program is operated on the interactive programmable device to select a stored digital data module corresponding to a stored script from the memory based on the identification data and status data received from the adjacent device. A set of instructions is executed on a microprocessor for synthesizing the digital data modules acquired from the memory with respect to the received identification data and status data of the adjacent device.
The embodiments herein also provide an interactive talking device environment comprising at least two interactive devices that dynamically and intelligently interact with one another. The search rules for the response script of the second device are based on the script category of the adjacent device. The script of the adjacent device contains identity and categorization metadata that becomes an input variable for the script on the second device.
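The following is a minimal, illustrative sketch of this selection idea, assuming a simple in-memory table keyed by the adjacent script's category; the names (ScriptMeta, RESPONSE_TABLE, select_response) are hypothetical and do not appear in the embodiments themselves.

```python
# Sketch only: the identity and categorization metadata carried by the
# adjacent device's script becomes the input variable used to pick the
# second device's response script.
from dataclasses import dataclass

@dataclass
class ScriptMeta:
    device_id: str   # identity metadata of the adjacent device
    category: str    # categorization metadata, e.g. "greeting" or "question"

# Hypothetical response-script table indexed by the adjacent script's category.
RESPONSE_TABLE = {
    "greeting": "Hello! It is nice to meet you.",
    "question": "That is a good question. Let me think about it.",
}

def select_response(adjacent_meta: ScriptMeta) -> str:
    """Select the second device's script from the adjacent script's metadata."""
    return RESPONSE_TABLE.get(adjacent_meta.category,
                              "I did not understand, could you repeat that?")

print(select_response(ScriptMeta(device_id="toy-42", category="greeting")))
```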
The embodiments herein provide an operating method for a programmable interactive talking toy. A sensor is activated to detect the status of an adjacent toy. The detected data are transmitted to a remote server through a Bluetooth™ communication system. A software program is operated on the remote server to select a suitable response script from a stored script table based on the received status data of the adjacent toy. The script table contains data content loaded from other service providers or content generated by third parties. The selected response script is forwarded to the programmable talking toy. A speech processor analyzes the received script to generate a corresponding voice message, which is output through the speaker.
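A minimal sketch of this operating method follows, with the Bluetooth™ transport and the speech output reduced to placeholders; the function names and the contents of the script table are assumptions made only for illustration.

```python
# Sketch of the method: sensor detects the adjacent toy's status, the status
# reaches the server, the server picks a response script, and the toy speaks it.

SCRIPT_TABLE = {          # hypothetical server-side script table
    "idle":    "Wake up, friend, let's play!",
    "talking": "That sounds interesting, tell me more.",
}

def server_select_script(adjacent_status: str) -> str:
    """Server-side selection of a response script based on the received status."""
    return SCRIPT_TABLE.get(adjacent_status, "Hello there!")

def operate_toy(detected_status: str) -> str:
    # 1. The sensor has detected the adjacent toy's status (detected_status).
    # 2. The status would be transmitted to the remote server (transport omitted).
    script = server_select_script(detected_status)
    # 3. The selected script is returned to the toy; a speech processor would
    #    render this text through the speaker.
    return script

print(operate_toy("talking"))
```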
These and other embodiments herein are understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.
The embodiments herein will be better understood from the following detailed description with reference to the drawings.
The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
As mentioned, there remains a need for a novel programmable, interactive talking toy device. The embodiments herein achieve this by providing an interactive programmable device. The device has a memory and a microprocessor-based speech synthesis module that is connected to the memory and to a transceiver. The memory stores the data modules, which can be synthesized into speech. Referring now to the drawings, the preferred embodiments are described below.
The device 100 has an antenna 102 to receive RF signals containing device identification data and status data from an adjacent device. The device 100 further includes a universal serial bus (USB) port 120 through which a flash memory drive storing a digital data module and a script generated by others is coupled. The functional components in the module are supplied with electrical power provided by a battery 106. A battery charge sensor 104 detects the residual charge in the battery 106, and the detected residual battery charge condition is displayed through an LED display 114. The data collected from the adjacent device and the script from an application server are time and date stamped with data obtained from the real time clock 116. An RFID transmitter 126 forwards the device identification data acquired from a unique device ID 128. A universal asynchronous receiver/transmitter (UART) 124 is a transceiver that communicates data between the various functional units and a microprocessor 136. The UART 124 is used to execute a serial communication between the microprocessor 136 and the devices connected to the USB port 120. The devices connected to the USB port 120 may include a flash memory drive, an adjacent toy, a detection sensor, etc.
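As a rough illustration of the time-and-date stamping just described, the sketch below tags data collected from an adjacent device with the device's unique ID and a real-time-clock timestamp; the frame layout and names are hypothetical and stand in for the unique device ID 128 and real time clock 116.

```python
# Sketch only: build a time-stamped frame around data received from an
# adjacent device before handing it to the microprocessor.
import time

DEVICE_ID = "TOY-0001"   # hypothetical stand-in for the unique device ID 128

def stamp_frame(adjacent_data: dict) -> dict:
    return {
        "device_id": DEVICE_ID,
        "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),  # from the real time clock
        "payload": adjacent_data,
    }

print(stamp_frame({"adjacent_id": "TOY-0002", "status": "idle"}))
```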
The embodiments herein provide an interactive talking device 100 with recorded speech or a speech synthesizer 118 to emit pre-programmed statements upon activation by a user. An interactive talking device can be programmed with a script that can be modified by a user or with a script downloaded from the remote server 204. The interactive talking device is made to output a scripted speech in response to the speech of an adjacent device when the device is activated by a user. The interactive device can be programmed by a user through a personal computer 202, mobile telephone 302, or television (not shown) to provide a desired conversation. The embodiments herein enable users to upload a self-authored conversation to a remote server computer 204 for sharing with other users. Moreover, the embodiments herein further enable users to download conversation scripts authored by other users 208 and to program the downloaded scripts into a pair of talking devices (not shown). Thus, the embodiments herein provide a dynamic talking environment for a plurality of devices to talk with one another.
The programmable interactive talking device 100 may be used as an educational toy to help students and children learn a language, a foreign language, or any topic of interest. Furthermore, the programmable interactive talking device 100 may also be used as an entertainment toy. The device 100 further comprises a sensor (not shown) to detect an adjacent device. In one embodiment, the sensor may be a radio frequency identification (RFID) interrogator (not shown), which detects and reads the data contained in the RFID tag provided in an adjacent device. In another embodiment, the sensor may be a Bluetooth™ communications device that receives an RF signal emitted by an adjacent device. The radio frequency signal emitted by the adjacent device contains the identification data of the device and the status data of the device.
A transceiver (not shown) receives identification data and status data from the adjacent device. Furthermore, the remote server computer 204 is operatively connected to the programmable device 100 through the wireless communication system 122, and the remote server 204 is provided with a database (not shown) to store digital data modules and scripts that are either input by a user 208 or downloaded from a third-party content provider 206. A software program is operated on the remote server computer 204 to provide the third-party content and the scripts. The interactive programmable device 100 receives the digital data modules and the scripts from the server computer 204 through the wireless communication system 122 and stores the received digital data modules and scripts in the memory units of the device 100. The programmable script can be modified by the user and can be stored by the user in a computer such as the remote server computer 204. The scripts for a pair of interactive devices can be programmed by the user via a personal computer 202, mobile phone 302, television (not shown), or any other appropriate communication device. The scripts are uploaded and downloaded by the user from the remote server computer 204. Furthermore, the conversation scripts are accessible to other users for sharing.
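The server-side script store described above might be sketched as follows, assuming a simple in-memory mapping in place of the database; the ScriptStore class, its methods, and the sample script are illustrative assumptions, not part of the embodiments.

```python
# Sketch of a server-side script store: users upload self-authored
# conversation scripts, and other users download them to program their toys.

class ScriptStore:
    def __init__(self):
        self._scripts = {}   # script_id -> {"author": ..., "lines": [...]}

    def upload(self, script_id: str, author: str, lines: list) -> None:
        """A user uploads a self-authored conversation script for sharing."""
        self._scripts[script_id] = {"author": author, "lines": lines}

    def download(self, script_id: str) -> list:
        """Another user downloads a shared script to program into a toy."""
        return self._scripts[script_id]["lines"]

store = ScriptStore()
store.upload("bedtime-chat", "user-208", ["Good night!", "Sleep well."])
print(store.download("bedtime-chat"))
```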
A software program is operated on the interactive programmable device 100 to select a stored digital data module corresponding to a stored script from the memory based on the identification data and status data received from an adjacent device. A set of instructions is executed on the microprocessor-based speech synthesizer 118 for synthesizing the digital data modules acquired from the memory with respect to the received identification data and status data of the adjacent device. The set of instructions may contain codes or commands to execute a speech synthesizing algorithm, or the set of instructions may be a software program for performing a speech synthesis process. The interactive device 100 is adapted to respond to the speech of the adjacent device to create a simulated conversation between the devices when the device 100 is activated by a user after speech from another device is detected with a sensor. The interactive devices 100 are programmed with a variety of scripted conversations by the user or by third-party content providers.
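A minimal sketch of this on-device selection step follows: a stored digital data module is looked up using the adjacent device's identification data and status data and then passed to the speech synthesizer. The lookup key and the synthesize() stub are assumptions made for illustration only.

```python
# Sketch of the on-device selection and synthesis step.

MEMORY = {   # (adjacent_id, adjacent_status) -> stored digital data module
    ("TOY-0002", "greeting"): "Nice to see you again!",
    ("TOY-0002", "question"): "Hmm, I think the answer is yes.",
}

def synthesize(text: str) -> None:
    # Placeholder for the microprocessor-based speech synthesizer 118.
    print(f"[speaking] {text}")

def respond(adjacent_id: str, adjacent_status: str) -> None:
    module = MEMORY.get((adjacent_id, adjacent_status), "Hello!")
    synthesize(module)

respond("TOY-0002", "greeting")
```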
Another embodiment provides an interactive talking device environment comprising at least two interactive devices (not shown). Each device 100 has a memory for storing data that can be synthesized into speech modules and a speech synthesis processor 118 for converting digital data into a speech module. A microprocessor 136 is connected to the speech synthesis processor, the memory, and a transceiver (not shown). A sensor is provided to identify an adjacent device. A user activates the device 100 based on the detected sensor signal indicating the presence and the response of the adjacent device, so that the device provides a response with respect to the speech from the adjacent device. Software is executed to provide a responsive conversation script according to the status and the script of the adjacent device.
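The turn-taking behavior of this two-device environment might be sketched as below, where each device answers according to the other device's last script line; the Device class and its rule tables are hypothetical and serve only to illustrate the responsive-script idea.

```python
# Sketch of a two-device talking environment with simple turn-taking.

class Device:
    def __init__(self, name: str, rules: dict):
        self.name, self.rules, self.status = name, rules, "idle"

    def respond(self, heard: str) -> str:
        self.status = "talking"
        return self.rules.get(heard, "Tell me more!")

a = Device("Toy A", {"start": "Hi, shall we play a game?"})
b = Device("Toy B", {"Hi, shall we play a game?": "Yes, let's play hide and seek!"})

line = a.respond("start")              # user activates Toy A
print(a.name + ":", line)
print(b.name + ":", b.respond(line))   # Toy B responds to Toy A's script
```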
The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the appended claims.