Benefit is claimed to Indian Provisional Application No. 2283/CHE/2012 titled “KIDS TELECOMMUNICATION DEVICE” by GUPTA, Ankush, filed on 12 Oct. 2012, which is herein incorporated in its entirety by reference for all purposes.
The present invention relates to the field of communication and more particularly to a communication method and system for enabling communication using an animated character in real-time.
Generally, computer animation is more compelling when it includes realistic, human-like interaction among the components in a graphics scene. This is especially true when the animated characters in a graphics scene are meant to simulate life-like interaction. With conventional methods, however, it is difficult for application programs to synchronize the actions of characters so that they appear more life-like.
Most current applications use a time-based scripting system, in which the precise times of individual actions and gestures are specified so that the animation evolves in lock step with a clock. This method is very flexible and quite powerful. Unfortunately, it requires a great deal of attention to each frame, it is very time-consuming, and the resulting script is hard to read. These limitations restrict the use and availability of animation to designers in the mass market. Since it is particularly difficult to express such scripts in string format, they are especially unsuitable for the World Wide Web (the Web), over which most control information is transmitted as text.
In conventional communication systems, video and voice are transmitted over the network, which consumes a large amount of data and high bandwidth. Moreover, conventional communication sessions such as chat environments and video communication do not provide an option to animate an animated character in real time using traditional landline telephones. Furthermore, most communication sessions involving communication protocols require both ends to take an action using a keyboard or touch screen.
The objective of the invention is to provide a method of remotely controlling an animated character running on a communication device.
Another objective of the invention is to provide a mechanism to remotely control various emotions and activities of an animated character through voice.
Yet another objective of the invention is to provide a method and system for creating dynamic real-time video of animated characters using corresponding fragments of videos and images.
Yet another objective of the invention is to provide a method and system for enabling state transitions of activities of the animated character.
Yet another objective of the invention is to provide a method and system for enabling a communication device adapted to manage one or more classrooms to provide a real-time learning experience.
The foregoing has outlined, in general, the various aspects of the invention and is to serve as an aid to better understanding the more complete detailed description which is to follow. In reference to such, there is to be a clear understanding that the present invention is not limited to the method or application of use described and illustrated herein. It is intended that any other advantages and objects of the present invention that become apparent or obvious from the detailed description or illustrations contained herein are within the scope of the present invention.
The various embodiments of the present invention provide a method of enabling communication between at least two communication devices using an animated character in real-time. In one aspect of the present invention, the method comprises establishing a communication session between a first communication device and a second communication device. Further, the first communication device transmits a voice signal and an event message to the second communication device. The transmitted voice signal and event message are analyzed by a data analyzer module in the second communication device. The method further comprises creating, by an animation engine, an animation sequence corresponding to the animated character based on the analysis, and displaying the animated character in the second communication device. The method according to the present invention enables the animated character to perform a plurality of pre-defined actions on the second communication device, wherein the plurality of pre-defined actions comprises at least one of selecting an emotion or performing an activity by the animated character based on one or more control instructions from the first communication device.
Additionally, the method comprises activating a communication application pre-installed in the first communication device and the second communication device and selecting an animated character corresponding to a pre-defined user identity. Furthermore, the method comprises dividing the received voice signal based on a predefined duration at a pre-defined frame rate and computing the maximum amplitude of the voice signal for the predefined duration. The method further comprises extracting a plurality of header attributes, identifying one or more commands provided in a header of the event message, and mapping at least one of an emotion or an activity based on the plurality of header attributes.
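The following is a minimal sketch of the voice-division step described above, assuming a mono 16-bit PCM signal; the 8 kHz sample rate, 15 fps frame rate, and function name are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def frame_max_amplitudes(pcm: np.ndarray, sample_rate: int = 8000,
                         frame_rate: int = 15) -> np.ndarray:
    """Divide a voice signal into frames of a predefined duration at a
    pre-defined frame rate and compute each frame's maximum amplitude.

    At frame_rate frames per second, each frame spans
    sample_rate // frame_rate samples (the predefined duration).
    """
    samples_per_frame = sample_rate // frame_rate
    usable = len(pcm) - len(pcm) % samples_per_frame  # drop the trailing partial frame
    frames = pcm[:usable].reshape(-1, samples_per_frame)
    return np.abs(frames).max(axis=1)
```

These per-frame maxima are what the animation engine later uses to select lip-sync image frames.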
The method further comprises selecting one or more image frames based on the computed amplitude of the voice signal, selecting one or more image frames or video frames corresponding to the selected animated character, performing a frame animation on the selected one or more image frames, performing a video animation on the selected one or more image frames or video frames corresponding to the selected animated character based on the one or more commands in the event message, and combining the frame-animated image frames and the video-animated video frames to create the animation sequence. The method further comprises modulating the received voice signal based on the selected animated character.
In another aspect, a system for enabling communication between at least two communication devices using an animated character in real-time comprises a first communication device, a server, and a second communication device. The second communication device comprises an application module comprising a data analyzer module configured for analyzing the voice signal and an event message, and an animation engine configured for creating an animation sequence corresponding to the animated character, enabling the animated character to perform a plurality of pre-defined actions on the second communication device, and controlling the animated character based on one or more control instructions from the first communication device. The second communication device further comprises a display module configured for displaying the animated character in the second communication device.
In another aspect, a device for enabling communication using an animated character in real-time comprises a communication module configured for establishing a communication session with another communication device and receiving a voice signal and an event message from the other communication device, and an application module comprising a data analyzer module and an animation engine. The data analyzer module is configured for analyzing the voice signal and the event message. The animation engine is configured for creating an animation sequence corresponding to the animated character based on the analysis and enabling the animated character to perform a plurality of pre-defined actions. Further, the device comprises a user interface module for displaying the animated character.
Additionally, the device comprises a resource repository adapted for storing a plurality of pre-defined animated characters and a plurality of image frames, video frames, and audio frames associated with the plurality of animated characters.
Moreover, the data analyzer module of the device according to the present invention comprises an attribute extraction module and a voice processing module. The attribute extraction module is configured for extracting a plurality of header attributes, identifying one or more commands provided in a header of the event message, and mapping at least one of an emotion or an activity based on the plurality of header attributes. The voice processing module is configured for dividing the received voice signal based on a predefined duration at a pre-defined frame rate and computing the maximum amplitude of the voice signal for the predefined duration.
Likewise, the animation engine in the device according to the present invention comprises a frame animation module, a video animation module, and a frame combining module. The frame animation module is configured for selecting one or more image frames based on the computed amplitude of the voice signal and performing a frame animation on the selected one or more image frames. The video animation module is configured for selecting one or more image frames or video frames corresponding to the selected animated character and performing a video animation on the selected frames based on the one or more commands in the event message. The frame combining module is configured for combining the frame-animated image frames and the video-animated video frames to create the animation sequence.
The device further comprises a voice modulation module configured for modulating the received voice signal based on the selected animated character.
The present invention provides a method, system, and device for enabling communication between at least two communication devices using an animated character in real-time. In the following detailed description of the embodiments of the invention, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims.
According to an embodiment of the present invention, the user identity corresponding to the first communication device 101 is registered in the server. A plurality of such user identities is registered in the server. Accordingly, the server authenticates the first communication device 101 to establish a communication session with the second communication device 100 only if the user identity corresponding to the first communication device 101 is registered in the server, and vice versa. Once the communication session is established between the first communication device 101 and the second communication device 100, an animated character corresponding to the pre-registered user identity is displayed in the second communication device 100. The animated character displayed in the second communication device 100 is controlled by the first communication device 101. Further, the voice signal corresponding to the user of the first communication device 101 is modulated to correspond to the voice of the animated character displayed in the second communication device 100. The server 103 uses the gateways 104 and ENUM servers 105, or equivalent technology, to facilitate calls to the traditional landline telephone device 102.
At step 205, the application module 604 of the second communication device 100 determines whether any event message is received from the first communication device. If the second communication device 100 receives an event message, then at step 206, the voice signal and event message are analyzed by a data analyzer module. An exemplary method of analyzing the received voice signal in accordance with the embodiment of the present invention is illustrated in the accompanying drawings.
According to another embodiment herein, once a connection is established to the second communication device 100, device control is transferred to the state machine 718 of the application module. The control instructions provided by the state machine 718 enable the animated character to assume at least one of an activity state, a talking state, a listening state, and an idle state.
At step 208, the animation sequence is displayed in the second communication device 100 as per the event message and voice signal received from the first communication device 101. At step 210, the animated character displayed on the second communication device 100 is enabled to perform a plurality of pre-defined actions. The plurality of pre-defined actions comprises selecting an emotion or performing an activity by the animated character based on one or more control instructions sent from the first communication device. For example, the control instructions include changing the character's dress, hair, color, or the like in real time. The control instructions can further control the speaker and microphone volume remotely.
The memory 608 may include a volatile memory 610 and a non-volatile memory 612. The memory 608 includes a resource repository 614. A detailed illustration of the resource repository according to an exemplary embodiment of the present invention is provided in the accompanying drawings.
A variety of computer-readable media may be stored in and accessed from the memory elements of the communication device 600, such as the volatile memory 610 and the non-volatile memory 612, the removable storage 620 and the non-removable storage 622. Memory elements may include any suitable memory device(s) for storing data and machine-readable instructions, such as read only memory, random access memory, erasable programmable read only memory, electrically erasable programmable read only memory, hard drive, removable media drive for handling compact disks, digital video disks, diskettes, magnetic tape cartridges, memory cards, Memory Sticks, and the like.
The processor 606, as used herein, means any type of computational circuit, such as, but not limited to, a microprocessor, a microcontroller, a complex instruction set computing microprocessor, a reduced instruction set computing microprocessor, a very long instruction word microprocessor, an explicitly parallel instruction computing microprocessor, a graphics processor, a digital signal processor, or any other type of processing circuit. The processor 606 may also include embedded controllers, such as generic or programmable logic devices or arrays, application specific integrated circuits, single-chip computers, smart cards, and the like.
Embodiments of the present subject matter can be implemented in conjunction with program modules, including functions, procedures, data structures, and application programs, for performing tasks, or defining abstract data types or low-level hardware contexts. Machine-readable instructions stored on any of the above-mentioned storage media may be executable by the processing unit 606. The machine-readable instructions may cause the communication device 600 to encode according to the various embodiments of the present subject matter.
The data analyzer module 702 is configured for analyzing the transmitted voice signal and event message. Typically, the data analyzer module 702 comprises an attribute extraction module 708 and a voice processing module 710. The attribute extraction module 708 is configured for extracting a plurality of header attributes and identifying one or more commands provided in a header of the event message and mapping at least one of an emotion and activity based on the plurality of header attributes. The voice processing module 710 is configured for dividing the received voice signal based on a predefined duration at a pre-defined frame rate and computing maximum amplitude of the voice signal for the predefined duration.
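As a rough illustration of the attribute extraction described above, the sketch below parses "Name: value" header lines from an event message and classifies the command as an emotion or an activity. The specific emotion and activity names, and the function name, are hypothetical; the disclosure does not enumerate them.

```python
# Hypothetical command vocabularies, for illustration only.
EMOTIONS = {"smile", "laugh", "cry"}
ACTIVITIES = {"dance", "wave", "jump"}

def extract_header_attributes(raw_headers: str) -> dict:
    """Extract header attributes from an event message and map the
    Command header, if present, to an emotion or an activity."""
    attrs = {}
    for line in raw_headers.splitlines():
        name, sep, value = line.partition(":")
        if sep:
            attrs[name.strip().lower()] = value.strip()
    command = attrs.get("command")
    if command in EMOTIONS:
        attrs["kind"] = "emotion"
    elif command in ACTIVITIES:
        attrs["kind"] = "activity"
    return attrs
```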
The animation engine 704 of the application module 604 comprises a video animation module 712, a frame animation module 714, a frame combining module 716, and a state machine 718. The computed amplitude of the voice signal is sent to the frame animation module 714. The frame animation module 714 is configured for selecting one or more image frames based on the computed amplitude of the voice signal and performing the frame animation on the selected one or more image frames. The identified commands of the event message are sent to the video animation module 712. The video animation module 712 is configured for selecting one or more image frames or video frames corresponding to the selected animated character and performing the video animation on the selected video frames based on the one or more commands in the event message. The outcomes of the video animation module 712 and the frame animation module 714 are sent to the frame combining module 716. The frame combining module 716 is configured for combining the frame-animated image frames and the video-animated video frames to create the animation sequence.
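A minimal sketch of how the frame animation and frame combining might work, treating frames as opaque objects: louder audio selects a wider mouth image, and each selected mouth frame is paired with the activity frame chosen by the video animation module for the same time slot. The pairing-by-overlay approach and all names here are assumptions, not details from the disclosure.

```python
def select_mouth_frame(amplitude, peak, mouth_frames):
    """Frame animation: map a frame's peak amplitude to one of N mouth
    images, ordered from closed (quiet) to wide open (loud)."""
    index = min(int(amplitude / peak * len(mouth_frames)), len(mouth_frames) - 1)
    return mouth_frames[index]

def combine_frames(amplitudes, mouth_frames, activity_frames):
    """Frame combining: pair each video-animated activity frame with the
    frame-animated mouth image selected for the same time slot."""
    peak = max(amplitudes, default=0) or 1.0  # avoid division by zero on silence
    return [(body, select_mouth_frame(amp, peak, mouth_frames))
            for amp, body in zip(amplitudes, activity_frames)]
```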
The state machine 718 enables the animated character to be in states such as activity, talking, listening, and idle. The state machine 718 creates the animation sequence corresponding to the animated character when the second communication device 100 receives voice signals and event messages from the first communication device 101. According to one embodiment of the present invention, the state machine 718 has states such as activity, talking, listening, and idle. Whenever an event message is received, the state machine 718 moves to the activity state until the activity completes or the next event message is received. The animated character is in the talking state whenever the second communication device receives voice signals from the first communication device and is not performing any activity. The animated character is in the listening state only when it receives voice packets from the microphone associated with the second communication device while no voice signals are being received from the first communication device. Likewise, the animated character is in the idle state when the first communication device 101 is not transmitting a voice signal or event message and no voice signals are received from the microphone associated with the second communication device.
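The transition rules above can be summarized in a small state machine. This is a sketch of those rules only; the class and method names are illustrative.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    TALKING = auto()
    LISTENING = auto()
    ACTIVITY = auto()

class CharacterStateMachine:
    """Sketch of the transition rules attributed to state machine 718."""
    def __init__(self):
        self.state = State.IDLE

    def update(self, event_received: bool, activity_done: bool,
               remote_voice: bool, mic_voice: bool) -> State:
        if event_received:
            self.state = State.ACTIVITY   # a new event message starts an activity
        elif self.state is State.ACTIVITY and not activity_done:
            pass                          # stay until the activity completes
        elif remote_voice:
            self.state = State.TALKING    # remote voice and no running activity
        elif mic_voice:
            self.state = State.LISTENING  # only local microphone input
        else:
            self.state = State.IDLE       # no voice from either side
        return self.state
```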
The voice modulation module 706 determines the bit rate of the voice signal. Subsequently, the voice modulation module 706 changes the bit rate of the voice signal according to the animated character displayed on the second communication device 100. A child-like voice effect is created by increasing the bit rate of the user's voice signal. The modulated voice is played on the second communication device 100 through a speaker or an earphone.
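One crude way to realize this effect is sketched below, assuming a mono PCM signal: resampling the signal so it plays back faster raises the pitch, which is one reading of the bit-rate increase described above. The factor value and function name are assumptions for illustration.

```python
import numpy as np

def childlike_voice(pcm: np.ndarray, factor: float = 1.5) -> np.ndarray:
    """Resample a PCM voice signal so it plays back `factor` times
    faster at the original sample rate, raising the pitch (and tempo)
    to approximate a child-like voice."""
    positions = np.arange(0, len(pcm), factor)  # fractional sample positions
    return np.interp(positions, np.arange(len(pcm)), pcm.astype(float))
```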
While constructing video fragments for multiple activities, emotions, and expressions, every fragment starts from the same frame and ends with that same starting frame. Typically, this starting frame shows the animated character standing in an idle position. When transitioning from one video fragment to another, the fragments are joined at this shared frame, which brings continuity to the animation and creates the impression that the animated character is interacting with the students.
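A sketch of joining such fragments follows, with frames treated as comparable objects; dropping the duplicated boundary frame at each join is an assumption about how the continuity is achieved.

```python
def join_fragments(fragments):
    """Concatenate video fragments that each begin and end on the same
    idle frame, skipping the repeated boundary frame at every join so
    the character moves seamlessly between activities."""
    if not fragments:
        return []
    sequence = list(fragments[0])
    for fragment in fragments[1:]:
        assert fragment[0] == sequence[-1], "fragments must share the idle boundary frame"
        sequence.extend(fragment[1:])  # the boundary frame is already present
    return sequence
```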
Further in this embodiment, the first communication device has an option to display the camera feed of an attached camera or a network camera associated with the second communication device. The first communication device 101 also controls system aspects of the second communication device 100, such as speaker and microphone volume levels. The first communication device 101 can increase, decrease, mute, or unmute the speaker or microphone of the second communication device 100 by sending an additional SIP header Command=<value> parameter in a SIP INFO or equivalent event message. The Command header has the following values: "increase_mic_volume", "decrease_mic_volume", "mute_mic", "unmute_mic", "increase_speaker_volume", "decrease_speaker_volume", "mute_speaker", and "unmute_speaker". The first communication device facilitates sending of these primitives to the second communication device. Once the second communication device receives the "Command" header, it extracts the value of the parameter and performs the appropriate function on the second communication device using well-known methods provided by device drivers.
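The receiving-side dispatch might look like the sketch below. The eight values come from the list above, while the `audio` controller object, its methods, and the volume step are hypothetical stand-ins for platform device-driver calls.

```python
def handle_command(value: str, audio, step: int = 10) -> None:
    """Dispatch a SIP INFO Command=<value> header to the device's
    speaker and microphone controls."""
    actions = {
        "increase_mic_volume":     lambda: audio.set_mic_volume(audio.mic_volume() + step),
        "decrease_mic_volume":     lambda: audio.set_mic_volume(audio.mic_volume() - step),
        "mute_mic":                lambda: audio.set_mic_muted(True),
        "unmute_mic":              lambda: audio.set_mic_muted(False),
        "increase_speaker_volume": lambda: audio.set_speaker_volume(audio.speaker_volume() + step),
        "decrease_speaker_volume": lambda: audio.set_speaker_volume(audio.speaker_volume() - step),
        "mute_speaker":            lambda: audio.set_speaker_muted(True),
        "unmute_speaker":          lambda: audio.set_speaker_muted(False),
    }
    action = actions.get(value)
    if action is not None:
        action()
```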
Although the present embodiments have been described with reference to specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the various embodiments. Furthermore, the various devices, modules, analyzers, generators, and the like described herein may be enabled and operated using hardware circuitry, for example, complementary metal oxide semiconductor based logic circuitry, firmware, software, and/or any combination of hardware, firmware, and/or software embodied in a machine readable medium. For example, the various electrical structures and methods may be embodied using transistors, logic gates, and electrical circuits, such as an application specific integrated circuit.
Number | Date | Country | Kind
---|---|---|---
4283/CHE/2012 | Oct 2012 | IN | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/IN2013/000618 | 10/14/2013 | WO | 00