The present invention relates to an apparatus and method for synchronizing and transmitting five sensory data and to an actual-feeling multimedia data providing system and method; and, more particularly, to a five sensory data synchronizing and transmitting apparatus and method which form packets by describing vibration, odor and taste expressed in video/audio by using touch, odor and taste data descriptors, synchronize touch/odor/taste packets with video/audio packets on a frame basis, and transmit the synchronized packets, and to an actual-feeling multimedia data providing system and method that can provide an actual-feeling multimedia service by demultiplexing the packets transmitted from the five sensory data synchronizing and transmitting apparatus and transmitting video data, audio data, touch data, odor data and taste data to corresponding devices.
Recent developments in digital video/audio technology provide more realistic three-dimensional video and stereophonic sound and, further, an actual-feeling multimedia service that applies all five senses of a human being is in the spotlight.
Korean Patent Laid-open Nos. 2001-0096868 (which relates to a vibration effect device) and 2001-0111600 (which relates to a movie presenting system) disclose the actual-feeling multimedia service technology.
The vibration effect device stores, in a memory in advance, vibration signals expressed in the video by using the frame number or time code of the video, and applies the stored vibration signals to a user whenever the corresponding scenes of the video are outputted.
The movie presenting system includes a vibration device that applies vibration signals to a user according to the intensity of the audio sound outputted from speakers when a movie is shown in a theater and the like.
The conventional technologies do not precisely describe the direction and rotation with respect to the motion of a person or an object expressed in the video/audio and only give the users vibration by using the vibration effect device for a predetermined video/audio play time or by using the vibration device according to the intensity of the audio sound.
However, since the conventional technologies do not precisely describe the direction and rotation with respect to the motion of a person or an object expressed in the video/audio, there is a problem that a user enjoying the video/audio cannot feel the sense of vibration delicately and accurately. Also, since the conventional technologies do not describe the odor and taste expressed in the video/audio, they fail to provide the users with a realistic actual-feeling multimedia service.
Meanwhile, technology is under development for spraying chemical aromatics to users enjoying the video/audio by using an odor device and for releasing taste forming materials to users by using a taste device whenever scenes (or circumstances) change. However, the odor device and the taste device cannot express the exact odor and taste presented in the video/audio, and the chemical aromatics and taste forming materials are sprayed and released only by arbitrary manipulation of the users. Also, in an actual-feeling multimedia data providing system which is under development at present, the vibration, odor and taste are not synchronized with the video and sound presented in the video/audio and are simply described at a level roughly matching each scene.
Technical Problem
It is, therefore, an object of the present invention to provide a five sensory data synchronizing and transmitting apparatus which forms packets by describing vibration, odor and taste expressed in video/audio by using touch, odor and taste data descriptors, synchronizes touch/odor/taste packets with video/audio packets on a frame basis, and transmits the synchronized packets, and a method thereof.
It is another object of the present invention to provide an actual-feeling multimedia data providing system that can provide an actual-feeling multimedia service by demultiplexing the packets transmitted from the five sensory data synchronizing and transmitting apparatus and transmitting video data, audio data, touch data, odor data and taste data to corresponding devices, and a method thereof.
In accordance with one aspect of the present invention, there is provided an apparatus for synchronizing and transmitting five sensory data, which includes: a video/audio data generating unit for generating video/audio data by receiving multimedia data from an external device;
a touch data describing unit for describing vibration expressed in the multimedia data received from the external device based on a predefined touch data descriptor; an odor data describing unit for describing an odor expressed in the multimedia data received from the external device based on a predefined odor data descriptor; a taste data describing unit for describing a taste expressed in the multimedia data received from the external device based on a predefined taste data descriptor; a video/audio packet forming unit for forming video/audio packets out of the video/audio data generated in the video/audio data generating unit; a touch/odor/taste packet forming unit for forming a touch packet, an odor packet and a taste packet out of the touch, odor and taste data which are described in the touch data describing unit, the odor data describing unit and the taste data describing unit, respectively; a multiplexing unit for multiplexing the video/audio packets formed in the video/audio packet forming unit with the touch packet, the odor packet and the taste packet formed in the touch/odor/taste packet forming unit to thereby synchronize the video/audio packets with the touch/odor/taste packets; and a transmitting unit for transmitting a multiplexed packet obtained in the multiplexing unit.
In accordance with another aspect of the present invention, there is provided a method for synchronizing and transmitting five sensory data, which includes the steps of: a) generating video/audio data by receiving multimedia data from an external device; b) describing vibration, an odor and a taste expressed in the multimedia data received from the external device to generate touch data, odor data and taste data based on predefined touch, odor and taste data descriptors, respectively; c) forming video/audio packets out of the video/audio data, and forming a touch packet, an odor packet and a taste packet out of the touch data, the odor data and the taste data, respectively; d) performing synchronization by multiplexing the video/audio packets with the touch packet, the odor packet and the taste packet; and e) transmitting a multiplexed packet to a receiving part.
In accordance with another aspect of the present invention, there is provided a system for providing actual-feeling multimedia data, which includes: a video/audio data generating unit for generating video/audio data by receiving multimedia data from an external device; a touch data describing unit for describing vibration expressed in the multimedia data received from the external device based on a predefined touch data descriptor; an odor data describing unit for describing an odor expressed in the multimedia data received from the external device based on a predefined odor data descriptor; a taste data describing unit for describing a taste expressed in the multimedia data received from the external device based on a predefined taste data descriptor; a video/audio packet forming unit for forming video/audio packets out of the video/audio data generated in the video/audio data generating unit; a touch/odor/taste packet forming unit for forming a touch packet, an odor packet and a taste packet out of the touch, odor and taste data described in the touch data describing unit, the odor data describing unit and the taste data describing unit, respectively; a multiplexing unit for multiplexing the video/audio packets formed in the video/audio packet forming unit and the touch packet, the odor packet and the taste packet formed in the touch/odor/taste packet forming unit to thereby synchronize the video/audio packets with the touch/odor/taste packets; a transmitting unit for transmitting a multiplexed packet obtained in the multiplexing unit; a receiving unit for receiving the multiplexed packet; a demultiplexing unit for demultiplexing the multiplexed packet received by the receiving unit into video data, audio data, touch data, odor data and taste data; a video device for decoding and outputting the video data demultiplexed by the demultiplexing unit; an audio device for decoding and outputting the audio data demultiplexed by the demultiplexing unit; a vibration device for providing vibration to a user by interpreting the touch data demultiplexed by the demultiplexing unit; an odor device for spraying chemical aromatics to the user by interpreting the odor data demultiplexed by the demultiplexing unit; and a taste device for releasing a taste forming material to the user by interpreting the taste data demultiplexed by the demultiplexing unit.
In accordance with another aspect of the present invention, there is provided a method for providing actual-feeling multimedia data in an actual-feeling multimedia data providing system, which includes the steps of: a) generating video/audio data by receiving multimedia data from an external device; b) describing vibration, an odor and a taste expressed in the multimedia data received from the external device to thereby generate touch data, odor data and taste data based on predefined touch, odor and taste data descriptors, respectively; c) forming video/audio packets out of the video/audio data, and forming a touch packet, an odor packet and a taste packet out of the touch data, the odor data and the taste data, respectively; d) performing synchronization by multiplexing the video/audio packets with the touch packet, the odor packet and the taste packet; e) transmitting a multiplexed packet to a receiving part; f) receiving the multiplexed packet and demultiplexing the received multiplexed packet into video data, audio data, touch data, odor data and taste data; g) decoding and outputting the demultiplexed video data and the demultiplexed audio data; h) providing a user with vibration by interpreting the demultiplexed touch data; i) spraying chemical aromatics to the user by interpreting the demultiplexed odor data; and j) releasing taste forming materials to the user by interpreting the demultiplexed taste data.
The above and other objects, features and aspects of the present invention will become apparent from the following description of the preferred embodiments given in conjunction with the accompanying drawings.
As illustrated in the accompanying drawing, the transmitting part 100 comprises a video/audio data generating module 10, a video/audio packet forming module 11, a touch data describing module 12, an odor data describing module 13, a taste data describing module 14, a touch/odor/taste packet forming module 15, a multiplexing module 16, and a transmitting module.
Meanwhile, the receiving part 200 comprises a receiving module 20, a demultiplexing module 21, a video/audio decoding module 22, a video device 23, an audio device 24, a vibration device 25, an odor device 26, and a taste device 27. The receiving module 20 receives the stream-type packets transmitted from the transmitting part 100. The demultiplexing module 21 depacketizes the packets received in the receiving module 20, demultiplexes the result into video data, audio data, touch data, odor data and taste data, and transmits the data to the corresponding processing devices. The video/audio decoding module 22 decodes the video data and the audio data demultiplexed in the demultiplexing module 21. The video device 23 outputs the video data decoded in the video/audio decoding module 22 onto a screen. The audio device 24 outputs the audio data decoded in the video/audio decoding module 22 through a speaker. The vibration device 25 receives the touch data demultiplexed in the demultiplexing module 21 and gives vibration to the user so that the user can feel movement and rotation. The odor device 26 receives the odor data demultiplexed in the demultiplexing module 21 and sprays chemical aromatics so that the user can smell the odor. The taste device 27 receives the taste data demultiplexed in the demultiplexing module 21 and releases chemical taste forming materials so that the user can feel the taste.
Herein, the actual-feeling multimedia data providing system of the present invention includes the transmitting part 100 and the receiving part 200.
Hereinafter, structures and operations of the structural elements will be described in detail.
The video/audio packet forming module 11 forms video/audio packets, each of which is formed of a header and a payload, so as to be suitable for transmitting the compressed stream-type video/audio data generated in the video/audio data generating module 10 through a communication network. Herein, the header contains a destination address, data for checking continuity when data are lost, and data for controlling time synchronization, such as a time stamp, and the payload contains the compressed stream-type video/audio data.
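For illustration only, the following Python sketch shows how such a video/audio packet might be assembled. The concrete header layout (a destination address, a sequence number used for the continuity check, and a time stamp), as well as the field widths and byte order, are assumptions made for this sketch and are not prescribed by the present description.

```python
import struct

def form_video_audio_packet(dest_addr: int, seq_num: int, timestamp: int,
                            payload: bytes) -> bytes:
    """Form one video/audio packet: a header followed by the compressed
    stream-type payload.  Assumed header layout: a 4-byte destination
    address, a 4-byte sequence number for continuity checking when data
    are lost, and an 8-byte time stamp for time synchronization."""
    header = struct.pack("!IIQ", dest_addr, seq_num, timestamp)
    return header + payload

# Example: packetize one chunk of the compressed video/audio stream.
packet = form_video_audio_packet(dest_addr=0x0A000001, seq_num=42,
                                 timestamp=90000, payload=b"\x00\x00\x01\xb3")
```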
The touch data describing module 12 describes vibration expressed in the multimedia data provided from the external device of the contents provider by using descriptors for whether touch data are described, whether right/left movement is described, whether up/down movement is described, whether back/forth movement is described, movement distance, movement speed, movement acceleration, whether right/left rotation is described, right/left rotation angle, right/left rotation speed, and right/left rotation acceleration.
The odor data describing module 13 describes odor expressed in the multimedia data provided from the external device of the contents provider by using descriptors for whether odor data are described, kind of odor, and intensity of odor.
The taste data describing module 14 describes taste expressed in the multimedia data provided from the external device of the contents provider by using descriptors for whether taste data are described, kind of taste, and taste intensity.
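For illustration, the touch, odor and taste descriptions produced by the describing modules 12, 13 and 14 can be thought of as the following records. This is a minimal sketch: the field names mirror the descriptors listed above, while the value types and units are assumptions.

```python
from dataclasses import dataclass

@dataclass
class TouchData:                 # output of the touch data describing module 12
    touch_object_flag: bool      # whether touch data are described
    x_move_flag: bool            # right/left movement described
    y_move_flag: bool            # up/down movement described
    z_move_flag: bool            # back/forth movement described
    move_distance: float         # movement distance (e.g., cm)
    move_speed: float            # movement speed (e.g., cm/second)
    move_acceleration: float     # movement acceleration (e.g., cm/second^2)
    rotation_flag: bool          # right/left rotation described
    rotation_angle: float        # right/left rotation angle
    rotation_speed: float        # right/left rotation speed
    rotation_acceleration: float # right/left rotation acceleration

@dataclass
class OdorData:                  # output of the odor data describing module 13
    smell_object_flag: bool      # whether odor data are described
    odor_type: int               # kind of odor (pre-established code)
    level: int                   # intensity of the odor

@dataclass
class TasteData:                 # output of the taste data describing module 14
    taste_object_flag: bool      # whether taste data are described
    taste_type: int              # kind of taste (pre-established code)
    level: int                   # intensity of the taste
```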
For example, when a producer of an actual-feeling movie service provided to the receiving part 200 watches a pre-produced movie, the producer describes the vibration, odor and taste of the current scene of the movie in the form of touch/odor/taste data by using the touch data descriptors, odor data descriptors and taste data descriptors so as to be suitable for the scene, synchronizes the touch/odor/taste data with the video data and audio data, and transmits the synchronized data to the receiving part 200. Also, not all of the touch/odor/taste data need to be described for one scene, and the touch/odor/taste data may be combined and then described.
The touch/odor/taste packet forming module 15 forms the stream-type touch/odor/taste data, which are described in the touch data describing module 12, the odor data describing module 13, and the taste data describing module 14 by using the corresponding touch/odor/taste descriptors, into packets each including a header, which is a form suitable for transmission to the receiving part 200 through the network. Herein, the header includes descriptor information that describes the touch/odor/taste data. The packets formed in the touch/odor/taste packet forming module 15 include the touch/odor/taste data sequentially.
The multiplexing module 16 synchronizes the video/audio packets and the touch/odor/taste packets formed in the video/audio packet forming module 11 and the touch/odor/taste packet forming module 15. The multiplexing module 16 performs multiplexing by adding all the video/audio packets of each frame that forms the multimedia data and then adding the touch/odor/taste packets after the last packet. That is, one frame is formed of a plurality of video/audio packets, and among the packets of each frame, the touch/odor/taste packets are added after the last packet. In short, the touch data, the odor data and the taste data are sequentially added after the last packet of each frame.
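A minimal sketch of this frame-based multiplexing order follows. The per-frame grouping (all video/audio packets of a frame first, then the touch, odor and taste packets) is taken from the description above; the representation of a packet as a simple Python object is only an illustration.

```python
def multiplex_frame(video_audio_packets, touch_packet, odor_packet, taste_packet):
    """Multiplex one frame: all video/audio packets of the frame come first,
    and the touch, odor and taste packets are appended after the last
    video/audio packet, in that order."""
    return list(video_audio_packets) + [touch_packet, odor_packet, taste_packet]

def multiplex_stream(frames):
    """Concatenate the multiplexed frames into one transmission stream."""
    stream = []
    for va_packets, touch_pkt, odor_pkt, taste_pkt in frames:
        stream.extend(multiplex_frame(va_packets, touch_pkt, odor_pkt, taste_pkt))
    return stream

# Example: two frames, each carrying two video/audio packets.
stream = multiplex_stream([
    (["v0", "a0"], "touch0", "odor0", "taste0"),
    (["v1", "a1"], "touch1", "odor1", "taste1"),
])
```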
The demultiplexing module 21 of the receiving part 200 depacketizes the stream-type packets received in the receiving module 20, removes network-related header information, e.g., the address of the transmitting part 100, demultiplexes the result into video/audio data formed of a header and a payload and touch/odor/taste data formed of a header containing descriptor information, and transmits the data to the corresponding processing devices. Herein, the demultiplexing module 21 examines the headers of the received packets and confirms whether the data of each packet are video data, audio data, touch data, odor data, or taste data. In other words, the video data and audio data that form one frame are all transmitted to the corresponding processing devices, and then the touch data, the odor data and the taste data are sequentially transmitted to the corresponding processing devices. This synchronizes the five sensory data, i.e., the video data, audio data, touch data, odor data and taste data, and makes a user feel the vibration, odor and taste expressed in the circumstance of each scene of the multimedia data along with the video and sound.
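The routing performed by the demultiplexing module 21 can be sketched as follows. The packet-type codes and the per-device queues are assumptions made for the example and are not part of the described header format.

```python
from collections import deque

# Assumed packet-type codes carried in each packet header (illustrative only).
VIDEO, AUDIO, TOUCH, ODOR, TASTE = range(5)

def demultiplex(packets, device_queues):
    """Examine the header of each received packet and hand its data to the
    corresponding processing device.  Within a frame, the video and audio
    packets arrive first, followed by the touch, odor and taste packets,
    which keeps the five sensory data synchronized per frame."""
    for pkt_type, data in packets:
        device_queues[pkt_type].append(data)

queues = {t: deque() for t in (VIDEO, AUDIO, TOUCH, ODOR, TASTE)}
demultiplex([(VIDEO, b"frame-0 video"), (AUDIO, b"frame-0 audio"),
             (TOUCH, b"touch"), (ODOR, b"odor"), (TASTE, b"taste")], queues)
```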
The vibration device 25 is embodied as a vibration chair that can be moved right and left, up and down, and back and forth and/or rotated. The vibration device 25 reads the touch data demultiplexed (or separated) in the demultiplexing module 21 and makes a movement right and left, up and down or back and forth, or makes a rotation. Herein, the starting time and duration of the movement or rotation of the vibration device 25 are determined in synchronization with the video and sound outputted from the video device 23 and the audio device 24. That is, as the transmitting part 100 transmits touch data for certain video and sound, the vibration device 25 reads the transmitted touch data and makes a movement in the requested direction or makes a rotation. Then, if the transmitting part 100 transmits other touch data for other video and sound, the vibration device 25 reads the newly transmitted touch data, stops the previous movement, and makes a movement in a different direction or makes a rotation.
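The behavior in which newly received touch data stop the previous movement and start the newly requested one can be sketched as follows (illustrative only; the shape of a motion command is an assumption based on the touch descriptors described below).

```python
class VibrationDevice:
    """Sketch of the vibration device 25: each newly received touch data
    record stops the previous movement or rotation and starts the one
    requested by the new touch data."""

    def __init__(self):
        self.current_motion = None

    def on_touch_data(self, touch):
        # Stop whatever movement or rotation is currently in progress.
        if self.current_motion is not None:
            self.stop()
        # Start the movement or rotation requested by the new touch data,
        # in synchronization with the video and sound being outputted.
        self.current_motion = touch
        self.start(touch)

    def stop(self):
        print("stopping previous movement")

    def start(self, touch):
        print(f"starting movement: {touch}")

# Example: a right/left movement is interrupted by a rotation request.
chair = VibrationDevice()
chair.on_touch_data({"X_MoveFlag": 1, "MoveDistance": 10, "MoveSpeed": 5})
chair.on_touch_data({"RotationFlag": 1, "RotationAngle": 30})
```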
The odor device 26 is embodied as an aroma sprayer which is provided with a plurality of chemical aromatics and can control the intensity of the odor. It analyzes the odor data demultiplexed, or separated, in the demultiplexing module 21 and sprays a chemical aromatic at the corresponding intensity. Herein, the starting time and duration of the spraying of a specific chemical aromatic in the odor device 26 are determined in synchronization with the video and sound outputted from the video device 23 and the audio device 24. In addition, the odor device 26 can spray one kind of odor by mixing a plurality of chemical aromatics, or spray a plurality of prepared aromatics simultaneously, so as to produce diverse odors corresponding to the odor data described in the transmitting part 100.
The taste device 27 is embodied in such a manner that a plurality of chemical taste forming materials are prepared and a chemical taste forming material of the corresponding taste is released into the mouth of a user through a straw. The taste device 27 analyzes the taste data demultiplexed, or separated, in the demultiplexing module 21 and releases a chemical taste forming material of the corresponding taste. Herein, the starting time and duration of the release of a specific chemical taste forming material in the taste device 27 are determined in synchronization with the video and sound outputted from the video device 23 and the audio device 24.
A touch object flag (TouchObjectFlag) indicates whether or not there is a touch data description. For example, when the touch object flag (TouchObjectFlag) is 1, it means that the touch data are described and, accordingly, the touch data are transmitted from the demultiplexing module 21 of the receiving part 200 to the vibration device 25, thereby activating the vibration device 25.
A ‘Length’ field indicates the size of the touch data packet and the size is 64 bits.
An X_MoveFlag indicates whether or not there is a description of right/left movement in the touch data. For example, when the X_MoveFlag is 1, the vibration device 25 moves right and left.
A Y_MoveFlag indicates whether or not there is a description of up/down movement in the touch data. For example, when the Y_MoveFlag is 1, the vibration device 25 moves up and down.
A Z_MoveFlag indicates whether or not there is a description of back/forth movement in the touch data. For example, when the Z_MoveFlag is 1, the vibration device 25 moves back and forth.
Herein, only one move flag among the X_MoveFlag, the Y_MoveFlag and the Z_MoveFlag is activated for a predetermined time. Thus, the vibration device 25 moves in only one direction among right/left, up/down and back/forth.
A MoveDistance indicates a distance of movement in any one direction among right/left, up/down and back/forth in the touch data. In other words, when any one move flag among the X_MoveFlag, the Y_MoveFlag and the Z_MoveFlag is activated, the MoveDistance indicates the movement distance in the direction corresponding to that move flag. For example, if the X_MoveFlag is 1 and the MoveDistance is 10 cm, the vibration device 25 moves right and left within a range of 10 cm.
A MoveSpeed indicates a speed of movement in one direction among right/left, up/down and back/forth in the touch data. For example, if the X_MoveFlag is 1, the MoveDistance is 10 cm and the MoveSpeed is 5 cm/second, the vibration device 25 moves right and left within a range of 10 cm over 2 seconds.
A MoveAcceleration indicates an acceleration of movement in any one direction among right/left, up/down and back/forth. For example, if the X_MoveFlag is 1, the MoveDistance is 10 cm, the MoveSpeed is 5 cm/second and the MoveAcceleration is 5 cm/second², the vibration device 25 moves right and left within a range of 10 cm over 2 seconds and the movement speed increases gradually at an acceleration of 5 cm/second².
A RotationFlag indicates whether or not there is right/left rotation description. For example, if the RotationFlag is 1, the vibration device 25 is rotated right/left.
A RotationAngle indicates a right/left rotation angle in the touch data.
A RotationSpeed indicates a right/left rotation speed in the touch data.
A RotationAcceleration indicates right/left rotation acceleration in the touch data.
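To make the touch data descriptor concrete, the sketch below packs and unpacks the fields described above into a single 8-byte (64-bit) record, on the assumption that the 64 bits mentioned for the ‘Length’ field refer to the size of the touch data packet. The individual field widths, units and the placement of the flags are assumptions chosen only for illustration.

```python
import struct

# Assumed 64-bit layout (illustrative only): 1 byte of flags
# (TouchObjectFlag, X/Y/Z_MoveFlag, RotationFlag), then MoveDistance,
# MoveSpeed, MoveAcceleration, RotationAngle, RotationSpeed and
# RotationAcceleration as one unsigned byte each, plus one reserved byte.
_TOUCH_FMT = "!8B"

def pack_touch(touch_obj, x_move, y_move, z_move, rotation,
               distance, speed, accel, angle, rot_speed, rot_accel):
    flags = (touch_obj << 4) | (x_move << 3) | (y_move << 2) | (z_move << 1) | rotation
    return struct.pack(_TOUCH_FMT, flags, distance, speed, accel,
                       angle, rot_speed, rot_accel, 0)

def unpack_touch(data):
    (flags, distance, speed, accel,
     angle, rot_speed, rot_accel, _reserved) = struct.unpack(_TOUCH_FMT, data)
    return {
        "TouchObjectFlag": (flags >> 4) & 1,
        "X_MoveFlag": (flags >> 3) & 1,
        "Y_MoveFlag": (flags >> 2) & 1,
        "Z_MoveFlag": (flags >> 1) & 1,
        "RotationFlag": flags & 1,
        "MoveDistance": distance, "MoveSpeed": speed, "MoveAcceleration": accel,
        "RotationAngle": angle, "RotationSpeed": rot_speed,
        "RotationAcceleration": rot_accel,
    }

# Example from the description: right/left movement over 10 cm at 5 cm/second.
payload = pack_touch(1, 1, 0, 0, 0, 10, 5, 0, 0, 0, 0)
assert len(payload) == 8 and unpack_touch(payload)["MoveDistance"] == 10
```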
A SmellObjectFlag indicates whether or not there is an odor data description. For example, if the SmellObjectFlag is 1, it means that the odor data are described and, accordingly, the odor data are transmitted from the demultiplexing module 21 of the receiving part 200 to the odor device 26 to thereby activate the odor device 26.
A ‘Length’ field indicates the size of an odor data packet and the size is 32 bits.
A ‘Type’ indicates the kind of odor in the odor data. For example, if the odor of a certain aroma is pre-established as ‘100’ and the SmellObjectFlag is 1 and the Type is 100, the odor device 26 sprays a chemical aromatic having that aroma.
A ‘Level’ indicates the intensity of the odor in the odor data. For example, if the SmellObjectFlag is 1, the Type is 100 and the Level is 31, the odor device 26 sprays the chemical aromatic having that aroma at the pre-established level of 31. Herein, the higher the level, the stronger the intensity of the odor.
A TasteObjectFlag indicates whether or not there is a taste data description. For example, if the TasteObjectFlag is 1, it means that the taste data are described and, accordingly, the taste data are transmitted from the demultiplexing module 21 of the receiving part 200 to the taste device 27 to thereby activate the taste device 27.
A ‘Length’ field indicates the size of a taste data packet and the size is 32 bits.
A ‘Type’ indicates the kind of taste in the taste data. For example, if a hot taste is pre-established as ‘7’, and the TasteObjectFlag is 1 and the Type is 7, the taste device 27 releases a chemical taste forming material that tastes hot.
A ‘Level’ indicates the intensity of the taste in the taste data. For example, if the TasteObjectFlag is 1, the Type is 7 and the Level is 31, the taste device 27 releases a chemical taste forming material that tastes hot at the pre-established intensity level of 31.
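Since the odor data descriptor and the taste data descriptor share the same structure (an object flag, a ‘Type’ and a ‘Level’), a single sketch covers both. The exact field widths within the stated 32 bits are not given above, so the layout below is an assumption for illustration.

```python
import struct

# Assumed 32-bit (4-byte) layout for an odor or taste descriptor:
# 1-byte object flag, 1-byte Type code, 1-byte Level, 1 reserved byte.
_OT_FMT = "!4B"

def pack_odor_or_taste(object_flag, type_code, level):
    return struct.pack(_OT_FMT, object_flag, type_code, level, 0)

def unpack_odor_or_taste(data):
    object_flag, type_code, level, _reserved = struct.unpack(_OT_FMT, data)
    return {"ObjectFlag": object_flag, "Type": type_code, "Level": level}

# Examples from the description: an aroma pre-established as Type 100 at
# Level 31, and a hot taste pre-established as Type 7 at Level 31.
odor_payload = pack_odor_or_taste(1, 100, 31)
taste_payload = pack_odor_or_taste(1, 7, 31)
assert unpack_odor_or_taste(odor_payload)["Type"] == 100
assert unpack_odor_or_taste(taste_payload)["Level"] == 31
```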
First, at step 500, multimedia data are inputted from an external device, e.g., a contents provider.
At step 501, video/audio data having a compressed stream type are generated. In other words, when multimedia data are inputted from an external device, e.g., a contents provider, compressed stream-type video/audio data are generated by using an image encoding method, such as the Moving Picture Experts Group 2 (MPEG-2) compression encoding method.
Subsequently, at step 503, the stream-type video/audio data generated above are formed into video/audio packets. That is, the stream-type video/audio data are formed into video/audio packets, each formed of a header including destination address information and a payload including the substantial video/audio data, which is a form suitable for transmitting the stream-type video/audio data to the receiving part 200 through a network.
Meanwhile, at step 502, the vibration/odor/taste expressed in the inputted multimedia data are described by using touch/odor/taste descriptors. That is, vibration expressed in the multimedia data provided from the external device, e.g., a contents provider, is described by using a predefined touch descriptor, and the odor expressed in the multimedia data provided from the external device, e.g., a contents provider, is described by using a predefined odor descriptor, while the taste expressed in the multimedia data provided from the external device, e.g., a contents provider, is described by using a predefined taste descriptor.
Subsequently, at step 504, the touch/odor/taste data are formed into touch/odor/taste packets. That is, touch/odor/taste packets, each having a header including the corresponding touch/odor/taste data descriptor information, are sequentially formed so that the above-described touch data, odor data and taste data can be properly transmitted to the receiving part 200 through the network.
Subsequently, at step 505, the video/audio packets and the touch/odor/taste packets are multiplexed on a frame basis. Herein, the multiplexing module 16 synchronizes the video/audio packets and the touch/odor/taste packets which are formed in the video/audio packet forming module 11 and the touch/odor/taste packet forming module 15, respectively. That is, the multiplexing module 16 sequentially performs the multiplexing by adding the plurality of video/audio packets of each frame that forms the multimedia data and, lastly, adding the touch/odor/taste packets in order.
At step 506, the multiplexed packets are transmitted to the receiving part 200. At step 507, the packets are received and demultiplexed into video/audio data and touch/odor/taste data in the receiving part 200. That is, the demultiplexing module 21 of the receiving part 200 depacketizes the stream-type packets received in the receiving module 20 and finds out whether the packets carry video data, audio data, touch data, odor data or taste data by checking the headers of the received packets.
At step 508, the demultiplexed video/audio data are decoded in the receiving part 200.
Subsequently, at step 509, video data decoded in the receiving part 200 are transmitted to the video device 23.
At step 510, audio data decoded in the receiving part 200 are transmitted to the audio device 24.
At step 511, the touch data demultiplexed in the receiving part 200 in the step 507 are transmitted to the vibration device 25.
At step 512, odor data demultiplexed in the receiving part 200 in the step 507 are transmitted to the odor device 26.
At step 513, taste data demultiplexed in the receiving part 200 in the step 507 are transmitted to the taste device 27.
Accordingly, at step 514, the video device 23 outputs the video data on a screen and, at step 515, the audio device 24 outputs the audio data through a speaker. At step 516, the vibration device 25 analyzes the touch data and gives vibration so that the user can feel the sense of touch. At step 517, the odor device 26 analyzes the odor data and sprays a chemical aromatic so that the user can smell the odor. At step 518, the taste device 27 analyzes the taste data and releases a chemical taste forming material so that the user can feel the taste.
The method of the present invention described above can be embodied as a program and stored in a computer-readable recording medium, e.g., a CD-ROM, a RAM, a ROM, floppy disks, hard disks, magneto-optical disks and the like. Since the process can be easily implemented by those of ordinary skill in the art, further description of it will not be provided herein.
Since the present invention describes the vibration, odor and taste expressed in multimedia data by using touch/odor/taste data descriptors and transmits them to the corresponding devices on the part of the user who receives the multimedia service, the user can receive a more realistic actual-feeling multimedia service and experience the five senses expressed in the multimedia data.
Also, the present invention can provide the user with vibration, odor and taste that conform to each scene of the multimedia data with the vibration device, odor device and taste device by transmitting the synchronized video data, audio data, touch data, odor data and taste data based on each frame of the multimedia data. Therefore, the technology of the present invention can make the user feel the five senses expressed in each scene of the multimedia data precisely.
While the present invention has been described with respect to certain preferred embodiments, it will be apparent to those skilled in the art that various changes and modifications may be made without departing from the scope of the invention as defined in the following claims.
Priority: Korean Patent Application No. 10-2003-0079865, filed in November 2003 (KR, national).
PCT Filing: PCT/KR03/02917, filed December 30, 2003 (WO); 371(c) date: January 30, 2007.