This application claims priority to and the benefit of Korean Patent Application No. 10-2010-0115687 filed in the Korean Intellectual Property Office on Nov. 19, 2010, the entire contents of which are incorporated herein by reference.
The present invention relates to 4D (four-dimensional) broadcasting technology. More particularly, an exemplary embodiment of the present invention relates to a method and an apparatus for controlling a broadcasting network and a home network for a 4D broadcasting service.
With recent technological development, solutions for implementing 3D content in broadcasting receivers have been developed. Representative methods of providing 3D content include a glasses type and a glasses-free type. Furthermore, parallax barrier technology and lenticular technology have been discussed as more detailed methods for implementing a glasses-free 3D TV.
In the parallax barrier method, numerous fine bars are erected in front of the display so that each eye cannot see the channel intended for the other eye. That is, at a predetermined viewpoint, the left image is hidden from the right eye and the right image is hidden from the left eye.
Meanwhile, the lenticular method operates on the same principle as a stereoscopic picture postcard: a transparent film with an uneven surface is laminated onto the display, and an array of small lenses refracts the left and right images so that each is sent to the corresponding eye.
However, the 3D technology discussed up to now has focused primarily on improving the 3D effect of the image itself; the study of processing realistic effect data related to 3D images, and the development of corresponding solutions, have been insufficient.
The present invention has been made in an effort to provide a network protocol and device capable of processing realistic effect data related to 3D images more accurately and rapidly.
An exemplary embodiment of the present invention provides a method for controlling a broadcasting network for a 4D (four-dimensional) broadcasting service, the method including: encoding image data of a predetermined place photographed by multiple cameras; receiving realistic effect data from at least one sensor sensing state information of the predetermined place while the photographing is performed; synchronizing the realistic effect data and the image data with each other by considering an encoding time of the image data; generating a transport stream (TS) including the realistic effect data and the image data based on the synchronization; and transmitting the generated TS to a home network.
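The steps above can be sketched in outline as follows. This is a minimal illustration, not the patent's implementation: the record types, the in-memory "transport stream", and the fixed `encoding_delay` are assumptions, since the patent defines no concrete data structures.

```python
from dataclasses import dataclass

@dataclass
class EffectSample:
    timestamp: float  # seconds on the sensor clock
    payload: dict     # e.g. {"temperature": 31.5}; illustrative shape

def build_transport_stream(encoded_frames, effect_samples, encoding_delay):
    """Synchronize sensor effects with encoded frames and interleave them.

    encoded_frames: list of (presentation_time, frame_bytes) pairs
    effect_samples: EffectSample records collected while photographing
    encoding_delay: seconds by which video encoding lags the sensors
    """
    # Delay each effect timestamp by the encoding time so it lines up
    # with the reproduction time of the image it was sensed alongside.
    shifted = [EffectSample(s.timestamp + encoding_delay, s.payload)
               for s in effect_samples]
    # Merge both streams into one time-ordered "transport stream".
    ts = [("video", t, frame) for t, frame in encoded_frames]
    ts += [("effect", s.timestamp, s.payload) for s in shifted]
    ts.sort(key=lambda entry: entry[1])
    return ts
```

The key design point is that synchronization is achieved purely by shifting effect timestamps forward, mirroring the delay-based synchronization the embodiments describe.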
Another exemplary embodiment of the present invention provides a method for controlling a home network for a 4D broadcasting service, the method including: receiving a TS including image data of a predetermined place photographed by multiple cameras and realistic effect data received from at least one sensor sensing state information of the predetermined place; transmitting a first area corresponding to the image data in the received TS to an image processor; transmitting a second area corresponding to the realistic effect data in the received TS to a realistic effect data analyzing module; and transmitting a command signal depending on the realistic effect data analyzed by the realistic effect data analyzing module to a corresponding device of the home network.
Yet another exemplary embodiment of the present invention provides a broadcasting server of a broadcasting network, the server including: a first receiving module receiving encoded image data from an encoder encoding image data of a predetermined place photographed by multiple cameras; a second receiving module receiving realistic effect data from at least one sensor sensing state information of the predetermined place while the photographing is performed; a synchronization module synchronizing the realistic effect data and the image data with each other by considering an encoding time of the image data; a TS generating module generating a transport stream (TS) including the realistic effect data and the image data based on the synchronization; and a transmission module transmitting the generated TS to a home network.
Still another exemplary embodiment of the present invention provides a control device of a home network, the device including: a receiving module receiving a TS including image data of a predetermined place photographed by multiple cameras and realistic effect data received from at least one sensor sensing state information of the predetermined place; an image processor processing a first area corresponding to the image data in the received TS; a realistic effect data analyzing module processing a second area corresponding to the realistic effect data in the received TS; and a transmission module transmitting a command signal depending on the realistic effect data analyzed by the realistic effect data analyzing module to a corresponding device of the home network.
According to the exemplary embodiments of the present invention, realistic effects that existing broadcasting media cannot provide can be reproduced by adding the realistic effect information required for a realistic service to existing broadcasting media including moving pictures, audio, and text.
Further, according to the exemplary embodiments of the present invention, a service with improved realism can be provided by generating realistic effect information related to 3D images in linkage with information acquired at the time the images are actually photographed.
Besides, according to the exemplary embodiments of the present invention, realistic effects for which a large amount of data is generated in a short time, such as the motions of people, can be generated in real time by using an aggregator, so that the related data can be processed as a single sequence.
In the exemplary embodiments described below, components and features of the present invention are combined with each other in predetermined patterns. Each component or feature may be considered optional unless stated otherwise, and may be practiced without being combined with other components or features. Further, some components and/or features may be combined with each other to configure the exemplary embodiments of the present invention. The order of operations described in the exemplary embodiments of the present invention may be modified. Some components or features of any exemplary embodiment may be included in other exemplary embodiments or substituted with corresponding components or features of other exemplary embodiments.
The exemplary embodiments of the present invention may be implemented through various means. For example, the exemplary embodiments of the present invention may be implemented by hardware, firmware, software, or combinations thereof.
In the case of implementation by hardware, a method according to the exemplary embodiment of the present invention may be implemented by application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), a processor, a controller, a microcontroller, a microprocessor, and the like.
In the case of implementation by firmware or software, the method according to the exemplary embodiments of the present invention may be implemented in the form of a module, a process, or a function performing the functions or operations described above. Software code may be stored in a memory unit and driven by a processor. The memory unit may be positioned inside or outside of the processor and may transmit and receive data to and from the processor by various previously known means.
Throughout this specification and the claims that follow, when it is described that an element is “coupled” to another element, the element may be “directly coupled” to the other element or “electrically coupled” to the other element through a third element. In addition, unless explicitly described to the contrary, the word “comprise” and variations such as “comprises” or “comprising”, will be understood to imply the inclusion of stated elements but not the exclusion of any other elements.
Further, the term “module” and the like described in the specification imply a unit for processing a predetermined function or operation, and can be implemented by hardware, software, or a combination of hardware and software.
Predetermined terms used in the following description are provided to help in understanding the present invention, and their use may be modified into different forms without departing from the spirit of the present invention.
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. In describing the present invention, well-known functions or constructions will not be described in detail since they may unnecessarily obscure the understanding of the present invention. In addition, terms described below as terms defined by considering their functions in the present invention may be changed depending on a user or operator's intention or a convention. Therefore, the definitions should be made on the basis of overall contents of the specification.
According to the exemplary embodiment of the present invention, while actually recording broadcasts such as movies, dramas, and the like by using a 3D camera, environmental information, motion information, and realistic information are automatically acquired by using a sensor installed at the place where the images are acquired, or a mobile sensor, and are added to the images in real time. Conversely, when realistic effect data related to the place where a 3D image is acquired is fully separated from the process of acquiring the 3D image, a user cannot fully receive the situation or feeling of the place where the 3D image is photographed. Therefore, the technology of converting the image input from the 3D camera into a broadcasting media format, the aggregator technology for collecting realistic effects, the technology of storing the image and the realistic effect in a broadcasting format, MPEG-2 TS, the technology of transmitting the realistic effect synchronized with the image, the technology of extracting the image and the realistic effect in a home server, the technology of verifying the realistic effect in a simulator, and the technology of reproducing the realistic effect by using an actual realistic device will be described more specifically below.
As shown in
Further, while photographing using the multiple cameras 100 is in progress, an aggregator 102 for collecting realistic effect data collects circumferential information and converts the collected information into an XML format to transmit it to the broadcasting server 103. Here, the circumferential information is acquired by using sensors for, for example, temperature, humidity, illumination, acceleration, angular velocity, gas, wind velocity, and positional information.
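A minimal sketch of this XML conversion follows; the element names are illustrative assumptions, since the actual metadata schema is defined in the referenced figure, not here.

```python
import xml.etree.ElementTree as ET

def readings_to_xml(readings):
    """Render one round of sensor readings as an XML fragment.

    `readings` maps a sensor name (e.g. "temperature") to its sensed
    value; the <SensedInfo> root and per-sensor element names are
    placeholders for the patent's actual schema.
    """
    root = ET.Element("SensedInfo")
    for name, value in readings.items():
        child = ET.SubElement(root, name)
        child.text = str(value)
    return ET.tostring(root, encoding="unicode")
```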
Furthermore, the broadcasting server 103 performs a synchronization operation in order to match the collected realistic information with the reproduction time of the image. In practice, since the encoding time of the image data is long while the realistic effect is collected by the sensor within a relatively short time, the synchronization operation is performed by delaying the timestamps of the realistic effect by a predetermined amount so that they line up with the reproduction time of the image. The synchronized data is converted into MPEG-4, which is a storage format.
The moving picture encoded into MPEG-4 is encoded again into the MPEG-2 TS format by using an MPEG-4 over MPEG-2 TS encoder for the broadcasting service. Thereafter, the moving picture encoded into the MPEG-2 TS format is multicast to an IP network 104 by using UDP/IP. Meanwhile, MPEG-4 and MPEG-2 have been described above as an example, but the present invention is not necessarily limited thereto and data having different formats may be used.
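As a rough sketch, the UDP/IP multicast step might look like the following. The 188-byte TS packet size is fixed by MPEG-2 Systems; the 7-packets-per-datagram grouping, the group address, and the port are common practice rather than anything the patent specifies.

```python
import socket

TS_PACKET = 188           # fixed MPEG-2 TS packet size
PACKETS_PER_DATAGRAM = 7  # 7 * 188 = 1316 bytes fits a typical MTU

def datagrams(ts_bytes):
    """Split an MPEG-2 TS byte stream into UDP-sized datagrams."""
    step = TS_PACKET * PACKETS_PER_DATAGRAM
    return [ts_bytes[i:i + step] for i in range(0, len(ts_bytes), step)]

def multicast(ts_bytes, group="239.0.0.1", port=1234, ttl=1):
    """Multicast the stream to the home network over UDP/IP.

    The group address and port are placeholders for illustration.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    for dgram in datagrams(ts_bytes):
        sock.sendto(dgram, (group, port))
    sock.close()
```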
A home server 106 receives the multicast broadcasting contents by using a UDP/IP receiver and thereafter removes the TS header by using an MPEG-2 TS demux. During this process, the 3D image is transmitted to a 3D player 105 and the realistic effect data is transmitted to a corresponding device 107 to be reproduced in synchronization with the 3D image. In particular, the corresponding device 107 is a device that can process the metadata related to the realistic effect transmitted from the broadcasting server 103 and may be, for example, an aroma emitter, an LED, a curtain, a bubble generator, a tickler, an electric fan, a heater, a haptic mouse, a treadmill, a motion chair, a display, or the like.
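The demultiplexing step can be sketched as routing fixed-size TS packets by the PID carried in their headers; the specific PID values in the usage below are placeholders, not values from the patent.

```python
def route_ts(packets, video_pid, effect_pid):
    """Demultiplex received 188-byte TS packets by PID.

    Packets matching `video_pid` go to the 3D player path; packets
    matching `effect_pid` go to the realistic-effect path.
    """
    video, effects = [], []
    for pkt in packets:
        # The PID is the low 13 bits spread over header bytes 1 and 2.
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
        if pid == video_pid:
            video.append(pkt)
        elif pid == effect_pid:
            effects.append(pkt)
    return video, effects
```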
As shown in
The production & delivery network 200 is a component that acquires the 3D image and the realistic effect and the home network 201 is a component that receives and reproduces the 3D image and the realistic effect.
Furthermore, the production & delivery network 200 may include an MPEG-4(H.264) encoder 210, an MPEG-4 over MPEG-2 TS converter 211, and an MPEG-2 TS streamer 212. Further, the home network 201 may include an MPEG-2 TS remultiplexer 213, a realistic device reproducer 214, and an MPEG-4(H.264) player 215.
More specifically, an NVS encoder 220 is an encoder for converting an analog image photographed by using a 2D camera or 3D camera into a digital image, and the image output from the NVS encoder 220 is transmitted to a raw data receiver 221 of the broadcasting server. The raw data receiver 221 converts the received raw data into MPEG-4 data. However, the MPEG-4 data needs to be converted into the MPEG-2 TS again in order to be multicast to the home network. An MPEG-4(H.264) over MPEG-2 TS converter 222 converts the MPEG-4 file into the MPEG-2 TS.
Meanwhile, a realistic effect data aggregator 223 collects the realistic effect by using the sensor and converts the collected realistic effect into XML-type metadata. One detailed example of the realistic effect metadata is shown in
According to the exemplary embodiment of the present invention, the production & delivery network 200 transfers the values sensed by the sensor to the home network 201. Furthermore, at the side of the home network 201, the current temperature of the home in which the home network is installed needs to be recorded in order to process the XML data of
The sensor effect metadata collected by the realistic effect data aggregator 223 is combined into the MPEG-2 TS, like the image, through a realistic effect data inserter & UDP/IP multicasting module 224. Further, the realistic effect data inserter & UDP/IP multicasting module 224 multicasts the combined MPEG-2 TS to the home network 201.
The data transmitted to the home network 201 is received by an MPEG-2 TS receiver & MPEG-2 TS remultiplexer 225. The received MPEG-2 TS is separated into the 2D or 3D image and the realistic effect data; the image is transferred to an MPEG-4(H.264) player 229, while the realistic effect data is transferred to a realistic verification simulator 227 and a realistic effect data analyzer 226.
The realistic effect data analyzer 226 analyzes the transmitted realistic effect and converts the realistic effect into the corresponding device control code, and thereafter, sends a control command to a device controller 228.
The data that can be input into the NVS may include, for example, raw data or MPEG-4 NAL data. The raw data may be converted into the NVS's own format, while the MPEG-4 NAL format needs to be processed as shown in
In the case where an MPEG-4(H.264) NAL frame 400 is input into the NVS, a program association table (PAT) 402 and a program map table (PMT) 403 of the MPEG-2 TS type are generated by referring to a packetized elementary stream packet (PES packet) in the frame.
That is, stream_ID in the PES-packet is analyzed, and audio and video are distinguished from each other to generate and insert a stream information descriptor of an elementary stream (ES) 404. In addition, in generating the stream information descriptor, predetermined PID information is allocated and converted into the MPEG-2 TS. In this case, the audio configures an AUDIO-MPEG-2 TS 406, which is stored in a payload of the MPEG-2 TS and the video configures a VIDEO-MPEG-2 TS 405, which is stored in the payload of the MPEG-2 TS. Information thereon is stored in an MPEG-2 TS header 407.
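The stream_ID analysis can be illustrated with the stream_id ranges defined by MPEG-2 Systems, which is presumably what the distinction above relies on; the routing comments reflect the AUDIO/VIDEO TS split described in the text.

```python
def classify_stream_id(stream_id):
    """Distinguish audio and video PES packets by stream_id.

    Ranges follow the MPEG-2 Systems convention:
    0xC0-0xDF are audio streams, 0xE0-0xEF are video streams.
    """
    if 0xC0 <= stream_id <= 0xDF:
        return "audio"   # routed to the AUDIO MPEG-2 TS payload
    if 0xE0 <= stream_id <= 0xEF:
        return "video"   # routed to the VIDEO MPEG-2 TS payload
    return "other"       # e.g. 0xBD (private_stream_1)
```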
As shown in
Furthermore, when the XML document is generated according to the above process, the XML document is packetized into the form of an access unit (AU) 506 to be transmitted over the MPEG-2 TS. The packetized AU is carried in an MPEG-2 private section, which is stored in the MPEG-2 TS. The MPEG-2 TS generated through such a process is transmitted by using UDP/IP communication.
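A simplified sketch of wrapping a packetized AU in an MPEG-2 private section follows. Real sections also carry version fields and a CRC when the section_syntax_indicator is set; those are omitted here, and the table_id is an illustrative value from the private range.

```python
def private_section(au_bytes, table_id=0x40):
    """Wrap a packetized access unit in a minimal MPEG-2 private section.

    Header layout (short form): table_id, one flags byte
    (section_syntax_indicator=0, private_indicator=1, reserved='11',
    plus the top 4 bits of section_length), then the low length byte.
    """
    length = len(au_bytes)
    if length > 0xFFF:
        raise ValueError("AU too large for a single section")
    header = bytes([table_id,
                    0x70 | (length >> 8),  # flags + length high nibble
                    length & 0xFF])
    return header + au_bytes
```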
The server receives the MPEG-2 TS and thereafter identifies the textual format for multimedia description streams (TeM). In addition, the private section of the MPEG-2 TS is reassembled into AU packets, and thereafter the AUs are stacked on an XSL queue 508 through an XSL composer 507. Thereafter, each AU is compared with an initial description tree 510 stored in the home server by using an XSLT engine 509, a changed description tree (CDT) 511 is configured, and the changed part is transmitted to the relevant device. The relevant device may be any of diverse devices in the home network that are controlled by the home server.
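The comparison against the initial description tree can be sketched with flat dictionaries standing in for the XML trees; the property-path syntax is an assumption made for illustration, not the document's actual tree model.

```python
def changed_description(initial, updated):
    """Compute the changed part of a description tree.

    Trees are modeled as flat dicts mapping a device property path
    (e.g. "fan/intensity") to a value; only properties whose value
    differs from, or is absent in, the initial tree are returned and
    forwarded to the relevant device.
    """
    return {path: value for path, value in updated.items()
            if initial.get(path) != value}
```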
As shown in
In
A broadcasting server 900 according to the exemplary embodiment of the present invention includes a first receiving module 902 receiving image data encoded by an encoder 920 encoding image data of a predetermined place photographed by multiple cameras. Further, the broadcasting server 900 further includes a second receiving module 901 receiving realistic effect data from at least one sensor 910 sensing state information of the predetermined place while the photographing is performed.
Furthermore, a synchronization module 903 of the broadcasting server 900 synchronizes the realistic effect data and the image data by considering an encoding time of the image data and a TS generating module 904 generates a TS including the realistic effect data and the image data based on the synchronization. In addition, a transmission module 905 of the broadcasting server 900 is designed to transmit the generated TS to a home network.
The encoder 920 corresponds to, for example, the NVS encoder 101 shown in
Meanwhile, a control device 950 of the home network according to the exemplary embodiment of the present invention includes a receiving module 951 receiving a TS including image data of a predetermined place photographed by multiple cameras and realistic effect data received from at least one sensor sensing state information of the predetermined place. The control device 950 of the home network corresponds to, for example, a home server or a broadcasting receiver.
Furthermore, the control device 950 further includes an image processor 952, a realistic effect data analyzing module 953, and a transmission module 954.
The image processor 952 processes a first area corresponding to the image data in the received TS and the realistic effect data analyzing module 953 processes a second area corresponding to the realistic effect data in the received TS.
In addition, the transmission module 954 transmits a command signal depending on the realistic effect data analyzed by the realistic effect data analyzing module 953 to a corresponding device 960 of the home network. The corresponding device 960 may be a predetermined electronic appliance that is connected to a home network to transmit and receive data, such as an aroma emitter, an electric fan, a heater, or the like.
As shown in
Meanwhile, in the case where the home server or broadcasting receiver 1000 receives, from the broadcasting server, realistic effect data (e.g., temperature-related XML data) collected in real time at the time of photographing a 3D image, the electric fan 1010 is designed to output a higher wind intensity 1030 according to a control command from the home server or broadcasting receiver 1000, as shown in
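One way to realize such temperature-driven control is a simple threshold mapping. The thresholds below are illustrative assumptions; the embodiment only requires that a higher sensed temperature produce a higher wind intensity.

```python
def fan_intensity(sensed_temp_c, thresholds=(24.0, 28.0, 32.0)):
    """Map a temperature sensed at the shooting location to a fan level.

    Returns 0 (off) through len(thresholds) (strongest); each threshold
    crossed raises the wind intensity by one level.
    """
    level = 0
    for threshold in thresholds:
        if sensed_temp_c >= threshold:
            level += 1
    return level
```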
As shown in
Meanwhile, in the case where the home server or broadcasting receiver 1100 receives, from the broadcasting server, realistic effect data (e.g., temperature-related XML data) collected in real time at the time of photographing the 3D image, the heater 1110 is designed to output a larger heating amount 1130 according to the control command from the home server or broadcasting receiver 1100, as shown in
As shown in
Meanwhile, in the case where the home server or broadcasting receiver 1200 receives, from the broadcasting server, realistic effect data (e.g., fragrance-related XML data) collected in real time at the time of photographing the 3D image, the aroma emitter 1210 is designed to output fragrance B 1230 according to the control command from the home server or broadcasting receiver 1200, as shown in
As shown in
Meanwhile, in the case where the home server or broadcasting receiver 1300 receives, from the broadcasting server, realistic effect data (e.g., contrast-related XML data) collected in real time at the time of photographing the 3D image, the entire curtain 1310 is designed to be changed to an opened state according to the control command from the home server or broadcasting receiver 1300, as shown in
As shown in
Furthermore, a transport stream (TS) including the realistic effect data and the image data based on the synchronization is generated (S1440) and the generated TS is transmitted to a home network (S1450).
According to another exemplary embodiment of the present invention, as shown in
Further, according to yet another exemplary embodiment of the present invention, operation S1430 includes delaying a synchronization timing of the realistic effect data by the time required for the first to third conversions. In addition, operation S1450 includes multicasting the TS generated in operation S1440 to a server of the home network on the basis of UDP/IP.
In addition, the first data format corresponds to, for example, an MPEG-4 file format and the second data format corresponds to, for example, an MPEG-2 file format. The multiple cameras include, for example, a 2D camera and a 3D camera for a 3D service. Further, the at least one sensor includes, for example, at least one of a temperature sensor, a wind velocity sensor, a positional information sensor, and a humidity sensor.
Meanwhile, in the control method of the home network for the 4D broadcasting service according to the exemplary embodiment of the present invention, a TS including image data of a predetermined place photographed by multiple cameras and realistic effect data received from at least one sensor sensing state information of the predetermined place is received (S1460).
Furthermore, a first area corresponding to the image data in the received TS is transmitted to an image processor (S1470) and a second area corresponding to the realistic effect data in the received TS is transmitted to a realistic effect data analyzing module (S1480). In addition, a command signal depending on the realistic effect data analyzed by the realistic effect data analyzing module is transmitted to a corresponding device of the home network (S1490).
According to another exemplary embodiment of the present invention, operation S1460 includes receiving a TS in which encoded image data and realistic effect data sensed while the image data is photographed are synchronized from a broadcasting server.
The realistic effect data includes, for example, first data indicating a type of a predetermined device to be controlled among at least one device connected with the home network and second data indicating a function of the predetermined device, the first and second data being mapped to each other. More specifically, for example, the realistic effect data may be designed as shown in
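Such a device-type/function mapping could be carried as a small XML fragment and parsed into a command on the home-network side. The element names `<Device>` and `<Function>` are assumptions for illustration; the actual layout is given in the referenced figure.

```python
import xml.etree.ElementTree as ET

def to_command(effect_xml):
    """Parse realistic effect metadata into a (device, function) command.

    The first datum names the device type to control and the second
    names the function of that device, per the mapping described above.
    """
    root = ET.fromstring(effect_xml)
    return (root.findtext("Device"), root.findtext("Function"))
```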
As described above, according to the exemplary embodiment of the present invention, a realistic effect is not arbitrarily added to a corresponding moving picture after the photographing of the 3D image is completely terminated and the editing of the photographed 3D image ends; rather, the situation at the time of actually photographing the 3D image included in the moving picture is transmitted, reproducing an effect that allows a user to feel that he or she is actually present at that place. Further, by applying this effect to a broadcast, media can be produced more effectively in fields where conveying the actual situation matters, such as the news, documentaries, and the like.
As described above, the exemplary embodiments have been described and illustrated in the drawings and the specification. Specific terms have been used herein, but they are used only for the purpose of describing the present invention and not for defining the meaning or limiting the scope of the present invention, which is disclosed in the appended claims. Therefore, it will be appreciated by those skilled in the art that various modifications may be made and other equivalent embodiments are available. Accordingly, the actual technical protection scope of the present invention must be determined by the spirit of the appended claims.
Number | Date | Country | Kind |
---|---|---|---
10-2010-0115687 | Nov 2010 | KR | national |