METHOD AND APPARATUS FOR CONFIGURING CONTENT IN A BROADCAST SYSTEM

Abstract
A method and an apparatus are provided for configuring content in a broadcast system. The method includes receiving a media processing unit (MPU) including a data part and a control part, the MPU being processed independently, wherein the data part includes media data and the control part includes parameters related to the media data; and processing the received MPU, wherein the MPU comprises at least one fragmentation unit, wherein the parameters comprise a first parameter indicating a sequence number of the MPU, and wherein the sequence number of the MPU is unique to where the MPU belongs.
Description
BACKGROUND OF THE INVENTION
1. Field of the Invention

The present invention relates generally to a method and an apparatus for configuring content in a broadcast system, and more particularly, to a method and an apparatus for configuring a data unit of content in a broadcast system supporting multimedia services based on an Internet Protocol (IP).


2. Description of the Related Art

A conventional broadcast network generally uses the Moving Picture Experts Group-2 Transport Stream (MPEG-2 TS) for transmission of multimedia content. The MPEG-2 TS is a representative transmission technique that allows a plurality of broadcast programs (a plurality of encoded video bit streams) to be multiplexed and transmitted in a transmission environment having errors. For example, the MPEG-2 TS is appropriately used in digital TeleVision (TV) broadcasting, etc.



FIG. 1 illustrates a layer structure supporting a conventional MPEG-2 TS.


Referring to FIG. 1, the conventional MPEG-2 TS layer includes a media coding layer 110, a sync (synchronization) layer 120, a delivery layer 130, a network layer 140, a data link layer 150, and a physical layer 160. The media coding layer 110 and the sync layer 120 configure media data into a format usable for recording or transmission. The delivery layer 130, the network layer 140, the data link layer 150, and the physical layer 160 configure a multimedia frame for recording a data block having the format configured by the sync layer 120 in a separate recording medium or for transmitting the data block thereto. The configured multimedia frame is transmitted to a subscriber terminal, etc., through a predetermined network.


More specifically, the sync layer 120 includes a fragment block 122 and an access unit 124, and the delivery layer 130 includes an MPEG-2 TS/MPEG-4 (MP4) Real-time Transport Protocol (RTP) Payload Format/File delivery over unidirectional transport (FLUTE) block 132, an RTP/HyperText Transfer Protocol (HTTP) block 134, and a User Datagram Protocol (UDP)/Transmission Control Protocol (TCP) block 136.


However, the MPEG-2 TS has several limitations in supporting multimedia services. Specifically, the MPEG-2 TS has the limitations of inefficient transmission due to unidirectional communication and a fixed frame size, and the generation of unnecessary overhead due to the usage of a transport protocol and an IP specialized for audio/video data, etc.


Accordingly, the MPEG Media Transport (MMT) standard has been newly proposed by MPEG in order to overcome the above-described limitations of the MPEG-2 TS.


For example, the MMT standard may be applied for the efficient transmission of complex content through heterogeneous networks. Here, the complex content refers to a set of content having multimedia elements, such as video/audio applications, etc. The heterogeneous networks refer to networks in which a broadcast network and a communication network coexist.


In addition, the MMT standard attempts to define a transmission technique that is friendlier to IP, which is a basic technique of transmission networks for multimedia services.


Accordingly, the MMT standard attempts to provide representative, efficient MPEG transmission techniques in a multimedia service environment that is changing toward an IP basis, and in this respect, the standardization of and continuous research on the MMT standard have progressed.



FIG. 2 illustrates a conventional layer structure of an MMT system for transmission of a multimedia frame according to multi-service/content through heterogeneous networks.


Referring to FIG. 2, an MMT system for configuring and transmitting a multimedia frame includes a media coding layer 210, an encapsulation layer (Layer E) 220, delivery layers (Layer D) 230 and 290, a network layer 240, a data link layer 250, a physical layer 260, and control layers (Layer C) 270 and 280. These layers form three technique areas, i.e., Layer E 220, Layers D 230 and 290, and Layers C 270 and 280. Layer E 220 controls complex content generation, Layers D 230 and 290 control the transmission of the generated complex content through the heterogeneous networks, and Layers C 270 and 280 control the consumption management and the transmission management of the complex content.


Layer E 220 includes three layers, i.e., MMT E.3 222, MMT E.2 224, and MMT E.1 226. The MMT E.3 222 generates a fragment, which is a basic unit for the MMT service, based on coded multimedia data provided from the media coding layer 210. The MMT E.2 224 generates an Access Unit (AU) for the MMT service by using the fragment generated by the MMT E.3 222. The AU is the smallest data unit having a unique presentation time. The MMT E.1 226 combines or divides the AUs provided by the MMT E.2 224 to generate a format for generation, storage, and transmission of the complex content.


Layer D includes three layers, i.e., MMT D.1 232, MMT D.2 234, and MMT D.3 290. The MMT D.1 232 operates with an Application Protocol (AP) functioning similarly to the RTP or the HTTP, the MMT D.2 234 operates with a network layer protocol functioning similarly to the UDP or the TCP, and the MMT D.3 290 controls optimization between the layers included in Layer E 220 and the layers included in Layer D 230.


Layer C includes two layers, i.e., MMT C.1 270 and MMT C.2 280. The MMT C.1 270 provides information related to the generation and the consumption of the complex content, and the MMT C.2 280 provides information related to the transmission of the complex content.



FIG. 3 illustrates a conventional data transmission layer for a broadcast system.


Referring to FIG. 3, Layer E in a transmission side stores elements of the content, such as video and audio, that are encoded into a Network Abstraction Layer (NAL) unit, a fragment unit, etc., by a codec encoder, such as an Advanced Video Codec (AVC) or a Scalable Video Codec (SVC) encoder, in units of AUs in Layer E3, which is the top-level layer, and transmits the stored elements in units of AUs to Layer E2, which is a lower layer.


In the conventional technique, a definition and a construction of the AU transmitted from Layer E3 to Layer E2 depend on a codec.


Layer E2 structures a plurality of AUs, encapsulates the structured AUs based on Layer E2 units, stores the encapsulated AUs in units of Elementary Streams (ESs), and transmits the stored AUs to Layer E1, which is a next lower layer. Layer E1 indicates a relation and a construction of the elements of the content, such as the video and audio, encapsulates the elements together with the ESs, and transmits the encapsulated elements to Layer D1 in units of packages.


Layer D1 divides a received package into packets in a form suitable for transmission to a lower layer, and the lower layer then transmits the packets to a next lower layer.


Layer D in a reception side collects the packets transmitted from the transmission side and configures the collected packets into the package of Layer E1. A receiver recognizes the elements of the content within the package, a relation between the elements of the content, and information on a construction of the elements of the content, and transfers the recognized information to a content element relation/construction processor and a content element processor. The content element relation/construction processor transfers the respective elements to the content element processor for proper reproduction of the entire content, and the content element processor controls the elements to be reproduced at a set time and displayed at a set position on a screen.


However, a conventional Layer E2 technique provides only the AU itself or information on a processing time for the AU reproduction, e.g., a Decoding Time Stamp (DTS) or a Composition Time Stamp (CTS) and a Random Access Point (RAP). Accordingly, the utilization of the conventional Layer E2 technique is limited.


SUMMARY OF THE INVENTION

Accordingly, the present invention is designed to address at least the above-described problems and/or disadvantages occurring in the prior art, and to provide at least the advantages described below.


An aspect of the present invention is to provide a method of configuring AUs to a data unit for efficient reproduction of the AUs in Layer E2.


In accordance with an aspect of the present invention, a method is provided for receiving media data in a broadcast system. The method includes receiving a media processing unit (MPU) including a data part and a control part, the MPU being processed independently, wherein the data part includes media data and the control part includes parameters related to the media data; and processing the received MPU, wherein the MPU comprises at least one fragmentation unit, wherein the parameters comprise a first parameter indicating a sequence number of the MPU, and wherein the sequence number of the MPU is unique to where the MPU belongs.





BRIEF DESCRIPTION OF THE DRAWINGS

The above and other aspects, features, and advantages of the present invention will be more apparent from the following detailed description taken in conjunction with the accompanying drawings, in which:



FIG. 1 is a block diagram illustrating a layer structure for a conventional MPEG-2 TS;



FIG. 2 is a block diagram illustrating an MMT service by a broadcast system based on a conventional MMT standard;



FIG. 3 illustrates a conventional data transmission layer diagram in a broadcast system;



FIG. 4 illustrates a conventional reproduction flow of a Data Unit (DU) configured through encapsulation of AUs one by one;



FIG. 5 illustrates a conventional process of receiving and reproducing a DU;



FIG. 6 illustrates a process of receiving and reproducing a DU according to an embodiment of the present invention;



FIG. 7A illustrates a construction of conventional AUs;



FIG. 7B illustrates a construction of AUs according to an embodiment of the present invention;



FIGS. 8A and 8B are diagrams illustrating a comparison of a temporal scalability according to a construction of AUs within a DU;



FIGS. 9A and 9B are diagrams illustrating a comparison of an Application Layer-Forward Error Correction (AL-FEC) according to a construction of AUs within a DU; and



FIG. 10 illustrates a construction of a DU according to an embodiment of the present invention.





DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION

Hereinafter, various embodiments of the present invention will be described with reference to the accompanying drawings in detail. In the following description, a detailed explanation of known related functions and constitutions may be omitted to avoid unnecessarily obscuring the subject matter of the present invention. Further, the terms used in the description are defined considering the functions of the present invention and may vary depending on the intention or usual practice of a user or operator. Therefore, the definitions should be made based on the entire content of the description.


In accordance with an embodiment of the present invention, a method is proposed for configuring DUs by grouping a plurality of AUs. The DUs are continuously concatenated to become Elementary Streams (ES), which become data transmitted from Layer E2 to Layer E1.


Conventionally, a DU is configured by encapsulating the AUs one by one, a DTS and a CTS are assigned to each AU, and a picture type (Intra (I)-picture, Bidirectionally Predictive (B)-picture, or Predictive (P)-picture) of a corresponding AU is expressed in each AU, or whether the corresponding AU is a RAP is indicated.



FIGS. 4 and 5 illustrate a reproduction flow of a conventional DU configured by encapsulating the AUs one by one, and FIG. 6 illustrates a reproduction flow of a DU configured with a plurality of AUs according to an embodiment of the present invention.


Referring to FIG. 4, when data begins to be received from a center of a DU string (401), because there is a probability that a corresponding DU is not the RAP, i.e., the I-picture, a receiver searches for the RAP, i.e., a DU of the I-picture type, by continuously examining subsequent concatenated DUs (402), such that it is possible to initiate the reproduction of the DU (403).


In accordance with an embodiment of the present invention, a DU is provided by grouping a plurality of AUs, and further by configuring the DU in units of Groups Of Pictures (GOPs), as compared to the generation of a DU for each of the respective AUs. When the DU is configured in units of GOPs, all DUs may be independently reproduced, without having to wait until a next DU is decoded, eliminating a complex buffer control requirement.


Further, as illustrated in FIG. 5, when Layer E1 (501) instructs reproduction of a limited part of an ES, if the DU merely includes one AU, there is no guarantee that the DU corresponding to the instructed CTS is the I-picture. Therefore, in order to reproduce the DU from the instructed time point, it is necessary for the receiver to search the DUs prior to the corresponding DU in an inverse direction (502), decode the DUs from the I-picture (503), and reproduce the DU (504).


However, in accordance with an embodiment of the present invention, as illustrated in FIG. 6, when the DU is configured in a unit of a GOP (as indicated by a dashed line), the reproduction of the DU from a time (601) instructed in Layer E1 does not require an inverse-directional search of the DUs (602 through 604).
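For illustration only, the following C-style sketch shows how reproduction from an instructed time point reduces, with GOP-aligned DUs, to selecting the DU whose duration covers that time, with no inverse-directional search. The structure, field, and function names are hypothetical and are not defined by this description; only the Decoding Time of DU and Duration of DU header fields described with reference to FIG. 10 are assumed to be available.

    #include <stdint.h>

    /* Hypothetical per-DU timing record; the decoding time and duration of a DU
     * are carried in its header (see FIG. 10). */
    typedef struct {
        uint64_t decoding_time; /* Decoding Time of DU */
        uint64_t duration;      /* Duration of DU */
    } du_timing;

    /* Returns the index of the first DU covering target_time, or -1 if none.
     * Because each GOP-aligned DU starts with an I-picture, decoding may begin
     * at the returned DU directly, without searching backward for a RAP. */
    int find_start_du(const du_timing *dus, int num_dus, uint64_t target_time)
    {
        for (int i = 0; i < num_dus; i++) {
            if (target_time < dus[i].decoding_time + dus[i].duration)
                return i;
        }
        return -1;
    }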


In accordance with an embodiment of the present invention, a DU may be configured with a plurality of GOP units. When the DU is configured with a plurality of GOP units, the I-pictures, the P-pictures, and the B-pictures are separately grouped and stored, and the respective data may be differently stored in three places.



FIG. 7A illustrates a construction of a conventional AU, and FIG. 7B illustrates a construction of an AU according to an embodiment of the present invention.


As illustrated in FIG. 7B, when the AUs are grouped according to a property and stored in the DU, even if a part of the DU fails to be transmitted during the transmission, it is possible to realize temporal scalability through a frame drop, etc. Further, because a transmission system utilizing an error correction method, such as the AL-FEC, may divide the scope recoverable with the AL-FEC into a part including the collected I-pictures, a part including the collected P- and B-pictures, etc., grouping the AUs according to a property and storing them in the DU is also helpful for reducing the transmission overhead due to the AL-FEC.



FIGS. 8A and 8B are diagrams illustrating a comparison of a temporal scalability according to a construction of AUs within a DU between a conventional art and an embodiment of the present invention.


Referring to FIGS. 8A and 8B, when the transmission of the DU is interrupted or an error is generated during the transmission of the DU, in FIG. 8A, it is impossible to view content after 8 seconds. However, in FIG. 8B, it is possible to view content for up to 14 seconds although it has a low temporal scalability.



FIGS. 9A and 9B are diagrams illustrating a comparison of an AL-FEC according to a construction of AUs within a DU between the conventional art and an embodiment of the present invention.


As illustrated in FIG. 9A, when the I-pictures, the P-pictures, and the B-pictures are arranged without any consideration of picture type, it is impossible to identify the construction of the AUs within the DU. Consequently, the AL-FEC must then be applied to all durations.


However, in accordance with an embodiment of the present invention, when the AUs are arranged according to picture type, because the AUs of the I-picture and P-picture affect a picture quality, it is sufficient to apply AL-FEC only to the AUs in the I-picture and P-picture, as indicated by a thick line of FIG. 9B. Accordingly, the overhead of the AL-FEC is decreased over the remaining durations, i.e., AUs of the B-pictures.
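As a minimal sketch under the assumption that the picture type of each AU is known from the AU arrangement described above (the type codes, function name, and parameters below are hypothetical), selecting only the I- and P-picture AUs as the data to be protected by the AL-FEC could look as follows:

    /* Hypothetical picture-type codes for illustration. */
    enum picture_type { PIC_I, PIC_P, PIC_B };

    /* Copies the indices of the AUs that should be AL-FEC protected
     * (I- and P-pictures only, as in FIG. 9B) into out[]; returns the count. */
    int select_fec_protected_aus(const enum picture_type *types, int num_aus, int *out)
    {
        int count = 0;
        for (int i = 0; i < num_aus; i++) {
            if (types[i] == PIC_I || types[i] == PIC_P)
                out[count++] = i;
        }
        return count;
    }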


As described above, there are several advantages in configuring the DU in a unit of a GOP or in a plurality of units of GOPs.



FIG. 10 illustrates a construction of a DU according to an embodiment of the present invention.


Referring to FIG. 10, the DU includes a header 1001 and a set of AUs 1002 included in a GOP or a plurality of GOPs.


The header 1001 includes a DU description 1010, which includes information on the DU, an AU structure description 1020, which includes information on a construction of the AUs 1002, and AU information 1030, which includes information on each AU.


For example, the DU description 1010 may include the following information.


1) Length 1011: This information represents a size of the DU, and is a value obtained by adding the size of the remaining DU header and the size of the payload following the corresponding field. For example, the Length 1011 may be represented in units of bytes.


2) Sequence Number 1012: This information represents a sequence of a corresponding DU within the ES. Omission or duplicate reception between a plurality of continuous DUs may be identified using the Sequence Number 1012, as sketched in the example following this list of fields. When an increase of sequence numbers between a previous DU and a continuously received DU exceeds “1”, this indicates that an error was generated in the transmission of the DU.


3) Type of AU 1013: This information represents a type of the AUs included in the DU. For example, the AUs may be generally classified into “timed data” or “non-timed data”, expressed with “0” or “1”, respectively. Timed data, represented by “0”, includes the CTS and/or the DTS and corresponds to multimedia elements, such as video data and audio data. Non-timed data, represented by “1”, includes no CTS or DTS, and corresponds to general data, such as a picture or a file.


4) Decoding Time of DU 1014: This information represents a time to start decoding a first AU of the DU, as a representative value.


5) Duration of DU 1015: This information represents a temporal length of the DU. A value obtained by adding the duration to the CTS of the first AU of the DU is the same as the time at which the reproduction of the last decoded AU of the DU terminates.


6) Error Correction Code of DU 1016: For example, a Cyclic Redundancy Check (CRC), a parity bit, etc., may be used as a code for error correction.
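For illustration only, the six fields above and the omission/duplication check described for the Sequence Number 1012 may be sketched in C as follows. The field widths, the ordering, the names, and the 32-bit wrap-around handling are assumptions and are not defined by this description.

    #include <stdint.h>

    /* Hypothetical layout of the DU description 1010; field widths are assumed. */
    typedef struct {
        uint32_t length;              /* 1) size of remaining DU header and payload, in bytes */
        uint32_t sequence_number;     /* 2) sequence of the DU within the ES */
        uint8_t  type_of_au;          /* 3) 0: timed data, 1: non-timed data */
        uint64_t decoding_time_of_du; /* 4) decoding time of the first AU of the DU */
        uint64_t duration_of_du;      /* 5) temporal length of the DU */
        uint32_t error_correction;    /* 6) e.g., CRC or parity bits */
    } du_description;

    /* Returns 0 if the received DU continues the previous one, 1 if it is a
     * duplicate, and 2 if one or more DUs were omitted (increase exceeds 1). */
    int check_du_sequence(uint32_t prev_seq, uint32_t curr_seq)
    {
        uint32_t delta = curr_seq - prev_seq; /* unsigned arithmetic handles wrap-around */
        if (delta == 1)
            return 0;
        if (delta == 0)
            return 1;
        return 2; /* an error was generated in the transmission of the DU */
    }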


Further, an AU structure description 1020 may include the following information.


1) Number of AUs 1021: This information represents the number of AUs within the DU.


2) Pattern of AUs 1022: This information represents a structure and an arrangement pattern of the AUs. For example, the Pattern of AUs 1022 may be indicated with the values 0: open GOP, 1: closed GOP, 2: IPBIPB, 4: IIPPBB, 6: Unknown, or 8: reserved.


Each bit value is combined through an OR operation for use. For example, the construction of IPBIPB of the closed GOP is 1|2=3, as also shown in the sketch following the Size of Patterns 1023 field below.


Open GOP, represented by “0”, represents when the GOP is an open GOP. Closed GOP, represented by “1”, represents when the GOP is a closed GOP. Definitions of the open GOP and the closed GOP are the same as those of the conventional art.


IPBIPB, represented by “2”, represents when I-pictures, P-pictures, and B-pictures are collected based on each group and repeated at least two times within the DU, e.g., IPBBIPBB or IPPBBBBIPPBBBB. IIPPBB, represented by “4”, represents when I-pictures, P-pictures, and B-pictures are collected based on each group and repeated only one time within the DU, e.g., IIPPBBBB or IIPPPPBBBBBBBB. Unknown, represented by “6”, represents a failure to identify a pattern, and is used when an order of AUs is not changed.


Reserved, represented by “8”, represents a value reserved for later use.


3) Size of Patterns 1023: This information represents a size of each duration of a repeated pattern. For example, when the pattern IPBIPB is actually configured as IPPBBBBIPPBBBB, the lengths of duration I, duration PP, and duration BBBB are represented as three values in units of bytes.


The size of the pattern may be expressed as:

    • for(i=0; i<number_of_patterns; i++){Size of pattern;}
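For illustration only, the Pattern of AUs 1022 bit combination and the Size of Patterns 1023 loop described above may be sketched as follows; the constant and variable names are hypothetical, and the byte values are invented solely to show the layout.

    #include <stdint.h>

    /* Values listed for the Pattern of AUs 1022; the constant names are hypothetical. */
    enum au_pattern {
        AU_PATTERN_OPEN_GOP   = 0,
        AU_PATTERN_CLOSED_GOP = 1,
        AU_PATTERN_IPBIPB     = 2,
        AU_PATTERN_IIPPBB     = 4,
        AU_PATTERN_UNKNOWN    = 6,
        AU_PATTERN_RESERVED   = 8
    };

    /* The example from the text: an IPBIPB construction of a closed GOP. */
    int pattern_of_aus = AU_PATTERN_CLOSED_GOP | AU_PATTERN_IPBIPB; /* 1|2 = 3 */

    /* Size of Patterns 1023 for the IPPBBBBIPPBBBB example: three byte lengths,
     * one each for duration I, duration PP, and duration BBBB (values invented). */
    uint32_t number_of_patterns = 3;
    uint32_t size_of_patterns[3] = { 40000, 22000, 16000 };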


Further, the AU information 1030 may include the following information.


1) DTS of AUs 1031: This information represents the DTS of the AU, and may be expressed as “for(i=0;i<number_of_AUs;i++){Decoding timestamp of AU;}”.


2) CTS of AUs 1032: This information represents the CTS of the AU, and may be expressed as “for(i=0;i<number_of_AUs;i++){Composition timestamp of AU;}”.


3) Size of AUs 1033: This information represents a size of the AU in the unit of bytes, and may be expressed as “for(i=0;i<number_of_AUs;i++){Size of AU;}”.


4) Duration of AUs 1034: This information represents a temporal length of the AU, and may be expressed as “for(i=0;i<number_of_AUs;i++){Duration of AU;}”.


5) AU num of RAP 1035: This information represents the AU number of each RAP within the DU, and may be expressed as “for(i=0;i<number_of_RAPs;i++){AU number;}”.


6) Independent and disposable AUs 1036: This information represents a relationship between a corresponding AU and a different AU, and may be expressed as “for(i=0;i<number_of_AUs;i++){Independent and disposable value of AU;}”.


More specifically, when the corresponding AU is dependent on a different AU, the value of the Independent and Disposable AUs 1036 is “1”; when a different AU refers to the corresponding AU, the value of the Independent and Disposable AUs 1036 is “2”; and when the corresponding AU and a different AU have duplicated information, the value of the Independent and Disposable AUs 1036 is “4”.
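For illustration only, the AU information 1030 loops above and the Independent and Disposable AUs 1036 values may be sketched as follows. The structure, constant, and function names are hypothetical, and the reading of the values as bit flags that can be tested individually is an assumption based on the values described above.

    #include <stdint.h>

    /* Hypothetical per-DU arrays mirroring the "for(i=0;i<number_of_AUs;i++)" loops. */
    typedef struct {
        uint32_t  number_of_aus;
        uint64_t *dts_of_aus;                 /* 1) DTS of each AU */
        uint64_t *cts_of_aus;                 /* 2) CTS of each AU */
        uint32_t *size_of_aus;                /* 3) size of each AU, in bytes */
        uint64_t *duration_of_aus;            /* 4) temporal length of each AU */
        uint32_t  number_of_raps;
        uint32_t *au_num_of_rap;              /* 5) AU number of each RAP */
        uint8_t  *independent_and_disposable; /* 6) value per AU */
    } au_information;

    /* Values as described above; the constant names are hypothetical. */
    enum {
        AU_DEPENDS_ON_OTHERS   = 1, /* the AU is dependent on a different AU */
        AU_REFERENCED_BY_OTHER = 2, /* a different AU refers to the AU */
        AU_HAS_DUPLICATED_INFO = 4  /* the AU and a different AU have duplicated information */
    };

    /* An AU that no other AU refers to may be dropped without breaking decoding,
     * e.g., for the frame drop of the temporal scalability illustrated in FIG. 8B. */
    int au_is_disposable(uint8_t value)
    {
        return (value & AU_REFERENCED_BY_OTHER) == 0;
    }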


While the present invention has been shown and described with reference to certain embodiments and drawings thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims
  • 1. A method for receiving media data in a broadcast system, the method comprising: receiving a media processing unit (MPU) including a data part and a control part, the MPU being processed independently, wherein the data part includes media data and the control part includes parameters related to the media data; and processing the received MPU, wherein the MPU comprises at least one fragmentation unit, wherein the parameters comprise a first parameter indicating a sequence number of the MPU, and wherein the sequence number of the MPU is unique to where the MPU belongs.
  • 2. The method of claim 1, wherein the parameters further comprise a second parameter, based on the second parameter having a first value, the second parameter indicates that the at least one fragmentation unit comprises timed data including timeline information for decoding or presentation of content of the timed data, and based on the second parameter having a second value, the second parameter indicates that the at least one fragmentation unit comprises non-timed data, which does not include the timeline information for decoding or the presentation of the content of the non-timed data.
  • 3. The method of claim 1, wherein at least one packet received through the MPU includes information indicating if the at least one packet includes at least one random access point (RAP).
  • 4. The method of claim 1, wherein, according to the at least one fragmentation unit comprising the timed data, a first positioned fragmentation unit in the MPU is a start position enabling a playback of the media data to be started.
  • 5. The method of claim 1, wherein the media data in the data part corresponds to at least one group of pictures.
  • 6. The method of claim 1, wherein the control part includes information on a decoding order of each of the at least one fragmentation unit.
  • 7. The method of claim 1, wherein if the at least one fragmentation unit comprises timed data, the control part comprises information on a presentation duration of each of the at least one fragmentation unit and a presentation order of each of the at least one fragmentation unit.
Priority Claims (1)
Number Date Country Kind
10-2011-0023578 Mar 2011 KR national
PRIORITY

This application is a Continuation Application of U.S. patent application Ser. No. 13/421,375, filed on Mar. 15, 2012, and claims priority under 35 U.S.C. § 119(a) to Korean Patent Application Serial No. 10-2011-0023578, which was filed in the Korean Industrial Property Office on Mar. 16, 2011, the entire content of which is incorporated herein by reference.

Continuations (1)
Number Date Country
Parent 13421375 Mar 2012 US
Child 16588417 US