The present technique relates to a transmission apparatus, a transmission method, a reception apparatus, and a reception method, and more particularly to a transmission apparatus and the like that transmit image data exhibiting ultra-high definition at a high frame rate.
In a reception environment in which a fixed receiver and a mobile receiver share the same transmission band, it is conceivable, for the purpose of efficiently utilizing the transmission bit rate, that an image service (video service) for the fixed receiver, in which the definition is considered to be high, and an image service for the mobile receiver, in which the definition is considered to be medium, share a stream. In this case, the whole bit rate can be reduced as compared with a so-called simulcast service which separately carries out a service for the fixed receiver and a service for the mobile receiver. For example, PTL 1 describes that media encoding is scalably carried out to produce a stream of a basic layer for an image service providing low definition and a stream of an enhancement layer for an image service providing high definition, and a broadcasting signal containing these streams is transmitted.
On the other hand, when smoothness or sharpness of motion in a sports scene or the like is required, a so-called video service at a high frame rate is required in which a shutter speed is increased to raise the frame rate. When the service at the high frame rate is carried out, it is conceivable that a moving image captured with a camera using a high-speed frame shutter is converted into a moving image sequence having a lower frequency than that of the captured moving image, and the resulting sequence is transmitted. The image obtained by using the high-speed frame shutter offers an effect in which motion blur is reduced to realize an image quality having high sharpness. On the other hand, when compatibility with a past receiver at the normal frame rate is intended to be obtained, the image obtained by using the high-speed frame shutter involves a problem of a strobing effect, because only a part of the video at the high frame rate, rather than the whole of it, is displayed. The present applicant previously proposed a technique with which a material of an image captured with a high-speed frame shutter is converted so that a past receiver carrying out decoding at the normal frame rate is made to display an image of a certain or higher image quality (refer to PTL 2).
[PTL 1]
[PTL 2]
It is an object of the present technique to transmit image data exhibiting ultra-high definition at a high frame rate so that backward compatibility can be satisfactorily obtained on a reception side.
A concept of the present technique lies in: a transmission apparatus provided with an image processing portion, an image encoding portion, and a transmission portion. In this case, the image processing portion serves to process image data having ultra-high definition at a high frame rate, thereby obtaining image data, having a basic format, from which an image having high definition at a basic frame rate is to be obtained, image data, having a first enhancement format, from which an image having high definition at a high frame rate is to be obtained, image data, having a second enhancement format, from which an image having ultra-high definition at a basic frame rate is to be obtained, and image data, having a third enhancement format, from which an image having ultra-high definition at a high frame rate is to be obtained. The image encoding portion serves to produce a basic video stream containing encoded image data of the image data having the basic format, and a predetermined number of enhancement video streams containing encoded image data of the image data having the first to third enhancement formats. The transmission portion serves to transmit a container having a predetermined format containing the basic video stream and the predetermined number of enhancement video streams.
The image processing portion executes mixing processing at a first ratio in units of temporally continuous two pictures in the image data having the ultra-high definition at the high frame rate to obtain first image data as image data having the basic frame rate, and executes mixing processing at a second ratio in units of the temporally continuous two pictures to obtain second image data as image data of an enhancement frame at the high frame rate.
The image processing portion executes down-scale processing for the first image data to obtain the image data having the basic format, and obtains a difference between third image data obtained by executing up-scale processing for the image data having the basic format, and the first image data to obtain the image data having the second enhancement format.
In addition, the image processing portion executes down-scale processing for the second image data to obtain the image data having the first enhancement format, and obtains a difference between fourth image data obtained by executing up-scale processing for the image data having the first enhancement format, and the second image data to obtain the image data having the third enhancement format.
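The decomposition described above (mixing of two temporally continuous pictures, down-scaling, and difference extraction) can be sketched as follows. This is only a minimal illustration in Python with NumPy; the block-average down-scaling, nearest-neighbour up-scaling, and the example mixing ratios are assumptions for the sketch and are not specified by the present technique.

```python
import numpy as np

def downscale(img, s=2):
    # block-average down-scale; a stand-in for the down-scale processing
    h, w = img.shape
    return img.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upscale(img, s=2):
    # nearest-neighbour up-scale; a stand-in for the up-scale processing
    return img.repeat(s, axis=0).repeat(s, axis=1)

def build_formats(uhd_hfr, r1=0.5, r2=1.0, s=2):
    """Derive the basic format and the first to third enhancement formats
    from ultra-high-definition pictures at the high frame rate (a list of
    2-D luma arrays).  r1 and r2 are the first and second mixing ratios;
    they must differ so that the reception side can undo the mixing."""
    basic, enh1, enh2, enh3 = [], [], [], []
    # units of temporally continuous two pictures (a, b)
    for a, b in zip(uhd_hfr[0::2], uhd_hfr[1::2]):
        first = r1 * a + (1.0 - r1) * b        # first image data (basic frame rate)
        second = r2 * a + (1.0 - r2) * b       # second image data (enhancement frame)
        base = downscale(first, s)             # basic format
        e1 = downscale(second, s)              # first enhancement format
        basic.append(base)
        enh1.append(e1)
        enh2.append(first - upscale(base, s))  # second enhancement format (spatial difference)
        enh3.append(second - upscale(e1, s))   # third enhancement format (spatial difference)
    return basic, enh1, enh2, enh3
```

By construction, adding the second enhancement format to the up-scaled basic format reproduces the first image data exactly, and likewise for the first and third enhancement formats and the second image data.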
With the present technique, the image processing portion processes image data having the high frame rate and the ultra-high definition. As a result, there are obtained the image data, having the basic format, from which the image having the high definition at the basic frame rate is to be obtained, the image data, having the first enhancement format, from which the image having the high definition at the high frame rate is to be obtained, the image data, having the second enhancement format, from which the image having the ultra-high definition at the basic frame rate is to be obtained, and the image data, having the third enhancement format, from which the image having the ultra-high definition at the high frame rate is to be obtained.
Here, the down-scale processing is executed for the first image data obtained by executing the mixing processing at the first ratio in units of the temporally continuous two pictures in image data having the ultra-high definition at the high frame rate, thereby obtaining the image data having the basic format. The difference between the third image data obtained by executing the up-scale processing for the image data having the basic format, and the first image data is obtained, thereby obtaining the image data having the second enhancement format. The down-scale processing is executed for the second image data obtained by executing the mixing processing at the second ratio in units of the temporally continuous two pictures, thereby obtaining the image data having the first enhancement format. The difference between the fourth image data obtained by executing the up-scale processing for the image data having the first enhancement format, and the second image data is obtained, thereby obtaining the image data having the third enhancement format.
The image encoding portion produces the basic video stream containing the encoded image data of the image data having the basic format, and a predetermined number of enhancement video streams containing the encoded image data of the image data having the first to third enhancement formats. For example, the image encoding portion may be configured to produce the basic video stream containing the encoded image data of the image data having the basic format, together with either three enhancement video streams each containing the encoded image data of the image data having a corresponding one of the first to third enhancement formats, or one enhancement video stream containing the whole of the encoded image data of the image data having the first to third enhancement formats. Then, the transmission portion transmits the container having the predetermined format and containing the basic video stream and the predetermined number of enhancement video streams.
In such a way, with the present technique, there are transmitted the basic video stream containing the encoded image data of the image data, having the basic format, from which the image having the high definition at the basic frame rate is to be obtained, and the predetermined number of enhancement video streams. In this case, the predetermined number of enhancement video streams contain the encoded image data of the image data, having the first enhancement format, from which the image having the high definition at the high frame rate is to be obtained, the encoded image data of the image data, having the second enhancement format, from which the image having the ultra-high definition at the basic frame rate is to be obtained, and the encoded image data of the image data, having the third enhancement format, from which the image having the ultra-high definition at the high frame rate is to be obtained. For this reason, the image data exhibiting the ultra-high definition at the high frame rate is transmitted so that the backward compatibility can be satisfactorily obtained on the reception side.
For example, in case of the receiver having the decoding ability to be able to process the image data having the high definition at the basic frame rate, the image having the high definition at the basic frame rate can be displayed by processing only the basic video stream. In addition, for example, in case of the receiver having the decoding ability to be able to process the image data having the high definition at the high frame rate, the image having the high definition at the high frame rate can be displayed by processing both the basic video stream and the enhancement video stream. In addition, for example, in case of the receiver having the decoding ability to be able to process the image data having the ultra-high definition at the basic frame rate, the image having the ultra-high definition at the basic frame rate can be displayed by processing both the basic video stream and the enhancement video stream. In addition, for example, in case of the receiver having the decoding ability to be able to process the image data having the ultra-high definition at the high frame rate, the image having the ultra-high definition at the high frame rate can be displayed by processing both the basic video stream and the enhancement video streams.
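The four receiver cases above can be summarized as a small dispatch table. The capability labels and the stream identifiers below are hypothetical names chosen for illustration only ("basic" stands for the basic video stream, and "enh1" to "enh3" for streams carrying the first to third enhancement formats).

```python
def streams_to_process(decoding_ability):
    """Return the streams a receiver should process for its decoding
    ability (hypothetical labels; not names defined by the technique)."""
    table = {
        "HD/basic-rate":  ["basic"],                          # high definition, basic frame rate
        "HD/high-rate":   ["basic", "enh1"],                  # high definition, high frame rate
        "UHD/basic-rate": ["basic", "enh2"],                  # ultra-high definition, basic frame rate
        "UHD/high-rate":  ["basic", "enh1", "enh2", "enh3"],  # ultra-high definition, high frame rate
    }
    return table[decoding_ability]
```

Every case includes the basic video stream, which is what provides the backward compatibility.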
In addition, with the present technique, the down-scale processing is executed for the first image data which is obtained by executing the mixing processing at the first ratio in units of the temporally continuous two pictures in the image data exhibiting the ultra-high definition at the high frame rate, thereby obtaining the image data having the basic format. For this reason, the image having the high definition at the basic frame rate, which is displayed by processing only the basic video stream on the reception side, becomes a smooth image in which the strobing effect is suppressed.
It should be noted that with the present technique, for example, the transmission apparatus may further include an information inserting portion. The information inserting portion serves to insert identification information exhibiting temporal scalable into the encoded image data of the image data having the first enhancement format, and/or a container position corresponding to the encoded image data, insert identification information exhibiting spatial scalable into the encoded image data of the image data having the second enhancement format, and/or a container position corresponding to the encoded image data, and insert identification information exhibiting the temporal scalable and the spatial scalable into the encoded image data of the image data having the third enhancement format, and/or the container position corresponding to the encoded image data. By the insertion of the identification information, the reception side can readily grasp whether the pieces of image data having the respective enhancement formats pertain to the spatial scalable or the temporal scalable.
In this case, for example, the information inserting portion may be configured to further insert information exhibiting a ratio of the spatial scalable into the encoded image data of the image data having the second and third enhancement formats, and/or the container position corresponding to the encoded image data. The reception side can suitably execute the processing for the spatial scalable by using the information exhibiting the ratio of the spatial scalable, and can satisfactorily obtain the image data having the ultra-high definition.
In addition, in this case, the information inserting portion may be configured to further insert identification information exhibiting that the image data having the basic format is image data obtained by executing the mixing processing into the pieces of encoded image data of the image data having the first and third enhancement formats, and/or the container position corresponding to the encoded image data. By the insertion of the identification information, the reception side can readily grasp that the image data having the basic format is image data obtained by executing the mixing processing.
In addition, in this case, the information inserting portion may be configured to further insert ratio information in the mixing processing (first ratio information and second ratio information) into the pieces of encoded image data of the image data having the first and third enhancement formats, and/or the container position corresponding to the encoded image data. By using the ratio information in the mixing processing, the reception side can suitably execute the processing for the temporal scalable and can satisfactorily obtain the image data at the high frame rate.
In addition, with the present technique, for example, the transmission apparatus may be configured to further include a transmission portion for transmitting a metafile having meta information with which a reception apparatus acquires the basic video stream and the predetermined number of enhancement video streams. In this case, information exhibiting a response of scalability may be inserted into the metafile. From the information exhibiting the response of the scalability inserted into the metafile in such a way, the reception side can readily recognize the response of the scalability, and can acquire only the necessary stream or encoded image data and efficiently process it.
In addition, another concept of the present technique lies in a reception apparatus including a reception portion. In this case, the reception portion serves to receive a container having a predetermined format containing a basic video stream having encoded image data of image data, having a basic format, from which an image having high definition at a basic frame rate is to be obtained, and a predetermined number of enhancement video streams containing encoded image data of image data, having a first enhancement format, from which an image having high definition at a high frame rate is to be obtained, encoded image data of image data, having a second enhancement format, from which an image having ultra-high definition at a basic frame rate is to be obtained, and encoded image data of image data, having a third enhancement format, from which an image having ultra-high definition at a high frame rate is to be obtained.
Down-scale processing is executed for first image data which is obtained by executing mixing processing at a first ratio in units of temporally continuous two pictures in the image data having the ultra-high definition at the high frame rate, thereby obtaining the image data having the basic format.
A difference between third image data which is obtained by executing up-scale processing for the image data having the basic format, and the first image data is obtained, thereby obtaining the image data having the second enhancement format.
Down-scale processing is executed for second image data which is obtained by executing mixing processing at a second ratio in units of the temporally continuous two pictures, thereby obtaining the image data having the first enhancement format.
A difference between fourth image data which is obtained by executing up-scale processing for the image data having the first enhancement format, and the second image data is obtained, thereby obtaining the image data having the third enhancement format.
The reception apparatus further includes a processing portion. The processing portion serves to obtain image data having the high definition at the basic frame rate by processing only the basic video stream, or to obtain image data having the high definition at the high frame rate, image data having the ultra-high definition at the basic frame rate, or image data having the ultra-high definition at the high frame rate by processing a part of or the whole of the predetermined number of enhancement video streams in addition to the basic video stream.
With the present technique, the reception portion receives the container having the predetermined format containing the basic video stream and the predetermined number of enhancement video streams. The basic video stream has the encoded image data of the image data, having the basic format, from which the image having the high definition at the basic frame rate is to be obtained. The predetermined number of enhancement video streams have the encoded image data of the image data, having the first enhancement format, from which the image having the high definition at the high frame rate is to be obtained, the encoded image data of the image data, having the second enhancement format, from which the image having the ultra-high definition at the basic frame rate is to be obtained, and the encoded image data of the image data, having the third enhancement format, from which the image having the ultra-high definition at the high frame rate is to be obtained.
Here, the down-scale processing is executed for the first image data which is obtained by executing the mixing processing having the first ratio in units of temporally continuous two pictures in the image data having ultra-high definition at the high frame rate, thereby obtaining the image data having the basic format. A difference between the third image data which is obtained by executing the up-scale processing for the image data having the basic format, and the first image data is obtained, thereby obtaining the image data having the second enhancement format. The down-scale processing is executed for the second image data which is obtained by executing the mixing processing having the second ratio in units of the temporally continuous two pictures, thereby obtaining the image data having the first enhancement format. A difference between the fourth image data which is obtained by executing the up-scale processing for the image data having the first enhancement format, and the second image data is obtained, thereby obtaining the image data having the third enhancement format.
The processing portion obtains the image data having the high definition at the basic frame rate by processing only the basic video stream, or obtains the image data having the high definition at the high frame rate, the image data having the ultra-high definition at the basic frame rate, or the image data having the ultra-high definition at the high frame rate by processing a part of or the whole of the predetermined number of enhancement video streams in addition to the basic video stream.
In such a way, with the present technique, the image data having the high definition at the basic frame rate can be obtained by processing only the basic video stream containing the encoded image data of the image data, having the basic format, from which the image having the high definition at the basic frame rate is to be obtained. That is to say, in case of the receiver having the decoding ability to be able to process the image data having the high definition at the basic frame rate, the image having the high definition at the basic frame rate can be displayed by processing only the basic video stream. As a result, the backward compatibility can be realized.
Here, the down-scale processing is executed for the first image data which is obtained by executing the mixing processing having the first ratio in units of temporally continuous two pictures in the image data having the ultra-high definition at the high frame rate, thereby obtaining the image data having the basic format. For this reason, the image having the high definition at the basic frame rate which is displayed by processing only the basic video stream becomes a smooth image in which the strobing effect is suppressed.
In addition, the image data having the high definition at the high frame rate, the image data having the ultra-high definition at the basic frame rate, or the image data having the ultra-high definition at the high frame rate can be obtained by processing a part of or the whole of the basic video stream and the predetermined number of enhancement video streams. That is to say, in case of the receiver having the decoding ability to be able to process the image data having the high definition at the high frame rate, the image having the high definition at the high frame rate can be displayed by processing both the basic video stream and the enhancement stream.
In addition, in case of the receiver having the decoding ability to be able to process the image data having the ultra-high definition at the basic frame rate, the image having the ultra-high definition at the basic frame rate can be displayed by processing both the basic video stream and the enhancement stream. In addition, in case of the receiver having the decoding ability to be able to process the image data having the ultra-high definition at the high frame rate, the image having the ultra-high definition at the high frame rate can be displayed by processing both the basic video stream and the enhancement stream.
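The full reconstruction implied by the cases above can, in principle, invert the transmission-side processing. The following sketch (Python with NumPy) assumes the same hypothetical nearest-neighbour up-scaling as in the earlier illustration and mixing ratios that differ from each other; it recovers the temporally continuous two ultra-high-definition pictures from one picture of the basic format and one picture of each enhancement format.

```python
import numpy as np

def upscale(img, s=2):
    # nearest-neighbour up-scale; a stand-in for the up-scale processing
    return img.repeat(s, axis=0).repeat(s, axis=1)

def unmix(first, second, r1, r2):
    # invert the pair of mixing equations
    #   first  = r1 * a + (1 - r1) * b
    #   second = r2 * a + (1 - r2) * b      (requires r1 != r2)
    a = ((1.0 - r2) * first - (1.0 - r1) * second) / (r1 - r2)
    b = (r1 * second - r2 * first) / (r1 - r2)
    return a, b

def reconstruct_pair(basic, e1, e2, e3, r1, r2, s=2):
    """Recover the temporally continuous two pictures having ultra-high
    definition at the high frame rate from one picture of each format."""
    first = upscale(basic, s) + e2   # ultra-high definition at the basic frame rate
    second = upscale(e1, s) + e3     # ultra-high-definition enhancement frame
    return unmix(first, second, r1, r2)
```

This also shows why the reception side needs the ratio information described below: without knowing the first and second ratios, the two mixing equations cannot be solved for the original pictures.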
It should be noted that with the present technique, for example, when the information exhibiting the ratio of the spatial scalable is inserted into the pieces of encoded image data of the image data having the second and third enhancement formats, and/or the container position corresponding to the encoded image data, and the processing portion obtains the image data having the ultra-high definition at the basic frame rate or the image data having the ultra-high definition at the high frame rate, the processing portion may use the inserted information exhibiting the ratio of the spatial scalable. In this case, the processing of the spatial scalable can be suitably executed, and the image data having the ultra-high definition can be satisfactorily obtained.
In addition, with the present technique, when, for example, the information associated with the first ratio and the information associated with the second ratio are inserted into the pieces of encoded image data of the image data having the first and third enhancement formats, and/or the container positions corresponding to the encoded image data, and when the processing portion obtains the image data having the high definition at the high frame rate or the image data having the ultra-high definition at the high frame rate, the processing portion may use the inserted information associated with the first ratio and the inserted information associated with the second ratio. In this case, the processing of the temporal scalable can be suitably executed, and the image data at the high frame rate can be satisfactorily obtained.
According to the present technique, the image data exhibiting the ultra-high definition at the high frame rate can be transmitted so that the backward compatibility can be satisfactorily obtained on the reception side. It should be noted that the effect described in the present description is merely an exemplification and is by no means limiting, and any additional effects may also be offered.
Hereinafter, a mode for carrying out the invention (hereinafter referred to as “an embodiment”) will be described. It should be noted that the description will be given in the following order.
1. Embodiment
2. Modifications
[Outline of MPEG-DASH Based Stream Delivery System]
Firstly, a description will be given with respect to an outline of an MPEG-DASH based stream delivery system to which the present technique can be applied.
The DASH stream file server 31 produces a stream segment complying with a DASH specification (hereinafter suitably referred to as "a DASH segment") on the basis of media data (such as video data, audio data, or caption data) of predetermined contents, and sends the segment in response to an HTTP request sent from the service receiver. The DASH stream file server 31 may be a dedicated streaming server, or a Web server may also serve as the DASH stream file server 31 in some cases.
Further, in response to a request for a segment of a predetermined stream sent from the service receiver (33-1, 33-2, . . . , 33-N), the DASH stream file server 31 transmits the segment of that stream through the CDN 34 to the receiver as the requestor. In this case, the service receiver 33 selects a stream of an optimal rate in response to the state of the network environment in which the client is placed by referring to a value of a rate described in a Media Presentation Description (MPD) file, and makes the request.
The DASH MPD server 32 is a server for producing an MPD file for acquiring a DASH segment produced in the DASH stream file server 31. The DASH MPD server 32 produces the MPD file based on contents metadata from a contents management server (not depicted), and an address (url) of a segment produced in the DASH stream file server 31. It should be noted that the DASH stream file server 31 and the DASH MPD server 32 may be physically identical to each other.
Respective attributes are described in the format of the MPD by utilizing an element of Representation for every stream of the video, the audio, and the like. For example, in the MPD file, the Representation is divided for each of a plurality of video data streams different in rate from one another, and the respective rates are described. In the service receiver 33, as described above, the optimal stream can be selected in response to the state of the network environment in which the service receiver 33 is placed by referring to the values of the rates.
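The rate selection described here can be sketched as follows. Representing the MPD entries as (id, bits-per-second) pairs is a simplification for illustration; a real client parses the Representation elements and their rate attributes from the MPD file.

```python
def select_representation(representations, measured_bps):
    """Pick the highest-rate Representation whose described rate fits the
    measured network throughput; fall back to the lowest rate otherwise.
    `representations` is a simplified list of (id, bits_per_second) pairs
    standing in for the Representation elements of the MPD file."""
    fitting = [r for r in representations if r[1] <= measured_bps]
    if fitting:
        return max(fitting, key=lambda r: r[1])
    return min(representations, key=lambda r: r[1])
```

A receiver would re-evaluate this choice as the measured throughput changes, which is what allows switching among representations without interrupting delivery.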
In case of the stream delivery system 30B, the broadcasting sending system 36 transmits the stream segment (DASH segment), complying with the DASH specification, which is produced in the DASH stream file server 31, and the MPD file produced in the DASH MPD server 32, with the stream segment and the MPD file being placed on a broadcasting wave.
It should be noted that the switching of the stream can be freely carried out among a plurality of representations contained in the adaptation set. As a result, the stream of the optimal rate can be selected and the video delivery can be carried out without interruption depending on the state of the network environment on the reception side.
[Example of Configuration of Transmission/Reception System]
In addition, in the transmission/reception system 10, the service receiver 200 corresponds to the service receiver 33 (33-1, 33-2, . . . , 33-N) of the stream delivery system 30A depicted in
The service transmission system 100 transmits DASH/MP4, in a word, MP4 in which the MPD file as the metafile, and the media stream (media segment) of the video, the audio or the like are contained through the communication network transmission path (refer to
In this embodiment, the media stream is the basic video stream which is obtained by processing the image data (moving image data) exhibiting Ultra-High Definition (UHD) at a High Frame Rate (HFR), and a predetermined number of enhancement video streams, for example, three or one enhancement video stream. The image data exhibiting the ultra-high definition at the high frame rate, for example, is image data exhibiting 4K/8K at 120 fps.
The basic video stream has encoded image data of the image data, having the basic format, from which the image having the high definition at the basic frame rate (normal frame rate) is to be obtained. The predetermined number of enhancement video streams have the encoded image data of the image data, having the first enhancement format, from which the image having the high definition at the high frame rate is to be obtained, the encoded image data of the image data, having the second enhancement format, from which the image having the ultra-high definition at the basic frame rate is to be obtained, and the encoded image data of the image data, having the third enhancement format, from which the image having the ultra-high definition at the high frame rate is to be obtained.
Here, the image data having the basic format is obtained by executing the down-scale processing for the first image data obtained by executing the mixing processing having the first ratio in units of temporally continuous two pictures in the image data exhibiting the ultra-high definition at the high frame rate. The image data having the second enhancement format is obtained by obtaining a difference between the third image data obtained by executing the up-scale processing for the image data having the basic format, and the first image data described above.
In addition, the image data having the first enhancement format is obtained by executing the down-scale processing for the second image data obtained by executing the mixing processing having the second ratio in units of the temporally continuous two pictures. The image data having the third enhancement format is obtained by obtaining a difference between the fourth image data obtained by executing the up-scale processing for the image data having the first enhancement format, and the second image data described above.
Identification information exhibiting that the stream is the spatial scalable stream, and information exhibiting a ratio of the spatial scalable are inserted into one of or both of the encoded image data of the image data having the second and third enhancement formats and the container position corresponding to the encoded image data (in this embodiment, both of them). In this embodiment, an SEI NAL unit having these pieces of information is inserted into the encoded image data (access unit) of the image data having the second and third enhancement formats. In addition, the descriptor having these pieces of information is inserted into a box of "moof" of MP4 corresponding to the image data having the second and third enhancement formats. From these pieces of information, the reception side can readily recognize that the image data having the second and third enhancement formats is the image data pertaining to the spatial scalable stream, and the ratio of the spatial scalable.
Identification information exhibiting that the stream is the temporal scalable stream, identification information exhibiting that the image data having the basic format is the image data obtained by executing the mixing processing, and information associated with the mixing ratios (the first and second ratios) are inserted into one of or both of the encoded image data of the image data having the first and third enhancement formats and the container position corresponding to the encoded image data (in this embodiment, both of them).
In this embodiment, the SEI NAL unit having these pieces of information is inserted into the encoded image data (access unit) of the image data having the first and third enhancement formats. In addition, the descriptor having these pieces of information is inserted into the box of "moof" of MP4 corresponding to the image data having the first and third enhancement formats. From these pieces of information, the reception side can readily recognize that the image data having the first and third enhancement formats is the image data pertaining to the temporal scalable stream, that the image data having the basic format is the image data obtained by executing the mixing processing, and the mixing ratios (the first and second ratios).
In addition, in this embodiment, information exhibiting the response of the scalability is inserted into the MPD file. That is to say, it is represented that the image data exhibiting the high definition at the high frame rate is obtained by the enhancement in which the image data having the first enhancement format is used on the image data having the basic format. In addition, it is represented that the image data exhibiting the ultra-high definition at the basic frame rate is obtained by the enhancement in which the image data having the second enhancement format is used on the image data having the basic format. In addition, it is represented that the image data exhibiting the ultra-high definition at the high frame rate is obtained by the enhancement in which the image data having the first, second, and third enhancement formats is used on the image data having the basic format. From this information, the reception side can readily recognize the response of the scalability, and can acquire only the necessary stream or encoded image data and efficiently process it.
The service receiver 200 receives the MP4 described above which is sent thereto from the service transmission system 100 through the communication network transmission path (refer to
In addition, in case of the receiver having the decoding ability to be able to process the image data having the ultra-high definition at the basic frame rate, the service receiver 200 processes both the basic video stream and the enhancement video stream (the image data having the second enhancement format), and obtains the image data having the ultra-high definition at the basic frame rate to carry out the image reproduction. Moreover, in case of the receiver having the decoding ability to be able to process the image data having the ultra-high definition at the high frame rate, the service receiver 200 processes both the basic video stream and the enhancement video streams (the image data having the first, second, and third enhancement formats), and obtains the image data having the ultra-high definition at the high frame rate to carry out the image reproduction.
When the service receiver 200 executes the processing for the spatial scalable stream using the image data having the second, third enhancement formats, the service receiver 200 uses the information exhibiting the ratio of the spatial scalable stream which is inserted into the encoded image data of the image data having the second, third enhancement formats or the container position corresponding to the encoded image data. As a result, the service receiver 200 can suitably execute the processing for the spatial scalable stream.
In addition, when the service receiver 200 executes the processing for the temporal scalable stream using the image data having the first, third enhancement formats, the service receiver 200 uses the information associated with the mixing ratio (first, second ratios) which is inserted into the encoded image data of the image data having the first, third enhancement formats or the container position corresponding to the encoded image data. As a result, the service receiver 200 can suitably execute the processing for the temporal scalable stream.
Here, the basic video stream STb has the encoded image data of the image data, having the basic format, from which an image having the high definition (HD) at the basic frame rate (LFR) is to be obtained. The enhancement video stream STe1 has the encoded image data of the image data, having a first enhancement format, from which an image having the high definition (HD) at the high frame rate (HFR) is to be obtained. The enhancement video stream STe2 has the encoded image data of the image data, having a second enhancement format, from which an image having the ultra-high definition (UHD) at the basic frame rate (LFR) is to be obtained. The enhancement video stream STe3 has the encoded image data of the image data, having a third enhancement format, from which an image having the ultra-high definition (UHD) at the high frame rate (HFR) is to be obtained. The enhancement video stream STe has the encoded image data of the image data having the first, second, and third enhancement formats.
In a service receiver 200A having the decoding ability to be able to process the image data having the ultra-high definition at the high frame rate, in a video decoder 203A, the basic video stream STb and the enhancement video streams STe1, STe2, STe3, or the basic video stream STb and one enhancement video stream STe are processed, and the image data “HFR/UHD video” exhibiting the ultra-high definition at the high frame rate is obtained to carry out the image reproduction.
In addition, in a service receiver 200B having the decoding ability to be able to process the image data having the high definition at the high frame rate, in a video decoder 203B, the basic video stream STb and the enhancement video stream STe1, or the basic video stream STb and the enhancement video stream STe are processed, and the image data “HFR/HD video” exhibiting the high definition at the high frame rate is obtained to carry out the image reproduction.
In addition, in a service receiver 200C having the decoding ability to be able to process the image data having the ultra-high definition at the basic frame rate, in a video decoder 203C, the basic video stream STb and the enhancement video stream STe2, or the basic video stream STb and the enhancement video stream STe are processed, and the image data “LFR/UHD video” exhibiting the ultra-high definition at the basic frame rate is obtained to carry out the image reproduction.
In addition, in a service receiver 200D having the decoding ability to be able to process the image data having the high definition at the basic frame rate, in a video decoder 203D, the basic video stream STb is processed, and the image data “LFR/HD video” exhibiting the high definition at the basic frame rate is obtained to carry out the image reproduction.
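As an illustrative sketch (not part of the disclosed apparatus; the capability labels and function name are hypothetical), the correspondence between the decoding abilities of the service receivers 200A to 200D and the streams to be processed can be summarized as follows:

```python
# Hypothetical summary of the receiver classes described above: each decoding
# capability selects the basic video stream STb plus the enhancement streams
# its decoder can handle, and yields the stated image data.
STREAM_SELECTION = {
    "UHD_HFR": (["STb", "STe1", "STe2", "STe3"], "HFR/UHD video"),  # receiver 200A
    "HD_HFR":  (["STb", "STe1"],                 "HFR/HD video"),   # receiver 200B
    "UHD_LFR": (["STb", "STe2"],                 "LFR/UHD video"),  # receiver 200C
    "HD_LFR":  (["STb"],                         "LFR/HD video"),   # receiver 200D
}

def select_streams(capability: str):
    """Return (streams to decode, reproduced image data) for a receiver class."""
    return STREAM_SELECTION[capability]
```

In each case the basic video stream STb is decoded; only the set of enhancement streams differs with the decoding ability.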
A sequence of the image data “HD 60 Hz Base” having the basic format and contained in the basic video stream STb in which a layering ID (layering_id) is “0” is present in the lowermost stage. The layer ID (Layer_id) of the image data “HD 60 Hz Base” is “0.”
A sequence of the image data “HD HFR Enhanced1” having the first enhancement format and contained in the enhancement video stream STe1 in which the layering ID (layering_id) is “1” is present in the upper stage of the lowermost stage. The “HD HFR Enhanced1” is the scalability in the temporal direction for the image data “HD 60 Hz Base.” The layer ID (Layer_id) of the image data “HD HFR Enhanced1” is “0.”
A sequence of the image data “UHD 60 Hz Enhanced2” having the second enhancement format and contained in the enhancement video stream STe2 in which the layering ID (layering_id) is “2” is present in the upper stage of that previous stage. The “UHD 60 Hz Enhanced2” is the scalability in the spatial direction for the image data “HD 60 Hz Base.” The layer ID (Layer_id) of the image data “UHD 60 Hz Enhanced2” is “1.”
A sequence of the image data “UHD HFR Enhanced3” having the third enhancement format and contained in the enhancement video stream STe3 in which the layering ID (layering_id) is “3” is present in the upper stage of that previous stage. The “UHD HFR Enhanced3” is the scalability in the temporal direction for the image data “UHD 60 Hz Enhanced2,” and is also the scalability in the spatial direction for the image data “HD HFR Enhanced1.” The layer ID (Layer_id) of the image data “UHD HFR Enhanced3” is “1.”
The reproduction of the image (60 Hz, HD image) having the high definition (HD) at the basic frame rate can be carried out on the basis of the image data “HD 60 Hz Base” having the basic format. In addition, the reproduction of the image (120 Hz, HD image) having the high definition (HD) at the high frame rate can be carried out on the basis of the image data “HD 60 Hz Base” having the basic format and the image data “HD HFR Enhanced1” having the first enhancement format.
In addition, the reproduction of the image (60 Hz, UHD image) having the ultra-high definition (UHD) at the basic frame rate can be carried out on the basis of the image data “HD 60 Hz Base” having the basic format, and the image data “UHD 60 Hz Enhanced2” having the second enhancement format. In addition, the reproduction of the image (120 Hz, UHD image) having the ultra-high definition (UHD) at the high frame rate can be carried out on the basis of the image data “HD 60 Hz Base” having the basic format, the image data “HD HFR Enhanced1” having the first enhancement format, the image data “UHD 60 Hz Enhanced2” having the second enhancement format, and the image data “UHD HFR Enhanced3” having the third enhancement format.
The rectangular frames each indicate the pictures. An arrow indicates the response of the scalability. That is to say, the image having the high definition (HD) at the high frame rate, in a word, the image data of 120 Hz HD image is obtained by the enhancement of the temporal scalable stream in which the image data having the first enhancement format contained in the track E1 is used on the image data having the basic format contained in the track B. In addition, the image having the ultra-high definition (UHD) at the basic frame rate, in a word, the image data of 60 Hz UHD image is obtained by the enhancement of the spatial scalable stream in which the image data having the second enhancement format contained in the track E2 is used on the image data having the basic format contained in the track B.
In addition, the image having the ultra-high definition (UHD) at the high frame rate, in a word, the image data of 120 Hz UHD image is obtained by the enhancement of the spatial scalable stream and the temporal scalable stream in which the image data having the first enhancement format contained in the track E1, the image data having the second enhancement format contained in the track E2, and the image data having the third enhancement format contained in the track E3 are used on the image data having the basic format contained in the track B.
In the MP4 stream “video-basesubbitstream” corresponding to the track B, the encoded image data (access unit) having the basic format, for the predetermined number of pictures, for example, 1 GOP, is arranged in the “mdat” box of the respective movie fragments. Here, the access units are constituted by the NAL units such as “VPS,” “SPS,” “PPS,” “PSEI,” “SLICE,” and “SSEI.” It should be noted that “VPS,” “SPS” are inserted into the head picture of GOP.
In addition, “sublayer_level_present_flag[j−1]” is set to “1,” “sublayer_level_idc[j−1]” is set to “153,” and “sublayer_profile_idc[j−1]” is set to “7.” As a result, it is also represented that a level of the whole streams of the enhancement video streams STe2, STe1, and the basic video stream STb is “level 5.1” and the profile thereof is “Scalable Main 10 Profile.”
In addition, “sublayer_level_present_flag[j−2]” is set to “1,” “sublayer_level_idc[j−2]” is set to “126,” and “sublayer_profile_idc[j−2]” is set to “2.” As a result, it is also represented that a level of the whole stream of the enhancement video stream STe1, and the basic video stream STb is “level 4.2” and the profile thereof is “Main 10 Profile.”
In addition, “sublayer_level_present_flag[j−3]” is set to “1,” “sublayer_level_idc[j−3]” is set to “123,” and “sublayer_profile_idc[j−3]” is set to “2.” As a result, it is also represented that the level of the basic video stream STb is “level 4.1,” and the profile thereof is “Main 10 Profile.”
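The numerical level values quoted above follow the HEVC convention that a level indicator is thirty times the level number; a minimal sketch (the function name is illustrative) confirming the correspondence:

```python
def level_idc(level: float) -> int:
    # HEVC encodes a level such as 4.1 as level_idc = 30 * level,
    # so "level 4.1" -> 123, "level 4.2" -> 126, "level 5.1" -> 153.
    return round(level * 30)
```

This is why “153” in the elements above reads as “level 5.1” and “123” as “level 4.1.”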
Returning back to
In addition, a “tfdt” box is present in the “moof” box, a “sgpd” box is present in the “tfdt” box, and a “tscl” box is present in the “sgpd” box. Four parameters of “temporalLayerId,” “tllevel_idc,” “Tlprofile,” and “tlConstantFrameRate” are described in the “tscl” box. “temporalLayerId” exhibits a temporal ID (temporal_id). “tlConstantFrameRate” is set to “1,” which exhibits that the frame rate is constant.
“tllevel_idc” indicates the level of the basic video stream STb, and is made to agree with “sublayer_level_idc[j−3]” of the element of the SPS (or VPS) described above. In this case, “tllevel_idc” is set to “123.” “Tlprofile” indicates the profile of the basic video stream STb, and is made to agree with “sublayer_profile_idc[j−3]” of the element of the SPS (or VPS) described above. In this case, “Tlprofile” is set to “2.”
In the MP4 stream “video-enhanced1subset” corresponding to the track E1, the encoded image data (access units) for a predetermined number of pictures, for example, for 1 GOP, of the image data having the first enhancement format is arranged in the “mdat” boxes of the respective movie fragments. Here, the access units are constituted by the NAL units such as “PPS,” “PSEI,” “SLICE,” and “SSEI.”
In the MP4 stream “video-enhanced1subset” corresponding to the track E1, a “traf” box is present in the “moof” boxes of the respective movie fragments, and a “tfdt” box is present in the “traf” box. The decode time “baseMediaDecodeTime” of the first access unit after the “moof” box is described in the “tfdt” box.
In addition, a “tfdt” box is present in the “moof” box, a “sgpd” box is present in the “tfdt” box, and a “tscl” box is present in the “sgpd” box. Four parameters of “temporalLayerId,” “tllevel_idc,” “Tlprofile,” and “tlConstantFrameRate” are described in the “tscl” box. “temporalLayerId” exhibits a temporal ID (temporal_id). “tlConstantFrameRate” is set to “1,” which exhibits that the frame rate is constant.
“tllevel_idc” indicates the level of the whole streams of the enhancement video stream STe1 and the basic video stream STb, and is made to agree with “sublayer_level_idc[j−2]” of the element of the SPS (or VPS) described above. In this case, “tllevel_idc” is set to “126.” “Tlprofile” indicates the profile of the whole streams of the enhancement video stream STe1 and the basic video stream STb, and is made to agree with “sublayer_profile_idc[j−2]” of the element of the SPS (or VPS) described above. In this case, “Tlprofile” is set to “2.”
In the MP4 stream “video-enhanced2subset” corresponding to the track E2, the encoded image data (access units) for a predetermined number of pictures, for example, for 1 GOP, of the image data having the second enhancement format is arranged in the “mdat” boxes of the respective movie fragments. Here, the respective access units are constituted by the NAL units such as “PPS,” “PSEI,” “SLICE,” and “SSEI.”
In the MP4 stream “video-enhanced2subset” corresponding to the track E2, the “traf” box is present in the “moof” boxes of the respective movie fragments, and the “tfdt” box is present in the “traf” box. The decode time “baseMediaDecodeTime” of the first access unit after the “moof” box is described in the “tfdt” box.
In addition, the “tfdt” box is present in the “moof” box, the “sgpd” box is present in the “tfdt” box, and a “tscl” box is present in the “sgpd” box. Four parameters of “temporalLayerId,” “tllevel_idc,” “Tlprofile,” and “tlConstantFrameRate” are described in the “tscl” box. “temporalLayerId” exhibits a temporal ID (temporal_id). “tlConstantFrameRate” is set to “1,” which exhibits that the frame rate is constant.
“tllevel_idc” exhibits the level of the whole streams of the enhancement video streams STe2, STe1, and the basic video stream STb, and is made to agree with “sublayer_level_idc[j−1]” of the element of SPS (or VPS) described above. In this case, “tllevel_idc” is set to “153.” “Tlprofile” exhibits the profile of the whole streams of the enhancement video streams STe2, STe1, and the basic video stream STb, and is made to agree with “sublayer_profile_idc[j−1]” of the element of SPS (or VPS) described above. In this case, “Tlprofile” is set to “7.”
In the MP4 stream “video-enhanced3subset” corresponding to the track E3, the encoded image data (access units) for a predetermined number of pictures, for example, for 1 GOP, of the image data having the third enhancement format is arranged in the “mdat” boxes of the respective movie fragments. Here, the respective access units are constituted by the NAL units such as “PPS,” “PSEI,” “SLICE,” and “SSEI.”
In the MP4 stream “video-enhanced3subset” corresponding to the track E3, the “traf” box is present in the “moof” boxes of the respective movie fragments, and the “tfdt” box is present in the “traf” box. The decode time “baseMediaDecodeTime” of the first access unit after the “moof” box is described in the “tfdt” box.
In addition, the “tfdt” box is present in the “moof” box, the “sgpd” box is present in the “tfdt” box, and the “tscl” box is present in the “sgpd” box. The four parameters of “temporalLayerId,” “tllevel_idc,” “Tlprofile,” and “tlConstantFrameRate” are described in the “tscl” box. “temporalLayerId” exhibits the temporal ID (temporal_id). “tlConstantFrameRate” is set to “1,” which exhibits that the frame rate is constant.
“tllevel_idc” exhibits the level of the whole streams of the enhancement video streams STe3, STe2, STe1, and the basic video stream STb, and is made to agree with “general_level_idc” of the element of SPS (or VPS) described above. In this case, “tllevel_idc” is set to “156.” “Tlprofile” exhibits the profile of the whole streams of the enhancement video streams STe3, STe2, STe1, and the basic video stream STb, and is made to agree with “general_profile_idc” of the element of SPS (or VPS) described above. In this case, “Tlprofile” is set to “7.”
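Collecting the “tscl” parameters described above per track (a summary sketch; the track labels and dictionary layout are illustrative, with Tlprofile 2 meaning “Main 10 Profile” and 7 meaning “Scalable Main 10 Profile”):

```python
# Cumulative level/profile advertised in the "tscl" box of each track,
# as described above.
TSCL = {
    "B":  {"tllevel_idc": 123, "Tlprofile": 2},  # STb alone
    "E1": {"tllevel_idc": 126, "Tlprofile": 2},  # STb + STe1
    "E2": {"tllevel_idc": 153, "Tlprofile": 7},  # STb + STe1 + STe2
    "E3": {"tllevel_idc": 156, "Tlprofile": 7},  # STb + STe1 + STe2 + STe3
}

# The level of the whole streams grows monotonically as layers are added.
levels = [v["tllevel_idc"] for v in TSCL.values()]
assert levels == sorted(levels)
```

Each track thus advertises the level and profile of the whole streams up to and including itself, matching the corresponding sublayer elements of the SPS (or VPS).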
In the MP4 stream “video-enhanced1subset” corresponding to the track E1, as described above, the access units, for the predetermined number of pictures, of the image data having the first enhancement format are arranged in the “mdat” boxes of the respective movie fragments. An SEI NAL unit having identification information exhibiting that the stream is the temporal scalable stream, identification information exhibiting that the image data having the basic format is image data obtained by executing the mixing processing, and the information associated with mixing ratios (first, second ratios) is inserted into the respective access units. In this embodiment, video scalability SEI (video_scalability_SEI) which is newly defined is inserted into a portion of “SEIs” of the access unit (AU).
In the MP4 stream “video-enhanced2subset” corresponding to the track E2, as described above, the access units, for the predetermined number of pictures, of the image data having the second enhancement format are arranged in the “mdat” box of the respective movie fragments. An SEI NAL unit having identification information exhibiting that the stream is the spatial scalable stream, and the information exhibiting a ratio of the spatial scalable stream is inserted into the respective access units. In this embodiment, video scalability SEI (video_scalability_SEI) which is newly defined is inserted into a portion of “SEIs” of the access unit (AU).
In addition, in the MP4 stream “video-enhanced3subset” corresponding to the track E3 as described above, the access units, for the predetermined number of pictures, of the image data having the third enhancement format are arranged in the “mdat” boxes of the respective movie fragments. An SEI NAL unit having the identification information exhibiting that the stream is the temporal scalable stream, the identification information exhibiting that the image data having the basic format is image data obtained by executing the mixing processing, the information associated with the mixing ratios (first, second ratios), the identification information exhibiting that the stream is the spatial scalable stream, and the information exhibiting the ratio of the spatial scalable stream is inserted into the respective access units. In this embodiment, video scalability SEI (video_scalability_SEI) which is newly defined is inserted into a portion of “SEIs” of the access unit (AU).
In the video scalability SEI which is inserted into the access unit of the image data having the first enhancement format, “temporal_scalable_flag” is set to “1,” and it is represented that the stream is the temporal scalable stream. In the video scalability SEI which is inserted into the access unit of the image data having the second enhancement format, “temporal_scalable_flag” is set to “0,” and it is represented that the stream is not the temporal scalable stream. In addition, in the video scalability SEI which is inserted into the access unit of the image data having the third enhancement format, “temporal_scalable_flag” is set to “1,” and it is represented that the stream is the temporal scalable stream.
A 1-bit field of “spatial_scalable_flag” exhibits whether or not the stream is the spatial scalable stream. For example, “1” exhibits that the stream is the spatial scalable stream, and “0” exhibits that the stream is not the spatial scalable stream.
In the video scalability SEI which is inserted into the access unit of the image data having the first enhancement format, “spatial_scalable_flag” is set to “0,” and it is represented that the stream is not the spatial scalable stream. In the video scalability SEI which is inserted into the access unit of the image data having the second enhancement format, “spatial_scalable_flag” is set to “1,” and it is represented that the stream is the spatial scalable stream. In addition, in the video scalability SEI which is inserted into the access unit of the image data having the third enhancement format, “spatial_scalable_flag” is set to “1,” and it is represented that the stream is the spatial scalable stream.
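The flag settings of the video scalability SEI for the three enhancement formats can be tabulated as follows (a sketch; the dictionary layout is illustrative, the field names are those of the SEI described above):

```python
# temporal_scalable_flag / spatial_scalable_flag per enhancement format,
# as set in the video scalability SEI described above.
SEI_FLAGS = {
    "enhanced1": {"temporal_scalable_flag": 1, "spatial_scalable_flag": 0},
    "enhanced2": {"temporal_scalable_flag": 0, "spatial_scalable_flag": 1},
    "enhanced3": {"temporal_scalable_flag": 1, "spatial_scalable_flag": 1},
}
```

The third enhancement format is the only one for which both flags are set, reflecting that it provides the temporal scalability and the spatial scalability simultaneously.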
When “spatial_scalable_flag” is “1,” a 3-bit field of “scaling_ratio” is present. This field exhibits a ratio of the spatial scalability, in a word, an enlargement ratio in the one-dimensional direction of the enhancement relative to the basis. For example, “001” exhibits twice, “010” exhibits three times, and “011” exhibits four times. For example, when the ultra-high definition (UHD) is the 4K definition, “scaling_ratio” is set to “001,” and when the ultra-high definition (UHD) is the 8K definition, “scaling_ratio” is set to “011.”
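Since the quoted values map to the enlargement factor as value + 1, decoding “scaling_ratio” can be sketched as follows (the function name is illustrative):

```python
def scaling_ratio_to_factor(bits: str) -> int:
    """Decode the 3-bit 'scaling_ratio' field to the one-dimensional
    enlargement factor described above ('001' -> 2, '010' -> 3, '011' -> 4)."""
    return int(bits, 2) + 1
```

For example, “001” gives a factor of 2 (HD to 4K) and “011” gives a factor of 4 (HD to 8K).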
When “temporal_scalable_flag” is “1,” a 1 bit field of “picture_blending_flag” is present. The field exhibits whether or not the mixing processing of the pictures is executed for the basic stream (the image data having the basic format). For example, “1” exhibits that the mixing processing of the pictures is executed for the basic stream, and “0” exhibits that the mixing processing of the pictures is not executed for the basic stream.
When “picture_blending_flag” is “1,” a field exhibiting the mixing ratios (first, second ratios), that is, respective 3-bit fields of “blend_coef_alpha_alternate_picture,” “blend_coef_beta_alternate_picture,” “blend_coef_alpha_current_picture,” and “blend_coef_beta_current_picture” are present.
The field of “blend_coef_alpha_alternate_picture” is a coefficient by which the picture of the basic layer is multiplied (corresponding to a coefficient p which will be described later). The field of “blend_coef_beta_alternate_picture” is a coefficient by which the current picture (in the enhancement stream) is multiplied (corresponding to a coefficient r which will be described later). The field of “blend_coef_alpha_current_picture” is a coefficient by which the picture of the enhancement layer is multiplied (corresponding to a coefficient q which will be described later). The field of “blend_coef_beta_current_picture” is a coefficient by which the current picture (in the enhancement stream) is multiplied (corresponding to a coefficient s which will be described later).
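As an illustrative sketch of why transmitting the two pairs of coefficients allows the reception side to undo the mixing processing (the function names and the scalar arithmetic are a simplification, not the exact processing of the apparatus): the base picture and the enhancement picture are two independent weighted sums of two consecutive high-frame-rate source pictures, so the sources can be recovered by inverting a 2x2 linear system.

```python
# Mixing on the transmission side: two consecutive high-frame-rate source
# pictures A and B are combined at the first ratio (p, q) into the base
# picture and at the second ratio (r, s) into the enhancement picture.
def mix(a: float, b: float, p: float, q: float, r: float, s: float):
    return p * a + q * b, r * a + s * b

# Un-mixing on the reception side: invert the 2x2 system
#   base = p*A + q*B,  enh = r*A + s*B.
def unmix(base: float, enh: float, p: float, q: float, r: float, s: float):
    det = p * s - q * r  # must be non-zero for the ratios to be invertible
    a = (s * base - q * enh) / det
    b = (p * enh - r * base) / det
    return a, b
```

This is why the mixing ratios (first, second ratios) must be signaled: without p, q, r, and s the receiver cannot form the inverse.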
Referring back to
In the MP4 stream “video-enhanced2subset” corresponding to the track E2, the identification information exhibiting that the stream is the spatial scalable stream, and the information exhibiting the ratio of the spatial scalable stream are inserted into the “moof” boxes of the respective movie fragments. In this embodiment, a box of “udta” or “lays” is provided under the “moof” box, and a Syntax of a video scalability information descriptor (video_scalability_information_descriptor) which is newly defined is transmitted.
In addition, in the MP4 stream “video-enhanced3subset” corresponding to the track E3, the identification information exhibiting that the stream is the temporal scalable stream, the identification information exhibiting that the image data having the basic format is the image data obtained by executing the mixing processing, the information associated with the mixing ratios (first, second ratios), the identification information exhibiting that the stream is the spatial scalable stream, and the information exhibiting the ratio of the spatial scalable stream are inserted into the “moof” boxes of the respective movie fragments. In this embodiment, a box of “udta” or “lays” is provided under the “moof” box, and a Syntax of a video scalability information descriptor (video_scalability_information_descriptor) which is newly defined is transmitted.
In the representation associated with the basic video stream STb (HD Base stream), the descriptions of “framerate=“60”,” “codecs=“hev1.A.L123,xx”,” and “id=“tag0”” are present. “framerate=“60” & L123 with no dependencyid” exhibits the basic stream of 2K 60P, and ““A”” exhibits a value of 2 exhibiting “Main 10 Profile.” The information associated with the level and the profile agrees with “sublayer_level_idc[j−3],” “sublayer_profile_idc[j−3]” of the elements of the SPS (or VPS) described above. Incidentally, “sublayer_profile_idc[j−3]”=“Main 10 Profile,” and “sublayer_level_idc[j−3]”=“level 4.1”=“123.” In addition, from the description of “<BaseURL>video-basesubbitstream.mp4</BaseURL>,” a location destination of the basic video stream STb (HD Base stream) is indicated as “video-basesubbitstream.mp4.”
In the representation associated with the enhancement video stream STe1 (Enhanced1 stream), the descriptions of “framerate=“120”,” “codecs=“hev1.B.L126,xx”,” and “id=“tag1”” are present. “framerate=“120” & L126 with dependencyid tagged tag0” exhibits that the stream of 2K 120P is realized. ““B”” exhibits a value of 2 exhibiting “Main 10 Profile.” The information associated with the level and the profile agrees with “sublayer_level_idc[j−2],” “sublayer_profile_idc[j−2]” of the elements of the SPS (or VPS) described above. Incidentally, “sublayer_profile_idc[j−2]”=“Main 10 Profile,” and “sublayer_level_idc[j−2]”=“level 4.2”=“126.” In addition, from the description of “<BaseURL>video-enhanced1subset.mp4</BaseURL>,” a location destination of the enhancement video stream STe1 (Enhanced1 stream) is indicated as “video-enhanced1subset.mp4.”
In the representation associated with the enhancement video stream STe2 (Enhanced2 stream), the descriptions of “framerate=“60”,” “codecs=“hev1.C.L153,xx”,” “id=“tag2”,” and “dependencyid=“tag0”” are present. “framerate=“60” & L153 with dependencyid tagged tag0” exhibits that the stream of 4K 60P is realized on the basic stream by the enhancement. ““C”” exhibits a value of 7 exhibiting “Scalable Main 10 Profile.” The information associated with the level and the profile agrees with “sublayer_level_idc[j−1],” “sublayer_profile_idc[j−1]” of the elements of the SPS (or VPS) described above. Incidentally, “sublayer_profile_idc[j−1]”=“Scalable Main 10 Profile,” and “sublayer_level_idc[j−1]”=“level 5.1”=“153.” In addition, from the description of “<BaseURL>video-enhanced2subset.mp4</BaseURL>,” the location destination of the enhancement video stream STe2 (Enhanced2 stream) is indicated as “video-enhanced2subset.mp4.”
In the representation associated with the enhancement video stream STe3 (Enhanced3 stream), the descriptions of “framerate=“120”,” “codecs=“hev1.D.L156,xx”,” “id=“tag3”,” and “dependencyid=“tag0, tag1, tag2”” are present. “framerate=“120” & L156 with dependencyid tagged tag0, tag1, tag2” exhibits that the stream of 2K 120P is realized on the basic stream by the enhancement, and the enhancement component is added thereon to realize the stream of 4K 120P. ““D”” exhibits a value of 7 exhibiting “Scalable Main 10 Profile.” The information associated with the level and the profile agrees with “general_level_idc,” “general_profile_idc” of the elements of the SPS (or VPS) described above. Incidentally, “general_profile_idc”=“Scalable Main 10 Profile,” and “general_level_idc”=“level 5.2”=“156.” In addition, from the description of “<BaseURL>video-enhanced3subset.mp4</BaseURL>,” the location destination of the enhancement video stream STe3 (Enhanced3 stream) is indicated as “video-enhanced3subset.mp4.”
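Given these representations, the way a receiver could resolve the dependencyid chain to the set of MP4 files it must acquire can be sketched as follows (the dictionary and function are hypothetical; the ids and location destinations are the values quoted above):

```python
# MPD representations with their dependencies, per the descriptions above.
REPRESENTATIONS = {
    "tag0": {"depends": [],                       "url": "video-basesubbitstream.mp4"},
    "tag1": {"depends": ["tag0"],                 "url": "video-enhanced1subset.mp4"},
    "tag2": {"depends": ["tag0"],                 "url": "video-enhanced2subset.mp4"},
    "tag3": {"depends": ["tag0", "tag1", "tag2"], "url": "video-enhanced3subset.mp4"},
}

def required_urls(target: str):
    """Return, in dependency order, every MP4 needed for the target representation."""
    rep = REPRESENTATIONS[target]
    urls = []
    for dep in rep["depends"]:
        for u in required_urls(dep):
            if u not in urls:  # keep the first occurrence only
                urls.append(u)
    urls.append(rep["url"])
    return urls
```

A receiver aiming at 4K 120P (tag3) thus fetches all four files, while one aiming at 4K 60P (tag2) fetches only the basic stream and the Enhanced2 stream.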
In such a way, the information exhibiting the response of the scalability is inserted into MPD file, and it is represented that the spatial scalability and the temporal scalability are simultaneously realized.
A sequence of the image data “HD 60 Hz Base” having the basic format and contained in the basic video stream STb in which a layering ID (layering_id) is “0” is present in the lowermost stage. The layer ID (Layer_id) of the image data “HD 60 Hz Base” is “0.”
A sequence of the image data “HD HFR Enhanced1” having the first enhancement format in which the layering ID (layering_id) is “1” and contained in the enhancement video stream STe is present in the upper stage of the lowermost stage. The “HD HFR Enhanced1” is the scalability in the temporal direction for the image data “HD 60 Hz Base.” The layer ID (Layer_id) of the image data “HD HFR Enhanced1” is “0.”
A sequence of the image data “UHD 60 Hz Enhanced2” having the second enhancement format in which the layering ID (layering_id) is “2,” and contained in the enhancement video stream STe is present in the upper stage of the above stage. “UHD 60 Hz Enhanced2” is the scalability in the spatial direction for the image data “HD 60 Hz Base.” The layer ID (Layer_id) of the image data “UHD 60 Hz Enhanced2” is “1.” In addition, the temporal ID (Temporal_id) of the image data “UHD 60 Hz Enhanced2” is set equal to or smaller than a predetermined threshold value TH.
A sequence of the image data “UHD HFR Enhanced3” having the third enhancement format in which the layering ID (layering_id) is “3,” and contained in the enhancement video stream STe is present in the upper stage of the above stage. “UHD HFR Enhanced3” is the scalability in the temporal direction for the image data “UHD 60 Hz Enhanced2,” and is also the scalability in the spatial direction for the image data “HD HFR Enhanced1.” The layer ID (Layer_id) of the image data “UHD HFR Enhanced3” is “1.” In addition, the temporal ID (Temporal_id) of the image data “UHD HFR Enhanced3” is set larger than the predetermined threshold value TH.
As described above, the temporal ID of the image data “UHD 60 Hz Enhanced2” is set equal to or smaller than the predetermined threshold value TH. On the other hand, the temporal ID of the image data “UHD HFR Enhanced3” is set larger than the threshold value TH. As a result, the determination as to whether or not the temporal ID is equal to or smaller than the threshold value TH enables the image data “UHD 60 Hz Enhanced2” and the image data “UHD HFR Enhanced3” to be distinguished from each other.
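A minimal sketch of this discrimination (the threshold value and function name are illustrative; only the comparison against TH comes from the description above):

```python
TH = 2  # illustrative value of the predetermined threshold for the temporal ID

def classify(temporal_id: int) -> str:
    """Distinguish the two UHD enhancement sequences within the single
    enhancement video stream STe by the temporal ID, as described above."""
    if temporal_id <= TH:
        return "UHD 60 Hz Enhanced2"
    return "UHD HFR Enhanced3"
```

In this way the reception side can separate the second and third enhancement formats even though both are carried in one stream with the same layer ID.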
The image (60 Hz, HD image) having the high definition (HD) can be reproduced at the basic frame rate on the basis of the image data “HD 60 Hz Base” having the basic format. In addition, the image (120 Hz, HD image) having the high definition (HD) can be reproduced at the high frame rate on the basis of the image data “HD 60 Hz Base” having the basic format, and the image data “HD HFR Enhanced1” having the first enhancement format.
In addition, the image (60 Hz, UHD image) having the ultra-high definition (UHD) can be reproduced at the basic frame rate on the basis of the image data “HD 60 Hz Base” having the basic format, and the image data “UHD 60 Hz Enhanced2” having the second enhancement format. In addition, the image (120 Hz, UHD image) having the ultra-high definition (UHD) can be reproduced at the high frame rate on the basis of the image data “HD 60 Hz Base” having the basic format, the image data “HD HFR Enhanced1” having the first enhancement format, the image data “UHD 60 Hz Enhanced2” having the second enhancement format, and the image data “UHD HFR Enhanced3” having the third enhancement format.
The rectangular frames each indicate a picture. An arrow indicates the correspondence of the scalability. That is to say, the image having the high definition (HD) at the high frame rate, that is, the image data of the 120 Hz HD image is obtained by the temporal scalable enhancement in which the image data having the first enhancement format and contained in the track EH is used on the image data having the basic format and contained in the track B. In addition, the image having the ultra-high definition (UHD) at the basic frame rate, that is, the image data of the 60 Hz UHD image is obtained by the spatial scalable enhancement in which the image data having the second enhancement format and contained in the track EH is used on the image data having the basic format and contained in the track B.
In addition, the image having the ultra-high definition (UHD) at the high frame rate, that is, the image data of the 120 Hz UHD image is obtained by the temporal scalable and spatial scalable enhancements in which the image data having the first, second, and third enhancement formats and contained in the track EH are used on the image data having the basic format and contained in the track B.
In the MP4 stream “video-basesubbitstream” corresponding to the track B, the encoded image data (access units), for a predetermined number of pictures, for example, 1 GOP, having the basic format is arranged in the “mdat” boxes of the respective movie fragments. Here, the respective access units are constituted by the NAL units such as “VPS,” “SPS,” “PPS,” “PSEI,” “SLICE,” and “SSEI.” It should be noted that “VPS” and “SPS,” for example, are inserted into the head picture of the GOP.
In the MP4 stream “video-basesubbitstream” corresponding to the track B, a “traf” box is present in the “moof” boxes of the respective movie fragments, and a “tfdt” box is present in the “traf” box. The decoding time “baseMediaDecodeTime” of the first access unit after the “moof” box is described in the “tfdt” box.
In addition, a “tfdt” box is present in the “moof” box, a “sgpd” box is present in the “tfdt” box, and a “tscl” box is present in the “sgpd” box. Four parameters of “temporalLayerId,” “tllevel_idc,” “Tlprofile,” and “tlConstantFrameRate” are described in the “tscl” box. “temporalLayerId” exhibits a temporal ID (temporal_id). “tlConstantFrameRate” is set to “1,” which exhibits that the frame rate is constant.
“tllevel_idc” indicates the level of the basic video stream STb, and is made to agree with “sublayer_level_idc[j−3]” of the element of the SPS (or VPS) described above. In this case, “tllevel_idc” is set to “123.” “Tlprofile” indicates the profile of the basic video stream STb, and is made to agree with “sublayer_profile_idc[j−3]” of the element of the SPS (or VPS) described above. In this case, “Tlprofile” is set to “2.”
In the MP4 stream “video-enhancedsubset” corresponding to the track EH, the encoded image data (access units), for a predetermined number of pictures, for example, 1 GOP, of the image data having the first enhancement format, the encoded image data (access units), for a predetermined number of pictures, for example, 1 GOP, of the image data having the second enhancement format, or the encoded image data (access units), for a predetermined number of pictures, for example, 1 GOP, of the image data having the third enhancement format is arranged in the “mdat” boxes of the respective movie fragments. Here, the respective access units are constituted by the NAL units such as “PPS,” “PSEI,” “SLICE,” and “SSEI.”
In the MP4 stream “video-enhancedsubset” corresponding to the track EH, a “traf” box is present in the “moof” boxes of the respective movie fragments, and a “tfdt” box is present in the “traf” box. The decoding time “baseMediaDecodeTime” of the first access unit after the “moof” box is described in the “tfdt” box.
In addition, a “tfdt” box is present in the “moof” box, a “sgpd” box is present in the “tfdt” box, and a “tscl” box is present in the “sgpd” box. Four parameters of “temporalLayerId,” “tllevel_idc,” “Tlprofile,” and “tlConstantFrameRate” are described in the “tscl” box. “temporalLayerId” exhibits a temporal ID (temporal_id). “tlConstantFrameRate” is set to “1,” which exhibits that the frame rate is constant.
In the “moof” box of the movie fragments each corresponding to the image data having the first enhancement format, “tllevel_idc” exhibits the level of the whole streams of the first enhancement video stream (constituted by the access units of the image data having the first enhancement format) and the basic video stream STb, and is made to agree with “sublayer_level_idc[j−2]” of the element of the SPS (or VPS) described above. In this case, “tllevel_idc” is set to “126.” “Tlprofile” exhibits the profile of the whole streams of the first enhancement video stream and the basic video stream STb, and is made to agree with “sublayer_profile_idc[j−2]” of the element of the SPS (or VPS). In this case, “Tlprofile” is set to “2.”
In the “moof” box of the movie fragments each corresponding to the image data having the second enhancement format, “tllevel_idc” exhibits the level of the whole streams of the second enhancement video stream (constituted by the access units of the image data having the second enhancement format), the first enhancement video stream (constituted by the access units of the image data having the first enhancement format), and the basic video stream STb, and is made to agree with “sublayer_level_idc[j−1]” of the element of the SPS (or VPS). In this case, “tllevel_idc” is set to “153.” “Tlprofile” exhibits the profile of the whole streams of the second enhancement video stream, the first enhancement video stream, and the basic video stream STb, and is made to agree with “sublayer_profile_idc[j−1]” of the element of the SPS (or VPS). In this case, “Tlprofile” is set to “7.”
In addition, in the “moof” box of the movie fragments each corresponding to the image data having the third enhancement format, “tllevel_idc” exhibits the level of the whole streams of the enhancement video stream STe and the basic video stream STb, and is made to agree with “general_level_idc” of the element of the SPS (or VPS). In this case, “tllevel_idc” is set to “156.” “Tlprofile” exhibits the profile of the whole streams of the enhancement video stream STe and the basic video stream STb, and is made to agree with “general_profile_idc” of the element of the SPS (or VPS). In this case, “Tlprofile” is set to “7.”
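The four “tllevel_idc” values quoted above (123, 126, 153, 156) follow the HEVC convention that the level indicator equals the level number multiplied by 30. A minimal sketch of that relation (the dictionary labels are descriptive, not field names from the standard):

```python
def hevc_level_idc(major: int, minor: int) -> int:
    """Return the HEVC level_idc for level 'major.minor'
    (level_idc = level number x 30, e.g. 4.1 -> 123)."""
    return major * 30 + minor * 3


# The four values used in the "tscl" boxes of this example:
LEVELS = {
    "basic (2K 60P)": hevc_level_idc(4, 1),                 # 123
    "first enhancement (2K 120P)": hevc_level_idc(4, 2),    # 126
    "second enhancement (4K 60P)": hevc_level_idc(5, 1),    # 153
    "third enhancement (4K 120P)": hevc_level_idc(5, 2),    # 156
}
```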
In the MP4 stream “video-enhancedsubset” corresponding to the track EH, as described above, the access units of the image data, for a predetermined number of pictures, having the first enhancement format, the access units of the image data, for a predetermined number of pictures, having the second enhancement format, or the access units of the image data, for a predetermined number of pictures, having the third enhancement format are arranged in the “mdat” boxes of the respective movie fragments.
An SEI NAL unit having the identification information exhibiting that the stream is the temporal scalable stream, the identification information exhibiting that the image data having the basic format is the image data obtained by executing the mixing processing, and the information associated with the mixing ratios (first, second ratios) is inserted into the respective access units of the image data having the first enhancement format. In addition, an SEI NAL unit having the identification information exhibiting that the stream is the spatial scalable stream, and the information exhibiting the ratio of the spatial scalable stream is inserted into the respective access units of the image data having the second enhancement format.
In addition, an SEI NAL unit having the identification information exhibiting that the stream is the temporal scalable stream, the identification information exhibiting that the image data having the basic format is the image data obtained by executing the mixing processing, the information associated with the mixing ratios (first, second ratios), the identification information exhibiting that the stream is the spatial scalable stream, and the information exhibiting the ratio of the spatial scalable stream is inserted into the respective access units of the image data having the third enhancement format.
In this embodiment, the video scalability SEI (refer to
In the MP4 stream “video-enhancedsubset” corresponding to the track EH, the identification information exhibiting that the stream is the temporal scalable stream, the identification information exhibiting that the image data having the basic format is the image data obtained by executing the mixing processing, and the information associated with the mixing ratios (first, second ratios) are inserted into the “moof” box corresponding to “mdat” having the access unit of the image data having the first enhancement format.
Further, in the MP4 stream “video-enhancedsubset” corresponding to the track EH, the identification information exhibiting that the stream is the spatial scalable stream, and the information exhibiting the ratio of the spatial scalable stream are inserted into the “moof” box corresponding to “mdat” having the access unit of the image data having the second enhancement format.
In addition, in the MP4 stream “video-enhancedsubset” corresponding to the track EH, the identification information exhibiting that the stream is the temporal scalable stream, the identification information exhibiting that the image data having the basic format is the image data obtained by executing the mixing processing, the information associated with the mixing ratios (first, second ratios), the identification information exhibiting that the stream is the spatial scalable stream, and the information exhibiting the ratio of the spatial scalable stream are inserted into the “moof” box corresponding to “mdat” having the access unit of the image data having the third enhancement format.
In this embodiment, a box of “udta” or “lays” is provided under the “moof” box, and a Syntax of a video scalability information descriptor (refer to
In the representation associated with the basic video stream STb (HD Base stream), the descriptions of “framerate=“60”,” “codecs=“hev1.A.L123,xx”,” and “id=“tag0”” are present. “framerate=“60” & L123 with no dependencyid” exhibits the basic stream of 2K 60P, and ““A”” exhibits a value of 2 exhibiting “Main 10 Profile.” The information associated with the level and the profile agrees with “sublayer_level_idc[j−3]” and “sublayer_profile_idc[j−3]” of the elements of the SPS (or VPS) described above. Incidentally, “sublayer_profile_idc[j−3]”=“Main 10 Profile,” and “sublayer_level_idc[j−3]”=“level 4.1”=“123.” In addition, from the description of “<BaseURL>video-basesubbitstream.mp4</BaseURL>,” the location destination of the basic video stream STb (HD Base stream) is indicated as “video-basesubbitstream.mp4.”
In the representation associated with the first enhancement video stream, the descriptions of “framerate=“120”,” “codecs=“hev1.B.L126,xx”,” and “id=“tag1”” are present. “framerate=“120” & L126 with dependencyid tagged tag0” exhibits that the stream of 2K 120P is realized. ““B”” exhibits a value of 2 exhibiting “Main 10 Profile.” The information associated with the level and the profile agrees with “sublayer_level_idc[j−2]” and “sublayer_profile_idc[j−2]” of the elements of the SPS (or VPS) described above. Incidentally, “sublayer_profile_idc[j−2]”=“Main 10 Profile,” and “sublayer_level_idc[j−2]”=“level 4.2”=“126.”
In the sub-representation associated with the second enhancement video stream, the descriptions of “framerate=“60”,” “codecs=“hev1.C.L153,xx”,” “id=“tag2”,” and “dependencyid=“tag0”” are present. “framerate=“60” & L153 with dependencyid tagged tag0” exhibits that the stream of 4K 60P is realized on the basic stream by enhancement. ““C”” exhibits a value of 7 exhibiting “Scalable Main 10 Profile.” The information associated with the level and the profile agrees with “sublayer_level_idc[j−1]” and “sublayer_profile_idc[j−1]” of the elements of the SPS (or VPS) described above. Incidentally, “sublayer_profile_idc[j−1]”=“Scalable Main 10 Profile,” and “sublayer_level_idc[j−1]”=“level 5.1”=“153.”
In the sub-representation associated with the third enhancement video stream, the descriptions of “framerate=“120”,” “codecs=“hev1.D.L156,xx”,” “id=“tag3”,” and “dependencyid=“tag0, tag1, tag2”” are present. “framerate=“120” & L156 with dependencyid tagged tag0, tag1, tag2” exhibits that the stream of 2K 120P is realized on the basic stream by enhancement, and that the enhancement components are further added thereon to realize the stream of 4K 120P. ““D”” exhibits a value of 7 exhibiting “Scalable Main 10 Profile.” The information associated with the level and the profile agrees with “general_level_idc” and “general_profile_idc” of the elements of the SPS (or VPS) described above. Incidentally, “general_profile_idc”=“Scalable Main 10 Profile,” and “general_level_idc”=“level 5.2”=“156.”
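The profile field and level value can be read out of such “codecs” strings with a small parser. This is a hedged sketch over the simplified string form used in the example above (e.g. “hev1.2.L123”); it does not cover the full RFC 6381 HEVC codecs syntax, and the profile-number mapping covers only the two profiles discussed here:

```python
import re

# Profile values quoted in the description: 2 = Main 10, 7 = Scalable Main 10.
PROFILE_NAMES = {2: "Main 10 Profile", 7: "Scalable Main 10 Profile"}


def parse_codecs(codecs: str):
    """Extract (profile name, level_idc) from a simplified
    'hev1.<profile>.L<level_idc>' codecs string."""
    m = re.match(r"hev1\.(\d+)\.L(\d+)", codecs)
    if m is None:
        raise ValueError(f"unrecognized codecs string: {codecs}")
    profile, level_idc = int(m.group(1)), int(m.group(2))
    return PROFILE_NAMES.get(profile, str(profile)), level_idc
```

For example, the basic-stream string would yield (“Main 10 Profile”, 123), and the third-enhancement string (“Scalable Main 10 Profile”, 156).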
In addition, in the representation associated with the enhancement video stream STe (UHD EH stream), from the description of “<BaseURL>video-enhancedsubset.mp4</BaseURL>,” the location destination of the enhancement video stream STe (UHD EH stream) is indicated as “video-enhancedsubset.mp4.”
In such a way, the information exhibiting the correspondence of the scalability is inserted into the MPD file, and it is indicated that the spatial scalability and the temporal scalability are simultaneously realized.
[Example of Configuration of Service Transmission System]
The control portion 101 is configured to include a Central Processing Unit (CPU), and controls operations of the respective portions of the service transmission system 100 on the basis of a control program. The video encoder 102 receives as its input image data Va exhibiting the ultra-high definition (UHD) at the high frame rate (HFR), and outputs the basic video stream STb and the enhancement video streams STe1, STe2, STe3, or the basic video stream STb and the enhancement video stream STe.
The signal processing portion 102b processes the first image data Vb (UHD 60 Hz Base), and obtains the image data Vd (HD 60 Hz Base) becoming the image data BS, having the basic format, from which the image having the high definition at the basic frame rate is to be obtained, and the image data Ve (UHD 60 Hz Enhanced2) becoming the image data ES2, having the second enhancement format, from which the image having the ultra-high definition at the basic frame rate is to be obtained. The signal processing portion 102c processes the second image data Vc (UHD HFR Enhanced), and obtains the image data Vf (HD HFR Enhanced1) becoming the image data ES1, having the first enhancement format, from which the image having the high definition at the high frame rate is to be obtained, and the image data Vg (UHD HFR Enhanced3) becoming the image data ES3, having the third enhancement format, from which the image having the ultra-high definition at the high frame rate is to be obtained.
The coefficient multiplying portions 112a, 112b and the addition portion 112e are used to execute the mixing processing at a first ratio in units of the temporally continuous two pictures. In the coefficient multiplying portion 112a, multiplying is carried out by a coefficient p, and in the coefficient multiplying portion 112b, multiplying is carried out by a coefficient q. It should be noted that p=0 to 1, and q=1−p. In addition, the coefficient multiplying portions 112c, 112d and the addition portion 112f are used to execute the mixing processing at a second ratio in units of the temporally continuous two pictures. In the coefficient multiplying portion 112c, multiplying is carried out by a coefficient r, and in the coefficient multiplying portion 112d, multiplying is carried out by a coefficient s. It should be noted that r=0 to 1, and s=1−r.
After the image data Va (120 Hz UHD) exhibiting the ultra-high definition at the high frame rate is delayed in the delay circuit 111 by one frame, the resulting image data Va is inputted to each of the coefficient multiplying portions 112a, 112c constituting the arithmetic operation circuit 112. In addition, the image data Va is inputted to each of the coefficient multiplying portions 112b, 112d constituting the arithmetic operation circuit 112 as it is. Outputs from the coefficient multiplying portions 112a, 112b are inputted to the addition portion 112e to be added to each other. In addition, outputs from the coefficient multiplying portions 112c, 112d are inputted to the addition portion 112f to be added to each other.
Here, when the pieces of image data of the temporally continuous two pictures of the image data Va are denoted by A and B, at a timing at which the output from the delay circuit 111 becomes A, the mixed output of C (=p*A+q*B) is obtained as the output from the addition portion 112e, and the mixed output of D (=r*A+s*B) is obtained as the output from the addition portion 112f.
Outputs from the addition portions 112e, 112f of the arithmetic operation circuit 112 are inputted to the latch circuit 113. In the latch circuit 113, the outputs from the addition portions 112e, 112f of the arithmetic operation circuit 112 are latched by using a latch pulse of 60 Hz, thereby obtaining the first image data Vb (UHD 60 Hz Base), and the second image data Vc (UHD HFR Enhanced).
Here, the first image data Vb is obtained by executing the mixing processing at the first ratio in units of the temporally continuous two pictures in the image data Va. In addition, the second image data Vc is obtained by executing the mixing processing at the second ratio in units of the temporally continuous two pictures in the image data Va.
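The mixing processing above can be sketched directly from the two formulas. A minimal illustration assuming the frames are arrays of pixel values; the function name and the use of NumPy are illustrative, not part of the described circuit:

```python
import numpy as np


def mix_pictures(a, b, p, r):
    """Mix the temporally continuous two pictures A and B:
    C = p*A + q*B at the first ratio (q = 1 - p),
    D = r*A + s*B at the second ratio (s = 1 - r)."""
    q, s = 1.0 - p, 1.0 - r
    c = p * a + q * b   # output of addition portion 112e
    d = r * a + s * b   # output of addition portion 112f
    return c, d
```

Latching C at 60 Hz yields the first image data Vb, and latching D yields the second image data Vc, as described above.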
In addition, the image data Vd obtained in the down-scale circuit 121 is inputted to the up-scale circuit 122. The up-scale circuit 122 executes up-scale processing from the high definition to the ultra-high definition for the image data Vd, thereby obtaining the third image data. The third image data has the same definition as that of the first image data Vb. However, the third image data is obtained by executing the down-scale processing for the first image data Vb and further by executing the up-scale processing. Thus, the information lost in the down-scale processing is not reproduced.
The first image data Vb and the third image data obtained in the up-scale circuit 122 are inputted to the arithmetic operation circuit 123. The arithmetic operation circuit 123 obtains a difference between the two pieces of image data, thereby obtaining the image data Ve (UHD 60 Hz Enhanced2) becoming image data ES2 having the second enhancement format.
In addition, the image data Vf obtained in the down-scale circuit 131 is inputted to the up-scale circuit 132. The up-scale circuit 132 executes the up-scale processing from the high definition to the ultra-high definition for the image data Vf, thereby obtaining fourth image data. The fourth image data has the same definition as that of the second image data Vc. However, the fourth image data is obtained by executing the down-scale processing for the second image data Vc and further by executing the up-scale processing. Thus, the information lost in the down-scale processing is not reproduced.
The second image data Vc and the fourth image data obtained in the up-scale circuit 132 are inputted to the arithmetic operation circuit 133. The arithmetic operation circuit 133 obtains a difference between the two pieces of image data, thereby obtaining the image data Vg (UHD HFR Enhanced3) becoming image data ES3 having the third enhancement format.
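The formation of the spatial-enhancement residual can be sketched as follows. This is a hedged illustration: the 2× block-average down-scale and nearest-neighbour up-scale stand in for the actual scaling filters of the down-scale and up-scale circuits, which the description does not specify:

```python
import numpy as np


def downscale2(img):
    """2x2 block average: ultra-high definition -> high definition (illustrative)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))


def upscale2(img):
    """Nearest-neighbour 2x up-scale: high definition -> ultra-high definition."""
    return img.repeat(2, axis=0).repeat(2, axis=1)


def spatial_residual(vb):
    """From the first image data Vb, derive the base-layer image data Vd and the
    second-enhancement residual Ve = Vb - upscale(Vd), as in circuits 121-123."""
    vd = downscale2(vb)
    ve = vb - upscale2(vd)
    return vd, ve
```

On the receiving side, the reconstruction simply reverses the difference: up-scaling the base layer and adding the residual recovers the ultra-high-definition image data.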
Referring back to
As a result, the identification information exhibiting that the stream is the temporal scalable stream, the identification information exhibiting that the image data having the basic format is the image data obtained by executing the mixing processing, and the information associated with the mixing ratios (first, second ratios) are inserted into the respective access units of the image data ES1 having the first enhancement format. In addition, the identification information exhibiting that the stream is the spatial scalable stream, and the information exhibiting the ratio of the spatial scalable stream are inserted into the respective access units of the image data ES2 having the second enhancement format.
In addition, the identification information exhibiting that the stream is the temporal scalable stream, the identification information exhibiting that the image data having the basic format is the image data obtained by executing the mixing processing, the information associated with the mixing ratios (first, second ratios), the identification information exhibiting that the stream is the spatial scalable stream, and the information exhibiting the ratio of the spatial scalable stream are inserted into the respective access units of the image data ES3 having the third enhancement format.
The container encoder 103 produces the container containing the basic video stream STb and the enhancement video streams STe1, STe2, STe3 which are obtained in the video encoder 102, or the basic video stream STb and the enhancement video stream STe which are obtained in the video encoder 102, the MP4 (refer to
In this case, the container encoder 103 provides the box of “udta” or “lays” under the “moof” box in the MP4 stream corresponding to the enhancement video streams STe1, STe2, STe3, or the enhancement video stream STe, and inserts the video scalability information descriptor described above (refer to
As a result, the identification information exhibiting that the stream is the temporal scalable stream, the identification information exhibiting that the image data having the basic format is the image data obtained by executing the mixing processing, and the information associated with the mixing ratios (first, second ratios) are inserted into the “moof” box corresponding to the “mdat” box having the access unit of the image data ES1 having the first enhancement format. In addition, the identification information exhibiting that the stream is the spatial scalable stream, and the information exhibiting the ratio of the spatial scalable stream are inserted into the “moof” box corresponding to the “mdat” box having the access units of the image data ES2 having the second enhancement format.
In addition, the identification information exhibiting that the stream is the temporal scalable stream, the identification information exhibiting that the image data having the basic format is the image data obtained by executing the mixing processing, the information associated with the mixing ratios (first, second ratios), the identification information exhibiting that the stream is the spatial scalable stream, and the information exhibiting the ratio of the spatial scalable stream are inserted into the “moof” box corresponding to the “mdat” box having the access unit of the image data ES3 having the third enhancement format.
The transmission portion 104 transmits the delivery stream STM of the MP4 obtained in the container encoder 103 to the service receiver 200 with the delivery stream STM of the MP4 being placed on the broadcasting wave or the packet of the Internet.
An operation of the service transmission system 100 depicted in
Here, the access unit of the image data BS, having the basic format, from which the image having the high definition at the basic frame rate is to be obtained is contained in the basic video stream STb. In addition, the access unit of the image data ES1, having the first enhancement format, from which the image having the high definition at the high frame rate is to be obtained is contained in the enhancement video stream STe1.
In addition, the access unit of the image data ES2, having the second enhancement format, from which the image having the ultra-high definition at the basic frame rate is to be obtained is contained in the enhancement video stream STe2. The access unit of the image data ES3, having the third enhancement format, from which the image having the ultra-high definition at the high frame rate is to be obtained is contained in the enhancement video stream STe3. The access units of the image data ES1, ES2, ES3 having the first, second, third enhancement formats, respectively, are contained in the enhancement video streams STe.
In the video encoder 102, the video scalability SEI (refer to
In addition, the identification information exhibiting that the stream is the spatial scalable stream, and the information exhibiting the ratio of the spatial scalable stream are inserted into the respective access units of the image data ES2. In addition, the identification information exhibiting that the stream is the temporal scalable stream, the identification information exhibiting that the image data having the basic format is the image data obtained by executing the mixing processing, the information associated with the mixing ratios (first, second ratios), the identification information exhibiting that the stream is the spatial scalable stream, and the information exhibiting the ratio of the spatial scalable stream are inserted into the respective access units of the image data ES3.
The basic video stream STb and the enhancement video streams STe1, STe2, STe3, or the basic video stream STb and the enhancement video stream STe which are obtained in the video encoder 102 are supplied to the container encoder 103. The container encoder 103 produces the MP4 (refer to
In this case, in the container encoder 103, in the MP4 stream corresponding to the enhancement video streams STe1, STe2, STe3, or the MP4 stream corresponding to the enhancement video stream STe, the box of “udta” or “lays” is provided under the “moof” box, and the video scalability information descriptor (refer to
As a result, the identification information exhibiting that the stream is the temporal scalable stream, the identification information exhibiting that the image data having the basic format is the image data obtained by executing the mixing processing, and the information associated with the mixing ratios (first, second ratios) are inserted into the “moof” box corresponding to the “mdat” box having the access unit of the image data ES1. In addition, the identification information exhibiting that the stream is the spatial scalable stream, and the information exhibiting the ratio of the spatial scalable stream are inserted into the “moof” box corresponding to the “mdat” box having the access unit of the image data ES2.
In addition, the identification information exhibiting that the stream is the temporal scalable stream, the identification information exhibiting that the image data having the basic format is the image data obtained by executing the mixing processing, the information associated with the mixing ratios (first, second ratios), the identification information exhibiting that the stream is the spatial scalable stream, and the information exhibiting the ratio of the spatial scalable stream are inserted into the “moof” box corresponding to the “mdat” box having the access unit of the image data ES3.
The delivery stream STM produced in the container encoder 103 is transmitted to the transmission portion 104. The transmission portion 104 transmits the delivery stream STM of the MP4 to the service receiver 200 with the delivery stream STM of the MP4 being placed on the broadcasting wave or the packet of the Internet.
[Example of Configuration of Service Receiver]
The control portion 201 is configured to include a Central Processing Unit (CPU), and controls operations of the respective portions of the service receiver 200A on the basis of a control program. The reception portion 202 receives the delivery stream STM of the MP4 sent thereto with the delivery stream STM of the MP4 being placed on the broadcasting wave or the packet of the Internet from the service transmission system 100.
The container decoder 203 extracts the basic video stream STb and the enhancement video streams STe1, STe2, STe3, or the basic video stream STb and the enhancement video stream STe from the MP4. As described above, the access unit of the image data BS, having the basic format, from which the image having the high definition at the basic frame rate is to be obtained is contained in the basic video stream STb. In addition, the access unit of the image data ES1, having the first enhancement format, from which the image having the high definition at the high frame rate is to be obtained is contained in the enhancement video stream STe1.
In addition, the access unit of the image data ES2, having the second enhancement format, from which the image having the ultra-high definition at the basic frame rate is to be obtained is contained in the enhancement video stream STe2. The access unit of the image data ES3, having the third enhancement format, from which the image having the ultra-high definition at the high frame rate is to be obtained is contained in the enhancement video stream STe3. In addition, the access units of the image data ES1, ES2, ES3 having the first, second, third enhancement formats, respectively, are contained in the enhancement video stream STe.
In addition, the container decoder 203 extracts the meta information from the MP4, and sends the meta information to the control portion 201. The video scalability information descriptor (refer to
The control portion 201 recognizes that the enhancement by the image data ES1 having the first enhancement format is temporal scalable, the image data BS having the basic format is the image data obtained by executing the mixing processing, the mixing ratios (first, second ratios), and so forth from the video scalability information descriptor. In addition, the control portion 201 recognizes that the enhancement by the image data ES2 having the second enhancement format is spatial scalable, the ratio of the spatial scalable stream, and so forth from the video scalability information descriptor.
In addition, the control portion 201 recognizes that the enhancement by the image data ES3 having the third enhancement format is temporal scalable and spatial scalable, the image data BS having the basic format is the image data obtained by executing the mixing processing, the mixing ratios (first, second ratios), the ratio of the spatial scalable stream, and so forth from the video scalability information descriptor.
The video decoder 204 processes the basic video stream STb and the enhancement video streams STe1, STe2, STe3 or the basic video stream STb and the enhancement video stream STe which are extracted in the container decoder 203, thereby obtaining image data Va′ having the ultra-high definition (UHD) at the high frame rate (HFR). Here, the dash “′” of the image data Va′ means that, because of passing through the processing of encoding and decoding, the image data Va′ may not have perfectly the same value as that of the image data Va which is inputted to the video encoder 102 described above (refer to
Here, the video decoder 204 extracts a parameter set or the SEI which is inserted into the access units constituting the video streams and sends the parameter set or the SEI to the control portion 201. The video scalability SEI (refer to
The control portion 201 recognizes that the enhancement by the image data ES1 having the first enhancement format is temporal scalable, the image data BS having the basic format is the image data obtained by executing the mixing processing, the mixing ratios (first, second ratios), and so forth from the video scalability SEI. In addition, the control portion 201 recognizes that the enhancement by the image data ES2 having the second enhancement format is spatial scalable, the ratio of the spatial scalable stream, and so forth from the video scalability SEI.
In addition, the control portion 201 recognizes that the enhancement by the image data ES3 having the third enhancement format is temporal scalable and spatial scalable, the image data BS having the basic format is the image data obtained by executing the mixing processing, the mixing ratios (first, second ratios), the ratio of the spatial scalable stream, and so forth from the video scalability SEI.
Here, the processing of the inverse process 1 is inverse processing to the processing of the process 1 which is executed in the signal processing portion 102a of the video encoder 102 described above. Likewise, the processing of the inverse process 2 is inverse processing to the processing of the process 2 which is executed in the signal processing portion 102b of the video encoder 102 described above. In addition, likewise, the processing of the inverse process 3 is inverse processing to the processing of the process 3 which is executed in the signal processing portion 102c of the video encoder 102 described above.
The signal processing portion 204c processes the image data Vf′ (HD HFR Enhanced1) as the image data ES1′, having the first enhancement format, from which the image having the high definition at the high frame rate is to be obtained, and the image data Vg′ (UHD HFR Enhanced3) as the image data ES3′, having the third enhancement format, from which the image having the ultra-high definition at the high frame rate is to be obtained, thereby obtaining the second image data Vc′ (UHD HFR Enhanced) as the image data having the enhancement frame at the high frame rate. The signal processing portion 204a processes the first image data Vb′ (UHD 60 Hz Base), and the second image data Vc′ (UHD HFR Enhanced), thereby obtaining the image data Va′ (120 Hz UHD) exhibiting the ultra-high definition at the high frame rate.
The image data Ve′ (UHD 60 Hz Enhanced2) as the image data ES2′ having the second enhancement format, and the third image data obtained in the up-scale circuit 211 are inputted to the arithmetic operation circuit 212. The arithmetic operation circuit 212 adds the two pieces of image data to each other to obtain the first image data Vb′ (UHD 60 Hz Base) as the image data at the basic frame rate.
The image data Vg′ (UHD HFR Enhanced3) as the image data ES3′ having the third enhancement format, and the fourth image data obtained in the up-scale circuit 221 are inputted to the arithmetic operation circuit 222. The arithmetic operation circuit 222 adds the two pieces of image data to each other to obtain the second image data Vc′ (UHD HFR Enhanced) as the image data having the enhancement frame at the high frame rate.
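The spatial-scalable reconstruction performed by the up-scale circuits 211, 221 and the arithmetic operation circuits 212, 222 can be sketched as follows; this is a minimal illustration in which nearest-neighbour up-scaling stands in for the actual up-scale processing, and the function name is hypothetical:

```python
import numpy as np

def spatial_reconstruct(base, residual, scale=2):
    """Reconstruct higher-definition image data by up-scaling the
    base-layer picture and adding the enhancement-layer residual."""
    # Nearest-neighbour up-scaling stands in for the up-scale circuit;
    # an actual receiver would use a specified interpolation filter.
    up = np.repeat(np.repeat(base, scale, axis=0), scale, axis=1)
    # The arithmetic operation circuit adds the residual (difference)
    # carried in the enhancement format to the up-scaled picture.
    return up + residual
```

The same operation is applied twice: once to obtain the first image data Vb′ from the basic format and ES2′, and once to obtain the second image data Vc′ from ES1′ and ES3′.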
The coefficient multiplying portions 241a, 241b and the addition portion 241e are used in order to obtain the image data of the first picture in units of the temporally continuous two pictures described above from the first image data Vb′ and the second image data Vc′. The coefficient multiplying portion 241a multiplies the first image data Vb′ by a coefficient u, and the coefficient multiplying portion 241b multiplies the second image data Vc′ by a coefficient v. In addition, the coefficient multiplying portions 241c, 241d and the addition portion 241f are used in order to obtain the image data of the second picture in units of the temporally continuous two pictures described above from the first image data Vb′ and the second image data Vc′. The coefficient multiplying portion 241c multiplies the first image data Vb′ by a coefficient w, and the coefficient multiplying portion 241d multiplies the second image data Vc′ by a coefficient z.
The first image data Vb′ (UHD 60 Hz Base) is inputted to the coefficient multiplying portions 241a, 241c constituting the arithmetic operation circuit 241. In addition, the second image data Vc′ (UHD HFR Enhanced) is inputted to the coefficient multiplying portions 241b, 241d constituting the arithmetic operation circuit 241. Outputs from the coefficient multiplying portions 241a, 241b are inputted to the addition portion 241e to be added to each other. In addition, outputs from the coefficient multiplying portions 241c, 241d are inputted to the addition portion 241f to be added to each other.
In this case, the image data A of the first picture in units of the temporally continuous two pictures is obtained as the output from the addition portion 241e. The image data B of the second picture in units of the temporally continuous two pictures is obtained as the output from the addition portion 241f.
Outputs from the addition portions 241e, 241f of the arithmetic operation circuit 241 are respectively inputted to fixed terminals on a side a and a side b of the switch circuit 242. The switch circuit 242 alternately switches between the side a and the side b at a cycle of 120 Hz. The image data Va′ (120 Hz UHD), exhibiting the ultra-high definition at the high frame rate, in which the two pieces of image data A, B are synthesized, is obtained from the switch circuit 242.
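The inverse mixing performed by the arithmetic operation circuit 241 amounts to inverting the two-by-two mixing matrix formed by the first and second ratios, so that the coefficients u, v, w, z recover the original two pictures. A minimal sketch, with illustrative ratio values and hypothetical function names:

```python
import numpy as np

def inverse_mix(vb, vc, first_ratio, second_ratio):
    """Recover the temporally continuous two pictures A and B from the
    first image data Vb' and the second image data Vc'.

    first_ratio  = (a, b): Vb' = a*A + b*B  (mixing at the first ratio)
    second_ratio = (c, d): Vc' = c*A + d*B  (mixing at the second ratio)

    The coefficients u, v, w, z applied by the coefficient multiplying
    portions 241a to 241d are the entries of the inverted mixing matrix.
    """
    a, b = first_ratio
    c, d = second_ratio
    (u, v), (w, z) = np.linalg.inv(np.array([[a, b], [c, d]], dtype=float))
    pic_a = u * vb + v * vc   # output of the addition portion 241e
    pic_b = w * vb + z * vc   # output of the addition portion 241f
    return pic_a, pic_b

def interleave(pic_a, pic_b):
    """Model the switch circuit 242: alternate A and B at a 120 Hz cycle."""
    return [pic_a, pic_b]
```

For example, with a first ratio of (0.5, 0.5) and a second ratio of (0, 1), the inverse yields u = 2, v = -1, w = 0, z = 1.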
Here, the arithmetic operation circuit 241, as described above, executes the suitable inverse mixing processing by using the information exhibiting the mixing ratios (first, second ratios) which is inserted into the video scalability SEI (refer to
An operation of the service receiver 200A depicted in
The access unit of the image data BS, having the basic format, from which the image having the high definition at the basic frame rate is to be obtained is contained in the basic video stream STb. In addition, the access unit of the image data ES1, having the first enhancement format, from which the image having the high definition at the high frame rate is to be obtained is contained in the enhancement video stream STe1. In addition, the access unit of the image data ES2, having the second enhancement format, from which the image having the ultra-high definition at the basic frame rate is to be obtained is contained in the enhancement video stream STe2. In addition, the access unit of the image data ES3, having the third enhancement format, from which the image having the ultra-high definition at the high frame rate is to be obtained is contained in the enhancement video stream STe3. In addition, the access units of the image data ES1, ES2, ES3 having the first, second, third enhancement formats, respectively, are contained in the enhancement video stream STe.
In addition, the container decoder 203 extracts the meta information from the MP4, and sends the meta information to the control portion 201. The video scalability information descriptor (refer to
The control portion 201 recognizes that the enhancement by the image data ES1 having the first enhancement format is temporal scalable, the image data BS having the basic format is the image data obtained by executing the mixing processing, the mixing ratios (first, second ratios), and so forth from the video scalability information descriptor. In addition, the control portion 201 also recognizes that the enhancement by the image data ES2 having the second enhancement format is spatial scalable, the ratio of the spatial scalable stream, and so forth from the video scalability information descriptor.
In addition, the control portion 201 recognizes that the enhancement by the image data ES3 having the third enhancement format is temporal scalable and spatial scalable, the image data BS having the basic format is the image data obtained by executing the mixing processing, the mixing ratios (first, second ratios), the ratio of the spatial scalable stream, and so forth from the video scalability information descriptor.
The basic video stream STb and the enhancement video streams STe1, STe2, STe3, or the basic video stream STb and the enhancement video stream STe which are extracted in the container decoder 203 are supplied to the video decoder 204. The video decoder 204 processes the basic video stream STb and the enhancement video streams STe1, STe2, STe3, or the basic video stream STb and the enhancement video stream STe, thereby obtaining the image data Va′ exhibiting the ultra-high definition (UHD) at the high frame rate (HFR).
Here, the video decoder 204 extracts the parameter set or the SEI which is inserted into the access unit constituting the video streams, and sends the parameter set or the SEI to the control portion 201. The video scalability SEI (refer to
The control portion 201 recognizes that the enhancement by the image data ES1 having the first enhancement format is temporal scalable, the image data BS having the basic format is the image data obtained by executing the mixing processing, the mixing ratios (first, second ratios), and so forth from the video scalability SEI. In addition, the control portion 201 also recognizes that the enhancement by the image data ES2 having the second enhancement format is spatial scalable, the ratio of the spatial scalable stream, and so forth from the video scalability SEI.
In addition, the control portion 201 also recognizes that the enhancement by the image data ES3 having the third enhancement format is temporal scalable and spatial scalable, the image data BS having the basic format is the image data obtained by executing the mixing processing, the mixing ratios (first, second ratios), the ratio of the spatial scalable stream, and so forth from the video scalability SEI.
The reception portion 201 receives the delivery stream STM of the MP4 sent thereto with the delivery stream STM being placed on the broadcasting wave or the packet of the Internet from the service transmission system 100. The delivery stream STM is supplied to the container decoder 203B. The container decoder 203B extracts the basic video stream STb and the enhancement video stream STe1, or the basic video stream STb and the enhancement video stream STe from the MP4.
The basic video stream STb and the enhancement video stream STe1, or the basic video stream STb and the enhancement video stream STe which are extracted in the container decoder 203B are supplied to the video decoder 204B. The video decoder 204B processes the basic video stream STb and the enhancement video stream STe1, or the basic video stream STb and the enhancement video stream STe to obtain the image data Vh′ exhibiting the high definition at the high frame rate.
In this case, in the video decoder 204B, the image data Vd′ (HD 60 Hz Base) as the image data BS′ having the basic format, and the image data Vf′ (HD HFR Enhanced1) as the image data ES1′, having the first enhancement format, from which the image having the high definition at the high frame rate is to be obtained are inputted to a signal processing portion similar to the signal processing portion 204a (refer to
The reception portion 201 receives the delivery stream STM of the MP4 sent thereto with the delivery stream STM being placed on the broadcasting wave or the packet of the Internet from the service transmission system 100. The delivery stream STM is supplied to the container decoder 203C. The container decoder 203C extracts the basic video stream STb and the enhancement video stream STe2, or the basic video stream STb and the enhancement video stream STe from the MP4.
The basic video stream STb and the enhancement video stream STe2, or the basic video stream STb and the enhancement video stream STe which are extracted in the container decoder 203C are supplied to the video decoder 204C. The video decoder 204C processes the basic video stream STb and the enhancement video stream STe2, or the basic video stream STb and the enhancement video stream STe to obtain the image data Vb′ exhibiting the ultra-high definition at the basic frame rate.
In this case, in the video decoder 204C, the image data Vd′ (HD 60 Hz Base) as the image data BS′ having the basic format, and the image data Ve′ (UHD 60 Hz Enhanced2) as the image data ES2′, having the second enhancement format, from which the image having the ultra-high definition at the basic frame rate is to be obtained are inputted to a signal processing portion similar to the signal processing portion 204b (refer to
The reception portion 201 receives the delivery stream STM of the MP4 sent thereto with the delivery stream STM being placed on the broadcasting wave or the packet of the Internet from the service transmission system 100. The delivery stream STM is supplied to the container decoder 203D. The container decoder 203D extracts only the basic video stream STb from the MP4.
The basic video stream STb extracted in the container decoder 203D is supplied to the video decoder 204D. The video decoder 204D processes only the basic video stream STb, thereby obtaining the image data Vd′ exhibiting the high definition at the basic frame rate. In this case, such respective signal processing portions (refer to
As described above, in the transmission/reception system 10 depicted in
For example, in case of the receiver which has the decoding ability to be able to process the image data exhibiting the high definition at the basic frame rate, only the basic video stream is processed, so that the display of the image having the high definition at the basic frame rate can be carried out. In addition, for example, in case of the receiver which has the decoding ability to be able to process the image data exhibiting the high definition at the high frame rate, both the basic video stream and the enhancement video stream are processed, so that the display of the image having the high definition at the high frame rate can be carried out.
In addition, for example, in case of the receiver which has the decoding ability to be able to process the image data exhibiting the ultra-high definition at the basic frame rate, both the basic video stream and the enhancement video stream are processed, so that the display of the image having the ultra-high definition at the basic frame rate can be carried out. In addition, for example, in case of the receiver which has the decoding ability to be able to process the image data exhibiting the ultra-high definition at the high frame rate, both the basic video stream and the enhancement video stream are processed, so that the display of the image having the ultra-high definition at the high frame rate can be carried out.
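The four receiver cases described above amount to a capability-based selection of streams. A minimal sketch, assuming a hypothetical capability tuple of (definition, frame rate) and the four-stream configuration STb, STe1, STe2, STe3:

```python
def select_streams(capability):
    """Choose which video streams a receiver processes, according to its
    decoding ability (illustrative mapping of the four cases above)."""
    table = {
        ("HD", "basic"):  ["STb"],                           # HD at basic frame rate
        ("HD", "high"):   ["STb", "STe1"],                   # HD at high frame rate
        ("UHD", "basic"): ["STb", "STe2"],                   # UHD at basic frame rate
        ("UHD", "high"):  ["STb", "STe1", "STe2", "STe3"],   # UHD at high frame rate
    }
    return table[capability]
```

A receiver with only basic-frame-rate HD decoding ability thus never touches the enhancement video streams, which is what allows the single set of streams to serve all four receiver classes.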
In addition, in the transmission/reception system 10 depicted in
It should be noted that the embodiment described above gives the example in which the container is the MP4 (ISOBMFF). However, the present technique is by no means limited to the case where the container is the MP4, and can be similarly applied to the containers having other formats such as MPEG-2 TS and MMT.
For example, in case of MPEG-2 TS, in the container encoder 103 of the service transmission system 100 depicted in
In this case, in the container encoder 103, the video scalability information descriptor (refer to
The access unit (encoded image data) of the basic video stream STb is contained in a payload of the PES packet "video PES1." The access unit (encoded image data) of the enhancement video stream STe1 is contained in a payload of the PES packet "video PES2." The access unit (encoded image data) of the enhancement video stream STe2 is contained in a payload of the PES packet "video PES3." The access unit (encoded image data) of the enhancement video stream STe3 is contained in a payload of the PES packet "video PES4." The video scalability SEI (refer to
In addition, a Program Map Table (PMT) is contained as Program Specific Information (PSI) in the transport stream. PSI is information describing which programs the respective elementary streams contained in the transport stream belong to.
A video elementary stream loop (video ES loop) corresponding to the respective video streams is present in PMT. Information associated with a stream type, a packet identifier (PID) and the like is arranged in the video elementary stream loop “video ES loop” so as to correspond to the video stream, and a descriptor describing information associated with the video stream is also arranged in the video elementary stream loop “video ES loop.”
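The arrangement of the video elementary stream loops can be modeled as a simple data structure; this is only an illustration, the PID values below are hypothetical, and the stream-type value 0x25 merely stands in for a value in the 0x2x range assigned to the enhancement video streams:

```python
from dataclasses import dataclass, field

@dataclass
class VideoESLoop:
    """One video elementary stream loop entry in the PMT (illustrative)."""
    stream_type: int          # 0x24 for the basic video stream, a 0x2x value for enhancement
    pid: int                  # packet identifier of the corresponding PES packets
    descriptors: list = field(default_factory=list)  # e.g. video scalability information descriptor

# Illustrative PMT layout for the four-stream case; PIDs are hypothetical.
pmt = [
    VideoESLoop(0x24, 0x0100),                              # video PES1 (STb)
    VideoESLoop(0x25, 0x0101, ["video_scalability_info"]),  # video PES2 (STe1)
    VideoESLoop(0x25, 0x0102, ["video_scalability_info"]),  # video PES3 (STe2)
    VideoESLoop(0x25, 0x0103, ["video_scalability_info"]),  # video PES4 (STe3)
]
```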
Information associated with a stream type, a packet identifier (PID) and the like is arranged in "video ES1 loop" so as to correspond to the basic video stream (video PES1), and a descriptor describing information associated with the video stream is also arranged in "video ES1 loop." The stream type is assigned "0x24" indicating the basic video stream.
In addition, information associated with a stream type, a packet identifier (PID) and the like is arranged in "video ES2 loop," "video ES3 loop," and "video ES4 loop" so as to correspond to the enhancement video stream (video PES2), the enhancement video stream (video PES3), and the enhancement video stream (video PES4), respectively, and a descriptor describing information associated with these video streams is also arranged therein. The stream type is assigned "0x2x" indicating the enhancement video stream. In addition, a video scalability information descriptor (refer to
The access unit (encoded image data) of the basic video stream STb is contained in the payload of the PES packet “video PES1.” The access unit (encoded image data) of the enhancement video stream STe is contained in the payload of the PES packet “video PES2.” The video scalability SEI (refer to
In addition, the video elementary stream loops (video ES loops) corresponding to the basic video stream "video PES1," and the enhancement video stream "video PES2" are present under the control of the PMT. Information associated with a stream type, a packet identifier (PID) and the like is arranged in the video elementary stream loop "video ES loop" so as to correspond to the video stream, and a descriptor describing information associated with the video stream is also arranged in the video elementary stream loop "video ES loop."
Information associated with a stream type, a packet identifier (PID) and the like is arranged in "video ES1 loop" so as to correspond to the basic video stream (video PES1), and a descriptor describing information associated with the video stream is also arranged in "video ES1 loop." The stream type is assigned "0x24" indicating the basic video stream.
In addition, information associated with a stream type, a packet identifier (PID) and the like is arranged in "video ES2 loop" so as to correspond to the enhancement video stream (video PES2), and a descriptor describing information associated with the video stream is also arranged therein. The stream type is assigned "0x2x" indicating the enhancement video stream. In addition, a video scalability information descriptor (refer to
In addition, for example, in case of MMT, the container encoder 103 of the service transmission system 100 depicted in
In this case, in the container encoder 103, the video scalability information descriptor (refer to
The access unit (encoded image data) of the basic video stream STb is contained in the payload of the MPU packet “video MPU1.” The access unit (encoded image data) of the enhancement video stream STe1 is contained in the payload of the MPU packet “video MPU2.” The access unit (encoded image data) of the enhancement video stream STe2 is contained in the payload of the MPU packet “video MPU3.” The access unit (encoded image data) of the enhancement video stream STe3 is contained in the payload of the MPU packet “video MPU4.” The video scalability SEI (refer to
In addition, in the case where the packet type is a message, various message packets are arranged in the MMT stream. One of the various message packets includes a Packet Access (PA) message packet. A table such as the MPT is contained in the PA message packet. A video asset loop corresponding to the respective assets (video stream) is present in the MPT. Pieces of information associated with an asset type (Asset_type), a packet ID (Packet_id) and the like are arranged in the video asset loop so as to correspond to the assets (video stream), and a descriptor describing the information associated with the video stream concerned is also arranged in the video asset loop.
The pieces of information associated with the asset type, the asset ID, and the like are arranged in "video asset1 loop" so as to correspond to the basic video stream (video MPU1), and the descriptor describing the information associated with the video stream concerned is also arranged in "video asset1 loop." This asset type is assigned "0x24" indicating the basic video stream.
In addition, the pieces of information associated with the asset type, the asset ID and the like are arranged in "video asset2 loop," "video asset3 loop," and "video asset4 loop" so as to correspond to the enhancement video stream (video MPU2), the enhancement video stream (video MPU3), and the enhancement video stream (video MPU4), respectively. In addition thereto, the descriptor describing the information associated with the video streams is also arranged in "video asset2 loop," "video asset3 loop," and "video asset4 loop." This asset type is assigned "0x2x" indicating the enhancement video stream. In addition, a video scalability information descriptor (refer to
The access unit (encoded image data) of the basic video stream STb is contained in the payload of the MPU packet “video MPU1.” The access unit (encoded image data) of the enhancement video stream STe is contained in the payload of the MPU packet “video MPU2.” The video scalability SEI (refer to
In addition, the video asset loops corresponding to the basic video stream "video MPU1," and the enhancement video stream "video MPU2" are present under the control of the MPT. The pieces of information associated with the asset type, the asset ID and the like are arranged in the video asset loop so as to correspond to the video stream, and the descriptor describing the information associated with the video stream concerned is also arranged in the video asset loop.
The pieces of information associated with the asset type, the asset ID and the like are arranged in "video asset1 loop" so as to correspond to the basic video stream (video MPU1). Also, the descriptor describing the information associated with the video stream is arranged in "video asset1 loop." This asset type is assigned "0x24" indicating the basic video stream.
In addition, the pieces of information associated with the asset type, the asset ID and the like are arranged in "video asset2 loop" so as to correspond to the enhancement video stream (video MPU2), and the descriptor describing the information associated with the video stream concerned is also arranged in "video asset2 loop." This asset type is assigned "0x2x" indicating the enhancement video stream. In addition, a video scalability information descriptor (refer to
In addition, the embodiment described above indicates the example in which the number of enhancement video streams is three or one. However, there is considered an example in which the number of enhancement video streams is two. In this case, for example, the access unit of the image data ES1, having the first enhancement format, from which the image having the high definition at the high frame rate is to be obtained is contained in the enhancement video stream STe1. Then, the access unit of the image data ES2, having the second enhancement format, from which the image having the ultra-high definition at the basic frame rate is to be obtained, and the access unit of the image data ES3, having the third enhancement format, from which the image having the ultra-high definition at the high frame rate is to be obtained are contained in the enhancement video stream STe2.
In addition, the present technique can also adopt the following constitutions.
(1) A transmission apparatus, including:
an image processing portion for obtaining image data, having a basic format, from which an image having high definition at a basic frame rate is to be obtained, image data, having a first enhancement format, from which an image having high definition at a high frame rate is to be obtained, image data, having a second enhancement format, from which an image having ultra-high definition at a basic frame rate is to be obtained, and image data, having a third enhancement format, from which an image having ultra-high definition at a high frame rate is to be obtained by processing image data having ultra-high definition at a high frame rate;
an image encoding portion for producing a basic video stream containing encoded image data of the image data having the basic format, and a predetermined number of enhancement video streams containing encoded image data of the image data having the first to third enhancement formats; and
a transmission portion for transmitting a container having a predetermined format and containing the basic video stream and the predetermined number of enhancement video streams,
in which the image processing portion executes mixing processing at a first ratio in units of temporally continuous two pictures in the image data having the ultra-high definition at the high frame rate to obtain first image data as image data at a basic frame rate, and executes mixing processing at a second ratio in units of the temporally continuous two pictures to obtain second image data as image data having an enhancement frame at a high frame rate,
executes down-scale processing for the first image data to obtain image data having the basic format, and obtains a difference between third image data obtained by executing up-scale processing for the image data having the basic format, and the first image data to obtain image data having the second enhancement format, and
executes down-scale processing for the second image data to obtain image data having the first enhancement format, and obtains a difference between fourth image data obtained by executing up-scale processing for image data having the first enhancement format, and the second image data to obtain image data having the third enhancement format.
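As an illustrative sketch outside the claim language, the image processing defined in (1) above, namely the mixing at the first and second ratios, the down-scale processing, and the formation of the difference (residual) data, can be modeled as follows; the concrete ratio values and the nearest-neighbour scaling are hypothetical stand-ins for the processing actually specified:

```python
import numpy as np

def decompose(uhd_hfr, first_ratio=(0.5, 0.5), second_ratio=(0.25, 0.75)):
    """Split UHD high-frame-rate image data of shape (frames, H, W) into
    the basic format and the first to third enhancement formats."""
    # Units of temporally continuous two pictures: (A, B) pairs.
    a_pics, b_pics = uhd_hfr[0::2], uhd_hfr[1::2]
    a1, b1 = first_ratio
    a2, b2 = second_ratio
    first = a1 * a_pics + b1 * b_pics    # first image data (basic frame rate)
    second = a2 * a_pics + b2 * b_pics   # second image data (enhancement frames)

    down = lambda x: x[:, ::2, ::2]                       # down-scale processing
    up = lambda x: x.repeat(2, axis=1).repeat(2, axis=2)  # up-scale processing

    bs = down(first)        # basic format (high definition, basic frame rate)
    es2 = first - up(bs)    # second enhancement format (spatial residual)
    es1 = down(second)      # first enhancement format (high definition, high frame rate)
    es3 = second - up(es1)  # third enhancement format (spatial residual)
    return bs, es1, es2, es3
```

By construction, up-scaling the basic format and adding the second enhancement format recovers the first image data exactly, which is what the receiver-side reconstruction relies on.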
(2) The transmission apparatus according to (1) described above, in which the image encoding portion produces the basic video stream containing encoded image data of the image data having the basic format, three enhancement video streams containing each piece of encoded image data of the image data having the first to third enhancement formats or one enhancement video stream containing the whole of encoded image data of the image data having the first to third enhancement format.
(3) The transmission apparatus according to (1) or (2) described above, further including:
an information inserting portion for inserting identification information exhibiting temporal scalable into the encoded image data of the image data having the first enhancement format, inserting identification information exhibiting spatial scalable into the encoded image data of the image data having the second enhancement format, and inserting identification information exhibiting temporal scalable and spatial scalable into the encoded image data of the image data having the third enhancement format.
(4) The transmission apparatus according to (3) described above, in which the information inserting portion further inserts information exhibiting a ratio of spatial scalable into the encoded image data of the image data having the second and third enhancement formats.
(5) The transmission apparatus according to (3) or (4) described above, in which the information inserting portion further inserts identification information exhibiting that the image data having the basic format is image data obtained by executing the mixing processing into the encoded image data of the image data having the first and third enhancement formats.
(6) The transmission apparatus according to any one of (3) to (5) described above, in which the information inserting portion further inserts information associated with the first ratio and information associated with the second ratio into the encoded image data of the image data having the first and third enhancement formats.
(7) The transmission apparatus according to any one of (1) to (6) described above, further including:
an information inserting portion for inserting identification information exhibiting temporal scalable so as to correspond to the encoded image data of the image data having the first enhancement format into a layer of the container, inserting identification information exhibiting spatial scalable so as to correspond to the encoded image data of the image data having the second enhancement format into the layer of the container, and inserting identification information exhibiting temporal scalable and spatial scalable so as to correspond to the encoded image data of the image data having the third enhancement format into the layer of the container.
(8) The transmission apparatus according to (7), in which the information inserting portion further inserts information exhibiting a ratio of spatial scalable into the layer of the container so as to correspond to each piece of the encoded image data of the image data having the second and third enhancement formats.
(9) The transmission apparatus according to (7) or (8) described above, in which the information inserting portion further inserts identification information exhibiting that the image data having the basic format is image data obtained by executing the mixing processing into the layer of the container so as to correspond to each pieces of the encoded image data of the image data having the first and third enhancement format.
(10) The transmission apparatus according to any one of (7) to (9) described above, in which the information inserting portion further inserts into the layer of the container information associated with the first ratio and information associated with the second ratio so as to correspond to the encoded image data of the image data having the first and third enhancement formats, respectively.
(11) The transmission apparatus according to any one of (1) to (10) described above, further including:
a transmission portion for transmitting a metafile having meta information used to cause a reception apparatus to acquire the basic video stream and the predetermined number of enhancement video streams,
in which information exhibiting a correspondence of scalability is inserted into the metafile.
(12) A transmission method, including:
an image processing step of obtaining image data, having a basic format, from which an image having high definition at a basic frame rate is to be obtained, image data, having a first enhancement format, from which an image having high definition at a high frame rate is to be obtained, image data, having a second enhancement format, from which an image having ultra-high definition at a basic frame rate is to be obtained, and image data, having a third enhancement format, from which an image having ultra-high definition at a high frame rate is to be obtained by processing image data having ultra-high definition at a high frame rate;
an image encoding step of producing a basic video stream containing encoded image data of the image data having the basic format, and a predetermined number of enhancement video streams containing encoded image data of the image data having the first to third enhancement formats; and
a transmission step of, by a transmission portion, transmitting a container having a predetermined format and containing the basic video stream and the predetermined number of enhancement video streams,
in which in the image processing step, mixing processing at a first ratio in units of temporally continuous two pictures in the image data having the ultra-high definition at the high frame rate is executed to obtain first image data as image data at a basic frame rate, and mixing processing at a second ratio in units of the temporally continuous two pictures is executed to obtain second image data as image data having an enhancement frame at a high frame rate,
down-scale processing is executed for the first image data to obtain the image data having the basic format, and a difference between third image data, obtained by executing up-scale processing for the image data having the basic format, and the first image data is taken to obtain the image data having the second enhancement format, and
down-scale processing is executed for the second image data to obtain the image data having the first enhancement format, and a difference between fourth image data, obtained by executing up-scale processing for the image data having the first enhancement format, and the second image data is taken to obtain the image data having the third enhancement format.
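The layer-derivation steps of (12) can be sketched as follows. This is a minimal illustration only: it assumes numpy arrays for pictures, 2x block averaging as a stand-in for the down-scale processing, pixel repetition as a stand-in for the up-scale processing, and illustrative mixing ratios; none of these specifics, nor the function names, are fixed by the present technique.

```python
import numpy as np

def downscale(img):
    """Halve resolution by averaging 2x2 blocks (stand-in for the
    down-scale processing; an actual encoder would use a proper filter)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upscale(img):
    """Double resolution by pixel repetition (stand-in for the
    up-scale processing)."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def make_layers(pic_a, pic_b, r1=0.5, r2=0.0):
    """Derive the four formats from two temporally continuous pictures
    (pic_a, pic_b) of the ultra-high-definition high-frame-rate sequence.
    r1 and r2 are the first and second mixing ratios (values here are
    illustrative, not from the source)."""
    first = r1 * pic_a + (1 - r1) * pic_b    # first image data (basic frame rate)
    second = r2 * pic_a + (1 - r2) * pic_b   # second image data (high frame rate)
    basic = downscale(first)                 # basic format
    first_enh = downscale(second)            # first enhancement format
    second_enh = first - upscale(basic)      # second enhancement format (difference)
    third_enh = second - upscale(first_enh)  # third enhancement format (difference)
    return basic, first_enh, second_enh, third_enh
```

By construction, adding each difference back to the up-scaled lower layer recovers the corresponding mixed picture exactly, which is what allows the reception side to rebuild the ultra-high-definition images.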
(13) A reception apparatus, including:
a reception portion for receiving a container having a predetermined format and containing a basic video stream, having encoded image data of image data, having a basic format, from which an image having high definition at a basic frame rate is to be obtained, and a predetermined number of enhancement video streams containing encoded image data of image data, having a first enhancement format, from which an image having high definition at a high frame rate is to be obtained, image data, having a second enhancement format, from which an image having ultra-high definition at a basic frame rate is to be obtained, and image data, having a third enhancement format, from which an image having ultra-high definition at a high frame rate is to be obtained, the image data having the basic format being obtained by executing down-scale processing for first image data obtained by executing mixing processing at a first ratio in units of temporally continuous two pictures in image data having ultra-high definition at a high frame rate,
the image data having the second enhancement format being obtained by obtaining a difference between third image data obtained by executing up-scale processing for the image data having the basic format, and the first image data,
the image data having the first enhancement format being obtained by executing down-scale processing for second image data obtained by executing mixing processing at a second ratio in units of the temporally continuous two pictures,
the image data having the third enhancement format being obtained by obtaining a difference between fourth image data obtained by executing up-scale processing for the image data having the first enhancement format, and the second image data,
the reception apparatus further including:
a processing portion for processing only the basic video stream to obtain image data having high definition at a basic frame rate, or processing a part or the whole of the predetermined number of enhancement video streams to obtain image data having high definition at a high frame rate, image data having ultra-high definition at a basic frame rate, or image data having ultra-high definition at a high frame rate.
(14) The reception apparatus according to (13) described above, in which information exhibiting a ratio of spatial scalability is inserted into encoded image data of image data having the second and third enhancement formats, and/or a container position corresponding to the encoded image data, and
when the processing portion obtains the image data having the ultra-high definition at the basic frame rate, or the image data having the ultra-high definition at the high frame rate, the processing portion uses the inserted information exhibiting the ratio of the spatial scalability.
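The use of the spatial-scalability ratio in (14) can be illustrated as follows: the processing portion up-scales the lower-layer image by the transmitted ratio and adds the decoded difference carried in the enhancement format. This is a hypothetical sketch (numpy; pixel-repetition up-scaling and the function name are illustrative, not from the source):

```python
import numpy as np

def reconstruct_uhd(lower_layer, enh_diff, spatial_ratio=2):
    """Rebuild an ultra-high-definition picture: up-scale the lower-layer
    image by the transmitted spatial-scalability ratio, then add the
    difference carried in the enhancement format."""
    up = lower_layer.repeat(spatial_ratio, axis=0).repeat(spatial_ratio, axis=1)
    return up + enh_diff
```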
(15) The reception apparatus according to (13) or (14) described above, in which information exhibiting the first ratio and information exhibiting the second ratio are inserted into the encoded image data of the image data having the first and third enhancement formats, and/or the container position corresponding to the encoded image data, and
when the processing portion obtains the image data having the high definition at the high frame rate, or the image data having the ultra-high definition at the high frame rate, the processing portion uses the inserted information exhibiting the first ratio and the second ratio.
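The ratio information in (15) lets the receiver invert the mixing processing: the two mixed pictures form a 2x2 linear system in the original pair of temporally continuous pictures, which is solvable whenever the two ratios differ. A hedged sketch under those assumptions (numpy; the function name is hypothetical):

```python
import numpy as np

def unmix(first, second, r1, r2):
    """Recover the two original temporally continuous pictures from the
    first image data (mixed at ratio r1) and the second image data
    (mixed at ratio r2), using the transmitted ratio information.
    Solves  first  = r1*A + (1 - r1)*B
            second = r2*A + (1 - r2)*B   for A and B (requires r1 != r2)."""
    det = r1 - r2
    pic_a = ((1 - r2) * first - (1 - r1) * second) / det
    pic_b = (r1 * second - r2 * first) / det
    return pic_a, pic_b
```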
(16) A reception method, including:
a reception step of, by a reception portion, receiving a container, having a predetermined format and containing a basic video stream, having encoded image data of image data, having a basic format, from which an image having high definition at a basic frame rate is to be obtained, and a predetermined number of enhancement video streams containing encoded image data of image data, having a first enhancement format, from which an image having high definition at a high frame rate is to be obtained, image data, having a second enhancement format, from which an image having ultra-high definition at a basic frame rate is to be obtained, and image data, having a third enhancement format, from which an image having ultra-high definition at a high frame rate is to be obtained,
the image data having the basic format being obtained by executing down-scale processing for first image data obtained by executing mixing processing at a first ratio in units of temporally continuous two pictures in image data having ultra-high definition at a high frame rate,
the image data having the second enhancement format being obtained by obtaining a difference between third image data obtained by executing up-scale processing for the image data having the basic format, and the first image data,
the image data having the first enhancement format being obtained by executing down-scale processing for second image data obtained by executing mixing processing at a second ratio in units of the temporally continuous two pictures,
the image data having the third enhancement format being obtained by obtaining a difference between fourth image data obtained by executing up-scale processing for the image data having the first enhancement format, and the second image data,
the reception method further including:
a processing step of processing only the basic video stream to obtain image data having high definition at a basic frame rate, or processing a part or the whole of the predetermined number of enhancement video streams to obtain image data having high definition at a high frame rate, image data having ultra-high definition at a basic frame rate, or image data having ultra-high definition at a high frame rate.
(17) A transmission apparatus, including:
an image processing portion for obtaining image data, having a basic format, from which an image having high definition at a basic frame rate is to be obtained, image data, having a first enhancement format, from which an image having high definition at a high frame rate is to be obtained, image data, having a second enhancement format, from which an image having ultra-high definition at a basic frame rate is to be obtained, and image data, having a third enhancement format, from which an image having ultra-high definition at a high frame rate is to be obtained, by processing image data having ultra-high definition at a high frame rate;
an image encoding portion for producing a basic video stream containing encoded image data of the image data having the basic format, and a predetermined number of enhancement video streams containing encoded image data of the image data having the first to third enhancement formats; and
a transmission portion for transmitting a container having a predetermined format and containing the basic video stream and the predetermined number of enhancement video streams.
(18) The transmission apparatus according to (17), further including:
an information inserting portion for inserting identification information exhibiting spatial scalability into the encoded image data of the image data having the second and the third enhancement formats, and/or a container position corresponding to the encoded image data, and inserting identification information exhibiting temporal scalability into the encoded image data of the image data having the first and the third enhancement formats, and/or the container position corresponding to the encoded image data.
(19) The transmission apparatus according to (17) or (18) described above, further including:
a transmission portion for transmitting a metafile having meta information used to cause a reception apparatus to acquire the basic video stream and the predetermined number of enhancement video streams,
in which information indicating the correspondence of scalability is inserted into the metafile.
(20) A reception apparatus, including:
a reception portion for receiving a container having a predetermined format and containing a basic video stream, having encoded image data of image data, having a basic format, from which an image having high definition at a basic frame rate is to be obtained, and a predetermined number of enhancement video streams containing encoded image data of image data, having a first enhancement format, from which an image having high definition at a high frame rate is to be obtained, image data, having a second enhancement format, from which an image having ultra-high definition at a basic frame rate is to be obtained, and image data, having a third enhancement format, from which an image having ultra-high definition at a high frame rate is to be obtained; and
a processing portion for processing only the basic video stream to obtain image data having high definition at a basic frame rate, or processing a part or the whole of the predetermined number of enhancement video streams to obtain image data having high definition at a high frame rate, image data having ultra-high definition at a basic frame rate, or image data having ultra-high definition at a high frame rate.
The main feature of the present technique is that transmitting the basic video stream containing the encoded image data of the image data, having the basic format, from which the image having the high definition at the basic frame rate is to be obtained, and the predetermined number of enhancement video streams containing the encoded image data of the image data, having the first enhancement format, from which the image having the high definition at the high frame rate is to be obtained, the image data, having the second enhancement format, from which the image having the ultra-high definition at the basic frame rate is to be obtained, and the image data, having the third enhancement format, from which the image having the ultra-high definition at the high frame rate is to be obtained enables the image data having the ultra-high definition at the high frame rate to be transmitted, so that backward compatibility is satisfactorily achieved on the reception side (refer to
In addition, the present technique has the feature that the image data having the basic format can be obtained by executing the down-scale processing for the first image data obtained by executing the mixing processing at the first ratio in units of the temporally continuous two pictures in the image data having the ultra-high definition at the high frame rate. As a result, the image having the high definition at the basic frame rate displayed by processing only the basic video stream can be made a smooth image in which the strobing effect is suppressed (refer to
Number | Date | Country | Kind |
---|---|---|---|
JP2015-202464 | Oct 2015 | JP | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2016/080085 | 10/11/2016 | WO | 00 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2017/065128 | 4/20/2017 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
20020090138 | Hamanaka | Jul 2002 | A1 |
20080082482 | Amon | Apr 2008 | A1 |
20130215975 | Samuelsson | Aug 2013 | A1 |
20140168366 | Ichiki | Jun 2014 | A1 |
20140366070 | Lee et al. | Dec 2014 | A1 |
20150229878 | Hwang | Aug 2015 | A1 |
20150245046 | Tsukuba | Aug 2015 | A1 |
20160134902 | Tsukagoshi | May 2016 | A1 |
Number | Date | Country |
---|---|---|
2 903 268 | Aug 2015 | EP |
H11-266457 | Sep 1999 | JP |
2008-543142 | Nov 2008 | JP |
2015-525008 | Aug 2015 | JP |
2008086423 | Jul 2008 | WO |
2014050597 | Apr 2014 | WO |
2014203515 | Dec 2014 | WO |
2015076277 | May 2015 | WO |
2015095361 | Jun 2015 | WO |
2017043504 | Mar 2017 | WO |
Entry |
---|
ISO/IEC 2010, “Text of ISO/IEC 23001-6: Dynamic adaptive streaming over HTTP (DASH)”, MPEG-B Systems, International Organization for Standardization Organisation Internationale De Normalisation ISO/IEC JTC1/SC29/WG11 Coding of Moving Pictures and Audio, Oct. 2010, 10 pages. |
International Search Report dated Nov. 15, 2016 in PCT/JP2016/080085 filed Oct. 11, 2016. |
Stockhammer, et al., “Text of ISO/IEC 23001-6: Dynamic adaptive streaming over HTTP (DASH)”, International Organization for Standardization, ISO/IEC JTC1/SC29/WG11, Guangzhou, China, Oct. 2010, https://www.itscj.ipsj.or.jp/sc29/open/29view/29n11662c.htm. |
Number | Date | Country | |
---|---|---|---|
20190116386 A1 | Apr 2019 | US |