The present invention relates to an image encoding device for and an image encoding method of compression-encoding an image to generate encoded data, a video transcoding device for and a video transcoding method of generating other encoded data having a different feature from the encoded data generated by the image encoding device, and a video stream transmission system for transmitting and receiving the encoded data generated by the image encoding device.
As imaging equipment, display equipment, compression encoding techniques, transmission techniques, and so on progress, services for distribution of UHD (Ultra-High Definition) videos having definition (e.g., definition such as 4K or 8K) exceeding HD (High Definition) have been studied.
Because an ultra-high definition video carries a huge amount of video information, it is typically compressed by using a video encoding technique at the time of carrying out transmission or storage of the video signal.
Hereafter, it is premised that at the time of transmitting an ultra-high definition video, the ultra-high definition video is handled in a bitstream form in which the video is compressed by using a predetermined video encoding technique.
There is assumed to be a case in which, when a user watches and listens to an ultra-high definition video, a fine structure in the video (e.g., character information, a person's face, or the like) becomes difficult for the user to make out, even though the information is included in the video, because the apparent size of the display equipment is too small as compared with the number of pixels of the video.
In order to solve this problem, there can be considered a system that displays the entire region of an ultra-high definition video transmitted thereto on main display equipment (e.g., a large-screen TV placed in a living room), and that also extracts a video of a partial region specified by a user from the entire region of the ultra-high definition video and transmits the video of the partial region to sub display equipment (e.g., a tablet terminal in the user's hand), thereby enabling the user to watch and listen to the video.
Although a partial region video is transmitted from the main display equipment to the sub display equipment in the above-mentioned system, it is desirable to, at the time of transmitting the partial region video, transmit the partial region video in the form of a bitstream including only information about the partial region video.
This is because, when the entire region bitstream of the ultra-high definition video is transmitted as it is, without being compressed into a partial region bitstream (a bitstream including only information about the partial region video), the amount of transmitted information increases greatly, and the processing load also increases because the sub display equipment needs to decode the entire region of the ultra-high definition video.
It is therefore desirable that the main display equipment in the above-mentioned system have a transcoding function of generating an arbitrary partial region bitstream from the entire region bitstream of the ultra-high definition video.
As a method of generating an arbitrary partial region bitstream from the entire region bitstream, for example, the following methods can be considered.
Method 1: After decoding the entire region of the ultra-high definition video, the main display equipment extracts the decoded image of a partial region specified by a user from the decoded image of the entire region, and encodes the decoded image of the partial region again by using a predetermined video encoding technique.
The main display equipment then generates a partial region bitstream including the encoded data of the partial region which is the result of the encoding, and coding parameters.
However, a problem with the method 1 is that, because the decoded image of the partial region needs to be encoded again, the processing load on the main display equipment becomes large and the image quality degrades due to the re-encoding.
Method 2: This method, disclosed by the following patent reference 1, performs tile partitioning to cut off reference between regions of an image when generating an entire region bitstream.
More specifically, this method partitions an entire region into images of rectangular regions referred to as tiles, and generates an entire region bitstream by encoding each of the rectangular regions while imposing limitations on the local decoded image and the coding parameters referred to at the time of encoding each of the rectangular regions, in such a way that reference across a tile boundary (this reference also including inter-frame reference and entropy encoding) is not carried out.
Because such limitations make it possible to decode each tile completely and independently, a partial region bitstream including the encoded data and the coding parameters of a partial region can be generated simply by extracting, from the entire region bitstream, the encoded data and the coding parameters of the tiles including the partial region specified by the user.
However, because the extraction of encoded data and coding parameters is carried out on a per-tile basis in the method 2, when the partial region specified by the user extends across a plurality of tiles, or when the tile size is larger than the size of the partial region, a partial region bitstream including many regions unnecessary for display is generated, and hence this generating process is inefficient, as the following sketch illustrates.
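For example, the following Python sketch (illustrative only; the tile grid, function names and region coordinates are assumptions, not taken from patent reference 1) computes which tiles of a fixed grid must be extracted to cover a requested display area, and the ratio of extracted pixels to requested pixels:

```python
def tiles_covering(region, tile_w, tile_h):
    """Return the index ranges of the tiles whose union covers region = (x, y, w, h)."""
    x, y, w, h = region
    col0, col1 = x // tile_w, (x + w - 1) // tile_w
    row0, row1 = y // tile_h, (y + h - 1) // tile_h
    return col0, col1, row0, row1

def coverage_overhead(region, tile_w, tile_h):
    """Ratio of extracted tile pixels to the pixels actually requested for display."""
    col0, col1, row0, row1 = tiles_covering(region, tile_w, tile_h)
    extracted = (col1 - col0 + 1) * tile_w * (row1 - row0 + 1) * tile_h
    return extracted / (region[2] * region[3])

# A 960x540 display area straddling four 1920x1080 tiles of an 8K picture:
print(coverage_overhead((1440, 810, 960, 540), 1920, 1080))  # -> 16.0
```

In this example, a display area straddling four tiles forces sixteen times as many pixels to be extracted as are actually displayed.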
On the other hand, when the tile size is reduced in order to improve the efficiency of generation of a partial region bitstream, the number of places at which reference is cut off increases, and hence there arises a problem that the compression efficiency of the entire region bitstream degrades.
Because conventional video encoding devices are configured as above, increase in the processing load and degradation in the image quality can be suppressed simply by partitioning the entire region into the images of rectangular regions (tiles) and encoding each of the rectangular regions in a state in which reference across a tile boundary is prevented. However, when the partial region specified by the user extends across a plurality of tiles, a partial region bitstream including many regions unnecessary for display is generated, and hence the generating process is inefficient. On the other hand, when the tile size is reduced in order to improve the efficiency of generation of a partial region bitstream, the number of places at which reference is cut off increases, and hence the compression efficiency of the entire region bitstream degrades.
The present invention is made in order to solve the above-mentioned problems, and it is therefore an object of the present invention to provide a video encoding device and a video encoding method that can generate an entire region bitstream which is suitable for efficient generation, with a low operation amount, of a partial region bitstream without causing a reduction in the compression efficiency of the entire region bitstream.
It is another object of the present invention to provide a video transcoding device and a video transcoding method that can efficiently generate a partial region bitstream with a low operation amount.
It is a further object of the present invention to provide a video stream transmission system that transmits and receives encoded data generated by an image encoding device.
In accordance with the present invention, there is provided a video encoding device including: a prediction image generator to determine a coding parameter for a coding target block in a picture belonging to a GOP (Group Of Pictures), and to generate a prediction image by using the coding parameter; and a bitstream generator to compression-encode a difference image between the coding target block and the prediction image generated by the prediction image generator, and to multiplex encoded data which is the result of the encoding and the coding parameter to generate a bitstream, in which the bitstream generator multiplexes hint information into the bitstream, the hint information including motion vector limitation information indicating a range in which a search for a motion vector can be performed, GOP size limitation information indicating a GOP size which is the number of pictures belonging to the GOP, and reference configuration specification information indicating a picture to be referred to at the time of decoding each picture belonging to the GOP.
Because the bitstream generator in accordance with the present invention multiplexes the hint information into the bitstream, the hint information including the motion vector limitation information indicating the range in which a search for a motion vector can be performed, the GOP size limitation information indicating the GOP size which is the number of pictures belonging to the GOP, and the reference configuration specification information indicating a picture to be referred to at the time of decoding each picture belonging to the GOP, there is provided an advantage of being able to generate a bitstream of an entire region which is suitable for efficient generation, with a low operation amount, of a bitstream of a partial region without causing a reduction in the compression efficiency of the bitstream of the entire region.
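As a minimal illustration of the hint information, the following Python sketch packs the three pieces of information into a byte payload. The field names, bit widths and byte layout are hypothetical, since no concrete syntax is fixed here:

```python
from dataclasses import dataclass
from typing import List
import struct

@dataclass
class HintInfo:
    mv_limit: int          # motion vector limitation: maximum search range in pixels
    gop_size_limit: int    # maximum number of pictures belonging to one GOP
    ref_config: List[int]  # for picture i+1 in the GOP, the in-GOP index it references

    def to_bytes(self) -> bytes:
        """Serialize the hint information for multiplexing into the bitstream."""
        head = struct.pack(">HHH", self.mv_limit, self.gop_size_limit,
                           len(self.ref_config))
        return head + struct.pack(f">{len(self.ref_config)}H", *self.ref_config)

# Example: an 8-picture GOP in which each picture references its predecessor.
hint = HintInfo(mv_limit=64, gop_size_limit=8, ref_config=[0, 1, 2, 3, 4, 5, 6])
payload = hint.to_bytes()  # would be multiplexed, e.g., into the GOP's first frame
```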
Hereafter, in order to explain this invention in greater detail, the preferred embodiments of the present invention will be described with reference to the accompanying drawings.
In this Embodiment 1, an explanation will be made as to a video encoding device that, by imposing a limitation on a maximum of a motion vector used at the time of performing an inter frame prediction, and a limitation on the number of frames between random access points, carries out encoding in such a way that the propagation range of pixel value information between random access points falls within certain limits, to generate a bitstream of an entire region, and that also multiplexes information indicating a limit value of the maximum of motion vectors, information indicating a limit value of the number of frames between random access points, and so on, as hint information, into the entire region bitstream.
Further, an explanation will be made as to a video transcoding device that decodes the encoded data of the entire region from the entire region bitstream generated by the above-mentioned video encoding device and also refers to the hint information multiplexed into the entire region bitstream, to specify encoded data and a coding parameter required at the time of correctly decoding a display area specified by a user from the encoded data and the coding parameters of the entire region, and that applies the encoded data and the coding parameter to generate a bitstream of a partial region with a low operation amount.
Referring to
More specifically, the video encoding device 1 performs a process of determining coding parameters for a coding target block in a picture belonging to a GOP (Group Of Pictures) and generating a prediction image by using the coding parameters, and also compression-encoding a difference image between the coding target block and the prediction image and multiplexing encoded data which is the result of the encoding and the coding parameters to generate an entire region bitstream. The video encoding device 1 also performs a process of multiplexing hint information into the entire region bitstream, the hint information including motion vector limitation information indicating a maximum range in which a search for a motion vector can be performed, GOP size limitation information indicating a maximum of a GOP size which is the number of pictures belonging to the GOP, and reference configuration specification information indicating a picture to be referred to at the time of decoding each picture belonging to the GOP, and outputting the entire region bitstream to the video transcoding device 2.
A GOP means a set of a certain random access point picture and pictures which are located continuously after the random access point picture in the decoding order and which are not random access points.
The GOP size limitation information indicates the size of a GOP defined as above. While a data configuration referred to as a GOP exists in a bitstream generated by a conventional video encoding device, information indicating the size of the GOP does not exist in the bitstream. More specifically, the GOP size is the number of pictures counted from the decoding of a random access point picture until the decoding of the next random access point picture, and therefore the decoding device cannot know the GOP size until it has decoded the entire GOP. Multiplexing the GOP size limitation information into, for example, the encoded data of the frame at the beginning of the GOP provides an advantage that the decoding device can be notified of the GOP size before the entire GOP is decoded.
The reference configuration specification information describes a reference configuration between pictures. A conventional video encoding device multiplexes only the reference destination frame information of each frame on a per-frame basis. More specifically, in order for the decoding device to know according to what type of reference configuration a GOP is encoded, the decoding device needs to decode the entire GOP. Multiplexing the reference configuration specification information into, for example, the encoded data of the frame at the beginning of the GOP provides an advantage that the decoding side can be notified of the reference configuration before the entire GOP is decoded.
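The benefit of knowing the reference configuration up front can be sketched as follows. The in-GOP list layout (picture 0 is the random access point, and ref_config[i] is the in-GOP index referenced by picture i+1) is a hypothetical encoding used only for illustration:

```python
def reference_chain(ref_config, pic_index):
    """Indices of all pictures that must be decoded before picture pic_index."""
    chain = []
    i = pic_index
    while i != 0:              # picture 0 (the random access point) is intra-only
        i = ref_config[i - 1]  # follow the declared reference
        chain.append(i)
    return list(reversed(chain))

# With each picture referencing its immediate predecessor, picture 4 needs 0..3:
print(reference_chain([0, 1, 2, 3, 4, 5, 6], 4))  # -> [0, 1, 2, 3]
```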
The video transcoding device 2 is configured with an entire region stream decoder 3 and a partial region transcoder 4, and performs a process of decoding the image of the entire region from the entire region bitstream generated by the video encoding device 1, and outputting the image of the entire region (referred to as the “entire region decoded image” from here on) to an entire region display device 5.
The video transcoding device 2 also performs a process of extracting the hint information from the entire region bitstream generated by the video encoding device 1, and referring to the motion vector limitation information, the GOP size limitation information and the reference configuration specification information which are included in the hint information, to specify an indispensable encoded region which is a region required at the time of decoding a display area of a picture, the display area being indicated by display area information provided from the outside thereof.
The video transcoding device 2 further performs a process of extracting the encoded data and the coding parameters of a coding target block which is included in the indispensable encoded region from the entire region bitstream generated by the video encoding device 1, and generating a partial region bitstream in conformity with an encoding codec set in advance from the encoded data and the coding parameters.
The entire region stream decoder 3 performs a process of extracting the encoded data and the coding parameters of the entire region, and the hint information which are included in the entire region bitstream generated by the video encoding device 1, and decoding the encoded data and the coding parameters of the entire region into an entire region decoded image and outputting the entire region decoded image to the partial region transcoder 4 and the entire region display device 5, and also outputting the encoded data and the coding parameters of the entire region, and the hint information to the partial region transcoder 4.
The partial region transcoder 4 performs a process of referring to the motion vector limitation information, the GOP size limitation information and the reference configuration specification information which are included in the hint information outputted from the entire region stream decoder 3, to specify an indispensable encoded region which is a region required at the time of decoding the display area of a picture, the display area being indicated by the display area information provided from the outside thereof.
An indispensable encoded region determinator is configured with the entire region stream decoder 3 and the partial region transcoder 4.
The partial region transcoder 4 also performs a process of extracting the encoded data and the coding parameters of a coding target block which is included in the above-mentioned indispensable encoded region from among the encoded data and the coding parameters of the entire region which are outputted from the entire region stream decoder 3, and generating a partial region bitstream in conformity with the encoding codec set in advance from the encoded data and the coding parameters of the coding target block.
The partial region transcoder 4 constructs a parameter extractor and a partial region stream generator.
The entire region display device 5 is display equipment to display the entire region decoded image outputted from the entire region stream decoder 3.
The video decoding device 6 decodes an image of a partial region from the partial region bitstream outputted from the partial region transcoder 4, and outputs the image of the partial region (referred to as the “partial region decoded image” from here on) to a partial region display device 7.
The partial region display device 7 is display equipment to display the partial region decoded image outputted from the video decoding device 6.
As a concrete example of practical use, a case in which the video transcoding device 2 is built in a non-portable TV that can receive and play back an ultra-high definition video will be explained.
In this case, the video encoding device 1 is an encoder device existing on a side of a system to distribute an ultra-high definition video, to generate an entire region bitstream which is an object to be distributed.
Therefore, an entire region bitstream generated by the video encoding device 1 which is the encoder device is distributed to the non-portable TV via a predetermined transmission system.
The video transcoding device 2 which is built in the non-portable TV receives the entire region bitstream distributed thereto from the video encoding device 1 and then decodes the entire region bitstream into an entire region decoded image, thereby displaying the entire region decoded image on the entire region display device 5.
The non-portable TV in this example can transmit and receive data to and from a viewer's tablet terminal. When the user operates the tablet terminal to specify an arbitrary display area, the display area information indicating the display area is inputted to the video transcoding device 2 in the non-portable TV, and the video transcoding device 2 generates a partial region bitstream including coding parameters required to play back the display area specified by the user and transmits the partial region bitstream to the tablet terminal.
The video decoding device 6 is built in the tablet terminal, and the video decoding device 6 receives the partial region bitstream transmitted thereto from the video transcoding device 2 and decodes the partial region bitstream into a partial region decoded image, thereby displaying the partial region decoded image on the partial region display device 7.
The tablet terminal can also enlarge the partial region decoded image as appropriate and display the partial region decoded image.
By using the system shown in
Referring to
The encoding controller 11 also performs a process of determining coding parameters on the basis of the GOP size limitation information, the reference configuration specification information, and so on.
More specifically, the encoding controller 11 performs a process of determining, as the coding parameters, a coding mode (an intra coding mode, an inter coding mode or a PCM (Pulse Code Modulation) coding mode), a prediction parameter (an intra prediction parameter or an inter prediction parameter) and a PCM coding parameter of each coding target block.
The encoding controller 11 further performs a process of determining, as a coding parameter, a prediction difference coding parameter which is to be referred to when any of an orthogonal transformation process, a quantization process, and so on is performed, and outputting the prediction difference coding parameter to a transformation/quantization unit 18, an inverse quantization/inverse transformation unit 19 and a variable length encoder 23, and also determining, as a coding parameter, a loop filter parameter which is to be referred to when a filtering process is performed and outputting the loop filter parameter to a loop filter 21 and the variable length encoder 23.
In addition, the encoding controller 11 performs a process of outputting the motion vector limitation information to a motion compensation predictor 15, and outputting the hint information (the motion vector limitation information, the GOP size limitation information and the reference configuration specification information) to the variable length encoder 23.
The block partitioner 12 performs a process of, whenever receiving a picture belonging to a GOP (an inputted image), partitioning the picture into blocks each having the coding block size determined by the encoding controller 11, and outputting each coding target block which is a unit for prediction process to a select switch 13 and a subtractor 17.
The select switch 13 performs a process of outputting the coding target block outputted thereto from the block partitioner 12 to an intra predictor 14 when the coding mode determined by the encoding controller 11 is an intra coding mode, outputting the coding target block outputted thereto from the block partitioner 12 to the motion compensation predictor 15 when the coding mode determined by the encoding controller 11 is an inter coding mode, and outputting the coding target block outputted thereto from the block partitioner 12 to a PCM encoder 16 when the coding mode determined by the encoding controller 11 is a PCM coding mode.
The intra predictor 14 performs a process of performing an intra prediction process on the coding target block outputted from the select switch 13 by using the intra prediction parameter determined by the encoding controller 11 to generate an intra prediction image.
The motion compensation predictor 15 performs a process of comparing the coding target block outputted from the select switch 13 with a local decoded image after loop filtering process which is stored in a frame memory 22, to search for a motion vector in a region with the maximum range indicated by the motion vector limitation information outputted thereto from the encoding controller 11, and performing an inter prediction process on the coding target block by using both the motion vector and the inter prediction parameter determined by the encoding controller 11 to generate an inter prediction image. The motion compensation predictor 15 also performs a process of outputting, as a coding parameter, the motion vector found through the search to the variable length encoder 23.
The PCM encoder 16 performs a process of, without performing any prediction process, converting pixel data of a region, in the picture (the inputted image), corresponding to the coding target block outputted from the select switch 13 into pixel data having a predetermined bit width by using the PCM coding parameter determined by the encoding controller 11, to generate a PCM signal (encoded data), and outputting the PCM signal to the variable length encoder 23, and also generating a PCM image by similarly converting the pixel data into pixel data having a predetermined bit width, and outputting the PCM image to the loop filter 21.
A prediction image generator is configured with the encoding controller 11, the block partitioner 12, the select switch 13, the intra predictor 14, the motion compensation predictor 15 and the PCM encoder 16.
The subtractor 17 performs a process of subtracting either the intra prediction image generated by the intra predictor 14 or the inter prediction image generated by the motion compensation predictor 15 from the coding target block outputted from the block partitioner 12, and outputting a prediction difference signal (a difference image) which is the result of the subtraction to the transformation/quantization unit 18.
The transformation/quantization unit 18 performs a process of referring to the prediction difference coding parameter determined by the encoding controller 11 and performing an orthogonal transformation process (e.g., an orthogonal transformation process, such as a DCT (discrete cosine transform) or a KL transform in which bases are designed for a specific learning sequence in advance) on the prediction difference signal outputted from the subtractor 17 to calculate transform coefficients, and also referring to the prediction difference coding parameter to quantize the transform coefficients, and outputting the transform coefficients quantized thereby (referred to as the “coefficients after quantization” from here on) to the inverse quantization/inverse transformation unit 19 and the variable length encoder 23.
The inverse quantization/inverse transformation unit 19 performs a process of referring to the prediction difference coding parameter determined by the encoding controller 11 to inverse-quantize the coefficients after quantization outputted from the transformation/quantization unit 18, and also referring to the prediction difference coding parameter and performing an inverse orthogonal transformation process on the transform coefficients after inverse quantization, to calculate a local decoded prediction difference signal corresponding to the prediction difference signal outputted from the subtractor 17.
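As a hedged illustration of the forward and inverse paths implemented by the transformation/quantization unit 18 and the inverse quantization/inverse transformation unit 19, the following Python sketch uses a floating-point type-II DCT and a single scalar quantization step; a real codec would use integer transforms and richer quantization parameters:

```python
import numpy as np
from scipy.fft import dctn, idctn

def transform_quantize(block, qstep):
    coeffs = dctn(block, norm="ortho")           # orthogonal transformation process
    return np.round(coeffs / qstep).astype(int)  # coefficients after quantization

def inverse_quantize_transform(qcoeffs, qstep):
    coeffs = qcoeffs * qstep                     # inverse quantization
    return idctn(coeffs.astype(float), norm="ortho")  # inverse orthogonal transform

block = np.arange(16, dtype=float).reshape(4, 4)  # a 4x4 prediction difference
q = transform_quantize(block, qstep=2.0)
recon = inverse_quantize_transform(q, qstep=2.0)  # local decoded prediction difference
print(np.max(np.abs(recon - block)))              # small quantization error only
```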
An adder 20 performs a process of adding the difference image shown by the local decoded prediction difference signal calculated by the inverse quantization/inverse transformation unit 19 and either the intra prediction image generated by the intra predictor 14 or the inter prediction image generated by the motion compensation predictor 15, to calculate a local decoded image corresponding to the coding target block outputted from the block partitioner 12.
The loop filter 21 sequentially performs zero or more types of filtering processes on the basis of the filter parameter outputted thereto from the encoding controller 11. However, in a case in which the loop filter is configured in such a way as not to perform any loop filtering process on a coding target block in a PCM coding mode, the loop filter does not perform any loop filtering process on any coding target block in a PCM coding mode.
The frame memory 22 is a recording medium to store the local decoded image on which a loop filtering process is performed by the loop filter 21.
The variable length encoder 23 performs a process of variable-length-encoding the coefficients after quantization (the encoded data) outputted from the transformation/quantization unit 18, the coding mode (an intra coding mode, an inter coding mode or a PCM coding mode), the prediction parameter (the intra prediction parameter or the inter prediction parameter) or the PCM coding parameter, the prediction difference coding parameter and the filter parameter, which are outputted from the encoding controller 11, the motion vector (when the coding mode is an inter coding mode) outputted from the motion compensation predictor 15, the PCM signal (the encoded data) outputted from the PCM encoder 16, and the hint information (the motion vector limitation information, the GOP size limitation information and the reference configuration specification information) outputted from the encoding controller 11, to generate an entire region bitstream showing those encoded results.
A bitstream generator is configured with the subtractor 17, the transformation/quantization unit 18 and the variable length encoder 23.
In the example shown in
In the case in which the video encoding device 1 is configured with a computer, the frame memory 22 is configured in an internal memory or an external memory of the computer, and a program in which the processes performed by the encoding controller 11, the block partitioner 12, the select switch 13, the intra predictor 14, the motion compensation predictor 15, the PCM encoder 16, the subtractor 17, the transformation/quantization unit 18, the inverse quantization/inverse transformation unit 19, the adder 20, the loop filter 21 and the variable length encoder 23 are described can be stored in a memory of the computer and a CPU of the computer can be made to execute the program stored in the memory.
Referring to
A select switch 32 performs a process of outputting the intra prediction parameter outputted thereto from the variable length code decoder 31 to an intra predictor 33 when the coding mode variable-length-decoded by the variable length code decoder 31 is an intra coding mode, outputting the inter prediction parameter and the motion vector which are outputted thereto from the variable length code decoder 31 to a motion compensator 34 when the coding mode variable-length-decoded by the variable length code decoder 31 is an inter coding mode, and outputting the PCM coding parameter and the PCM signal which are outputted thereto from the variable length code decoder 31 to a PCM decoder 35 when the coding mode variable-length-decoded by the variable length code decoder 31 is a PCM coding mode.
The intra predictor 33 performs a process of performing an intra prediction process on the decoding target block by using the intra prediction parameter outputted thereto from the select switch 32 to generate an intra prediction image.
The motion compensator 34 performs a process of performing an inter prediction process on the decoding target block by using the motion vector and the inter prediction parameter which are outputted thereto from the select switch 32, while referring to a decoded image after loop filtering process which is stored in a frame memory 39, to generate an inter prediction image.
The PCM decoder 35 performs a process of generating a PCM image by using the PCM coding parameter and the PCM signal which are outputted thereto from the select switch 32, and outputting the PCM image to a loop filter 38.
An inverse quantization/inverse transformation unit 36 performs a process of referring to the prediction difference coding parameter outputted from the variable length code decoder 31 to inverse-quantize the coefficients after quantization outputted from the variable length code decoder 31, and also referring to the prediction difference coding parameter and performing an inverse orthogonal transformation process on the orthogonal transform coefficients after inverse quantization, to calculate a decoded prediction difference signal.
An adder 37 performs a process of adding a difference image shown by the decoded prediction difference signal calculated by the inverse quantization/inverse transformation unit 36, and either the intra prediction image generated by the intra predictor 33 or the inter prediction image generated by the motion compensator 34, to generate a decoded image before loop filtering process, and also outputting the decoded image before loop filtering process to the loop filter 38 and outside the entire region stream decoder 3.
The loop filter 38 sequentially performs zero or more types of filtering processes on the basis of the filter parameter outputted thereto from the variable length code decoder 31. However, in a case in which the loop filter is configured in such a way as not to perform any loop filtering process on a decoding target block in a PCM coding mode, the loop filter does not perform any loop filtering process on any decoding target block in a PCM coding mode.
The frame memory 39 is a recording medium to store the decoded image on which a loop filtering process is performed by the loop filter 38.
In the example shown in
In the case in which the entire region stream decoder 3 is configured with a computer, the frame memory 39 is configured in an internal memory or an external memory of the computer, and a program in which the processes performed by the variable length code decoder 31, the select switch 32, the intra predictor 33, the motion compensator 34, the PCM decoder 35, the inverse quantization/inverse transformation unit 36, the adder 37 and the loop filter 38 are described can be stored in a memory of the computer and a CPU of the computer can be made to execute the program stored in the memory.
Referring to
When the indispensable encoded regions in the pictures belonging to a GOP have different sizes (for example, when the GOP size maximum indicated by the GOP size limitation information is N, and the reference configuration specification information indicates that an immediately preceding picture (frame) is referred to at the time of decoding each picture (frame), the size of the indispensable encoded region of the Nth picture (frame), among the indispensable encoded regions of the N pictures (frames), is the smallest and the size of the indispensable encoded region of the first picture (frame), among the indispensable encoded regions, is the largest), the transcode controller 41 determines, for example, the indispensable encoded region having the largest size to be the target region to be transcoded. Therefore, the following relation: the indispensable encoded region ⊂ the target region to be transcoded is established.
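The derivation of indispensable encoded regions from the hint information can be sketched as follows for the case described above, in which each frame references the immediately preceding frame; the rectangle representation and function name are hypothetical:

```python
def indispensable_regions(display_area, mv_limit, gop_size, pic_w, pic_h):
    """Per-frame indispensable encoded regions, as (x0, y0, x1, y1) rectangles.

    The last frame of the GOP needs only the display area itself; each earlier
    frame must additionally cover everything that motion vectors of length up
    to mv_limit can drag, step by step, into the frames that reference it.
    """
    x0, y0, x1, y1 = display_area
    regions = []
    for steps_back in range(gop_size):      # 0 for the last frame of the GOP
        m = mv_limit * steps_back           # worst-case accumulated displacement
        regions.append((max(0, x0 - m), max(0, y0 - m),
                        min(pic_w, x1 + m), min(pic_h, y1 + m)))
    return list(reversed(regions))          # index 0 = first frame of the GOP

regs = indispensable_regions((1000, 500, 1960, 1040), mv_limit=64,
                             gop_size=8, pic_w=7680, pic_h=4320)
target = regs[0]  # largest region, usable as the target region to be transcoded
```

Here the region for the first frame of the GOP is the largest and would be chosen as the target region to be transcoded.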
The transcode controller 41 also performs a process of generating header information of a partial region bitstream on the basis of the target region to be transcoded information, and outputting the header information to a variable length encoder 46.
A coding parameter extractor 42 performs a process of extracting, from among the encoded data and the coding parameters of the entire region which are outputted from the entire region stream decoder 3, the encoded data (the coefficients after quantization or the PCM signal) and the coding parameters (the coding mode (an intra coding mode, an inter coding mode or a PCM coding mode), the prediction parameter (the intra prediction parameter or the inter prediction parameter) or the PCM coding parameter, the motion vector (when the coding mode is an inter coding mode), the prediction difference coding parameter, the loop filter parameter and the hint information (the motion vector limitation information, the GOP size limitation information and the reference configuration specification information)) of a coding target block included in the indispensable encoded region indicated by the indispensable encoded region information outputted from the transcode controller 41, and outputting the encoded data and the coding parameters of the coding target block to an external reference block encoder 43 and a select switch 45.
The external reference block encoder 43 performs a process of, when a coding target block included in the indispensable encoded region indicated by the indispensable encoded region information outputted from the transcode controller 41 (a coding target block belonging to a boundary of the indispensable encoded region) is an external reference block on which intra encoding is performed by referring to the value of a pixel located outside the indispensable encoded region, encoding the decoded image of the coding target block by using an encoding method of not using the value of any pixel located outside the indispensable encoded region for prediction reference, and outputting encoded data which is the result of the encoding and the coding parameters used for the encoding of the above-mentioned decoded image to the select switch 45.
For example, when an intra coding mode in an intra encoding method of referring to the value of a pixel at a screen edge of a coding target block is used as the encoding method of not using the value of any pixel located outside the indispensable encoded region for prediction reference, the external reference block encoder generates an intra prediction image in the intra coding mode and also extracts the decoded image of the coding target block (an external reference block) from the entire region decoded image before loop filtering process outputted from the entire region stream decoder 3. The external reference block encoder then compression-encodes a difference image between the decoded image of the coding target block and the above-mentioned intra prediction image, and outputs encoded data (coefficients after quantization) which is the result of the encoding and the intra prediction parameter (a coding parameter) used when generating the above-mentioned intra prediction image to the select switch 45.
Further, when a PCM coding mode is used as the encoding method of not using the value of any pixel located outside the indispensable encoded region for prediction reference, the external reference block encoder extracts the decoded image of the coding target block (an external reference block) from the entire region decoded image before loop filtering process outputted from the entire region stream decoder 3. The external reference block encoder then performs PCM encoding on the decoded image, and outputs a PCM signal which is the result of the encoding and the PCM coding parameter (a coding parameter) used for the PCM encoding of the decoded image to the select switch 45.
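A minimal sketch of classifying a coding target block as an external reference block is shown below. The test used (an intra block flush against the top or left edge of the indispensable encoded region, where intra prediction reads neighbouring pixels) is an illustrative simplification, not the exact rule of the embodiment:

```python
def is_external_reference_block(block_rect, region_rect, coding_mode):
    """Rectangles are (x0, y0, x1, y1); coding_mode is 'intra', 'inter' or 'pcm'."""
    if coding_mode != "intra":
        return False
    bx0, by0, _, _ = block_rect
    rx0, ry0, _, _ = region_rect
    # Intra prediction references the row above and the column to the left of the
    # block, so a block on the region's top or left boundary may reference pixels
    # located outside the indispensable encoded region.
    return bx0 == rx0 or by0 == ry0

region = (512, 256, 2048, 1280)
print(is_external_reference_block((512, 320, 576, 384), region, "intra"))    # True
print(is_external_reference_block((1024, 512, 1088, 576), region, "intra"))  # False
```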
An unnecessary block encoder 44 performs a process of encoding a coding target block (an unnecessary block) located outside the indispensable encoded region and inside the target region to be transcoded in, for example, a skip mode in an inter encoding method, and outputting encoded data which is the result of the encoding and the coding parameters used for the encoding of the above-mentioned coding target block to the select switch 45.
Although the example of performing encoding in the skip mode in the inter encoding method, and outputting the coding parameters used for the encoding in the skip mode to the select switch 45 is shown above, this embodiment is not limited to this example, and the unnecessary block encoder can alternatively output a dummy parameter set in advance to the select switch 45.
The select switch 45 performs a process of referring to the target region to be transcoded information and the indispensable encoded region information which are outputted thereto from the transcode controller 41, selecting the encoded data and the coding parameters which are outputted thereto from the coding parameter extractor 42 when the coding target block included in the indispensable encoded region is not an external reference block, selecting the encoded data and the coding parameters which are outputted thereto from the external reference block encoder 43 when the coding target block included in the indispensable encoded region is an external reference block, and selecting the encoded data and the coding parameters which are outputted thereto from the unnecessary block encoder 44 when the coding target block is an unnecessary block, and outputting the encoded data and the coding parameters which are selected thereby to the variable length encoder 46.
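The three-way routing performed by the select switch 45 can be summarized by the following sketch, in which the two classification results are assumed to have been determined beforehand (for example, by a boundary test such as the one sketched above):

```python
def select_source(inside_indispensable_region, is_external_reference_block):
    if not inside_indispensable_region:
        # outside the indispensable encoded region but inside the target region:
        return "unnecessary block encoder 44"  # skip mode or dummy parameters
    if is_external_reference_block:
        return "external reference block encoder 43"  # re-encoded without outside reference
    return "coding parameter extractor 42"     # extracted encoded data and parameters

print(select_source(True, False))   # -> coding parameter extractor 42
print(select_source(True, True))    # -> external reference block encoder 43
print(select_source(False, False))  # -> unnecessary block encoder 44
```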
The variable length encoder 46 performs a process of variable-length-encoding the encoded data and the coding parameters which are outputted thereto from the select switch 45, to generate a partial region bitstream showing those encoded results, and also multiplexing the header information of the partial region bitstream outputted from the transcode controller 41 into that partial region bitstream and outputting the partial region bitstream after multiplexing of the header information (a partial region stream in conformity with a predetermined encoding codec).
Referring to
In the case in which the partial region transcoder 4 is configured with a computer, a program in which the processes performed by the transcode controller 41, the coding parameter extractor 42, the external reference block encoder 43, the unnecessary block encoder 44, the select switch 45 and the variable length encoder 46 are described can be stored in a memory of the computer and a CPU of the computer can be made to execute the program stored in the memory.
Next, operations will be explained.
The video encoding device 1 shown in
The video encoding device 1 is further characterized in that the video encoding device limits the GOP size and also limits the reference configuration to a specific pattern, the GOP size and the reference configuration serving as a base when determining a coding mode (an intra coding mode, an inter coding mode or a PCM coding mode) and a coding parameter (an intra prediction parameter, an inter prediction parameter or a PCM coding parameter), and multiplexes the hint information indicating the pieces of limitation information (the motion vector limitation information, the GOP size limitation information and the reference configuration specification information) into the entire region bitstream.
In the image encoding process, the compression efficiency is improved by using the fact that a typical image has a high correlation in its image features with a region spatiotemporally adjacent thereto. In the inter encoding (inter-frame reference encoding), the compression efficiency is improved by using the above-mentioned characteristics to predict the features of an image by referring to a region which exists in an already-encoded frame close in time to the image and which is spatially close to the image.
At that time, because an object in an image may move between frames, a process (a motion-compensated prediction process) is performed that searches for a region having a high correlation, expresses the phase difference between the target region for prediction and the region having a high correlation by using information referred to as a motion vector, and thereby absorbs the movement of the object.
Therefore, even when it is desired to decode only a specific limited region of a specific frame from the encoded stream, it is necessary to also decode the region pointed to by a motion vector, the region being in another frame which is referred to by the frame to be decoded.
Because a frame decoded with inter-frame reference may itself be referred to by a subsequent frame, information about a certain region in the decoded image of a certain frame can propagate, through inter-frame reference, to a region of a subsequent frame wider than the original region. For example, when no limitation is imposed on the maximum of motion vectors, there is a possibility that the propagation range of decoded image information practically spreads without limitation.
The video encoding device 1 in accordance with this Embodiment 1 has a configuration in which fixed limitations are imposed on motion vectors, the GOP size and the reference configuration, thereby suppressing the propagation of decoded image information to a fixed range and making it possible to transmit, as the hint information, those pieces of limitation information to the decoding side.
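A short worked example of the bound that these combined limitations establish (the numbers are illustrative):

```python
def max_propagation(mv_limit, gop_size):
    """Upper bound, in pixels, on how far pixel information can spread in one GOP."""
    return mv_limit * (gop_size - 1)   # at most gop_size - 1 reference steps

# With 64-pixel motion vectors and 8-picture GOPs:
print(max_propagation(64, 8))  # -> 448 pixels in each direction
```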
The format of a video signal to be processed by the video encoding device 1 shown in
The gradation of each pixel can be an 8-bit, 10-bit, or 12-bit one.
In the following explanation, for the sake of convenience, the video signal of the inputted image is assumed to be, unless otherwise specified, a YUV signal, and a case of handling signals having a 4:2:0 format in which the two chrominance components U and V are subsampled with respect to the luminance component Y will be described.
Further, a data unit to be processed which corresponds to each frame of the video signal is referred to as a “picture.”
Although the explanation is made in this Embodiment 1 by assuming that a “picture” is a video frame signal on which progressive scanning is performed, when the video signal is an interlaced signal, a “picture” can be a field image signal which is a unit which constructs a video frame.
Hereafter, the processing performed by the video encoding device 1 will be explained.
The encoding controller 11 hierarchically partitions each image region having a predetermined largest coding block (a CTU or a macroblock) size into coding target blocks each having a coding block size until the number of hierarchical layers partitioned reaches its predetermined upper limit and determines a coding mode for each of the coding target blocks (step ST1 of
Referring to
By performing the hierarchical partitioning with a block having a CTU size being set as a starting point until the depth of the hierarchy reaches a predetermined depth which is set separately according to a quadtree structure, the coding target blocks can be acquired.
At the depth of n, each coding target block is an image region having a size of (Ln, Mn).
Although Ln can be the same as or different from Mn, the case of Ln=Mn is shown in
Hereafter, the coding block size determined by the encoding controller 11 is defined as the size of (Ln, Mn) in the luminance component of each coding target block.
Because quadtree partitioning is performed, (Ln+1, Mn+1)=(Ln/2, Mn/2) is always established.
In the case of a color video signal (4:4:4 format) in which all the color components have the same sample number, such as an RGB signal, all the color components have a size of (Ln, Mn), while in the case of handling a 4:2:0 format, a corresponding chrominance component has a coding block size of (Ln/2, Mn/2).
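The size relations above can be illustrated by the following short sketch, assuming a 64x64 largest coding block; the function name is hypothetical:

```python
def block_sizes(l0, m0, depth):
    """Luminance and 4:2:0 chrominance block sizes at a given hierarchy depth."""
    ln, mn = l0 >> depth, m0 >> depth   # (Ln+1, Mn+1) = (Ln/2, Mn/2) per depth step
    return {"luma": (ln, mn), "chroma_420": (ln // 2, mn // 2)}

for n in range(4):                       # a 64x64 largest coding block, depths 0..3
    print(n, block_sizes(64, 64, n))
# depth 3 yields an 8x8 luminance block and a 4x4 chrominance block
```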
Hereafter, each coding target block in the nth hierarchical layer is expressed by Bn, and a coding mode selectable for each coding target block Bn is expressed by m(Bn).
In the case of a color video signal which consists of a plurality of color components, the coding mode m(Bn) can be configured in such a way that an individual mode is used for each color component, or can be configured in such a way that a common mode is used for all the color components. Hereafter, an explanation will be made by assuming that the coding mode indicates a coding mode for the luminance component of each coding block when having a 4:2:0 format in a YUV signal unless otherwise specified.
The coding mode m(Bn) is one of one or more intra coding modes (generically referred to as “INTRA”), one or more inter coding modes (generically referred to as “INTER”), and one or more PCM coding modes, and the encoding controller 11 selects a coding mode for each coding target block Bn from among all the coding modes available in the picture currently being processed or a subset of those coding modes.
Each coding target block Bn is further partitioned into one or more units (partitions) for prediction process by the block partitioner 12, as shown in
Hereafter, each partition belonging to a coding target block Bn is expressed by Pin (i shows a partition number in the nth hierarchical layer).
How the partitioning of each coding target block Bn into partitions is performed is included as information in the coding mode m(Bn).
While a prediction process is performed on every partition Pin according to the coding mode m(Bn), a prediction parameter is selected for each coding target block Bn or each partition Pin.
The encoding controller 11 generates such a block partitioning state as shown in, for example,
Hatched portions shown in
Each node enclosed by □ shown in
The encoding controller 11 also outputs the motion vector limitation information provided therefor from the outside thereof to the motion compensation predictor 15.
The motion vector limitation information is used for implementing a function of limiting the maximum length of motion vectors, thereby limiting into which region of a subsequent frame the information of a partial region of the decoded image of a certain frame can spread through the reference relation. This motion vector limitation information can be a fixed value for all the frames or can be a different value for each frame.
The encoding controller 11 further outputs the coding mode, the prediction difference signal parameter, the intra prediction parameter, the inter prediction parameter, the PCM coding parameter and the loop filter parameter, which are determined thereby, to the variable length encoder 23.
Further, the encoding controller outputs the motion vector limitation information, the GOP size limitation information and the reference configuration specification information, which are provided thereto from the outside thereof, to the variable length encoder 23.
The process by the encoding controller 11 is performed on each picture inputted to the video encoding device 1. The type of each picture, the configuration of reference between pictures, and so on are controlled, in the encoding controller 11, according to the reference configuration specification information and the GOP size limitation information which are provided from the outside thereof, and other encoding control information, and the coding mode and the coding parameters are determined in the above-mentioned way according to those pieces of picture type information.
When the picture type is I picture, the coding mode is limited to an intra coding mode or a PCM coding mode.
Further, when the picture type is B picture or P picture, the coding mode is determined to be an intra coding mode, an inter coding mode or a PCM coding mode.
Further, when the picture type is B picture or P picture, it is also possible to perform a control operation of limiting the use of intra coding modes and the use of PCM coding modes for all the pictures according to other encoding control information.
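These picture-type restrictions can be summarized as a simple lookup, sketched below; the flag that forbids intra and PCM coding modes in B and P pictures stands in for the other encoding control information mentioned above, and its form is an assumption:

```python
def selectable_modes(picture_type, restrict_bp_to_inter=False):
    """Coding modes from which the encoding controller may select."""
    if picture_type == "I":
        return {"INTRA", "PCM"}
    if picture_type in ("P", "B"):
        if restrict_bp_to_inter:        # the control operation described above
            return {"INTER"}
        return {"INTRA", "INTER", "PCM"}
    raise ValueError(f"unknown picture type: {picture_type}")

print(sorted(selectable_modes("I")))                             # ['INTRA', 'PCM']
print(sorted(selectable_modes("B", restrict_bp_to_inter=True)))  # ['INTER']
```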
When the coding mode m(Bn) determined by the encoding controller 11 is an intra coding mode (when m(Bn)∈INTRA), the select switch 13 outputs the coding target block Bn outputted from the block partitioner 12 to the intra predictor 14 (step ST2).
When the coding mode m(Bn) determined by the encoding controller 11 is an inter coding mode (when m(Bn)∈INTER), the select switch 13 outputs the coding target block Bn outputted from the block partitioner 12 to the motion compensation predictor 15 (step ST3).
Further, when the coding mode m(Bn) determined by the encoding controller 11 is a PCM coding mode, the select switch 13 outputs the coding target block Bn outputted from the block partitioner 12 to the PCM encoder 16 (step ST3).
When the coding mode m(Bn) determined by the encoding controller 11 is an intra coding mode (when m(Bn)∈INTRA), and the intra predictor 14 receives the coding target block Bn from the select switch 13, the intra predictor 14 performs an intra prediction process on each partition Pin in the coding target block Bn by using the intra prediction parameter determined by the encoding controller 11, to generate an intra prediction image PINTRAin (step ST4).
When the coding mode m(Bn) determined by the encoding controller 11 is an inter coding mode (when m(Bn)∈INTER) and the motion compensation predictor 15 receives the coding target block Bn from the select switch 13, the motion compensation predictor 15 compares each partition Pin in the coding target block Bn and a motion-compensated prediction reference image stored in the frame memory 22 (a local decoded image of another frame after a loop filtering process is performed thereon), and searches for a motion vector.
However, when searching for a motion vector, the motion compensation predictor imposes a limitation on the length of the motion vector in such a way that the length of the motion vector does not exceed the maximum indicated by the motion vector limitation information outputted from the encoding controller 11 (searches for a motion vector in the region having the maximum range indicated by the motion vector limitation information).
The maximum indicated by the motion vector limitation information can be fixed for all the frames or can be different for each frame.
As an alternative, the maximum can be changed for each combination of the current frame and the reference destination frame. For example, the maximum of motion vectors can be specified in proportion to the absolute value of the difference between the POC (Picture Order Count: a counter value which is incremented by 1 for each frame in chronological order) of the current frame and the POC of the reference destination frame. In general, because the amount of inter-frame motion increases with increase in the difference in the POC between frames, it is rational to specify the maximum according to such a rule.
Thus, information indicating according to what rule the maximum of motion vectors is specified can also be included in the motion vector limitation information.
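The POC-proportional rule can be sketched as follows; the base limit in pixels per unit of POC difference is a hypothetical parameter that such motion vector limitation information could carry:

```python
def mv_maximum(poc_current, poc_reference, base_limit):
    """Motion vector maximum proportional to the absolute POC difference."""
    return base_limit * abs(poc_current - poc_reference)

def clamp_motion_vector(mv, limit):
    """Clamp a candidate (mvx, mvy) so that the search never exceeds the limit."""
    mvx, mvy = mv
    return max(-limit, min(limit, mvx)), max(-limit, min(limit, mvy))

limit = mv_maximum(poc_current=5, poc_reference=3, base_limit=16)  # -> 32
print(clamp_motion_vector((100, -7), limit))                       # -> (32, -7)
```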
After finding out a motion vector, the motion compensation predictor 15 performs an inter prediction process on each partition Pin in the coding target block Bn by using both the motion vector and the inter prediction parameter determined by the encoding controller 11, to generate an inter prediction image PINTERin (step ST5).
When the coding mode m(Bn) determined by the encoding controller 11 is a PCM coding mode and the PCM encoder 16 receives the coding target block Bn from the select switch 13, the PCM encoder 16 performs a process of reducing the pixel gradation on each pixel included in the coding target block Bn on the basis of the PCM coding parameter outputted from the encoding controller 11, and outputs, as a PCM signal, the pixel value whose gradation is reduced to the variable length encoder 23 (step ST6).
Further, after reducing the gradation, the PCM encoder 16 outputs, as a PCM image (a local decoded image in the PCM coding mode), the pixel value whose gradation is returned again to its original gradation to the loop filter 21.
Because the process of reducing the gradation of each pixel is not performed when the PCM coding parameter indicates that a gradation reduction is not performed, the value of each pixel in the coding target block Bn can be encoded without degradation.
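The gradation reduction performed by the PCM encoder 16 can be sketched as a simple bit shift; the bit widths are illustrative, and a shift of zero corresponds to the lossless case described above:

```python
def pcm_encode(pixels, input_bits=10, pcm_bits=8):
    shift = input_bits - pcm_bits
    signal = [p >> shift for p in pixels]  # PCM signal (gradation reduced)
    image = [s << shift for s in signal]   # PCM image (gradation restored)
    return signal, image

signal, image = pcm_encode([1023, 512, 4], input_bits=10, pcm_bits=8)
print(signal, image)  # [255, 128, 1] [1020, 512, 4]
# With input_bits == pcm_bits the shift is 0 and every pixel value is preserved.
```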
When receiving the coding target block Bn from the block partitioner 12, the subtractor 17 subtracts either the intra prediction image PINTRAin generated by the intra predictor 14 or the inter prediction image PINTERin generated by the motion compensation predictor 15 from each partition Pin in the coding target block Bn, and outputs a prediction difference signal which is the result of the subtraction to the transformation/quantization unit 18 (step ST7).
When receiving the prediction difference signal from the subtractor 17, the transformation/quantization unit 18 refers to the prediction difference coding parameter determined by the encoding controller 11 and performs an orthogonal transformation process (e.g., an orthogonal transformation process such as a DCT (discrete cosine transform) or a KL transform in which bases are designed for a specific learning sequence in advance) on the prediction difference signal, to calculate transform coefficients of the prediction difference signal.
The transformation/quantization unit 18 also refers to the prediction difference coding parameter to quantize the transform coefficients, and outputs the transform coefficients after quantization which are the transform coefficients quantized thereby to the inverse quantization/inverse transformation unit 19 and the variable length encoder 23 (step ST8).
When receiving the transform coefficients after quantization from the transformation/quantization unit 18, the inverse quantization/inverse transformation unit 19 refers to the prediction difference coding parameter determined by the encoding controller 11 to inverse-quantize the transform coefficients after quantization.
The inverse quantization/inverse transformation unit 19 also refers to the prediction difference coding parameter and performs an inverse orthogonal transformation process (e.g., an inverse DCT or an inverse KL transform) on the transform coefficients after inverse quantization, to calculate a local decoded prediction difference signal corresponding to the prediction difference signal outputted from the subtractor 17 (step ST9).
When receiving the local decoded prediction difference signal from the inverse quantization/inverse transformation unit 19, the adder 20 adds the difference image shown by the local decoded prediction difference signal and either the intra prediction image PINTRAin generated by the intra predictor 14 or the inter prediction image PINTERin generated by the motion compensation predictor 15 to calculate, as a local decoded partition image or a group of local decoded partition images, a local decoded image corresponding to the coding target block Bn outputted from the block partitioner 12 (step ST10).
When the processes of steps ST2 to ST10 on all the coding target blocks Bn are completed (steps ST11 and ST12), the loop filter 21 uses the local decoded image (the local decoded image before loop filtering process) outputted from the adder 20 to perform zero or more types of loop filtering processes on the basis of the loop filter parameter outputted from the encoding controller 11, and outputs the local decoded image after loop filtering process to the frame memory 22 (step ST13).
When the loop filter 21 is configured in such a way as not to perform any loop filtering process on blocks encoded in a PCM coding mode, no loop filtering process is performed on any coding target block Bn whose coding mode is a PCM coding mode.
The variable length encoder 23 variable-length-encodes the coefficients after quantization (the encoded data) outputted from the transformation/quantization unit 18, the coding mode m(Bn), the prediction parameter (the intra prediction parameter or the inter prediction parameter) or the PCM coding parameter, the prediction difference coding parameter and the filter parameter which are outputted from the encoding controller 11, the motion vector (when the coding mode is an inter coding mode) outputted from the motion compensation predictor 15, the PCM signal (the encoded data) outputted from the PCM encoder 16, and the hint information (the motion vector limitation information, the GOP size limitation information and the reference configuration specification information) outputted from the encoding controller 11, to generate an entire region bitstream showing those encoded results (step ST14).
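The hint information multiplexed in step ST14 can be pictured, for example, as the following container; a minimal sketch in which the field names and types are assumptions, since this description does not define a concrete syntax.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class HintInformation:
        max_motion_vector: int             # motion vector limitation information: maximum of the
                                           # absolute values of the components of a motion vector [pixels]
        max_gop_size: int                  # GOP size limitation information: maximum number of
                                           # pictures belonging to a GOP
        reference_of: List[Optional[int]]  # reference configuration specification information:
                                           # for each picture of the GOP, the index of the picture
                                           # it refers to (None for the random access point frame F1)

    hint = HintInformation(max_motion_vector=64, max_gop_size=8,
                           reference_of=[None, 0, 1, 2, 3, 4, 5, 6])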
Next, the details of processing performed by the entire region stream decoder 3 of the video transcoding device 2 will be explained.
When receiving the entire region bitstream generated by the video encoding device 1, the variable length code decoder 31 determines the largest coding block size and the upper limit on the number of hierarchical layers partitioned, by using the same method as that of the encoding controller 11.
When information indicating the largest coding block size and the upper limit on the number of hierarchical layers partitioned is multiplexed into the entire region bitstream, the variable length code decoder can determine the largest coding block size and the upper limit on the number of hierarchical layers partitioned by decoding the information.
Next, the variable length code decoder 31 decodes the coding mode which is multiplexed into the entire region bitstream and which is assigned to each largest coding block, and also decodes information (tile partitioning control information) indicating the partitioning state of each largest coding block, the partitioning state being included in the coding mode (step ST21).
After decoding the information indicating the partitioning state of each largest coding block, the variable length code decoder 31 also specifies decoding target blocks (coding target blocks) into which each largest coding block is partitioned hierarchically on the basis of the partitioning state of the largest coding block.
The variable length code decoder 31 further partitions each of the decoding target blocks into one or more units for prediction process on the basis of the partitioning state of the decoding target block, and decodes the coding parameters assigned to each coding target block or each unit for prediction process (step ST21).
When the coding mode assigned to a decoding target block (a coding target block) is an intra coding mode, the variable length code decoder 31 decodes the intra prediction parameter for each of one or more partitions included in the decoding target block (step ST21).
When the coding mode assigned to a decoding target block is an inter coding mode, the variable length code decoder decodes the inter prediction parameter for the decoding target block or for each of one or more partitions included in the decoding target block (step ST21).
When the coding mode assigned to a decoding target block is a PCM coding mode, the variable length code decoder decodes the PCM signal and the PCM coding parameter which are assigned to the decoding target block (step ST21).
When the coding mode assigned to a decoding target block is an intra coding mode or an inter coding mode, the variable length code decoder 31 further partitions each partition which is a unit for prediction process into one or more partitions each of which is a unit for transformation process on the basis of information about a transformation block size, this information being included in the prediction difference coding parameter included in the coding parameters, and decodes the coefficients after quantization for each partition which is a unit for transformation process (step ST21).
In addition, the variable length code decoder 31 decodes the filter parameter multiplexed into the entire region bitstream and outputs the filter parameter to the loop filter 38 (step ST21).
The variable length code decoder 31 outputs all the decoded coding parameters (the coding mode, the intra prediction parameter, the inter prediction parameter, the PCM coding parameter, the motion vector, the prediction difference coding parameter and the loop filter parameter), the hint information (the motion vector limitation information, the GOP size limitation information and the reference configuration specification information) and the encoded data (the coefficients after quantization and the PCM signal) to the partial region transcoder 4.
When the coding mode m(Bn) variable-length-decoded by the variable length code decoder 31 is an intra coding mode (when m(Bn)∈INTRA), the select switch 32 outputs the intra prediction parameter variable-length-decoded by the variable length code decoder 31 to the intra predictor 33 (step ST22).
When the coding mode m(Bn) variable-length-decoded by the variable length code decoder 31 is an inter coding mode (when m(Bn)∈INTER), the select switch 32 outputs the inter prediction parameter and the motion vector which are variable-length-decoded by the variable length code decoder 31 to the motion compensator 34 (step ST23).
When the coding mode m(Bn) variable-length-decoded by the variable length code decoder 31 is a PCM coding mode, the select switch 32 outputs the PCM signal and the PCM coding parameter which are variable-length-decoded by the variable length code decoder 31 to the PCM decoder 35 (step ST23).
When the coding mode m(Bn) variable-length-decoded by the variable length code decoder 31 is an intra coding mode (when m(Bn)∈INTRA) and the intra predictor 33 receives the intra prediction parameter from the select switch 32, the intra predictor 33 performs an intra prediction process on each partition Pin in the decoding target block Bn by using the intra prediction parameter, according to the same procedure as that of the intra predictor 14, to generate an intra prediction image PINTRAin, and outputs the intra prediction image PINTRAin to the adder 37 (step ST24).
When the coding mode m(Bn) variable-length-decoded by the variable length code decoder 31 is an inter coding mode (when m(Bn)∈INTER) and the motion compensator 34 receives the inter prediction parameter and the motion vector from the select switch 32, the motion compensator 34 performs an inter prediction process on either the decoding target block Bn or each partition Pin by using the motion vector and the inter prediction parameter while referring to the decoded image after loop filtering process stored in the frame memory 39, to generate an inter prediction image PINTERin and outputs the inter prediction image PINTERin to the adder 37 (step ST25).
When the coding mode m(Bn) variable-length-decoded by the variable length code decoder 31 is a PCM coding mode and the PCM decoder 35 receives the PCM signal and the PCM coding parameter from the select switch 32, the PCM decoder 35 performs a process of restoring the gradation of the PCM signal corresponding to each pixel of the decoding target block Bn to the gradation of a decoded image on the basis of the PCM coding parameter, and outputs the decoded image before loop filtering process of the restored decoding target block Bn to the loop filter 38 (step ST26). The PCM decoder also outputs the decoded image before loop filtering process to the partial region transcoder 4.
When receiving the coefficients after quantization and the prediction difference coding parameter from the variable length code decoder 31, the inverse quantization/inverse transformation unit 36 refers to the prediction difference coding parameter to inverse-quantize the coefficients after quantization, according to the same procedure as that of the inverse quantization/inverse transformation unit 19, and also performs an inverse orthogonal transformation process on the transform coefficients after inverse quantization, to calculate a decoded prediction difference signal (step ST27).
The adder 37 adds the difference image shown by the decoded prediction difference signal calculated by the inverse quantization/inverse transformation unit 36 and either the intra prediction image PINTRAin generated by the intra predictor 33 or the inter prediction image PINTERin generated by the motion compensator 34, and outputs, as a group of one or more decoded partition images included in the decoding target block, a decoded image to the loop filter 38 (step ST28). The adder also outputs the decoded image before loop filtering process to the partial region transcoder 4.
When the processes of steps ST21 to ST28 on all the coding target blocks Bn in the picture are completed (steps ST29 and ST30), the loop filter 38 performs zero or more types of loop filtering processes on the decoded image before loop filtering process outputted from either the adder 37 or the PCM decoder 35 on the basis of the filter parameter outputted from the variable length code decoder 31, and stores the decoded image after loop filtering process in the frame memory 39 (step ST31).
As an alternative, the decoded image after loop filtering process can be outputted to outside the video transcoding device 2.
Next, the details of processing performed by the partial region transcoder 4 of the video transcoding device 2 will be explained.
When receiving the hint information (the motion vector limitation information, the GOP size limitation information and the reference configuration specification information) from the entire region stream decoder 3 and the display area information from the outside, the transcode controller 41 specifies, for each picture, an indispensable encoded region which is a region required in order to decode the display area indicated by the display area information, and also determines a target region to be transcoded which defines the image size of the partial region bitstream, and then outputs indispensable encoded region information indicating the indispensable encoded region and target region to be transcoded information indicating the target region to be transcoded (step ST41).
When the indispensable encoded regions in the pictures belonging to a GOP have different sizes, the transcode controller determines, for example, an indispensable encoded region having the largest size to be a target region to be transcoded common among the pictures.
Hereafter, the process of specifying the target region to be transcoded and the indispensable encoded region which is performed by the transcode controller 41 will be explained concretely.
In the video decoding process, a decoded image is generated by adding the prediction image acquired through a motion-compensated prediction process (an image at a position determined on the basis of a motion vector while a decoded frame is referred to) and the prediction difference signal.
When the random access point frame at the head of a GOP is expressed by F1, a frame which refers to a frame Fn is expressed by Fn+1, and the region to be decoded in the frame Fn is a partial region Pn, it is necessary to determine, for the frame Fn−1, the whole of the partial region Pn−1 of the frame Fn−1 to which the partial region Pn of the frame Fn refers to be a target region to be decoded.
At that time, because the inter-frame reference in the motion-compensated prediction process is performed in a multistage way within the GOP (the frame Fn refers to the frame Fn−1, the frame Fn−1 refers to the frame Fn−2, and so on, until the reference reaches the frame F1, which does not refer to any frame), the inter-frame dependency relation propagates from the frame F1 to the frame Fn.
Therefore, in the reference configuration within the GOP, in order to correctly decode a partial region PN of a frame FN belonging to the final stage, it is necessary to determine the partial regions P1 to PN of the frames from the frame F1 up to the frame FN, which follow from the propagation of the above-mentioned dependency relation, to be target regions to be decoded.
Supposing that the video transcoding device 2 processes, instead of the entire region bitstream outputted from the video encoding device 1, a bitstream into which no hint information is multiplexed, the partial region Pn of each frame Fn cannot be specified without analyzing the values of all the motion vectors within the GOP, so that the operation amount required for the transcoding becomes huge.
Further, because the maximum of the GOP size, the maximum of motion vectors, and the reference configuration differ from bitstream to bitstream, the shape and the size of the partial region Pn of each frame Fn for the same display area specification information are indefinite, and it becomes difficult to handle the partial region both in the transcoding process and in the process of decoding the stream after transcoding.
However, in accordance with this Embodiment 1, because the hint information (the motion vector limitation information, the GOP size limitation information and the reference configuration specification information) is multiplexed into the entire region bitstream outputted from the video encoding device 1, the partial region Pn of each frame Fn can be specified with a low operation amount, without analyzing the value of each motion vector, as described below.
It is hereafter premised that, for simplification of implementation, when the partial region Pn of each frame Fn is not a rectangular region, a rectangular region PLn including the partial region Pn is determined to be the target to be decoded.
Further, there will be considered a case in which the maximum of the absolute values of the horizontal and vertical components of a motion vector, the maximum being indicated by the motion vector limitation information included in the hint information, is a fixed value V [pixels] in all the frames. It is further assumed that the rectangular region to be decoded of the frame Fn which refers to the frame Fn−1 is PLn.
In this case, the rectangular region to be decoded PLn−1 of the frame Fn−1 can be determined to be a rectangular region in which V pixels are fixedly added to the periphery of the rectangular region to be decoded PLn without analyzing the value of a motion vector. This is because the maximum of motion vectors is V, and hence the amount of protrusion of a motion vector pointing to a point outside the rectangular region to be decoded PLn is suppressed to at most V pixels.
Because the dependency relation of the reference propagates from the random access point frame F1, the rectangular region to be decoded PL1 of the frame F1 can be determined as a rectangular region in which V*(n−1) pixels are added to the periphery of the rectangular region to be decoded PLn.
The rectangular region to be decoded PLn of each frame Fn can be determined similarly even when the maximum of motion vectors differs from frame to frame. For example, when the maximum of motion vectors of the frame Fn is expressed by Vn, with respect to the rectangular region to be decoded PLN of the frame FN, the rectangular region to be decoded PLn of the frame Fn (n<N) is determined as a rectangular region in which Vsum pixels are added to the periphery of the rectangular region to be decoded PLN, where Vsum is the sum Vn+1+Vn+2+ . . . +VN of the maximums of motion vectors of the frames on the reference path from the frame Fn to the frame FN.
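The padding rule described above can be pictured, for example, by the following minimal sketch, which derives the rectangular regions to be decoded PL1 to PLN from PLN by fixed padding, without analyzing any motion vector; the rectangle convention, the clipping to the picture bounds and all names are assumptions for illustration.

    # A rectangle is (left, top, right, bottom) in pixels.
    def pad_rect(rect, v, width, height):
        l, t, r, b = rect
        # add v pixels to the periphery, clipped to the picture bounds
        return (max(l - v, 0), max(t - v, 0), min(r + v, width), min(b + v, height))

    def regions_to_decode(rect_N, v_max, width, height):
        # v_max = [V2, V3, ..., VN], where Vk is the maximum of motion vectors
        # used when the frame Fk refers to the frame Fk-1
        rects = [rect_N]
        for v in reversed(v_max):          # PL(k-1) is PLk padded by Vk pixels
            rects.append(pad_rect(rects[-1], v, width, height))
        rects.reverse()                    # rects[0] is PL1, ..., rects[-1] is PLN
        return rects

    # With a fixed maximum V in every frame, PL1 is PLN padded by V*(N-1) pixels
    # (up to clipping at the picture boundary), as stated above.
    print(regions_to_decode((128, 128, 256, 256), [16, 16, 16], 3840, 2160))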
Further, when the maximum of motion vectors is specified in such a way as to be in proportion to the difference in absolute value between the POC of the frame Fn and that of the reference destination frame Fn−1, as described in the explanation of the video encoding device 1, the rectangular region to be decoded is determined as follows.
It is considered first that when the difference in absolute value between the POC of the frame Fn and that of the reference destination frame Fn−1 is expressed by d(Fn−Fn−1), the maximum of motion vectors in the case in which Fn refers to Fn−1 is determined to be αd(Fn−Fn−1) (α is a fixed coefficient).
In this case, with respect to the rectangular region to be decoded PLN of the frame FN, the rectangular region to be decoded PLn of the frame Fn (n<N) is determined as a rectangular region in which Vsum pixels are added to the periphery of the rectangular region to be decoded PLN, where Vsum=αd(Fn+1−Fn)+αd(Fn+2−Fn+1)+ . . . +αd(FN−FN−1).
Further, the above-mentioned equation can be simplified as follows.
Vsum=αd(FN−Fn)
More specifically, the value of Vsum is decided from the difference in absolute value between the POC of FN and that of Fn.
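The simplification holds because the per-step maxima telescope along the reference chain. Under the assumption that the POCs increase monotonically from Fn to FN (so that the absolute differences add up), the derivation reads, in LaTeX notation:

    V_{\mathrm{sum}} = \sum_{k=n+1}^{N} \alpha\, d(F_k - F_{k-1})
                     = \alpha \sum_{k=n+1}^{N} \left( \mathrm{POC}(F_k) - \mathrm{POC}(F_{k-1}) \right)
                     = \alpha\, d(F_N - F_n)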
Further, the rectangular region to be decoded PLm of the frame Fm (m<n), which is a rectangular region in which Vsum′ pixels are added to the periphery of the rectangular region to be decoded PLN, is called, with respect to the rectangular region to be decoded PLn of the frame Fn, the dependence rectangular region PDLnm of the rectangular region to be decoded PLn with respect to the frame Fm. In the case in which the reference configuration is one in which a certain frame is referred to by a plurality of frames, the largest dependence rectangular region among the dependence rectangular regions determined from the respective reference source frames is determined to be the rectangular region to be decoded of that frame.
As mentioned above, in order to, consistently within a GOP, generate a partial region bitstream which makes it possible to correctly decode a specific partial region, it is necessary to include at least information about the rectangular region to be decoded of each frame in the bitstream. Further, because it is impossible or difficult to change the size (the numbers of pixels in horizontal and vertical directions) of each frame within a GOP, it is necessary to set the size of each frame in a GOP to be equal to a size including the largest rectangular region to be decoded (in many cases, the rectangular region to be decoded of the frame F1) among the rectangular regions to be decoded of all the frames in the GOP.
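For example, under the rectangle convention assumed above, the uniform frame size within one GOP can be computed as the bounding size of the rectangular regions to be decoded; a minimal sketch:

    def gop_frame_size(rects):
        # rects: the rectangular regions to be decoded PL1 ... PLN of one GOP,
        # each given as (left, top, right, bottom) in pixels; the frame size
        # must cover the largest region, in many cases PL1
        width = max(r[2] - r[0] for r in rects)
        height = max(r[3] - r[1] for r in rects)
        return width, height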
On the basis of these requirements, when the display area information is provided from the outside, the transcode controller 41 determines the indispensable encoded region and the target region to be transcoded according to the procedure described above.
When receiving the indispensable encoded region information from the transcode controller 41, the coding parameter extractor 42 extracts, from among the encoded data and the coding parameters of the entire region which are outputted from the entire region stream decoder 3, the encoded data (the coefficients after quantization or the PCM signal) and the coding parameters (the coding mode (an intra coding mode, an inter coding mode or a PCM coding mode), the prediction parameter (the intra prediction parameter or the inter prediction parameter) or the PCM coding parameter, the motion vector (when the coding mode is an inter coding mode), the prediction difference coding parameter, the loop filter parameter, and the hint information (the motion vector limitation information, the GOP size limitation information and the reference configuration specification information)) of each coding target block included in the indispensable encoded region indicated by the indispensable encoded region information (including a coding target block only a part of which is included in the indispensable encoded region), and outputs the extracted encoded data and coding parameters to the external reference block encoder 43 and the select switch 45 (step ST42).
When receiving the indispensable encoded region information from the transcode controller 41, the external reference block encoder 43 checks to see whether or not each coding target block included in the indispensable encoded region indicated by the indispensable encoded region information (each coding target block belonging to a boundary of the indispensable encoded region) is an external reference block on which intra encoding is performed by referring to the value of a pixel located outside the indispensable encoded region (step ST43).
Whether or not each coding target block included in the indispensable encoded region is an external reference block can be determined by extracting the coding parameters of that coding target block from the encoded data and the coding parameters of the entire region which are outputted from the entire region stream decoder 3, and then checking the coding mode and the prediction parameter which are included in the coding parameters.
When a coding target block included in the indispensable encoded region is an external reference block, the external reference block encoder 43 encodes the decoded image of the coding target block by using an encoding method of not using the value of any pixel located outside the indispensable encoded region for prediction reference, and outputs encoded data which is the result of the encoding and the coding parameters used for the encoding of the above-mentioned decoded image to the select switch 45 (step ST44).
The reason why, when a coding target block is an external reference block, the encoded data and the coding parameters of that coding target block are newly re-determined instead of being extracted from the encoded data and the coding parameters of the entire region is as follows: the decoded result of a region located outside the indispensable encoded region is not guaranteed to match the decoded image before transcoding, and hence an intra prediction referring to that region would yield a prediction result different from the one originally provided.
As a method of re-determining the encoded data and the coding parameters, the following methods (1) to (3) are provided, for example.
(1) A method of encoding each pixel of the decoded image of the external reference block in a PCM coding mode. In the case of using this method, a result which is completely the same as the inputted decoded image can be decoded, depending on the accuracy of the PCM encoding.
(2) A method of using the encoded data and the coding parameters of the external reference block just as they are, while additionally encoding, in a PCM coding mode, the blocks which are located outside the indispensable encoded region and are referred to by the external reference block. In the case of using this method, a result which is completely the same as the inputted decoded image can be decoded, depending on the accuracy of the PCM encoding. Depending on the size of the external reference block, the code amount can be further reduced as compared with the case of using the method (1).
(3) A method of re-encoding the decoded image of the external reference block by using a prediction which does not refer to any pixel located outside the indispensable encoded region. The determination of the coding parameters by using the method (3) can be carried out by using the same method as the encoding method for use in the video encoding device 1.
The unnecessary block encoder 44 encodes each coding target block (unnecessary block) which is located outside the indispensable encoded region and inside the target region to be transcoded, in, for example, a skip mode in the inter encoding method, and outputs encoded data which is the result of the encoding and the coding parameters used for the encoding of the above-mentioned coding target block to the select switch 45 (step ST45).
Because an unnecessary block is needed only for making the frame size uniform within each GOP, and belongs to an image region which is used neither for display nor for reference from a subsequent frame, the result of decoding an unnecessary block can be any result.
Therefore, it is desirable to use coding parameters whose code amount is as small as possible for an unnecessary block.
For example, a method of avoiding block partitioning of an unnecessary block as much as possible, and of encoding the unnecessary block in a skip mode of inter prediction (a mode in which neither vector information (in some cases, prediction vector information is excepted) nor the coefficients after quantization are encoded), is used, as in the sketch below.
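As one concrete picture of such dummy parameters, a minimal sketch follows; the field names are assumptions for illustration.

    def unnecessary_block_parameters():
        # Dummy coding parameters for an unnecessary block: the block is left
        # unpartitioned and encoded in an inter skip mode, so that neither
        # vector information nor coefficients after quantization are encoded.
        return {"block_partitioning": "none",
                "coding_mode": "inter_skip",
                "vector_information": None,
                "coefficients_after_quantization": None}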
In the case of using the above-mentioned method (2) as the method of determining the coding parameters of an external reference block, when a coding target block is not included in the indispensable encoded region but is referred to by the external reference block, the external reference block encoder 43 needs to determine the coding parameters of that coding target block by using the above-mentioned method (2).
The select switch 45 refers to the target region to be transcoded information and the indispensable encoded region information which are outputted from the transcode controller 41, to check to see whether or not the coding target block is an unnecessary block, and, when the coding target block is included in the indispensable encoded region, checks to see whether or not the coding target block is an external reference block (steps ST46 and ST47).
When the coding target block is included in the indispensable encoded region, but is not an external reference block, the select switch 45 outputs the encoded data and the coding parameters which are outputted from the coding parameter extractor 42 to the variable length encoder 46 (step ST48).
When the coding target block is an external reference block, the select switch 45 outputs the encoded data and the coding parameters which are outputted from the external reference block encoder 43 to the variable length encoder 46 (step ST49).
Further, when the coding target block is an unnecessary block, the select switch 45 outputs the encoded data and the coding parameters which are outputted from the unnecessary block encoder 44 to the variable length encoder 46 (step ST50).
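Taken together, steps ST46 to ST50 amount to a three-way classification of each coding target block. The following is a minimal, self-contained sketch of that classification; the rectangle convention, the overlap test and all names are assumptions for illustration, not the device's actual interface.

    def overlaps(a, b):
        # rectangles are (left, top, right, bottom); true if a and b intersect
        return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

    def classify_block(block_rect, refers_outside, indispensable_rect):
        if not overlaps(block_rect, indispensable_rect):
            return "unnecessary"         # unnecessary block encoder 44 (step ST50)
        if refers_outside:
            return "external_reference"  # external reference block encoder 43 (step ST49)
        return "extracted"               # parameters reused from the extractor 42 (step ST48)

    # Example: a block on the region boundary whose intra prediction refers to
    # pixels outside the indispensable encoded region.
    print(classify_block((0, 0, 64, 64), True, (32, 32, 512, 512)))  # external_reference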
When receiving the encoded data and the coding parameters of the coding target block from the select switch 45, the variable length encoder 46 entropy-encodes the encoded data and the coding parameters to generate a partial region bitstream showing those encoded results (step ST51).
Because a coding target block is extracted only when it is included in the target region to be transcoded, there is a case in which a coding target block which is adjacent in the original bitstream does not exist in the partial region bitstream. Therefore, it is necessary to redo the prediction process on the coding parameters which use information about an adjacent coding target block, such as a motion vector and a filter parameter. The process by the variable length encoder 46, including such a prediction process, is performed by using the same method as that of the variable length encoder 23 of the video encoding device 1.
Further, because the partial region bitstream includes a wider range than the display area indicated by the display area information, the variable length encoder 46 acquires, from the transcode controller 41, header information of the partial region bitstream indicating which area is the display area, multiplexes the header information into the partial region bitstream, and outputs the partial region bitstream after multiplexing of the header information (a partial region bitstream in conformity with a predetermined encoding codec) to the video decoding device 6.
The partial region bitstream generated by the variable length encoder 46 is configured in such a way as to be able to be decoded by the video decoding device 6.
The video decoding device 6 has the same functions as those of the entire region stream decoder 3. However, the video decoding device 6 can be configured in such a way as not to have the function, which the entire region stream decoder 3 has, of outputting the hint information, the coding parameters and the decoded image before loop filtering process to the outside thereof.
Further, the video decoding device 6 can be configured in such a way as to carry out the decoding by using a means different from that of the entire region stream decoder 3. In this case, the variable length encoder 46 of the partial region transcoder 4 performs variable length encoding on the encoded data and the coding parameters in such a way as to be compatible with the decoder (an encoding codec for partial region bitstream) of the video decoding device 6.
As can be seen from the above description, because the variable length encoder 23 of the video encoding device 1 in accordance with this Embodiment 1 is configured in such a way as to multiplex the hint information into the entire region bitstream, the hint information including the motion vector limitation information indicating the maximum range in which a search for a motion vector can be performed, the GOP size limitation information indicating the maximum of the GOP size which is the number of pictures belonging to a GOP, and the reference configuration specification information indicating a picture which is referred to at the time of decoding each picture belonging to the GOP, there is provided an advantage of being able to generate an entire region bitstream which is suitable for efficient generation of a partial region bitstream with a low operation amount without causing a reduction in the compression efficiency of the entire region bitstream.
More specifically, because the video encoding device 1 in accordance with this Embodiment 1 encodes the inputted image by imposing limitations on the maximum of motion vectors, the GOP size and the reference configuration of the inputted image in such a way that they meet the motion vector limitation information, the GOP size limitation information and the reference configuration specification information, respectively, the video encoding device can suppress the range in which information about a specified region of the decoded image of a certain frame propagates to a subsequent frame in inter-frame reference to a specific range, in the motion-compensated prediction process using inter-frame reference. Further, because the variable length encoder 23 multiplexes, as the hint information, the motion vector limitation information, the GOP size limitation information and the reference configuration specification information into the entire region bitstream, there is provided an advantage of being able to explicitly notify the range of propagation of decoded image information to the video transcoding device 2.
On the other hand, the video transcoding device 2 is configured in such a way that, when receiving the entire region bitstream outputted from the video encoding device 1, the entire region stream decoder 3 decodes the encoded data, the coding parameters, the hint information and the decoded image before loop filtering process of the entire region and outputs them to the partial region transcoder 4. The partial region transcoder 4 specifies, on the basis of the display area information and the hint information which are inputted thereto, the indispensable encoded region which is a region required in order to decode the display area of each frame, and the target region to be transcoded which defines the image size of the partial region bitstream. The partial region transcoder 4 then assigns the coding parameters outputted from the entire region stream decoder 3, just as they are, to a coding target block which belongs to the inside of the indispensable encoded region and which does not have to refer to information of a region located outside the indispensable encoded region, regenerates encoded data and coding parameters for a coding target block which belongs to the inside of the indispensable encoded region and which needs to refer to information of a region located outside the indispensable encoded region, and assigns a dummy coding parameter with a small code amount to a coding target block which is located outside the indispensable encoded region but inside the target region to be transcoded. The partial region transcoder 4 multiplexes, as a partial region bitstream, the encoded data and the coding parameters of the coding target blocks in the target region to be transcoded which are assigned in this way, together with appropriate header information of the partial region bitstream. There is therefore provided an advantage of being able to decode an image which is the same as or close to the decoded image of the partial region corresponding to the display area information, the decoded image being included in the decoded image of the entire region bitstream inputted to the video transcoding device 2, and also being able to acquire a partial region bitstream having a smaller size than the entire region bitstream, with a low operation amount.
In this Embodiment 2, an example in which a video encoding device and a video transcoding device as shown in above-mentioned Embodiment 1 are applied to a system different from that in accordance with above-mentioned Embodiment 1 will be explained.
In the system in accordance with this Embodiment 2, an entire region bitstream generated by a video encoding device 51 is stored in a storage 52, and a video distribution device 53 distributes partial region bitstreams to video decoding devices 50-1 to 50-N.
The video distribution device 53 is configured with an entire region stream decoder 54, a partial region transcoder 55, and a distribution controller 56, and has a function of generating partial region streams on the basis of the entire region bitstream generated by the video encoding device 51 and pieces of display area specification information inputted thereto from video decoding devices 50-1 to 50-N, and outputting the partial region streams generated thereby to the video decoding devices 50-1 to 50-N.
The entire region stream decoder 54 has the same functions as those of the entire region stream decoder 3 in accordance with above-mentioned Embodiment 1.
The partial region transcoder 55 has the same functions as those of the partial region transcoder 4 in accordance with above-mentioned Embodiment 1.
The distribution controller 56 has a function of receiving the pieces of display area information outputted from the video decoding devices 50-1 to 50-N, and outputting the pieces of display area information to the partial region transcoder 55. The distribution controller also has a function of, when receiving a partial region bitstream outputted from the partial region transcoder 55, outputting that partial region bitstream to the video decoding device that has outputted the display area information used at the time of generating that partial region bitstream.
The entire region display device 57 is display equipment to display the entire region decoded image outputted from the entire region stream decoder 54.
Each of the video decoding devices 50-1 to 50-N outputs display area information to the video distribution device 53, and also decodes the image of a partial region from the partial region bitstream outputted from the video distribution device 53 on the basis of the display area information, to generate a partial region decoded image.
Each of the partial region display devices 51-1 to 51-N displays the partial region decoded image from the corresponding one of the video decoding devices 50-1 to 50-N.
As a concrete example of practical use, an example in which the video distribution device 53 is built in a surveillance camera recorder that stores a monitor video image having high definition will be explained.
In this case, the video encoding device 51 is an encoder device disposed in a surveillance camera that supplies monitor video image data to the camera recorder and can acquire a high definition video, and that generates an entire region bitstream which is an object to be distributed. An entire region bitstream generated by the video encoding device 51 which is an encoder device is stored in the storage 52 which is built in the camera recorder.
The camera recorder can decode the entire region bitstream stored in the storage 52 by using the entire region stream decoder 54, and can display an entire region decoded image generated thereby on the entire region display device 57 which is connected directly thereto.
The camera recorder can also distribute monitor video image data to the display terminals (tablet terminals, smart phones, PCs, and so on) of a plurality of users staying at distant places. Monitor video image data is distributed to the display terminals of users staying at distant places via a predetermined transmission system. However, there can be considered a case in which it is difficult to transmit all of the entire region bitstream, depending on the transmission capacity of the transmission system. In this system, when a user operates his or her display terminal to specify an arbitrary display area, and then makes a request for monitor video image data, display area information indicating the display area is inputted to the video distribution device 53 of the camera recorder via the predetermined transmission system, and the video distribution device 53 generates a partial region bitstream including the coding parameters required to play back the image of the display area specified by the user, and transmits the partial region bitstream to the display terminal which is the request source via the predetermined transmission system.
By thus specifying only a necessary region in the monitor video image, it becomes possible to suppress the amount of transmitted data, and for the user to watch and listen to the monitor video image at a distant place. Further, because the system makes it possible to specify a display area individually for each user, the system enables a user who, for example, can use a transmission line having a large transmission capacity to specify a larger display area.
The video decoding devices 50-1 to 50-N are built in the display terminals of the users, and receive partial region bitstreams transmitted from the video distribution device 53 and decode the partial region bitstreams into partial region decoded images, respectively, so that the partial region decoded images are displayed in the display terminals, respectively.
As mentioned above, by using the system in accordance with this Embodiment 2, each of a plurality of users can individually specify an arbitrary display area and watch and listen to the video of the specified partial region, while the amount of transmitted data is suppressed.
In this Embodiment 3, a video stream transmission system for employing the video encoding device and the video transcoding device which are shown in any of above-mentioned Embodiments 1 and 2 more efficiently will be explained.
In this Embodiment 3, a state in which an entire region image is partitioned into subpictures, such as slices or tiles, is assumed.
A video encoding device 1 is the same as the video encoding device 1 shown in above-mentioned Embodiment 1 (or the video encoding device 51 shown in above-mentioned Embodiment 2). After generating a bitstream (a bitstream of each subpicture) into which hint information is multiplexed, the video encoding device combines the bitstreams of the subpictures for the entire region image, and outputs an entire region bitstream (an entire region stream) which is a bitstream of the entire region image.
A MUXER 61 is a multiplexing transmission device to multiplex the entire region bitstream outputted from the video encoding device 1, and subpicture information indicating both the state of partitioning into the subpictures in the entire region image and the data position of the bitstream of each of the subpictures, the bitstream being included in the entire region bitstream, into a multiplexed signal in a transmission format set in advance, and to transmit the multiplexed signal.
A DEMUXER 62 is a demultiplexing device to receive the multiplexed signal transmitted by the MUXER 61 and demultiplex the multiplexed signal into the entire region bitstream and the subpicture information which are included in the multiplexed signal, and to refer to both the subpicture information and display area information indicating a subpicture which is a target to be decoded, to extract the bitstream of the subpicture which is a target to be decoded from the entire region bitstream.
Next, operations will be explained.
For example, when the entire region image is partitioned into six subpictures (Sub-pic1 to Sub-pic6), the video encoding device 1 generates the bitstream of each of the subpictures and outputs an entire region bitstream in which the bitstreams of the six subpictures are combined.
At that time, when the video decoding device 6 decodes only a partial region in the entire region image, the display area information indicating subpictures which are a target to be decoded is inputted to the DEMUXER 62 and the video transcoding device 2.
In this example, the display area information indicates the subpictures Sub-pic1 and Sub-pic4 as the target to be decoded.
Although in a decoding device each subpicture needs to be one that has been encoded on a per coding unit basis which makes it possible to decode each subpicture independently (e.g., on a per NAL unit basis in HEVC or H.264), because it is assumed that the video decoding device 6 in accordance with this Embodiment 3 can carry out the decoding on a per NAL unit basis, the video decoding device 6 can decode the subpictures which are a target to be decoded even if the video transcoding device 2 generates a bitstream by using only the NALs corresponding to Sub-pic1 and Sub-pic4.
The MUXER 61 receives an input of the subpicture information indicating both the state of partitioning into the subpictures in the entire region image and the data position of the bitstream of each of the subpictures, the bitstream being included in the entire region bitstream, from the outside thereof.
In this example, the subpicture information includes information indicating that the entire region image is partitioned into the six subpictures Sub-pic1 to Sub-pic6, and information indicating the data position of the bitstream of each of the subpictures within the entire region bitstream.
Although the example in which the subpicture information includes both the information indicating the partitioning state and the data position information is shown above, the subpicture information can further include other information.
When receiving the entire region bitstream from the video encoding device 1, the MUXER 61 multiplexes the entire region bitstream and the subpicture information into a multiplexed signal in a transmission format set in advance, and transmits the multiplexed signal.
As the transmission format in this example, for example, a transmission format defined in MPEG-2 TS, MMT (MPEG Media Transport) or the like can be considered. The above-mentioned subpicture information is multiplexed, as a descriptor of one of these transmission systems, together with the entire region bitstream. However, because the above-mentioned transmission format is only an example, another transmission format can alternatively be used.
When receiving the multiplexed signal transmitted by the MUXER 61, the DEMUXER 62 demultiplexes the multiplexed signal into the entire region bitstream and the subpicture information which are included in the multiplexed signal.
The DEMUXER 62 also specifies the subpictures which are a target to be decoded by referring to the display area information which is provided thereto from the outside and which indicates the subpictures which are a target to be decoded. In this example, Sub-pic1 and Sub-pic4 are specified as the target to be decoded.
After specifying the subpictures which are a target to be decoded, the DEMUXER 62 refers to the subpicture information separated from the multiplexed signal, to specify the bitstream of each of the subpictures which are a target to be decoded, the bitstream being included in the entire region bitstream, and extracts the bitstreams of the subpictures which are a target to be decoded from the entire region bitstream.
In this example, the bitstreams of Sub-pic1 and Sub-pic4 are extracted from the entire region bitstream, as in the sketch below.
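For example, if the subpicture information is pictured as a table mapping each subpicture to the byte range of its bitstream within the entire region bitstream, the extraction reduces to slicing; a minimal sketch, in which the table layout and all names are assumptions.

    def extract_subpicture_streams(entire_stream, subpic_info, targets):
        # subpic_info: {subpicture id: (byte offset, byte length)} within the
        # entire region bitstream; targets: ids of the subpictures to be decoded
        wanted = set(targets)
        return {sid: entire_stream[off:off + length]
                for sid, (off, length) in subpic_info.items() if sid in wanted}

    # Example: extract the bitstreams of Sub-pic1 and Sub-pic4 only.
    info = {1: (0, 100), 2: (100, 80), 3: (180, 90),
            4: (270, 120), 5: (390, 70), 6: (460, 60)}
    partial = extract_subpicture_streams(bytes(520), info, targets=[1, 4])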
After the DEMUXER 62 extracts the bitstreams of the subpictures which are a target to be decoded, the video transcoding device 2 generates a partial region bitstream from those bitstreams and outputs the partial region bitstream to the video decoding device 6, like that in accordance with above-mentioned Embodiment 1.
As can be seen from the above description, the video stream transmission system in accordance with this Embodiment 3 includes the MUXER 61 to multiplex the entire region bitstream outputted from the video encoding device 1, and the subpicture information indicating both the state of partitioning into the subpictures in the entire region image and the data position of the bitstream of each of the subpictures within the entire region bitstream, into a multiplexed signal in a transmission format set in advance, and to transmit the multiplexed signal. The system also includes the DEMUXER 62 to receive the multiplexed signal transmitted by the MUXER 61, demultiplex the multiplexed signal into the entire region bitstream and the subpicture information which are included in the multiplexed signal, and refer to both the subpicture information and the display area information indicating the subpictures which are a target to be decoded, to extract the bitstreams of those subpictures from the entire region bitstream. There is therefore provided an advantage of making it sufficient to transmit only the bitstreams which are included in the entire region bitstream generated by the video encoding device 1 and which are required for the decoding, thereby being able to reduce the transmission amount of the bitstream.
Although the example in which the DEMUXER 62 outputs the bitstream extracted from the entire region bitstream to the video transcoding device 2 is shown in this Embodiment 3, the video transcoding device 2 can be omitted and the bitstream extracted from the entire region bitstream can be outputted directly to the video decoding device 6.
In this case, while the size of the bitstream inputted to the video decoding device 6 becomes large as compared with the configuration including the video transcoding device 2, the transcoding process can be omitted, so that the processing load on the transmission side is reduced.
Although the example in which the DEMUXER 62 extracts the bitstreams of the subpictures which are a target to be decoded from the entire region bitstream by referring to the subpicture information and the display area information is shown in above-mentioned Embodiment 3, the MUXER 61 can alternatively extract the bitstreams of the subpictures which are a target to be decoded from the entire region bitstream by referring to the subpicture information and the display area information.
In this case, the MUXER 61 multiplexes the bitstreams of subpictures which are a target to be decoded, the bitstreams being extracted from the entire region bitstream, into a multiplexed signal in a transmission format set in advance, and transmits that multiplexed signal to the DEMUXER 62.
The DEMUXER 62 receives the multiplexed signal transmitted by the MUXER 61, demultiplexes the multiplexed signal into the bitstreams of the subpictures which are a target to be decoded, the bitstreams being included in the multiplexed signal, and outputs those bitstreams to the video transcoding device 2 or the video decoding device 6.
This Embodiment 4 provides an advantage of being able to further reduce the transmission amount of the bitstream as compared with above-mentioned Embodiment 3.
The MUXER 61 can acquire the display area information from, for example, either the video decoding device 6 which is the decoding side or the video encoding device 1 which is the transmission side. As an alternative, the MUXER can acquire the display area information through a user's input.
While the invention has been described in its preferred embodiments, it is to be understood that an arbitrary combination of two or more of the above-mentioned embodiments can be made, various changes can be made in an arbitrary component in accordance with any one of the above-mentioned embodiments, and an arbitrary component in accordance with any one of the above-mentioned embodiments can be omitted within the scope of the invention.
The video encoding device in accordance with the present invention is suitable for devices or the like that need to efficiently generate a partial region bitstream with a low operation amount without causing a reduction in the compression efficiency of the entire region bitstream.
1 video encoding device, 2 video transcoding device, 3 entire region stream decoder (indispensable encoded region determinator), 4 partial region transcoder (indispensable encoded region determinator, parameter extractor, partial region stream generator), 5 entire region display device, 6 video decoding device, 7 partial region display device, 11 encoding controller (prediction image generator), 12 block partitioner (prediction image generator), 13 select switch (prediction image generator), 14 intra predictor (prediction image generator), 15 motion compensation predictor (prediction image generator), 16 PCM encoder, 17 subtractor (bitstream generator), 18 transformation/quantization unit (bitstream generator), 19 inverse quantization/inverse transformation unit, 20 adder, 21 loop filter, 22 frame memory, 23 variable length encoder (bit stream generator), 31 variable length code decoder, 32 select switch, 33 intra predictor, 34 motion compensator, 35 PCM decoder, 36 inverse quantization/inverse transformation unit, 37 adder, 38 loop filter, 39 frame memory, 41 transcode controller, 42 coding parameter extractor, 43 external reference block encoder, 44 unnecessary block encoder, 45 select switch, 46 variable length encoder, 51 video encoding device, 52 storage, 53 video distribution device, 54 entire region stream decoder, 55 partial region transcoder, 56 distribution controller, 57 entire region display device, 50-1 to 50-N video decoding device, 51-1 to 51-N partial region display device, 61 MUXER (multiplexing transmission device), and 62 DEMUXER (demultiplexing device).
Number | Date | Country | Kind |
2013-185196 | Sep 2013 | JP | national |
Filing Document | Filing Date | Country | Kind |
PCT/JP2014/073532 | 9/5/2014 | WO | 00 |