1. Technical Field of the Invention
Embodiments of the invention relate to the field of generating streams of data including a plurality of encoded data blocks, wherein such streams are transmitted to a receiver and decoded for presentation of the data within the stream. More particularly, and not by way of any limitation, the invention relates to the field of media transmission, reception and playback, and embodiments of the invention concern fast tune-in into a stream transmission over IP networks using a second stream for the tune-in.
2. Description of the Related Art
In the field of media transmission over an IP network (for example IPTV systems), video data is transmitted in encoded and compressed form, and popular video compression standards, such as MPEG-2 and H.264/MPEG-4 AVC, use intra-coding and inter-coding. For proper decoding, a decoder decodes a compressed video sequence beginning with an intra-coded picture (e.g. an I-picture or I-frame) and then continues to decode the subsequent inter-coded pictures (e.g. the P-pictures or P-frames and/or the B-pictures or B-frames). A group of pictures (GOP) may include an I-picture and several subsequent P-pictures and/or B-pictures, wherein I-pictures require more bits to code than P-pictures or B-pictures for the same video quality. Upon receipt of the video stream on a particular channel, for example after changing to this channel or after turning on the receiver, decoding has to wait until the first I-picture is received. To minimize this delay, I-pictures are sent frequently, i.e. are included within the video stream at a fixed distance, for example every 0.5 seconds.
One problem of current IPTV systems is the so-called tune-in time into streams that are distributed over multicast IP. The delay between initiating a tune-in to a channel and rendering the content of this channel is due to several effects, of which client pre-buffering time and the acquisition time for random access points within the stream to be switched to are the dominant ones. Both effects are direct implications of the design of modern video codec schemes. In differential video coding schemes, like MPEG-2 video or MPEG-4 AVC/H.264, only a few pictures of a stream are self-contained, e.g. the above-mentioned I-pictures. These pictures include all information that is necessary to decode the complete picture. Most other pictures are differentially coded and depend on one or more previously transmitted and decoded pictures, e.g. the above-mentioned P-pictures or B-pictures. In other words, the P-pictures or B-pictures do not include all information that is necessary to decode a complete picture; rather, additional information from preceding or following pictures is required.
To obtain the best coding efficiency at a given bit rate, the number of I-pictures should be low. On the other hand, the I-pictures serve as random access points (RAP) to the stream where decoding can be started. Hence, there is a delay when tuning into a new stream, since the client (receiver) has to wait for a random access point within the stream to arrive, before it can start decoding and displaying video.
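To make this trade-off concrete, the following rough model (a sketch with invented numbers, not taken from the described system) estimates the average wait for a random access point and the relative bit-rate cost of a given I-picture spacing; the frame rate and the I-to-P picture size ratio are assumptions chosen only for illustration.

```python
# Illustrative back-of-the-envelope model (assumed numbers, not from the specification):
# a client tuning in at a random instant waits on average half the RAP spacing,
# while more frequent I-pictures raise the average bits per picture.

def average_tune_in_wait(rap_spacing_s: float) -> float:
    """Average wait for the next random access point, assuming a uniform tune-in time."""
    return rap_spacing_s / 2.0

def relative_bitrate(rap_spacing_s: float, fps: float = 25.0,
                     i_to_p_size_ratio: float = 8.0) -> float:
    """Average picture size relative to a stream of P-pictures only (hypothetical ratio)."""
    pictures_per_gop = rap_spacing_s * fps
    return (i_to_p_size_ratio + (pictures_per_gop - 1)) / pictures_per_gop

if __name__ == "__main__":
    for spacing in (0.5, 1.0, 5.0):
        print(f"RAP spacing {spacing:>4} s: "
              f"avg wait {average_tune_in_wait(spacing):.2f} s, "
              f"relative bitrate {relative_bitrate(spacing):.2f}x")
```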
In differential coding schemes, the encoded video bit rate is not necessarily constant but rather depends on the complexity of the video scene. Within a video stream, the variation in coded picture size can be large; for example, the I-pictures can be many times as large as the differentially encoded P-pictures and B-pictures. When transmitting such a bit stream over a channel with a constant channel bit rate, the client needs to pre-buffer incoming picture data so that the video can be played at the same rate as it was sampled. This buffer needs to be large enough to avoid buffer overflow and should only start to be emptied once a certain buffer fullness is reached, in order to avoid buffer underrun during playout.
Whenever the buffer cannot be filled instantly to the point where the client can start emptying it, delay occurs before rendering can be started.
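The required pre-buffering delay follows directly from this reasoning: each picture must have fully arrived over the constant-rate channel by its decoding deadline. A minimal sketch of that computation, with invented picture sizes and channel rate, might look like this:

```python
# Sketch: smallest pre-buffering delay so that every picture has fully arrived
# by its decoding deadline, for a constant channel bit rate. Numbers are invented.

def min_prebuffer_delay(picture_bits, channel_bps, frame_period_s):
    """Return the minimal start-up delay (seconds) avoiding decoder buffer underrun."""
    delay = 0.0
    received_bits = 0.0
    for i, bits in enumerate(picture_bits):
        received_bits += bits
        arrival_time = received_bits / channel_bps      # when picture i is fully received
        deadline = i * frame_period_s                    # decode deadline relative to start
        delay = max(delay, arrival_time - deadline)
    return delay

if __name__ == "__main__":
    # Large I-picture followed by smaller P-pictures (illustrative sizes in bits).
    gop = [400_000] + [50_000] * 24
    d = min_prebuffer_delay(gop, channel_bps=2_000_000, frame_period_s=0.04)
    print(f"pre-buffer delay: {d:.3f} s")
```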
This overall behavior is disadvantageous: a receiver that begins receiving a program on a specific channel, for example following a channel change or after being turned on, must wait until a random access point, for example an I-picture, is received before decoding can start. Thus, the distance between random access points within the main stream is one of the main causes of the tune-in delay.
One approach to reduce such delay in a multicast linear TV scenario is to send a second stream in parallel to the main stream, wherein the second stream has a higher frequency of random access points. This second stream is for example called the “tune-in stream” or the “side stream”.
During the transition period (tT-tR), which starts at the tune-in time tR, the receiver or client decodes the pictures derived from the tune-in stream 102 until a full-resolution I-picture arrives on the main stream at time tT. Once this I-picture arrives, the low-resolution stream is stopped and the full-quality pictures of the main stream are decoded and rendered.
The main stream and the side stream may be received at the client or receiver using different scenarios, one being the simultaneous transmission of the main stream and the side stream to the receiver. Such an approach is, for example, described in US 2007/0098079 A1, the disclosure of which is incorporated herewith by reference. Alternatively, the receiver may obtain only a single stream for decoding from a server which provides both the main stream and the side stream. Upon initiating a channel change or upon turning on the receiver, a respective request for tuning into a specific channel is forwarded to the server, which then provides information on the basis of the side stream or tune-in stream until high-quality information, namely the first I-picture, of the main stream becomes available. Such an approach is for example described in US 2007/0248165 A1, the disclosure of which is incorporated herewith by reference.
As mentioned above, the tune-in stream 102 is encoded with a substantially lower bit rate, which results in lower video quality than the main stream. This may be accomplished by a reduced image resolution or by more aggressive lossy coding parameters, for example by using a higher quantization during encoding of the side stream.
While this approach of providing the lower-quality side stream is advantageous for reducing the tune-in delay as discussed above, the difference in quality between the side stream and the main stream means that the switch into the main stream may be noticeable to the viewer. Therefore, a need exists to provide an approach avoiding visible artifacts when switching into a main stream.
One embodiment of the invention provides a method for generating a stream of data comprising a plurality of encoded data blocks, the plurality of encoded data blocks comprises a plurality of self-contained blocks including all information for decoding the block and a plurality of blocks including only partial information for decoding, wherein a distance of self-contained blocks in the stream of data is dependent on the content encoded in the stream, wherein the stream is a main stream. Tuning into the main stream is effected via a secondary stream comprising at least a subset of the data blocks of the main stream encoded at a quality different from a quality of the data blocks of the main stream, wherein the self-contained blocks of the main stream are inserted at positions within the main stream where differences in the quality of the data encoded in the main and secondary streams are less detectable.
Another embodiment of the invention provides an encoder for generating a stream of data comprising a plurality of encoded data blocks, the plurality of encoded data blocks comprising a plurality of self-contained blocks, including all information for decoding the block and a plurality of blocks including only partial information for decoding, wherein the encoder is configured to vary a distance of self-contained blocks in the stream dependent on the content encoded in the stream, wherein the stream is a main stream. Tuning into the main stream is effected via a secondary stream comprising at least a subset of the data blocks of the main stream encoded at a quality different from a quality of the data blocks of the main stream, wherein the self-contained blocks of the main stream are inserted at positions within the main stream where differences in the quality of the data encoded in the main and secondary streams are less detectable.
Yet another embodiment of the invention provides a method for tuning into a main stream, wherein a main stream and a secondary stream are provided. The main stream comprises a plurality of encoded data blocks, the plurality of encoded data blocks comprising a plurality of self-contained blocks including all information for decoding the block and a plurality of blocks including only partial information for decoding. The secondary stream comprises at least a subset of the source data of the main stream encoded at a different, typically lower, quality than the data blocks of the main stream, wherein the self-contained blocks in the main stream are inserted at positions where differences in the quality of the data encoded in the main and secondary streams are subjectively less detectable. Upon receiving a tune-in request, tuning into the secondary stream and the main stream occurs, and the secondary stream is decoded until a self-contained block arrives on the main stream or a required main stream decoder buffer-fill level is reached. Upon arrival of the self-contained block on the main stream or upon reaching the required main stream decoder buffer-fill level, decoding of the secondary stream is stopped and decoding of the main stream starts.
Further embodiments of the invention provide a decoder for receiving encoded data and for providing decoded output data. The decoder comprises an input for receiving a main stream and a secondary stream. The main stream comprises a plurality of encoded data blocks, the plurality of encoded data blocks comprising a plurality of self-contained blocks including all information for decoding the block and a plurality of blocks including only partial information for decoding. The secondary stream comprises at least a subset of the data blocks of the main stream encoded at a lower quality than the data blocks of the main stream, wherein the self-contained blocks in the main stream are inserted at positions where differences in the quality of the data encoded in the main and secondary streams are less detectable. Further, the decoder comprises a control input for receiving a tune-in request signal and a decoding portion coupled to the input and to the control input for producing decoded output data. The decoding portion is adapted to tune into the secondary stream upon receipt of a tune-in request, to decode the secondary stream until a self-contained block arrives on the main stream or a required main stream decoder buffer-fill level is reached, and to stop decoding of the secondary stream and to start decoding of the main stream upon arrival of the self-contained block on the main stream or upon reaching the required main stream decoder buffer-fill level.
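Purely as an illustration of the tune-in behaviour just described, the following sketch models a decoding portion that renders the secondary stream until a self-contained block arrives on the main stream or a required buffer-fill level is reached, and then switches; the Picture class, the trivial decode placeholder, the sizes and the threshold are invented for this example and assume the two streams deliver pictures in lockstep.

```python
from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass
class Picture:
    label: str
    size: int
    is_self_contained: bool = False

def decode(pic: Picture) -> str:
    return pic.label  # placeholder for real decoding

def tune_in(main: Iterable[Picture], secondary: Iterable[Picture],
            required_fill: int) -> Iterator[str]:
    """Yield rendered pictures: secondary stream first, main stream after switch-over."""
    buffer_fill = 0
    decoding_main = False
    for main_pic, side_pic in zip(main, secondary):
        if decoding_main:
            yield decode(main_pic)
            continue
        buffer_fill += main_pic.size              # main stream data is received and buffered
        if main_pic.is_self_contained or buffer_fill >= required_fill:
            decoding_main = True                  # switch at the self-contained block
            yield decode(main_pic)
        else:
            yield decode(side_pic)                # render low-quality tune-in picture

if __name__ == "__main__":
    main = [Picture("P1", 50), Picture("P2", 50), Picture("I3", 400, True), Picture("P4", 50)]
    side = [Picture("i1", 80, True), Picture("p2", 20), Picture("i3", 80, True), Picture("p4", 20)]
    print(list(tune_in(main, side, required_fill=10_000)))
```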
Embodiments of the invention concern encoders, streaming servers, network components and clients or receivers for multimedia distribution systems for using a tune-in stream for fast channel change wherein switching from the tune-in stream to the main stream is subjectively hidden through content-adaptive coding.
Other features, elements, steps, characteristics and advantages of the present invention will become more apparent from the following detailed description of preferred embodiments of the present invention with reference to the attached drawings.
The embodiments of the invention will be described in the following with reference to the accompanying drawings, which include block diagrams of an encoder in accordance with an embodiment of the invention, block diagrams of a decoder in accordance with an embodiment of the invention, and block diagrams of a decoder in accordance with another embodiment of the invention.
In the following, embodiments of the invention will be described. One embodiment of the invention concerns an approach for generating a stream of data, for example a video stream or an audio stream, wherein the stream of data comprises a plurality of encoded data blocks which comprise a plurality of self-contained blocks including all information for decoding the block and a plurality of blocks including only partial information for decoding. In accordance with embodiments of the invention, the stream of data may be a video stream encoded using intra- and inter-coding. The encoded stream may comprise I-pictures as random access points and P-pictures and/or B-pictures. Within the encoded data stream, the distance between self-contained blocks, for example the I-pictures, is varied dependent on the content of the stream.
The above-described approach for determining the position of I-pictures within a video stream results in the placement of an I-picture within the encoded video stream dependent on the content, so that the actual distance between consecutive I-pictures varies. Thus, I-pictures are not inserted into the video stream at a fixed distance; the static GOP length of, for example, one second or five seconds, as used in conventional approaches, no longer exists. Rather, the distance between consecutive I-pictures is varied. In accordance with embodiments of the invention, the distance may be varied within a certain time window, e.g. a time window having a maximum distance of five seconds and a minimum distance of one second. This solution is advantageous as it may increase the bit rate efficiency of the video stream, because I-pictures serving as random access points are placed at positions within the stream where the bit demand is higher anyway, even for P-pictures, for example at scene cuts or during high camera motion, where motion prediction requires a large amount of information in the P-pictures.
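A possible way to sketch such content-dependent placement (not the claimed encoder, merely an illustration): force an I-picture where a per-picture activity or scene-change score exceeds a threshold, while keeping the distance between consecutive I-pictures inside a minimum/maximum window. The activity measure, threshold and window values below are assumptions.

```python
# Sketch of content-adaptive I-picture placement (illustrative thresholds).
# `activity` is an assumed per-picture measure, e.g. a prediction cost or scene-change score.

def place_i_pictures(activity, fps=25.0, min_dist_s=1.0, max_dist_s=5.0, threshold=0.8):
    """Return the indices of pictures to encode as I-pictures."""
    min_dist = int(min_dist_s * fps)
    max_dist = int(max_dist_s * fps)
    i_positions = [0]                      # the stream starts with an I-picture
    last = 0
    for n, score in enumerate(activity[1:], start=1):
        distance = n - last
        if distance >= max_dist or (distance >= min_dist and score >= threshold):
            i_positions.append(n)
            last = n
    return i_positions

if __name__ == "__main__":
    import random
    random.seed(0)
    scores = [random.random() for _ in range(500)]   # stand-in for real content analysis
    print(place_i_pictures(scores)[:10])
```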
In accordance with an embodiment of the invention using the encoder 200, the main stream is generated such that the I-pictures are placed at positions determined on the basis of the content, i.e. at positions where differences in the quality of the data encoded in the main stream and the tune-in stream are less detectable, while a tune-in stream with more frequent I-pictures is generated in parallel.
Generating the main stream in this manner is advantageous as, upon tuning into the main stream either upon turning on a receiver or upon switching from another channel, a smooth transition from the tune-in stream to the main stream is achieved, which may be completely undetectable by a viewer. Further, this solution may increase the bit rate efficiency of the main stream, because I-pictures serving as random access points are placed at points in the stream where the bit demand is higher anyway, even for P-pictures. In addition, bits for additional I-pictures at a fixed distance may be saved without losing accessibility to the channel, as accessibility is ensured by providing the tune-in stream, which may have its I-pictures at a fixed distance and at a higher rate, i.e. the number of I-pictures, as discussed above, is higher than the number of I-pictures in the main stream. This may result in the necessity to encode the tune-in stream with a lower quality to meet the bit rate requirements for transmitting the information. An additional advantage is that the above-described approach is a pure head-end solution which preferably resides inside the encoding device 202 and requires no additional network devices or the like. Further, it is advantageous that effectively no additional client device complexity is added. All the client has to do is to monitor the main stream actively for the first I-picture, rather than counting a fixed number of pictures after tune-in as in the case of static transition periods. Alternatively, the tune-in stream may include additional signaling indicating when the next I-picture occurs in the main stream. This enables the client to switch to the main stream without monitoring it.
Placing the I-pictures within the main stream dependent on the content is advantageous. However, in a situation of very long static scenes, a predefined maximum allowed I-picture distance (set so as not to exceed a maximum transition period length) would be exceeded, so that it becomes necessary to place an I-picture within the static sequence. In this situation, the switch from the low-quality tune-in stream to the high-quality main stream would potentially be visible. For such situations, i.e. situations where an I-picture must be placed irrespective of the content, embodiments of the invention modify the tune-in stream by gradually enhancing its quality during the transition period, for example by lowering the quantization parameter step by step up to the I-picture boundary of the main stream. The quantization parameter is one of the main parameters that controls the video quality during the encoding process. A lower quantization parameter corresponds to a finer quantization of the encoded data coefficients and thus to less visible encoding distortions such as blocking artifacts.
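The gradual quality enhancement of the tune-in stream could, for instance, be sketched as a stepwise quantization-parameter ramp towards the main stream's value at the forced I-picture boundary; the concrete QP values and picture count below are illustrative assumptions.

```python
# Sketch: linear QP ramp for the tune-in stream over the transition period,
# ending at the quality of the main stream at the forced I-picture boundary.
# Values are illustrative; a lower QP means finer quantization and better quality.

def qp_ramp(tune_in_qp: int, main_qp: int, pictures_in_transition: int):
    """Per-picture QP values, stepping from the tune-in quality to the main stream quality."""
    if pictures_in_transition <= 1:
        return [main_qp]
    step = (tune_in_qp - main_qp) / (pictures_in_transition - 1)
    return [round(tune_in_qp - step * i) for i in range(pictures_in_transition)]

if __name__ == "__main__":
    # e.g. 2 seconds at 25 fps between the ramp start and the forced I-picture
    print(qp_ramp(tune_in_qp=38, main_qp=26, pictures_in_transition=50))
```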
Since the I-picture in the main stream is placed within the main stream at a position where differences in image quality are less detectable, the transition from the side stream to the main stream upon receipt of the I-picture in the main stream is less detectable and may be completely undetectable by the viewers. Thus, switching from the low-quality side stream to the high-quality main stream is subjectively hidden due to the fact that the I-pictures in the main stream were placed in positions determined on the basis of the content of the main stream.
The decoder 400 further comprises a decoder portion 412. The decoder portion 412 comprises a switching element 414 for selectively applying either the main stream 408 or the side stream 410 to an input of the decoder portion. The decoder portion also has a control input 416 for receiving control signals, for example the above-described channel change request signal. Further, the decoder portion 412 monitors the main stream, as is schematically illustrated by the dashed line 418.
The decoder 400 operates in the manner described above: upon receipt of a tune-in request at the control input 416, the decoder portion 412 first decodes the side stream 410 and, once an I-picture arrives on the monitored main stream 408, the switching element 414 switches so that the main stream is decoded and rendered.
In many cases, switching from the tune-in stream to the main stream in the manner described above requires the client to actively monitor and parse the main stream for the first I-picture while still decoding the tune-in stream. In accordance with a further embodiment, only a single decoding element is used and the switch-over point is explicitly signaled to the decoding element.
Using only a single decoding element and signaling to the decoding element when the switching will occur is advantageous as, first of all, only one decoder is necessary in the client device, thereby reducing its complexity. Also, this approach works well for time-shifted (delayed) tune-in streams. In addition, there is no need for the client to implicitly calculate or estimate the correct switch-over point; the client only needs to read the signaling messages from the streams. The encoder sends a message that signals the point in time when the client is allowed to switch to the main stream and start decoding the first main stream data. This signal is sent along with the stream data, e.g. embedded in the tune-in stream data (for example as an RTP header extension) or as a separate signal flow (for example as an RTCP message).
The client device monitors this signaling and, on arrival of a message, acts according to the message, i.e. it stops decoding the tune-in stream data and starts decoding the main stream data.
This has the advantage that the client can avoid parsing of stream data (e.g. parsing header information of the encoded picture data for a picture identifier, this identifier being different for different encoding schemes like H.264 or MPEG-2), and only needs to read signaling messages. Another advantage is that the client not only gets the information about the arrival of the I-picture, but also when the I-picture is ready for decoding (e.g. when the input buffer for the main stream is filled to a level that avoids buffer-underrun and overflow in the subsequent stream decoding).
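As a purely hypothetical illustration of such a switch-over message (the byte layout below is invented and is not an actual RTP header extension or RTCP packet format), the signaling and its client-side interpretation might be sketched as follows:

```python
# Hypothetical switch-over message carried alongside the tune-in stream data.
# The field layout is an assumption for illustration, not a standardized format.

import struct

SWITCH_READY = 1   # "first main stream I-picture is buffered and ready for decoding"

def pack_switch_message(msg_type: int, media_timestamp: int) -> bytes:
    """Encode the message type and the media timestamp at which the client may switch."""
    return struct.pack("!BI", msg_type, media_timestamp)

def unpack_switch_message(data: bytes):
    msg_type, media_timestamp = struct.unpack("!BI", data)
    return msg_type, media_timestamp

if __name__ == "__main__":
    wire = pack_switch_message(SWITCH_READY, media_timestamp=900_000)
    msg_type, ts = unpack_switch_message(wire)
    if msg_type == SWITCH_READY:
        print(f"stop decoding tune-in stream, start main stream at timestamp {ts}")
```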
An alternative signaling aspect is now described. As mentioned above, the client needs to wait for the first I-picture data of the main stream before being able to decode the main stream. Any data of the main stream that is received before the first I-picture data cannot be used and has to be discarded. It is advantageous to avoid the reception of such data in order to minimize the time during which both streams (tune-in stream and main stream) need to be received by the client, thereby saving bandwidth on the client connection to the network (e.g. a DSL line) for other applications that may also make use of this bandwidth (which is usually constrained on such a DSL line compared to the core network bandwidth).
Thus, it is advantageous to signal when the next I-picture (i.e. the first packets/bits of the picture) can be received by the client. This information enables the client to join the main stream just before the I-picture data arrives and avoids the client joining the main stream ahead of time (e.g. in parallel to the tune-in stream, as described above).
Such signaling can be done similarly to the above-described “I-picture ready for decoding” message, e.g. embedded in the tune-in stream or as a separate data flow. Further, it may be useful to send a number of messages, e.g. “I-picture starts in 3 seconds”, “I-picture starts in 2 seconds”, “I-picture starts in 1 second”, for error robustness and to enable the client to cope with potential network jitter (i.e. different network delays between the transmission of the signaling and the stream data).
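Client-side handling of such countdown announcements might be sketched as follows; the jitter margin and the way the countdown is interpreted are assumptions made only for this example.

```python
# Sketch: schedule joining the main multicast group from "I-picture starts in N seconds"
# announcements, keeping a safety margin for network jitter. All numbers are illustrative.

def join_time(now_s: float, announced_countdown_s: float, jitter_margin_s: float = 0.3) -> float:
    """Absolute time at which to join the main stream, slightly ahead of the I-picture."""
    return now_s + max(0.0, announced_countdown_s - jitter_margin_s)

if __name__ == "__main__":
    now = 100.0                                    # current client clock, seconds
    for countdown in (3.0, 2.0, 1.0):              # repeated messages improve robustness
        print(f"'I-picture starts in {countdown:.0f} s' -> join at t={join_time(now, countdown):.1f}")
        now += 1.0                                  # one second elapses between messages
```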
In accordance with further embodiments, the decoder 400 may comprise a post-processing unit 422 which may further reduce the subjective difference between the tune-in stream and the main stream when switching from the tune-in stream to the main stream. Preferably, the post-processing unit 422 manipulates the main stream decoding. To be more specific, a filtering of the decoded pictures from the main stream may be applied, starting from the first picture of the main stream and lasting for a specific period of time, for example 0.5 seconds. This filtering may result in a blurring of the picture, and in embodiments the filter coefficients may be modified over the above-mentioned period of time, thereby reducing the blurring or filtering effect from picture to picture and gradually changing from the low quality of the tune-in stream to the high quality of the main stream. Alternatively, post-processing may be achieved by decoding the first picture after the switch with only a subset of coefficients, which is known as reduced resolution update in the H.263 standard. Following the first picture, an increasing number of coefficients is used for each subsequent picture until the full number of coefficients is applied and full-quality decoding of the main stream is achieved. This approach also leads to reduced image detail in the first pictures after the switch and thereby contributes to the subjective hiding of the switch from the low-quality stream to the high-quality stream.
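The fading filter variant could be sketched as below; the box-blur kernel, its initial radius and the decay schedule are all invented choices used only to illustrate how the filtering effect can be reduced from picture to picture.

```python
# Sketch: post-processing filter whose strength decays over the first pictures
# decoded from the main stream. Kernel choice and decay schedule are illustrative.

def box_blur_1d(row, radius):
    """Simple 1-D box blur; radius 0 returns the row unchanged."""
    if radius == 0:
        return list(row)
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def blur_frame(frame, radius):
    """Separable blur: rows, then columns, of a grayscale frame (list of lists)."""
    rows = [box_blur_1d(r, radius) for r in frame]
    cols = [box_blur_1d(c, radius) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def post_process(frames, initial_radius=4, blur_pictures=12):
    """Blur the first pictures after the switch, reducing the radius picture by picture."""
    for k, frame in enumerate(frames):
        radius = max(0, initial_radius - (initial_radius * k) // blur_pictures)
        yield blur_frame(frame, radius)

if __name__ == "__main__":
    frame = [[0, 0, 255, 0], [0, 255, 255, 0], [0, 0, 0, 0]]
    first_blurred = next(post_process([frame]))
    print([[round(v) for v in row] for row in first_blurred])
```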
The embodiments described so far operated on the basis of receivers which obtain both the main stream and the side stream at the same time; however, the invention is not limited to such an environment.
In accordance with another aspect of the invention, visible distortions upon switching from a tune-in stream to a main stream may be avoided by providing one or more additional tune-in streams which are encoded at quality levels between the quality level of the main stream and the quality level of the first tune-in stream. This approach may be combined with the above-described approach of introducing the I-pictures at varying distances; however, the approach of using a plurality of tune-in streams may also be used with conventional main stream encoding approaches, i.e. with conventional main streams having their I-pictures at fixed distances, positioned irrespective of the content.
The second tune-in stream may be used to avoid any undesired jump in the presented image quality during tune-in. The second tune-in stream may be encoded in a way that, during a part of the transition period, the quality is gradually increased from the tune-in stream quality Qi to the full quality Qf of the main stream (100%).
A tune-in process using two tune-in streams may then proceed as follows: upon a tune-in request, the client first decodes the first, lowest-quality tune-in stream, then switches to the second tune-in stream, whose quality gradually increases towards the main stream quality, and finally switches to the main stream once an I-picture of the main stream arrives.
In addition, embodiments allow only the second tune-in stream to be encoded with an adaptive GOP length, while the first tune-in stream and the main stream may have a fixed GOP length. In other words, in such an embodiment the main stream and the first tune-in stream are conventionally encoded and only the second tune-in stream is encoded in such a manner that the I-pictures are placed within it dependent on the content.
It is further noted that the invention is not limited to the use of only two tune-in streams; rather, it is possible to have N tune-in streams instead of two, thereby enabling a finer granularity and greater smoothness when increasing the quality.
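With N tune-in streams at graded quality levels, the client (or a switching server) might select which stream to decode as a simple function of how far the transition has progressed; the even spacing of switch points in this sketch is an assumption made only for illustration.

```python
# Sketch: choose among N tune-in streams of increasing quality during the transition.
# Stream 0 is the lowest-quality tune-in stream; stream N-1 is closest to main stream quality.

def select_tune_in_stream(elapsed_s: float, transition_s: float, n_streams: int) -> int:
    """Index of the tune-in stream to decode at a given point of the transition period."""
    if transition_s <= 0:
        return n_streams - 1
    progress = min(max(elapsed_s / transition_s, 0.0), 1.0)
    return min(int(progress * n_streams), n_streams - 1)

if __name__ == "__main__":
    for t in (0.0, 0.9, 1.9, 2.9, 3.9):
        idx = select_tune_in_stream(t, transition_s=4.0, n_streams=3)
        print(f"t={t:.1f}s -> tune-in stream {idx}")
```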
Embodiments as described above included tune-in streams or side streams which included both I-pictures and P-pictures or B-pictures. However, the invention is not limited to such tune-in streams. In alternative embodiments the tune-in streams may consist only of single tune-in pictures, e.g. the side stream may transmit only the random access points or I-pictures. The intermediate pictures are decoded using the data from the main stream, e.g. information from the P-pictures and/or B-pictures of the main stream. The embodiments described above describe two separate streams, the main stream and the tune-in stream. However, the invention is not limited to such embodiments; rather, the above methods can also be adapted to scalable video coding (SVC) using, for example, two layers: an enhancement layer having a longer I-picture distance and a base layer having a shorter I-picture distance. In such a scenario the base layer corresponds to the tune-in stream and is designed like the above-described tune-in stream. Likewise, the above-mentioned use of more than one tune-in stream can be realized by a scalable video coding approach, for example by applying a three-layer SVC with one base layer and two enhancement layers. Scalable video coding and its use for tuning into a main stream upon a channel change are described, e.g., in WO 2008/138546 A2 and US 2003/0007562 A1, the disclosures of which are incorporated herewith by reference.
Subjectively hiding the switch from the low-quality stream to the high-quality stream is achieved in the same manner as above, i.e. during switching only the base layer is used for decoding until an I-picture in an enhancement layer is received. Upon receipt of this I-picture in the enhancement layer decoding is done on the basis of the information from the enhancement layer. Further, while embodiments of the invention are described with regard to an internet protocol TV system (IPTV system) it is noted that the invention is not limited to such an environment. Rather, the embodiments described above can be used and applied to any multimedia distribution system, e.g., broadcasting systems. Since embodiments of the invention provide a pure head-end solution, the solution of tune-in streams with an adaptive transition period length may also be applied to any multimedia distribution system.
In accordance with a further embodiment of the invention, an additional switching picture for every I-picture may be sent in the main stream, in the tune-in stream, or independently of the streams. The switching picture is an alternative encoding of an I-picture at the same sampling instance and is only used after switching. During normal decoding of the main stream, i.e. in the steady state of channel reception, the normal I-picture from the main stream is used. The switching picture is encoded in a way that also helps to reduce the visual difference between the tune-in and main streams: it is encoded at a quality between that of the main stream and that of the tune-in stream. Alternatively, a complete “switching GOP” may be sent instead of a single switching picture to control the quality ramp-up from the intermediate tune-in quality to the full main stream quality.
Further, while the embodiments were described in combination with video data, the invention is not limited to the transmission of video data; rather, the principles described in the embodiments above can be applied to any kind of data that is to be encoded in a data stream. To be more specific, the above-described principles also apply to audio data or other kinds of timed multimedia data that use the principle of differential encoding, i.e. different types of transmitted data fragments within a data stream, such as full information (that enables the client to decode the full presentation of the encoded multimedia data) and delta (or update) information (that contains only differential information which the client can use for a full presentation of the encoded multimedia data only if preceding information was received). Examples of such multimedia data, besides video, are graphics data, vector graphics data, 3D graphics data in general, e.g. wireframe and texture data, or 2D or 3D scene representation data.
It should be understood that, depending on the circumstances, the methods of embodiments of the invention may also be implemented in software. Implementation may occur on a digital storage medium, in particular a disc, a DVD or a CD with electronically readable control signals, which can interact with a programmable computer system such that the respective method is executed. Generally, the invention thus also consists in a computer program product with a program code stored on a machine-readable carrier for performing the inventive method when the computer program product runs on a PC and/or a microcontroller. In other words, the invention may thus be realized as a computer program with a program code for performing the method when the computer program runs on a computer and/or a microcontroller.
It is noted that the above description illustrates the principles of the invention, but it will be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown here, embody the principles of the invention without departing from the spirit or scope of the invention. It will be appreciated by those skilled in the art that the block diagrams presented herein represent conceptual views of illustrative circuitry embodying the principles of the invention. Similarly, it will be appreciated that any flow-charts, flow-diagrams, transmission diagrams and the like represent various processes which may be substantially represented in a computer readable medium and so executed by a computer or processor, whether or not such a computer or processor is explicitly shown. The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with the appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “processor” or “controller” should not be construed to refer exclusively to hardware capable of executing software; it may implicitly include, without limitation, digital signal processor hardware, read-only memory for storing software, random access memory and non-volatile storage.
Embodiments of the invention were described in the context of a multi-channel transmission system in which the main stream and the secondary stream are associated with a channel of a multi-channel transmission system, and a tune-in request indicates a change from a current channel of the multi-channel transmission system to a new channel of the multi-channel transmission system.
However, the invention is not limited to such embodiments. Rather, the invention, in general, is concerned with improving the tune-in characteristics upon tuning into a stream which comprises a main stream and at least a secondary stream as described in detail above, wherein the stream may be a single stream which is provided to a user, e.g. over a network, like the Internet.
The stream, containing e.g. video content, may be provided by a service provider such that a user may tune into the stream at any time. In such a situation, after receiving the tune-in request, the stream including both the main and the secondary streams is received by the user, and the secondary stream is decoded until the self-contained block arrives on the main stream and the required main stream decoder buffer-fill level is reached.
In another embodiment of the invention, the stream is obtained by a user on the user's demand, e.g. from a service provider. The stream (e.g. video on demand) is received by the user, and when tuning into the stream decoding of the stream starts. Again, decoding is done on the basis of the secondary stream until the required main stream decoder buffer-fill level is reached. This approach is chosen despite the fact that the stream, due to being obtained on demand, may be provided to the user such that an I-picture is present in the main stream upon tuning into the stream. Nevertheless, decoding will not start immediately; rather, a predefined main stream decoder buffer-fill level will be reached first to ensure continuous decoding of the stream even in case of temporary interruptions of the stream (e.g. due to stream packets delayed by network traffic). However, the amount of data to be buffered for the main stream is quite high. Therefore, also in such a situation the secondary stream is used at the beginning, as the amount of information or data to be buffered in the secondary stream decoder is lower than the amount to be buffered in the main stream decoder. Thus, when using the secondary stream, decoding will start earlier, as the required fill level for the secondary stream buffer is reached faster. Once the required main stream decoder buffer-fill level is reached, decoding of the main stream starts.
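The earlier start achievable with the secondary stream can be illustrated by comparing the time needed to reach the respective required buffer-fill levels at a given delivery rate; all figures below are invented.

```python
# Sketch: time to reach the required decoder buffer-fill level for the main stream
# versus the secondary (lower bit rate) stream. All numbers are illustrative.

def time_to_fill(required_fill_bits: float, delivery_bps: float) -> float:
    return required_fill_bits / delivery_bps

if __name__ == "__main__":
    delivery_bps = 8_000_000            # assumed access-line rate
    main_fill_bits = 6_000_000          # e.g. ~1.5 s of a 4 Mbit/s main stream
    side_fill_bits = 750_000            # e.g. ~1.5 s of a 0.5 Mbit/s secondary stream
    print(f"main stream ready after      {time_to_fill(main_fill_bits, delivery_bps):.2f} s")
    print(f"secondary stream ready after {time_to_fill(side_fill_bits, delivery_bps):.2f} s")
```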
In the description of the embodiments of the invention, the self-contained blocks and the non-self-contained blocks of the streams were referred to as I-pictures and P- or B-pictures, respectively. It is noted that the term “picture”, in general, denotes encoded content that includes data or information necessary to decode the contents of the block. In the case of I-pictures, all data or information necessary to decode the complete contents of the block is included, whereas in the case of P- or B-pictures not all information necessary to decode a complete picture is included; rather, additional information from preceding or following pictures is required. Alternatively, the I-, P- and B-pictures may be referred to as I-, P- and B-frames.
While preferred embodiments of the present invention have been described above, it is to be understood that variations and modifications will be apparent to those skilled in the art without departing from the scope and spirit of the present invention. The scope of the present invention, therefore, is to be determined solely by the following claims.
Number | Date | Country
--- | --- | ---
61113663 | Nov 2008 | US