1. Technical Field
The present invention relates to a statistical multiplexer for coding and multiplexing multiple channels of digital television data.
2. Related Art
Digital television has become increasingly popular due to the high quality video image it provides, along with informational and entertainment features, such as pay-per-view, electronic program guides, Internet hyperlinks, and so forth. Such television data can be communicated to a user, for example, via a broadband communication network, such as a satellite or cable television network, or via a computer network. The video data can include high definition (HD) and standard-definition (SD) television (TV).
However, due to the bandwidth limitations of the communication channel, it is necessary to adjust a bit rate of the digital video programs that are encoded and multiplexed for transmission in a single compressed bit stream. A goal of such bit rate adjustment is to meet the constraint on the total bit rate of the multiplexed stream, while also maintaining a satisfactory video quality for each program.
Accordingly, various statistical multiplexers have been developed that evaluate statistical information of the source video that is being encoded, and allocate bits for coding the different video channels. For example, video channels that have hard-to-compress video, such as a fast motion scene, can be allocated more bits, while channels with relatively easy to compress scenes, such as scenes with little motion, can be allocated fewer bits.
In MPEG-2 and MPEG-4 digital video systems, the complexity of a video frame is measured by the product of the quantization level (QL) used to encode that frame and the number of bits used for coding the frame (R). This means the complexity of a frame is not known until it has been encoded. As a result, the complexity information always lags behind the actual encoding process, which requires the buffering of a number of frames prior to encoding, thereby adding expense and complexity. This kind of look-behind information may be avoided by using some pre-encoding statistics about the video, such as motion estimation (ME) scores or a need parameter (NP) to provide a complexity measure. However, the relationship between the pre-encoding statistics of a video frame and the complexity of that frame may not be direct, and sometimes the relationship may change due to the changing subject matter of the source video.
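As a minimal sketch (not code from the specification), the two complexity measures discussed above can be contrasted as follows; the `gain` factor for the pre-encoding proxy is an assumed, content-dependent scaling that would be adapted as the source video changes:

```cpp
// Post-encoding complexity, as used in MPEG-2/MPEG-4 rate control:
// the product of the quantization level (QL) used for a frame and
// the number of bits (R) the frame actually consumed. Known only
// after the frame has been encoded.
double postEncodeComplexity(double ql, double bitsUsed) {
    return ql * bitsUsed;
}

// Hypothetical pre-encoding proxy: scale a motion-estimation (ME)
// score so that it tracks the post-encoding complexity without
// requiring the frame to be encoded first. The gain is an assumed
// illustrative factor, not a value from the specification.
double preEncodeComplexity(double meScore, double gain) {
    return gain * meScore;
}
```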
Previous statistical multiplexing systems employed a number of individual encoders that encoded data from a number of incoming channels of source video data. The system dynamically allocated bits to the individual encoders to encode frames of video data from the channels. The system used pre-encoding statistics of the source video frames that are closely related to the complexity of the frames, and that account for changing content in the source video, to dynamically allocate bits. With more channels included in video content and increasing data density in high density systems, it is desirable to continue to improve the performance of such multiplexing systems.
Embodiments of the present invention provide improvements to a statistical multiplexer (statmux) system for encoding and decoding multiple channels of digital television data. In particular, the system provides improved algorithms to better determine bit rate by identifying film mode and group of picture (GOP) structural changes.
Film mode provides a lower frame rate of 24 frames per second (fps), as opposed to SD at 30 fps, or 720p HD at 60 fps. The non-film SD and HD modes provide a 3:2 ratio which can readily be managed to control bit rate, while film mode cannot. Thus, in film mode, when the 24 fps rate is detected, instead of determining bit rate by simply viewing the next picture in the look ahead buffer (LAB) as conventionally done for SD and HD modes, the system looks at a start time stamp of specific data in the LAB to better determine the data rate when in film mode.
Accounting for GOP structural changes includes identifying the numbers of pictures N and M. The number N refers to the number of pictures between I type pictures in data provided to an encoder. The number M refers to the number of pictures between P type pictures. N technically references the size of a GOP, while M references the size of a sub-group within the GOP. In previous systems, a fixed number was used to estimate N and M. In the present system, to better account for GOP structural changes and determine bit rate, the actual numbers for N and M are determined.
Further details of the present invention are explained with the help of the attached drawings in which:
The encoded data provided to a multiplexer (mux) 8 is combined into a single bitstream that is provided to a transport packet buffer 10. The transport packet buffer 10 then provides the compressed and multiplexed video channels to a transmitter 12 for transmission to a remote receiver that will decode and provide the individual channels to a television or other video display device.
The encoders 41-4N can be either for standard definition (SD) television or high definition (HD) television. A block diagram of an SD encoder 20 is shown in
A block diagram of the HD encoder 30 is shown in
Both the SD encoder of
Control information such as the NP and ST is exchanged between the encoders and the statmux controller to control the Bitrate Queue (BRQ) in each controller for the system to maximize efficiency. For the NP, each encoder will provide the NP information to the statmux controller 6 to indicate the difficulty of the content being encoded. The statmux controller 6 will then use this NP data to determine what ratio of bits to give each encoder. For ST, each encoder will receive state information from the statmux controller 6. This ST is updated with the BRQ data at regular intervals, for example every 2 milliseconds. The ST information can include a minimum bitrate, a nominal bitrate and a command that can be set to hold the bitrate constant. There is a separate BRQ for each encoder that will contain equally spaced bitrates to be applied. As mentioned above, an example of the BRQ application period is 2 milliseconds. In one example, to enable efficient operation, all PCI bus accesses by the statmux controller and encoders are via writes. No PCI reads are performed, so data is always stored at the destination. Further information about statistical data determination, such as NP and BRQ, is described in more detail below.
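The control-message exchange described above can be sketched as plain data structures; all field names here are illustrative assumptions rather than the actual PCI message layout:

```cpp
#include <cstdint>
#include <deque>

// Encoder -> controller, once per frame: the need parameter
// indicating the difficulty of the content being encoded.
struct NeedParamMsg {
    uint32_t channelId;
    uint32_t needParam;
};

// Controller -> encoder state (ST), updated with the BRQ data at
// regular intervals (e.g. every 2 ms in the example above).
struct StateMsg {
    uint32_t minBitrate;      // bits/s floor for the channel
    uint32_t nominalBitrate;  // rate used when holding
    bool     holdBitrate;     // command to hold the bitrate constant
};

// One Bitrate Queue (BRQ) per encoder: equally spaced bitrates,
// one entry per application period.
struct BitrateQueue {
    std::deque<uint32_t> entries;
    void push(uint32_t bps) { entries.push_back(bps); }
    uint32_t pop() {          // bitrate to apply in the next period
        uint32_t bps = entries.front();
        entries.pop_front();
        return bps;
    }
};
```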
Both the SD and HD encoders can be combined into a single statmux system, such as that illustrated in
The system of
A key part of a statistically multiplexed multi-channel encoding system of the invention is the calculation of NP. The visual characteristics and complexity information regarding the source video are collected and condensed into this single parameter, which is referred to as the “need parameter.” The NP is calculated for each video channel, and is updated once per frame whenever a new video frame is processed by an encoder. Optionally, the NP can be updated more often, such as multiple times per frame. Moreover, for field-picture mode, the NP can be updated once per field.
The current frame motion estimation (ME) scores, average frame ME scores, and current frame activity are preferably directly applied in the calculation of the NP. Optionally, a table look-up may be used. The NP calculation functions in an encoder provide the NP according to the current picture type at the beginning of a new frame (such as HD or SD), and pass the NP to the statmux controller. The NP must arrive at the statmux controller no later than, e.g., two quantization level/bit rate (QL/BR) cycles before the start of the slice encoding at the encoder. This lead time ensures the statmux controller has enough processing time for bandwidth allocation.
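A minimal sketch of such an NP calculation, assuming a simple weighted combination of the normalized current ME score and the current frame activity; the weights and the normalization are illustrative assumptions, not the encoder's exact formula:

```cpp
// Sketch only: NP as a weighted blend of the current frame's ME
// score (normalized by the running average ME score) and the
// current frame activity. Weights are assumed values.
double computeNeedParam(double currentMeScore,
                        double averageMeScore,
                        double currentActivity,
                        double meWeight = 0.7,
                        double actWeight = 0.3) {
    // Guard against a zero average early in a sequence.
    double normalizedMe =
        averageMeScore > 0.0 ? currentMeScore / averageMeScore : 1.0;
    return meWeight * normalizedMe + actWeight * currentActivity;
}
```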
During operation of a statmux system, such as illustrated in
For the high-definition encoder that processes multiple panels of a frame in parallel, such as illustrated in
As the statmux controller 6 receives updated NP data, it reallocates the bandwidths for all the video services based on the latest information. The bandwidth allocation is sent back to each encoder, such as 41-4N of
The statmux controller keeps an approximate Video Buffering Verifier (VBV) model for each encoder, such as is known from the MPEG standard, to ensure that each frame from each encoder is encoded within acceptable size limits. The VBV model is only approximate because the actual transmission rate changes that occur at the decode time of a frame cannot be precisely modeled in advance, at the time of encoding. The statmux controller 6 also keeps a bit accurate model of the BRQ, and calculates the minimum and maximum limits on the transmission rate before any transmission rate change is issued. Since all the video services need not be frame-synchronized, the encoding bit rates and transmission rates are updated as frequently as the statmux controller 6 can handle.
Initially, the encoder 41 includes a video capture module 50. The video capture module 50 provides a Video Input Clock signal, illustrated in
The encoder 41 further includes a Pre-Look Ahead Buffer (Pre-LAB) 52 that receives the output of the video capture module 50. The Pre-LAB 52 includes a few pipeline stages before a frame is placed in the Look Ahead Buffer (LAB) 58. These stages include some early Motion Estimation (ME) stages 54, an Inverse Telecine stage 55 to convert cinema signals to television format, and the Group of Pictures (GOP) stage 56. The ME stage 54 is provided in addition to the ME stage information from the compressor of the encoder 41 and is used to determine the NP that helps the statmux controller 6 determine bandwidth need for the individual signal prior to encoding.
The output of the Pre-LAB 52 is provided to the Look Ahead Buffer (LAB) 58. The LAB 58 will buffer a fixed number of frames, for example a fixed 30 frames, regardless of the input format. With a fixed 30 frames, the clock timing of the LAB 58 will be different when 30 frames per second (fps) vs. a 60 fps output is desired.
The output of the LAB 58 is provided to the compressor and other components of the encoder 41. The final output of encoder 41 is then provided to multiplexer 8. The multiplexer 8 provides a Transport Stream (TS) Output Clock that clocks the output packets from the multiplexer 8. The TS output clock, as shown in
Other time references relative to the video input clock and the TS output clock are also illustrated in
The second state 82 is the “Bitrate Queue (BRQ) Driven and Need Parameter (NP) Sent” state. In state 82, the encoder state machine will transition to the BRQ driven state and start sending NP data to the controller once the encoder starts receiving bitrates. The encoder only sends NP data to the statmux controller when it is receiving BRQ data.
The third and final state 84 is the “Nominal Bitrate No NP” state. This nominal bitrate no NP state 84 is entered when a Hold command is sent by the statmux controller. The hold command is only used when the statmux controller is ceasing to function for any reason, such as when it is recovering from a failure. In the hold state all encoders in the system are sent to a nominal bitrate while the statmux controller is rebooted. No NP data should be sent by the encoders in this state. To prevent errors, the encoders should not transmit on the PCI bus while the controller is recovering.
Appendix A shows an example of C++ code used for allocation of bitrates to individual encoders by the statmux controller. Initially in the code a total bits to allocate variable (bits_to_allocate) is set and the NP is initialized to zero. The remaining code will cause the statmux controller to allocate bits for each time slot (e.g. every 2 ms) based on the current NP value for each encoder.
Under the heading “Compute the total NP and assign a minBitrate to all channels” the code of Appendix A computes the total NP from the values provided by the encoders and then assigns a minimum bit rate to each channel formed by the output at each encoder. Each encoder will first receive its minimum bitrate allocation of bits for the next available time slot after the time slot being processed.
Next under the headings “Allocate remaining bits based on complexity” and “Looping to handle case where channels hit max bitrate,” the remaining bits available (of the original total) are allocated to each of the encoders in a linear mapping based on the NP received from that individual encoder. If an encoder then receives more bits than the maximum bitrate for that encoder, those bits are then given to the other encoders in a second iteration of the linear mapping.
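The two-pass allocation described above can be sketched as follows; the structure and variable names are illustrative stand-ins for the Appendix A code, the slot budget is assumed to cover the channel minimums, and integer rounding may leave a few bits unallocated in this sketch:

```cpp
#include <cstdint>
#include <vector>

struct Channel {
    uint64_t np;          // need parameter reported by the encoder
    uint64_t minBits;     // minimum allocation for the time slot
    uint64_t maxBits;     // maximum allocation for the time slot
    uint64_t allocated = 0;
};

// Give every channel its minimum first, then split the remaining
// bits of the slot in proportion to NP (a linear mapping), looping
// so that bits beyond a channel's maximum are re-spread over the
// channels that have not yet hit their cap.
void allocateSlot(std::vector<Channel>& chans, uint64_t bitsToAllocate) {
    for (auto& c : chans) {
        c.allocated = c.minBits;           // minimum bitrate first
        bitsToAllocate -= c.minBits;
    }
    bool capped = true;
    while (capped && bitsToAllocate > 0) {
        capped = false;
        uint64_t totalNp = 0;
        for (auto& c : chans)
            if (c.allocated < c.maxBits) totalNp += c.np;
        if (totalNp == 0) break;
        uint64_t pool = bitsToAllocate;
        for (auto& c : chans) {
            if (c.allocated >= c.maxBits) continue;
            uint64_t share = pool * c.np / totalNp;  // linear mapping
            uint64_t room  = c.maxBits - c.allocated;
            uint64_t given = share < room ? share : room;
            if (share > room) capped = true;  // re-spread the excess
            c.allocated += given;
            bitsToAllocate -= given;
        }
    }
}
```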
Embodiments of the present invention provide for an improved determination of NP. The embodiments described take into account factors such as scene changes, and difficult frames that are identified in the video data provided for encoding.
A. Scene Change
After determining that a scene change will occur, the coded ratios stored in an encoder may not provide an accurate determination of the complexity that is provided as part of the NP data for the statmux controller. In the past, when determining complexity, the encoder looked only at the current picture and previous picture history. If a new scene is significantly more complex and requires a higher bit rate, a complexity determination based only on current or previous data may not be adequate.
Discussion of a complexity determination will be made with respect to Appendix B, which includes C++ program code. The code of Appendix B is provided as an example to accomplish detection of a scene change and provide statistical information for the NP to enable controlling bit rate for an encoder. Also, reference will be made to
First in part A of the code of Appendix B and in step 90 of
Next in the code of Appendix B and in step 91 of
Next in Appendix B and step 92 in
If a scene change is detected, data within the new scene is specifically evaluated for complexity beginning in the code of section B of Appendix B. Initially for the new scene after the scene change, a best estimate is made for the coded ratios for the new scene. To do this, the code initially looks at the I, P and B type pictures. All I pictures from a scene tend to code similarly, and the same is true for P and B type pictures. However, the I, P and B type pictures can individually code very differently, so it is important to group complexity calculations by picture type. To ensure such a grouping, the code in section B determines on average what percentage of the pictures will be I, P and B type. These percentages are then used when determining the overall complexity.
Next, in a step of the code labeled “avgScores, Pic_type counts, and difficult frame counts,” calculations are made to determine complexity values for the current scene and the new scene using the average scores, picture type counts and difficult frame counts. Note from the code labeled “Do not include statistics from a previous scene” that only the new and current scenes are evaluated. Data from the “previous scene,” where the previous scene is the scene immediately preceding the current scene, is not included in the complexity determination.
Finally, in the code labeled “Using the standard GOP estimate if end of scene is not found or scene is longer than N” a limitation is made on the complexity evaluation. The limitation is based in part on the size of the LAB. If the entire scene does not fit within the LAB, the complexity determination is limited by the LAB size. Further, if the length of the scene is longer than N, then N limits the maximum data that can be determined and provided to the statmux controller for the bit rate statistical analysis; N will thus be a limiting factor on the total complexity analysis.
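The grouped complexity calculation described in this section can be sketched as follows, assuming per-frame ME scores as the difficulty measure; weighting the per-type averages by each picture type's share of the scene is an illustrative reading of the Appendix B approach, not its literal code:

```cpp
#include <vector>
#include <cstddef>

enum class PicType { I, P, B };

struct LabFrame {
    PicType type;
    double  meScore;   // assumed pre-encoding difficulty score
};

// Average scores per picture type over the new scene only, stopping
// at the end of the LAB or after maxN pictures (whichever comes
// first), then weight each average by the fraction of pictures of
// that type, since I, P and B pictures code very differently.
double sceneComplexity(const std::vector<LabFrame>& lab,
                       size_t sceneStart, size_t sceneEnd, size_t maxN) {
    double sum[3] = {0, 0, 0};
    size_t cnt[3] = {0, 0, 0};
    size_t end = sceneEnd < lab.size() ? sceneEnd : lab.size();
    if (end - sceneStart > maxN) end = sceneStart + maxN;  // N limit
    for (size_t i = sceneStart; i < end; ++i) {
        int t = static_cast<int>(lab[i].type);
        sum[t] += lab[i].meScore;
        ++cnt[t];
    }
    size_t total = end - sceneStart;
    if (total == 0) return 0.0;
    double complexity = 0.0;
    for (int t = 0; t < 3; ++t) {
        if (cnt[t] == 0) continue;
        double avg = sum[t] / cnt[t];
        double fraction = static_cast<double>(cnt[t]) / total;
        complexity += fraction * avg;  // weight by picture-type share
    }
    return complexity;
}
```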
The code in step C of Appendix B and in step 94 of
B. Difficult Frames
In one embodiment, the code further considers a particular class of “difficult frames.” Detecting difficult frames is also illustrated at step 95 in
With a determination of difficult frames as well as complexity due to scene changes, the code of step D of Appendix B and step 96 of
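A hypothetical combination of the difficult-frame count with the base NP might look like the following; the threshold and boost factor are assumptions for illustration, not values from Appendix B:

```cpp
#include <vector>
#include <cstddef>

// Frames whose pre-encoding score exceeds a threshold are counted
// as "difficult", and the need parameter is boosted in proportion
// to their share of the look-ahead window. Threshold and boost
// factor are illustrative assumptions.
double adjustNpForDifficultFrames(double baseNp,
                                  const std::vector<double>& scores,
                                  double difficultThreshold,
                                  double boostPerFrameShare) {
    if (scores.empty()) return baseNp;
    size_t difficult = 0;
    for (double s : scores)
        if (s > difficultThreshold) ++difficult;
    double share = static_cast<double>(difficult) / scores.size();
    return baseNp * (1.0 + boostPerFrameShare * share);
}
```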
C. Stream Alignment
In circumstances described below, the bitrate output allocated to the encoders for a time period can be misaligned with that time period, causing the bit output to exceed a maximum that will be accepted by components downstream from the encoders supplying the multiplexer. Bits can thus be dropped, or left on the floor, when such misalignment occurs. Code in C++ for preventing such misalignment is provided in Appendix C. Appendix C will be referenced in conjunction with the description to follow.
In one example statmux encoder, bit rates are allocated to the encoders in 2 msec time periods. A misalignment refers to when the bitrate allocations for a given 2 msec period are applied in the encoders during the wrong 2 msec time period. In some current devices, the bitrate allocation can be off by as much as 30 msec. So embodiments of the present invention take steps to ensure that the misalignment will not overflow the components downstream from the encoders over that 30 msec time. The multiplexer itself does not limit the total bandwidth for the muxed stream, but components such as buffers downstream from the encoders do. In the case of misalignment, rather than getting a perfectly multiplexed stream of 36 Mbps, the multiplexer output will waver a little when the bitrates are changing. In a perfect world with no misalignment between the streams from the encoders, no correction algorithm would be necessary.
To better understand the misalignment and how embodiments of the present invention can correct the misalignment, in a further example situation, assume four encoders are providing bits to a multiplexer. Assume a maximum of 20K bits are allocated in a 2 msec time period (referred to herein as time period 1). The four encoders are allocated bit rates for time period 1, as follows:
encoder 1: 10K
encoder 2: 5K
encoder 3: 5K
encoder 4: 0K
Then, in a second 2 msec time period subsequent to the first (referred to herein as time period 2), according to a change in need, the bit rates for the four encoders are reallocated as follows:
encoder 1: 5K
encoder 2: 5K
encoder 3: 0K
encoder 4: 10K
Now suppose the encoder 4 is misaligned and during time period 1 it jumps ahead to time period 2. Now the actual bit rate output during time period 1 will be as follows:
encoder 1: 10K
encoder 2: 5K
encoder 3: 5K
encoder 4: 10K
The total bit rate output will then be 30K, which exceeds the 20K maximum output to the multiplexer by 10K. In this case, 10K bits will be dropped.
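The arithmetic of this worked example can be captured in a small helper that totals the per-encoder outputs for a period and reports the overflow beyond the downstream maximum:

```cpp
#include <cstdint>
#include <cstddef>

// Sum the bit rate outputs of the encoders for one time period and
// return the number of bits that exceed the downstream maximum
// (bits that would be dropped), or zero if the budget is met.
uint32_t droppedBits(const uint32_t rates[], size_t n, uint32_t maxBits) {
    uint32_t total = 0;
    for (size_t i = 0; i < n; ++i) total += rates[i];
    return total > maxBits ? total - maxBits : 0;
}
```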
Encoder misalignment will occur on occasion. In one example system with 2 msec bitrate entries, accuracy of a number of parameters at worst case was set to 5 msec (although 30 msec accuracies can be required as noted above). The 5 msec inaccuracies included the 2 msec video clock timestamp inaccuracy, a 2 msec clock smoothing of timestamps and a 1 msec ability to fix the TS delay.
As noted from the example above, there can be a significant change in bit rate from an encoder from time period to time period. For instance, encoder 4 jumped from 0K bits in time period 1 to 10K bits in time period 2. With such a significant change, when misalignment occurs, bits can be dropped. Accordingly, embodiments of the present invention limit the percentage increase of an encoder's output from time period to time period to prevent bits from being dropped when misalignment occurs.
Referring now to the code of Appendix C, a sample program is shown to limit the increase of an encoder's output from time period to time period. Initially, the maximum buffer size is defined as 20K bits, following the above example where the multiplexer can receive a maximum of 20K bits in a 2 msec time period. Further, the maximum alignment error is set, here to 5 msec, also following the above example situation. Further, a sample period is set by dividing the selected 2 msec time period into 90 kHz segments. Further, once the desired maximum number of delay cycles is determined, an accumulator step size to evaluate bit rate increase is identified.
Next, a maximum bitrate increase is defined based on the maximum packet buffer output and the accumulator step size. In an initialization and subsequent reconfiguration steps the maximum bitrate increase is adjusted based on downstream constraints such as the number of channels. Finally, the maximum bitrate for an encoder is set as the lesser of the previous channel bitrate or the previous bitrate plus the maximum bitrate increase when an increase is indicated in the need parameter provided from the encoder.
Next, in step 1006, to begin calculations to determine the maximum bit rate increase, a sample period is set by dividing each time period, for example 2 msec as indicated above, into small segments. Next, in step 1008, a maximum number of delay cycles is determined, and an accumulator step size for evaluating bit rate increase during a sample period is created. In step 1010, the maximum bitrate increase is defined based on the maximum packet buffer output and the accumulator step size. In step 1012, the maximum bitrate increase is adjusted based on downstream constraints such as the number of channels. Finally, in step 1014, the bitrate for an encoder during a time period is set as the lesser of the previous time period bitrate or the previous bitrate plus the maximum bitrate increase when an increase is indicated in the need parameter provided from the encoder.
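A sketch of the limiter described in steps 1006 through 1014, using the constants of the worked example (20K-bit downstream buffer per 2 msec slot, 5 msec worst-case alignment error, four channels); the derivation of the step size in the actual Appendix C code may differ:

```cpp
#include <cstdint>
#include <algorithm>

struct RateLimiter {
    uint32_t maxBufferBits = 20000;  // downstream limit per slot
    uint32_t slotMs        = 2;      // bitrate application period
    uint32_t maxAlignErrMs = 5;      // worst-case misalignment
    uint32_t numChannels   = 4;      // downstream constraint

    // Largest per-slot increase that cannot overflow the buffer
    // even if an encoder is misaligned by the full alignment error.
    // The division by channel count models the downstream
    // adjustment of step 1012; this formula is an assumption.
    uint32_t maxIncrease() const {
        uint32_t delaySlots = (maxAlignErrMs + slotMs - 1) / slotMs; // ceil
        return maxBufferBits / (delaySlots * numChannels);
    }

    // New slot bitrate: never more than previous + maxIncrease.
    uint32_t limit(uint32_t previousBits, uint32_t requestedBits) const {
        return std::min(requestedBits, previousBits + maxIncrease());
    }
};
```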
D. Film Mode
Film mode provides a special case for determining need parameter. The film mode provided for embodiments of the present invention is different from previous film mode determinations because complexity parameters are not provided based solely on the next picture provided in the LAB. Instead, the start time for the data in the LAB is determined from looking at data in the LAB.
Film mode provides signals at 24 frames per second. Non-film modes include SD and HD modes, with SD mode operating at 30 frames per second. HD mode at 720P operates at 60 frames per second. Also in non-film mode, sometimes the display provides 3 time slots and other times it provides 2 time slots for display of the same frame. Hence the term 3:2 mode for strict non-film data. With inverse telecine, or non-film mode, 3 time slots will be constantly displayed per frame. The encoder, without being aware of the shift, will throw away the extra time slot when 3 time slots are used in a frame. Thus, in embodiments of the present invention, the frame rate is determined to indicate if the picture is in 3:2 mode or not. The system looks at three seconds of data to determine if 2 or 3 time slots are being displayed per frame. If the system is using film mode, the system will look at 1.5 second intervals. Further, the system will provide NP data to ensure that the encoder produces 60 frames per second for two time slots in HD TV mode and not 24 frames.
When in film mode, each encoder will have to ensure that the need parameter (NP) sent to the statmux controller represents the fixed Decode Time Stamp (DTS) to account for the difference between the TS Delay and: 24 frames per second for film mode; 30 frames per second when in SD non-film mode; or 60 frames per second when in 720P HD non-film mode. In film mode, embodiments of the invention require going into the LAB in order to determine the NP, rather than using the next picture to be encoded as in non-film mode.
Appendix D provides code for determining need parameter (NP) for film mode to account for duration for embodiments of the invention. Initially, the code determines a target DTS in the LAB by using the Presentation Time Stamp (PTS) of the last frame captured. The target DTS is found in the LAB based on frames per second (fps) being produced to the decoder, 60 fps for HD mode, 30 fps for SD, or 24 fps for film mode. Next, the target DTS is fine tuned for best Video Quality (VQ). Next, using the target DTS, the LAB is searched to find the target frame for determining NP complexity data. For larger frames, a variable “tmp” is set to account for the LAB length. If a first condition occurs, 2 time slots are added to tmp. If a second condition occurs, 4 timeslots are added to tmp. If a third condition occurs, tmp has a single field added. Finally, the need parameter (Needparam) is determined for providing to the statmux controller to determine a bit rate for the encoder.
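The target-DTS search can be sketched as follows; the LAB entry layout and the placement of the target one LAB-length behind the last captured PTS are assumptions, and the fine-tuning offsets of the real code (2 or 4 time slots, a single field) are omitted. Timestamps are in 90 kHz ticks, the clock used by MPEG transport streams:

```cpp
#include <vector>
#include <cstdint>
#include <cstddef>
#include <limits>

struct LabEntry {
    int64_t dts;   // decode time stamp, 90 kHz units
    double  np;    // pre-encoding complexity data for the frame
};

// Derive a target DTS from the PTS of the last captured frame and
// the output frame rate (60 fps HD, 30 fps SD, 24 fps film), then
// return the index of the LAB frame whose DTS is closest to it.
size_t findTargetFrame(const std::vector<LabEntry>& lab,
                       int64_t lastCapturedPts, int fps) {
    const int64_t ticksPerFrame = 90000 / fps;  // e.g. 3750 at 24 fps
    // Assumed: the target sits one LAB's worth of frames behind
    // the capture point.
    int64_t targetDts = lastCapturedPts -
                        static_cast<int64_t>(lab.size()) * ticksPerFrame;
    size_t best = 0;
    int64_t bestDist = std::numeric_limits<int64_t>::max();
    for (size_t i = 0; i < lab.size(); ++i) {
        int64_t d = lab[i].dts - targetDts;
        if (d < 0) d = -d;
        if (d < bestDist) { bestDist = d; best = i; }
    }
    return best;
}
```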
E. GOP Structure Changes
The GOP structure changes include determining the actual M and N values for a stream rather than using fixed values. M and N are defined as follows: N refers to the size of the GOP: number of pictures between I pictures. M refers to the size of a subGOP: number of pictures between P pictures. So in this stream: IBBPBBPBBPBBPBBPBBI, N=19 and M=3.
Previous systems used a fixed M and N as the nominal M and N for a stream. Embodiments of the present invention consider what the actual computed M and N are for the scene.
The following code illustrates the use of M and N in determining the need parameter (NP). M can be recomputed on each picture. The more predictable the content, the larger the M factor. For still content, M should be maximized. This code uses the computed M “u8M” for the target frame as the M factor for the calculation. Previous systems used a fixed M, such as 3.
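A hedged sketch of how a computed M might feed the NP, in the spirit of the description above; the inverse scaling by M is an illustrative assumption (a larger M means more cheaply coded B pictures per sub-GOP), and `u8M` is the only name taken from the description:

```cpp
#include <algorithm>
#include <cstdint>

// Scale a base complexity by the computed sub-GOP size u8M for the
// target frame, in place of the fixed M=3 of previous systems.
// The scaling form is assumed for illustration.
double needParamWithM(double baseComplexity, uint8_t u8M, uint8_t maxM) {
    uint8_t m = std::min(u8M, maxM);  // cap the computed M
    if (m == 0) m = 1;                // guard against division by zero
    return baseComplexity / static_cast<double>(m);
}
```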
Code for the N factor can additionally be found in Appendix B under the text heading “//Compute avgScores, pic_type counts, and difficult frame counts.” In this code, the N factor is the variable N++. For a value for N++, the code calculates the number of pictures between the first two I pictures in the LAB. This value is used for N, unless it is greater than a value maxN, where maxN will be a maximum calculation value provided as part of the NP to the statmux controller.
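The N computation described above can be sketched as follows; the boolean picture-type array and the fallback value are illustrative assumptions:

```cpp
#include <vector>
#include <algorithm>
#include <cstddef>

// Count the pictures between the first two I pictures in the LAB,
// capped at maxN. 'true' marks an I picture; a fallback is used
// when a second I picture is not found in the LAB (e.g. end of
// scene not reached within the buffer).
int computeN(const std::vector<bool>& isIPicture, int maxN, int fallbackN) {
    int firstI = -1;
    for (size_t i = 0; i < isIPicture.size(); ++i) {
        if (!isIPicture[i]) continue;
        if (firstI < 0) { firstI = static_cast<int>(i); continue; }
        int n = static_cast<int>(i) - firstI;  // pictures between I's
        return std::min(n, maxN);
    }
    return fallbackN;  // second I picture not found in LAB
}
```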
Although the present invention has been described above with particularity, this was merely to teach one of ordinary skill in the art how to make and use the invention. Many additional modifications will fall within the scope of the invention, as that scope is defined by the following claims.
This is a Continuation-in-Part of U.S. patent application Ser. No. 13/664,373 filed Oct. 30, 2012 and is further a Continuation-in-Part of U.S. patent application Ser. No. 13/657,624 filed on Oct. 22, 2012, both of which are incorporated by reference herein in their entirety.
Relation | Number | Date | Country
---|---|---|---
Parent | 13657624 | Oct 2012 | US
Child | 13802158 | | US
Parent | 13664373 | Oct 2012 | US
Child | 13657624 | | US