The present invention relates generally to providing data over a video stream, and more specifically to providing data over a block-based video stream.
Many different compression algorithms have been developed for digitally encoding video and audio information to minimize the bandwidth required to transmit this information for a given picture quality. Several multimedia specification committees have established and proposed standards for encoding/compressing and decoding/decompressing audio and video information. The most widely accepted international standards have been proposed by the Moving Picture Experts Group (MPEG), and are generally referred to as the MPEG-1, MPEG-2 and MPEG-4 standards. These MPEG standards for moving picture compression are used in a variety of current video playback products, including digital versatile (or video) disk (DVD) players, multimedia PCs having DVD playback capability, and satellite broadcast digital video.
In general, in accordance with the MPEG standards, the audio and video data comprising a multimedia data stream are encoded/compressed in an intelligent manner using a compression technique generally known as “motion coding”. More particularly, rather than transmitting each video frame in its entirety, MPEG uses motion estimation to encode, where possible, only those parts of sequential pictures that vary due to motion. In general, the picture elements or “pixels” of a picture are specified relative to those of a previously transmitted reference frame using motion vectors that specify the location of a 16-by-16 array of pixels, or “macroblock”, within the current frame relative to its original location within the reference frame. Three main types of video frames or pictures are specified by MPEG, namely, I-type, P-type, and B-type pictures.
An I frame is coded using only the information contained in that frame, and hence is referred to as “intra-coded”.
A P frame is coded/compressed using motion compensated prediction (or “motion estimation”) based upon information from a past reference frame (either I-type or P-type).
A B frame is coded/compressed using motion compensated prediction (or “motion estimation”) based upon information from a past reference frame, a future reference frame, or both (each either I-type or P-type). B frame pictures are usually inserted between I-type or P-type pictures, or combinations of the two.
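For illustration, motion-compensated prediction of a single macroblock can be sketched as follows. This is a minimal sketch only: the function name, the frame arrays, and the simulated motion are assumptions made for the example, not part of any MPEG reference implementation.

```python
import numpy as np

MB = 16  # macroblock dimension (16-by-16 pixels)

def predict_macroblock(reference_frame, mb_row, mb_col, motion_vector):
    """Fetch the 16x16 predictor for a macroblock from a reference frame.

    The motion vector (dy, dx) gives the predictor's location in the
    reference frame relative to the macroblock's position in the
    current frame.
    """
    dy, dx = motion_vector
    top, left = mb_row * MB + dy, mb_col * MB + dx
    return reference_frame[top:top + MB, left:left + MB]

# The encoder transmits the motion vector plus the (typically small)
# residual, rather than the macroblock's full pixel data.
current = np.random.randint(0, 256, (64, 64))
reference = np.roll(current, (2, 3), axis=(0, 1))  # simulated camera motion
predictor = predict_macroblock(reference, 0, 0, (2, 3))
residual = current[:16, :16] - predictor           # all zeros in this toy case
```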
The MPEG protocol supports transmission of audio stream data, video stream data, and other non-audio/video stream data. It is often desirable for content, such as video content, to be protected by access control techniques that prevent unauthorized access. Such access control information is generally sent using the non-audio/video packet capabilities of the MPEG protocol. However, access control information transmitted in this manner lends itself to being bypassed by merely separating the video stream data from the non-video stream data. Therefore, it would be useful to provide access control information and other information in a manner that does not lend itself to being separated from the video stream.
Therefore, it can be seen that a system and method for transmitting non-video information in a block-based multimedia protocol would be useful.
The present disclosure relates generally to providing non-video data within a block-based video stream. In addition, the present disclosure relates generally to retrieving the non-video data from block-based video streams.
At step 110, a location within a block-based video stream is identified for the storage of non-video data. As indicated in block 116, numerous considerations can be taken into account in identifying a specific location for the non-video data. For example, the specific location where non-video data is to be stored can depend upon a specific group of pictures, a specific frame within a group of pictures, or a specific block within a group or frame of pictures. For example, within a group of pictures, a specific frame selected by its frame type or frame location could be identified to contain the non-video data. Furthermore, within an identified frame, specific blocks may be identified to contain the non-video data. Within an identified block, the location to store non-video data can be restricted to specific samples associated with the block. Likewise, within each identified sample, the location of non-video data can be further limited to a specific bit location. It will be appreciated that any type of information uniquely identifying a group of pictures, a frame, a block, or a sample within a block that is associated with the MPEG protocol can be used to identify a specific location where non-video data is to be stored.
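One way such a hierarchy of criteria might be expressed is sketched below. The policy fields and their defaults are purely illustrative assumptions, chosen to match the specific embodiment described later (B frames, chrominance blocks, last sample, least significant bit).

```python
from dataclasses import dataclass

@dataclass
class InsertionPolicy:
    """Hypothetical criteria narrowing where non-video data may be stored."""
    frame_types: tuple = ("B",)      # e.g., restrict to B-type frames
    block_kinds: tuple = ("U", "V")  # e.g., restrict to chrominance blocks
    sample_index: int = 63           # e.g., the last of an 8x8 block's 64 samples
    bit_position: int = 0            # e.g., the least significant bit

def location_allows_insertion(policy, frame_type, block_kind, sample_index):
    """Return True when a (frame, block, sample) coordinate satisfies the policy."""
    return (frame_type in policy.frame_types
            and block_kind in policy.block_kinds
            and sample_index == policy.sample_index)

policy = InsertionPolicy()
assert location_allows_insertion(policy, "B", "U", 63)
```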
At step 111, video data at the location identified at step 110 is replaced with non-video data. As indicated in block 117, such information can be: data authentication information, such as would be used to allow a user to view specific multimedia presentation material; user control information, such as channel select, volume control, or picture control information that would be provided by the user; or system control information, whereby a system between an end user and a content provider of the video content inserts digital information for any of a number of purposes, including data authentication, control, or identification.
At step 112, a determination is made whether more non-video data is to be inserted in the current video stream; if so, the flow proceeds back to step 110. If not, the flow terminates at step 113. Specific implementations of the method are discussed below.
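The loop of steps 110 through 113 might be sketched as follows, under the assumption that the stream is modeled as a byte-addressable buffer and that a caller supplies the location-selection function of step 110; the names here are hypothetical.

```python
def insert_non_video_data(stream, payload_bits, find_next_location):
    """Steps 110-113: identify a location (step 110), replace one bit of
    video data with one bit of non-video data (step 111), and repeat
    while payload remains (step 112), terminating when done (step 113)."""
    cursor = 0
    for bit in payload_bits:
        cursor = find_next_location(stream, cursor)    # step 110
        stream[cursor] = (stream[cursor] & ~1) | bit   # step 111: LSB replacement
        cursor += 1                                    # advance past this location
    return stream                                      # step 113

# Toy usage: treat the last byte of every 8-byte group as an eligible location.
data = bytearray(range(64))
insert_non_video_data(data, [1, 0, 1], lambda s, c: ((c // 8) + 1) * 8 - 1)
```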
Upon receiving the combined video/non-video data, the data extraction module 125 extracts the non-video data from the video data and provides it to appropriate destinations. For example, the non-video data can be provided to an external source, or to an internal source, such as control module 127. The video data or the video/non-video data can be processed to create rendered video and provided to a display device. The data insertion module 120 further comprises a video encode module 121 coupled to a control module 122. In operation, the video encode module 121 receives video-in data and non-video data. The non-video data can be received from an external source, as indicated, or from an internal control module 122. Examples of an internal control module would include user authentication modules, identification insertion modules, as well as other control modules for controlling the flow of video information. In addition, it will be appreciated that the inserted data does not have to be related to the content of the video stream.
Data extraction system 125 further includes a video decode module 126 which receives the video/non-video data and extracts the non-video data. In one embodiment, at least a portion of the non-video data is provided to a control module 127. It will be appreciated that control module 127 working in unison with the control module 122, can be used to provide a wide range of control features between the data insertion system 120 and the data extraction system 125. Since the video data replaced by the non-video information is lost, the video decode module 126 can decode the non-video information, or replace it with a predefined value.
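A counterpart extraction routine for modules 126 and 127 might look like the following sketch; the flat location list and the predefined fill value are assumptions made for illustration, since the original video bits cannot be recovered.

```python
def extract_non_video_data(stream, locations, fill_value=0):
    """Sketch of modules 126/127: read each embedded bit back out and,
    because the original video bit is lost, overwrite it with a
    predefined value before the stream is rendered."""
    bits = []
    for idx in locations:
        bits.append(stream[idx] & 1)                         # recover the bit
        stream[idx] = (stream[idx] & ~1) | (fill_value & 1)  # predefined replacement
    return bits

data = bytearray([7, 14, 23])
assert extract_non_video_data(data, [0, 1, 2]) == [1, 0, 1]
```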
The back end video decompression system 143 can receive video data to be decompressed either directly from the non-video data extraction module 142, or from the front end decompression module 141. Where the information is received from the front end decompression module 141, it will be appreciated that the back end video decompression module receives both the video data and the non-video data. In one embodiment, this is acceptable in that the non-video data has been inserted into the video stream such that any artifacts introduced into the picture as a result of the non-video data would be either imperceptible to the viewer or would degrade the quality only to a level that remains acceptable. In an alternate embodiment, the non-video data could be replaced by other data by the non-video data extraction module 142. For example, all of the locations containing non-video data could be set to a specific value. However, it will be appreciated that the original video data contained at the storage location of the non-video data will not be recovered.
In operation, when a non-compressed video signal is received, motion compensation is performed by block 151. As previously discussed, one aspect of motion compensation would be to determine motion vectors. These motion vectors would generally be associated with a specific macroblock. Referring to the figures, the resulting macroblock of spatial video data is represented as the block of data 210.
The block of data 210 is received at the discrete cosine transform block 152, where a discrete cosine transform is performed. It will be appreciated that the discrete cosine transform converts the block of data 210 from the spatial domain into the frequency domain. Because most of the information associated with a specific block of video tends to reside in low-frequency components, it is a well recognized property of video that, as a general rule, video information stored in the frequency domain will be more compact than the same information stored in the spatial domain. This is represented in the figures as the frequency-domain block 211.
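This energy-compaction property can be demonstrated with a standard type-II discrete cosine transform; the sketch below uses scipy's dctn as a stand-in for module 152, and the smooth synthetic block is an assumed input.

```python
import numpy as np
from scipy.fft import dctn

# An 8x8 spatial block with slowly varying content, typical of natural video.
block = 128 + 8 * np.add.outer(np.arange(8), np.arange(8)).astype(float)

coeffs = dctn(block, norm="ortho")  # spatial domain -> frequency domain

# The energy collapses into the low-frequency (top-left) corner, which is
# why the frequency-domain representation is the more compact one.
print(np.round(coeffs[:2, :2], 1))  # large values
print(np.round(coeffs[4:, 4:], 1))  # essentially zero
```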
Next, at the quantization module 153, quantization is performed on the data block 211 generated by module 152. By quantizing the data, the amount of data representing a frame is further reduced. This is illustrated in the figures as the quantized block 212.
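A minimal sketch of the quantization step follows; the quantization matrix here is a hypothetical one, chosen only to show how coarser steps at higher frequencies drive most coefficients to zero.

```python
import numpy as np
from scipy.fft import dctn

block = 128 + 8 * np.add.outer(np.arange(8), np.arange(8)).astype(float)
coeffs = dctn(block, norm="ortho")

def quantize(coeffs, quant_matrix):
    """Sketch of module 153: divide each DCT coefficient by its quantizer
    step and round, driving most high-frequency terms to exactly zero."""
    return np.round(coeffs / quant_matrix).astype(int)

# A hypothetical quantization matrix with coarser steps at higher frequencies.
q = 8 + 4 * np.add.outer(np.arange(8), np.arange(8))
quantized = quantize(coeffs, q)
print(np.count_nonzero(quantized), "of 64 samples remain non-zero")
```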
At the data reorganization module 154, the information received from the quantization module 153 is reorganized. Often referred to as a zigzag operation, the data reorganization module 154 exploits the fact that, following the discrete cosine transform step of module 152 and quantization, the non-zero data is generally located at the low-frequency corner (top left) of the macroblock 212. By reorganizing the data to place the data stream samples containing non-zero data adjacent to one another, the back end compression step performed by compression module 137 can be made more efficient. The reorganized video data 214 is provided to the non-video data insertion module 136, which inserts the non-video data prior to providing it to the compression module 137.
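The clustering effect of the zigzag reorganization can be sketched generically as below; the actual MPEG scan order is fixed by the standard, and this anti-diagonal traversal is shown only to illustrate the principle.

```python
import numpy as np

def zigzag(block):
    """Sketch of module 154: read an 8x8 block along anti-diagonals so
    that the non-zero low-frequency coefficients (top-left corner)
    cluster at the front of the resulting 64-sample stream."""
    h, w = block.shape
    order = sorted(((r, c) for r in range(h) for c in range(w)),
                   key=lambda rc: (rc[0] + rc[1],
                                   rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))
    return np.array([block[r, c] for r, c in order])

# Non-zero data confined to the low-frequency corner ends up contiguous.
demo = np.zeros((8, 8), dtype=int)
demo[:2, :2] = [[90, 3], [5, 1]]
print(zigzag(demo)[:6])  # the non-zero values appear near the front
```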
Module 161 provides its output stream to the de-quantization module 162, which performs a de-quantization to provide a block representation similar to that of block 211 in the figures.
Following de-quantization, an inverse discrete cosine transform module 163 performs an inverse discrete cosine transform on the frequency-domain block in order to provide a block of spatial data, such as block 210, representing video data. Lastly, the motion compensation module 164 can perform any additional decompression, such as motion compensation, prior to providing the video-out.
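Modules 162 and 163 together might be sketched as follows; scipy's idctn is used as a stand-in for the inverse transform, and the quantization matrix is the same kind of hypothetical matrix assumed above. The rounding performed during quantization means the recovered pixels only approximate the originals.

```python
import numpy as np
from scipy.fft import idctn

def decode_block(quantized, quant_matrix):
    """Sketch of modules 162-163: multiply by the quantizer steps to
    de-quantize (an approximation, since rounding discarded precision),
    then inverse-transform back to spatial pixel data."""
    coeffs = quantized * quant_matrix   # module 162: de-quantization
    return idctn(coeffs, norm="ortho")  # module 163: inverse DCT

q = 8 + 4 * np.add.outer(np.arange(8), np.arange(8))
pixels = decode_block(np.eye(8, dtype=int), q)  # toy quantized input
```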
The specific B frame is illustrated as stream 320 in the figures.
The first macroblock of stream 320, B2 (00), is further illustrated as data block 330, which comprises four blocks of luminance data, Y1–Y4, and two blocks of chrominance data, U and V. Likewise, the second macroblock of B frame B2, B2 (01), is illustrated as data block 335, which also includes four luminance blocks and two chrominance blocks. In the specific embodiment illustrated, the block 330 representing the macroblock B2 (00) stores a single bit of non-video data in each of the chrominance blocks. For example, a first bit of data, B0, is stored within the U chrominance block of the macroblock B2 (00), while a second bit of data, B1, is stored within the V chrominance block of macroblock B2 (00). In a similar manner, the block 335, representing macroblock B2 (01), stores a single bit of information in each of its two chrominance blocks, as illustrated. The chrominance block U of data block 330 is further illustrated as samples 340. As illustrated, 64 samples reside within each 8×8 block of data; therefore, samples S1 through S64 are represented. In a similar manner, the V chrominance data is represented as samples 342. The sample S64 within samples 340 represents the last word of the chrominance data block U of the block B2 (00) data stream. The term “last sample” refers to the sample at one end of the transmission of a specific block of data. For example, the last sample of a chrominance block, sample S64 of block 340, is further represented by byte 350. Byte 350 is illustrated as having seven zero bits followed by an unknown bit X. The unknown bit X represents the non-video data to be inserted. The first sample of the V block in data block 330, immediately following the last sample of the U block, is illustrated as sample 351, which is illustrated as having all 8 bits set to X. It will be appreciated that a bit set to X represents a bit associated with a byte that is expected to have a non-zero value.
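A sketch of this single-bit embedding follows. It assumes the chrominance block is held as a flat list of 64 integer samples, which is an illustrative simplification of the coded bitstream rather than the actual representation.

```python
def embed_bit_in_last_sample(chroma_block, bit):
    """Write one non-video bit into the least significant bit of S64,
    the last of the 64 samples of an 8x8 chrominance block (modeled
    here as a flat list of 64 integer samples)."""
    chroma_block[63] = (chroma_block[63] & ~1) | (bit & 1)
    return chroma_block

u_block = [0] * 63 + [7]           # non-zero last sample, as in byte 350
embed_bit_in_last_sample(u_block, 1)
```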
Therefore, during compression, the samples of a block are encoded only if at least one of them is different from zero. A flag specifies whether or not the block samples are present in the video stream. In an embodiment, the data bit is inserted in the last bit of the last sample of a chrominance block. For example, byte 350 is the last sample of U chrominance block 340, and byte 352 is the last sample of the V chrominance block 342. By limiting data insertion to only those blocks of data that are non-zero, efficient compression is realized when entropy encoding, involving run-length encoding, is implemented by block 360. Note that even though all bytes 350–353 are illustrated as flowing to compressed run length data 360, each block is generally compressed independently of each other block. By limiting data insertion to only those blocks of data that are chrominance blocks, artifacts resulting from the insertion of data are minimized.
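The efficiency argument can be made concrete with a toy run-length encoder; this is an assumed simplification of the entropy coding of block 360, not the MPEG variable-length code. Because an all-zero block is signaled by a single flag rather than coded sample by sample, confining inserted bits to blocks that are already non-zero avoids forcing otherwise-empty blocks into the stream.

```python
def run_length_encode(samples):
    """Toy entropy stage for block 360: emit (run-of-zeros, value) pairs.
    Trailing zeros are dropped, as an end-of-block flag would cover them."""
    pairs, run = [], 0
    for s in samples:
        if s == 0:
            run += 1
        else:
            pairs.append((run, s))
            run = 0
    return pairs

assert run_length_encode([90, 3, 5, 0, 1, 0, 0, 0]) == [(0, 90), (0, 3), (0, 5), (1, 1)]
```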
In a specific embodiment, the data inserted in the data stream can include a specific bit code representing a start code. For example, a series of zeros of a specified length followed by a one can serve as a data start indicator. Once the data start indicator is recognized, the inserted data stream can be retrieved. Where the data to be inserted has a fixed length, it can be repeated in a continuous manner. In another embodiment, additional codes can be inserted in place of the start code, or following the start code, to indicate the presence and/or types of data being sent.
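Recovering such a start code from the extracted bit sequence might be sketched as below; the run length of fifteen zeros is an illustrative assumption, since the specification leaves the length unspecified.

```python
def find_data_start(bits, zero_run=15):
    """Scan a recovered bit sequence for a run of zeros of the specified
    length followed by a one, and return the index of the first payload
    bit, or None if no start indicator is found."""
    run = 0
    for i, b in enumerate(bits):
        if b == 0:
            run += 1
        elif run >= zero_run:   # a one terminating a long-enough zero run
            return i + 1        # payload begins after the start indicator
        else:
            run = 0             # a one arrived too early; reset the count
    return None

bits = [0] * 15 + [1] + [1, 0, 1]
assert find_data_start(bits) == 16
```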
It will now be appreciated that non-video data can be inserted in a video stream in a manner that reduces artifacts in the displayed video and does not have a significant impact on data size.
The various functions and components in the present application may be implemented using an information-handling machine such as a data processor, or a plurality of processing devices. Such a data processor may be a microprocessor, microcontroller, microcomputer, digital signal processor, state machine, logic circuitry, and/or any device that manipulates digital information based on operational instructions, or in a predefined manner. Generally, the various functions and systems represented by block diagrams are readily implemented by one of ordinary skill in the art using one or more of the implementation techniques listed herein. When a data processor for issuing instructions is used, the instructions may be stored in memory. Such a memory may be a single memory device or a plurality of memory devices. Such a memory device may be a read-only memory device, a random access memory device, magnetic tape memory, floppy disk memory, hard drive memory, external tape, and/or any device that stores digital information. Note that when the data processor implements one or more of its functions via a state machine or logic circuitry, the memory storing the corresponding instructions may be embedded within the circuitry that includes a state machine and/or logic circuitry, or it may be unnecessary because the function is performed using combinational logic. Such an information-handling machine may be a system, or part of a system, such as a computer, a personal digital assistant (PDA), a hand-held computing device, a cable set-top box, an Internet-capable device, such as a cellular phone, and the like.
In the preceding detailed description of the figures, reference has been made to the accompanying drawings, which form a part thereof, and in which is shown by way of illustration specific embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure, and it is to be understood that other embodiments may be utilized and that logical, mechanical, chemical and electrical changes may be made without departing from the spirit or scope of the disclosure. To avoid detail not necessary to enable those skilled in the art to practice the disclosure, the description may omit certain information known to those skilled in the art. Furthermore, many other varied embodiments that incorporate the teachings of the disclosure may be easily constructed by those skilled in the art. Accordingly, the present disclosure is not intended to be limited to the specific form set forth herein, but on the contrary, it is intended to cover such alternatives, modifications, and equivalents, as can be reasonably included within the spirit and scope of the disclosure. The preceding detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present disclosure is defined only by the appended claims.