Quantization table adjustment

Information

  • Patent Grant
  • 6687407
  • Patent Number
    6,687,407
  • Date Filed
    Tuesday, April 22, 2003
  • Date Issued
    Tuesday, February 3, 2004
Abstract
The application discloses a media processing method that includes accessing a compressed image data set representing a time series of different compressed video images. These video images have been compressed to differing degrees based on sizes of the compressed video images with a plurality of different quantization tables. The compressed image data set can then be decompressed to retrieve the time series of different video images, and this step of decompressing can be operative independent of information indicating a difference between quantization tables used to compress the images.
Description




BACKGROUND OF THE INVENTION




This invention relates to hardware designs coupled with software-based algorithms for capture, compression, decompression, and playback of digital image sequences, particularly in an editing environment.




The idea of taking motion video, digitizing it, compressing the digital datastream, and storing it on some kind of media for later playback is not new. RCA's Sarnoff labs began working on this in the early days of the video disk, seeking to create a digital rather than an analog approach. This technology has since become known as Digital Video Interactive (DVI).




Another group, led by Philips in Europe, has also worked on a digital motion video approach for a product they call CDI (Compact Disk Interactive). Both DVI and CDI seek to store motion video and sound on CD-ROM disks for playback in low cost players. In the case of DVI, the compression is done in batch mode, and takes a long time, but the playback hardware is low cost. CDI is less specific about the compression approach, and mainly provides a format for the data to be stored on the disk.




A few years ago, a standards-making body known as CCITT, based in France, working in conjunction with ISO, the International Standards Organization, created a working group to focus on image compression. This group, called the Joint Photographic Experts Group (JPEG) met for many years to determine the most effective way to compress digital images. They evaluated a wide range of compression schemes, including vector quantization (the technique used by DVI) and DCT (Discrete Cosine Transform). After exhaustive qualitative tests and careful study, the JPEG group picked the DCT approach, and also defined in detail the various ways this approach could be used for image compression. The group published a proposed ISO standard that is generally referred to as the JPEG standard. This standard is now in its final form, and is awaiting ratification by ISO, which is expected.




The JPEG standard has wide implications for image capture and storage, image transmission, and image playback. A color photograph can be compressed by 10 to 1 with virtually no visible loss of quality. Compression of 30 to 1 can be achieved with loss that is so minimal that most people cannot see the difference. Compression factors of 100 to 1 and more can be achieved while maintaining image quality acceptable for a wide range of purposes.




The creation of the JPEG standard has spurred a variety of important hardware developments. The DCT algorithm used by the JPEG standard is extremely complex. It requires converting an image from the spatial domain to the frequency domain, quantizing the various frequency components, and Huffman coding the resulting components. The conversion from spatial to frequency domain, the quantization, and the Huffman coding are all computationally intensive. Hardware vendors have responded by building specialized integrated circuits to implement the JPEG algorithm.




One vendor, C-Cube of San Jose, Calif., has created a JPEG chip (the CL550B) that not only implements the JPEG standard in hardware, but can process an image with a resolution of, for example, 720×488 pixels (CCIR 601 video standard) in just 1/30th of a second. This means that the JPEG algorithm can be applied to a digitized video sequence, and the resulting compressed data can be stored for later playback. The same chip can be used to compress or decompress images or image sequences. The availability of this JPEG chip has spurred computer vendors and system integrators to design new products that incorporate the JPEG chip for motion video. However, the implementation of the chip in a hardware and software environment capable of processing images with a resolution of 640×480 pixels or greater at a rate of 30 frames per second in an editing environment introduces multiple problems.




It is often desirable to vary the quality of an image during compression in order to optimize the degree of data compression. For example, during some portions of a sequence, detail may not be important, and quality can be sacrificed by compressing the data to a greater degree. Other portions may require greater quality, and hence this greater degree of compression may be unsuitable. In prior implementations of the JPEG algorithm, quality is adjusted by scaling the elements of a quantization table (discussed in detail hereinbelow). If these elements are scaled during compression, they must be correspondingly re-scaled during decompression in order to obtain a suitable image. This re-scaling is cumbersome to implement and can cause delays during playback. The present invention is a method that allows for quality changes during compression to enable optimum data compression for all portions of a sequence, while allowing playback with a single quantization table.




SUMMARY OF THE INVENTION




This invention relates to an apparatus and method for adjusting the post decompression quality of a compressed image. The image quality adjustment is performed by constructing a quantization table that specifies the high frequency image components to be filtered, and by subsequently filtering out those components specified by the table.











BRIEF DESCRIPTION OF THE DRAWING





FIG. 1 is a block diagram of a video image capture and playback system implementing data compression; and


FIG. 2 is a schematic illustration of data compression and decompression according to the JPEG algorithm.











DESCRIPTION OF THE PREFERRED EMBODIMENT




A block diagram according to a preferred embodiment of a system for capture, compression, storage, decompression, and playback of images is illustrated in FIG. 1.




As shown, an image digitizer (frame grabber) 10 captures and digitizes the images from an analog source, such as videotape. Image digitizer 10 may be, for example, a TrueVision NuVista+ board. However, the NuVista+ board is preferably modified and augmented with a pixel engine as described in copending application “Image Digitizer Including Pixel Engine” by B. Joshua Rosen et al., filed Dec. 13, 1991, to provide better data throughput for a variety of image formats and modes of operation.




The compression processor 12 compresses the data according to a compression algorithm. Preferably, this algorithm is the JPEG algorithm, introduced above. As discussed above, C-Cube produces a compression processor (CL550B) based on the JPEG algorithm that is appropriate for use as compression processor 12. However, other embodiments are within the scope of the invention. Compression processor 12 may be a processor that implements the new MPEG (Motion Picture Experts Group) algorithm, or a processor that implements any of a variety of other image compression algorithms known to those skilled in the art.




The compressed data from the processor 12 is preferably input to a compressed data buffer 14 which is interfaced to host computer 16 connected to disk 18. The compressed data buffer 14 preferably implements a DMA process in order to absorb speed differences between compression processor 12 and disk 18, and further to permit data transfer between processor 12 and disk 18 with a single pass through the CPU of host computer 16. The host computer 16 may be, for example, an Apple Macintosh.




JPEG Encoding and Decoding




Detailed discussions of the JPEG algorithm and its implementation are contained in “The JPEG Still Picture Compression Standard” by G. K. Wallace, in Communications of the ACM, Vol. 34, April 1991, and in “Digital Compression and Coding of Continuous-Tone Still Images, Part 1, Requirements and Guidelines,” ISO/IEC JTC1 Committee Draft 10918-1, February 1991, both of which are incorporated herein by reference.





FIG. 2 illustrates the key steps in data compression and decompression according to the JPEG algorithm for a single component of what will generally be a three-component image. In the JPEG standard, an image described in the RGB color space will be transformed into the YUV color space via a 3×3 multiplier prior to compression. This conversion sacrifices some color information, but preserves the more important detail information.




The algorithm works with blocks of 8×8 pixels from the image. Each 8×8 block is input to the compressor, goes through the illustrated steps, and the compressed data is output as a data stream.




The first step in the JPEG algorithm is a Forward Discrete Cosine Transform (FDCT). As described in Wallace, cited above, each 8×8 block of pixels can be thought of as a 64-point discrete signal which is a function of two spatial dimensions. The FDCT computes the “spectrum” of this signal in the form of 64 two-dimensional “spatial frequencies,” termed DCT coefficients. The DCT coefficients represent the relative amounts of the two-dimensional spatial frequencies contained in the 64-point discrete signal. The coefficient with zero frequency in both dimensions is called the “DC coefficient” and the remaining 63 coefficients are called the “AC coefficients.” Typically each pixel component corresponds to 8 bits, as is the case in 24 bit color. According to the JPEG algorithm, each coefficient is described by greater than 8 bits. In the C-Cube chip discussed above, the number of bits per coefficient is 12. Therefore, at this point, the algorithm has actually led to an expansion, rather than a compression of data. However, since pixel values usually vary slowly across an image, most of the pixel information will be contained in the lower spatial frequencies. For typical 8×8 pixel blocks, most of the spatial frequencies at the high end of the spectrum will have zero or negligible amplitude. Data compression can then be achieved by “throwing out” these coefficients, which is the purpose of the next step.
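
For concreteness, the FDCT described above can be sketched as follows (Python with numpy, shown purely for illustration; the function name and structure are assumptions, not part of the disclosure):

    import numpy as np

    def fdct_8x8(block):
        """Forward 2-D DCT of one 8x8 pixel block, per the standard FDCT formula.
        Coefficient [0, 0] is the DC term; the remaining 63 are the AC terms."""
        b = block.astype(np.float64) - 128.0          # level-shift 0..255 samples to -128..127
        c = lambda k: 1.0 / np.sqrt(2.0) if k == 0 else 1.0
        coeffs = np.zeros((8, 8))
        for u in range(8):
            for v in range(8):
                s = 0.0
                for x in range(8):
                    for y in range(8):
                        s += (b[x, y]
                              * np.cos((2 * x + 1) * u * np.pi / 16)
                              * np.cos((2 * y + 1) * v * np.pi / 16))
                coeffs[u, v] = 0.25 * c(u) * c(v) * s
        return coeffs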




The next step in the JPEG algorithm is quantization, wherein each of the 64 DCT coefficients is quantized in accordance with a 64-element quantization table. This table is specified by the user. The C-Cube chip allows user adjustability of this table via software inputs to the chip. Each element in the table is any integer from 1 to 255, according to the JPEG standard. Each element is the quantizer step size for a corresponding DCT coefficient. Quantization is achieved by dividing each DCT coefficient by its corresponding quantizer step size and rounding to the nearest integer, a very lossy process. The elements of the table are chosen so that the generally large lower frequency components are represented by a smaller number of bits, and the negligible higher frequency components become zero. The goal is to represent each DCT coefficient by no more precision than is necessary for a desired image quality. Since the coefficients, therefore, depend on human visual parameters, the table is sometimes called a psycho-visual weighting table.
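
In the simplest view, quantization and the corresponding dequantization at playback reduce to an element-wise divide-and-round against the table of step sizes. A minimal sketch, assuming 8×8 numpy arrays (the function names are illustrative):

    import numpy as np

    def quantize(coeffs, qtable):
        """Divide each DCT coefficient by its quantizer step size (1..255) and
        round to the nearest integer -- the lossy step.  Large step sizes drive
        small high-frequency coefficients to zero."""
        return np.rint(coeffs / qtable).astype(int)

    def dequantize(qcoeffs, qtable):
        """Playback side: multiply back by the same step sizes; the rounding
        error introduced during quantization is not recoverable."""
        return qcoeffs * qtable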




Compression is achieved by the use of run-length encoding, which puts an end-of-block code at the start of the sequence of zeros that will typically form the end of the 64-coefficient string. The zeros, therefore, do not contribute to the length of the data stream.




After the coefficients have been quantized, they are ordered into a “zig-zag” sequence, as illustrated in FIG. 2. This sequence facilitates the run-length encoding. Before going on to this step, it should be noted that, since the DC coefficient is generally one of the largest coefficients, and furthermore since it is a measure of the average value of the 64 pixels in the 8×8 block, there is generally a strong correlation between the DC coefficients of adjacent blocks; therefore, the DC component is encoded as the difference from the DC term of the previous block in the compression order.
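
A sketch of the zig-zag ordering, the trailing-zero handling, and the differential coding of the DC term (again illustrative Python; the names and the symbolic end-of-block marker are assumptions):

    # Zig-zag order for an 8x8 block: anti-diagonals, alternating direction, so that
    # low frequencies come first and the high-frequency zeros cluster at the end.
    ZIGZAG = sorted(((u, v) for u in range(8) for v in range(8)),
                    key=lambda p: (p[0] + p[1],
                                   p[0] if (p[0] + p[1]) % 2 else -p[0]))

    def encode_block(qcoeffs, eob="EOB"):
        """Read quantized coefficients in zig-zag order and replace the trailing
        run of zeros with a single end-of-block marker, so the zeros contribute
        nothing to the data stream."""
        seq = [qcoeffs[u][v] for (u, v) in ZIGZAG]
        last = max((i for i, c in enumerate(seq) if c != 0), default=-1)
        return seq[:last + 1] + [eob]

    def dc_differences(dc_terms):
        """DC terms of adjacent blocks are strongly correlated, so each is coded
        as the difference from the previous block's DC term."""
        prev, out = 0, []
        for dc in dc_terms:
            out.append(dc - prev)
            prev = dc
        return out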




The final step is entropy coding, wherein additional compression is achieved by encoding the quantized DCT coefficients according to their statistical characteristics. This is a lossless step. As this step is not as relevant to the methods of the present invention as those of the previous steps, the reader is referred to Wallace, cited above, for a detailed discussion.




The above steps are essentially reversed, as illustrated in FIG. 1b, during playback. Here too, the reader is referred to Wallace for further details.




Image Quality Adjustment




From the above discussion, it can be seen that image quality can be adjusted by scaling the values of the quantization table. For higher quality images, the elements should be small, since the larger the elements, the greater the loss.




In prior art systems, this is precisely the technique used to adjust image quality during image capture. A variable quality scaling factor (1-255) called the quantization factor or Q-factor is used with JPEG to adjust the degree of quantization of the compressed image. For sequences requiring high quality, low Q-factors are used. For sequences in which quality can be sacrificed, high Q-factors are used. It can be imagined that a user may want to continuously adjust the quality over the range of the Q-factor at the time of capture as scenes change.
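
The prior-art adjustment amounts to rescaling the whole table. A minimal sketch (the mapping from Q-factor to multiplier varies between implementations; here it is treated simply as a multiplier for illustration, and the function name is an assumption):

    import numpy as np

    def scale_table(base_table, q_factor):
        """Scale every quantizer step size by the Q-factor and clamp to the JPEG
        range 1..255.  A table scaled this way at capture must be correspondingly
        descaled at playback -- the burden the present invention avoids."""
        scaled = np.rint(np.asarray(base_table, dtype=float) * q_factor)
        return np.clip(scaled, 1, 255).astype(int)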




The problem with the above method is that if the quantization table values are scaled during image capture, they must be correspondingly descaled during image playback. To illustrate the importance of this, imagine the result if the quantization table element corresponding to the DC coefficient is multiplied by a factor of 10 at some point during image capture in an effort to increase the degree of data compression. If at playback, the original quantization table is used (prior to the upward scaling), the DC coefficient will be 10 times too small. Since the DC component primarily corresponds to brightness, the result is dramatic.
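
A small worked example of that mismatch, using hypothetical numbers chosen only for illustration:

    base_step = 16                               # hypothetical step size for the DC coefficient
    dc = 960                                     # hypothetical DC coefficient of one block
    q_capture = round(dc / (base_step * 10))     # captured with the step scaled by 10 -> 6
    restored = q_capture * base_step             # played back with the unscaled table -> 96
    # restored is roughly 10x too small (96 instead of ~960), so the block plays back far too dark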




The method of the present invention is an alternate method for adjusting quality during image capture which permits playback using a single quantization table. According to the invention, the DCT coefficients are filtered during image capture according to the following technique.




As has already been discussed, the DC coefficient is the most important in terms of human perception. The higher the frequency of a coefficient, the finer the detail it describes in an image. Humans are much less sensitive to these high frequency components. Therefore, according to the invention, if image quality is to be lowered to further compress the data, the high frequency components are filtered out. The cut-off frequency of the filter determines the degree of compression. This method is in clear contradistinction to the prior method of adjusting the Q-factor.




As described above and illustrated in FIG. 2, the coefficients are sequenced in a zig-zag pattern as part of the quantization step. A filter according to one embodiment of the invention can be characterized as a diagonal line indicating the cutoff frequency. The effect of throwing out the higher frequency components is a blur of the image to an extent determined by the cutoff frequency. This artifact is often acceptable, depending on the scene and the quality required.




Furthermore, the artifact caused by the filtering can be made more tolerable to the eye by adjusting the filter in the following manner. If in addition to throwing out all frequency components above cutoff, the frequency components just below cutoff are muted, the artifact is made less harsh.




The filter described above can be created by hand-creating quantization tables. For all frequencies above cutoff, the table elements should be large, preferably as large as possible without overflowing the arithmetic of the system. For frequencies below cutoff the table elements can be exactly as used in standard JPEG implementations. However, preferably, the table elements below but near cut-off are increased by some amount to mute the corresponding frequency components as described above. Preferably, this muting is greatest at cutoff, decreasing as the DC coefficient is approached.
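
One way such a table might be built is sketched below, with the cutoff expressed as a diagonal index u + v (the parameter names, the linear taper, and the use of 255 as the largest step size are illustrative assumptions, not the disclosed values):

    import numpy as np

    MAX_STEP = 255  # assumed largest step size that does not overflow the arithmetic

    def filtered_table(base_table, cutoff, mute_band=3, mute_gain=4.0):
        """Build a quantization table that acts as a low-pass filter.  Coefficients
        on or above the cutoff diagonal get the largest step size and quantize to
        zero; coefficients just below the cutoff are 'muted' by inflating their
        step sizes, most strongly near the cutoff and tapering toward the DC term."""
        table = np.asarray(base_table, dtype=float).copy()
        for u in range(8):
            for v in range(8):
                d = u + v
                if d >= cutoff:
                    table[u, v] = MAX_STEP
                elif d >= cutoff - mute_band:
                    # linear taper: no muting at the bottom of the band, most just below the cutoff
                    frac = (d - (cutoff - mute_band)) / float(mute_band)
                    table[u, v] = min(MAX_STEP, table[u, v] * (1.0 + (mute_gain - 1.0) * frac))
        return np.rint(table).astype(int)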




The filter can be easily adjusted during image capture to control the degree of data compression by changing the quantization table. In one mode of operation, the filter is user adjusted. However, in another mode of operation, the filter may be automatically adjusted by the system when it senses bottlenecks forming. In this mode, an interrupt routine is activated on each frame. It computes the current frame size, compares it with the desired target size, and then adjusts the table by moving the filter cutoff frequency to approach the target.
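
Such a per-frame feedback step might look like the following sketch (the thresholds, step size, and names are assumptions made for illustration):

    def adjust_cutoff(cutoff, frame_bytes, target_bytes, step=1, low=1, high=14):
        """Called once per frame: if the last compressed frame exceeded the target
        size, lower the cutoff diagonal to discard more high-frequency coefficients;
        if it came in comfortably under the target, raise the cutoff again."""
        if frame_bytes > target_bytes:
            cutoff = max(low, cutoff - step)
        elif frame_bytes < 0.9 * target_bytes:
            cutoff = min(high, cutoff + step)
        return cutoff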




As stated above, the invention was developed as a method for adjusting quality during image capture in such a way that playback can take place in the absence of the history of such adjustment. It should be clear that this is achieved when the images are played back using the original quantization tables. This is because only the least important coefficients are affected by the filtering. In contrast, in the prior methods for quality adjustment, all coefficients were affected to the same degree.




Subsampling introduces artifacts, called aliases, into the signal. These alias frequencies can be predicted and removed by increasing the quantization table entries that correspond to them.
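
A minimal sketch of that idea (the helper name, the inputs, and the inflation factor are assumptions; which entries correspond to the predicted aliases would come from the subsampling analysis):

    def suppress_aliases(qtable, alias_positions, factor=8):
        """Given (u, v) positions of predicted alias frequencies, inflate the
        corresponding quantization table entries (capped at 255) so those
        components are driven toward zero."""
        t = [row[:] for row in qtable]
        for (u, v) in alias_positions:
            t[u][v] = min(255, t[u][v] * factor)
        return t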



Claims
  • 1. A media processing method, comprising: accessing a compressed image data set representing a time series of different compressed video images that have been compressed to differing degrees based on sizes of the compressed video images with a plurality of different quantization tables and including a first compressed video image compressed with a first quantization table and a second video image compressed with a second quantization table, and decompressing the compressed image data set to retrieve the time series of different video images, wherein the step of decompressing the compressed image data set includes a step of decompressing the first compressed video image to obtain a first video image and a step of decompressing the second video image to obtain a second video image, and wherein the step of decompressing is operative independent of information indicating a difference between the first and second quantization tables.
  • 2. The method of claim 1 further including the step of editing the time series of video images.
  • 3. The method of claim 2 wherein the step of editing includes introducing transitions between portions of the time series of video images.
  • 4. The method of claim 1 wherein the step of decompressing the first compressed video image and the step of decompressing the second compressed video image are each based on a same quantization table.
  • 5. The method of claim 1 wherein the step of accessing accesses a compressed image data set representing a time series of different compressed video images that are compressed to differing degrees by adjusting an extent of quantization.
  • 6. The method of claim 1 wherein the step of accessing accesses a compressed image data set representing a time series of different compressed video images that are compressed to differing degrees by different quantization tables.
  • 7. The method of claim 1 wherein the step of decompressing operates according to a JPEG decompression standard.
  • 8. The method of claim 1 wherein the step of accessing accesses a compressed image data set representing a time series of different compressed digitized video images.
  • 9. The method of claim 1 wherein the step of decompressing includes steps of decoding data from the compressed image data set to produce sets of coefficients and steps of performing inverse transforms on the sets of coefficients to generate the decompressed video images.
  • 10. The method of claim 1 wherein the step of decompressing takes place at a video playback rate for the time series of different video images.
  • 11. The method of claim 10 wherein the step of decompressing takes place at a rate of around 30 frames per second.
Parent Case Info

This application is a continuation of Ser. No. 10/197,682, filed Jul. 17, 2002, U.S. Pat. No. 6,553,142, which is a continuation of Ser. No. 09/723,575, filed Nov. 28, 2000, U.S. Pat. No. 6,489,969, which is a continuation of Ser. No. 09/370,749, filed Aug. 9, 1999, U.S. Pat. No. 6,249,280, which is a continuation of Ser. No. 08/676,689, filed Jul. 10, 1996, U.S. Pat. No. 6,118,444, which is a continuation of Ser. No. 08/270,442, filed Jul. 5, 1994, U.S. Pat. No. 5,577,190, which is both a continuation of Ser. No. 07/866,829, filed Apr. 10, 1992, U.S. Pat. No. 5,355,450, and a continuation-in-part of Ser. No. 08/400,993, filed Mar. 15, 1994, abandoned, which is a continuation of Ser. No. 07/807,117, filed Dec. 13, 1991, abandoned, all of which are herein incorporated by reference.

US Referenced Citations (79)
Number Name Date Kind
3813485 Arps May 1974 A
4191971 Dischert et al. Mar 1980 A
4302775 Widergren et al. Nov 1981 A
4394774 Widergren et al. Jul 1983 A
4574351 Dang et al. Mar 1986 A
4599689 Berman Jul 1986 A
4672441 Hoelzlwimmer et al. Jun 1987 A
4704628 Chen et al. Nov 1987 A
4704730 Turner et al. Nov 1987 A
4707738 Ferre et al. Nov 1987 A
4729020 Schaphorst et al. Mar 1988 A
4734767 Kaneko et al. Mar 1988 A
4785349 Keith et al. Nov 1988 A
4797742 Sugiyama et al. Jan 1989 A
4809067 Kikuchi et al. Feb 1989 A
4814871 Keesen et al. Mar 1989 A
4839724 Keesen et al. Jun 1989 A
4849812 Borgers et al. Jul 1989 A
4890161 Kondo Dec 1989 A
4897855 Acampora Jan 1990 A
4937685 Barker et al. Jun 1990 A
4951139 Hamilton et al. Aug 1990 A
4962463 Crossno et al. Oct 1990 A
4982282 Saito et al. Jan 1991 A
4985766 Morrison et al. Jan 1991 A
4988982 Rayner et al. Jan 1991 A
5006931 Shirota Apr 1991 A
5021891 Lee Jun 1991 A
5038209 Hang Aug 1991 A
5046119 Hoffert et al. Sep 1991 A
5047853 Hoffert et al. Sep 1991 A
5050230 Jones et al. Sep 1991 A
5061924 Mailhot Oct 1991 A
5068745 Shimura Nov 1991 A
5073821 Juri Dec 1991 A
5107345 Lee Apr 1992 A
5109451 Aono et al. Apr 1992 A
5122875 Raychaudhuri et al. Jun 1992 A
5130797 Murakami et al. Jul 1992 A
5138459 Roberts et al. Aug 1992 A
5146564 Evans et al. Sep 1992 A
5150208 Otaka et al. Sep 1992 A
5164980 Bush et al. Nov 1992 A
5168374 Morimoto Dec 1992 A
5170264 Saito et al. Dec 1992 A
5179651 Taaffe et al. Jan 1993 A
5191548 Balkanski et al. Mar 1993 A
5191645 Carlucci et al. Mar 1993 A
5193002 Guichard et al. Mar 1993 A
5196933 Henoi Mar 1993 A
5202760 Tourtier et al. Apr 1993 A
5228028 Cucchi et al. Jul 1993 A
5228126 Marianetti, II Jul 1993 A
5237675 Hannon, Jr. Aug 1993 A
5253078 Balkanski Oct 1993 A
5270832 Balkanski et al. Dec 1993 A
5274443 Dachiku et al. Dec 1993 A
5287420 Barrett Feb 1994 A
5301242 Gonzales et al. Apr 1994 A
5309528 Rosen et al. May 1994 A
5321440 Yanagihara et al. Jun 1994 A
5329616 Silverbrook Jul 1994 A
5341318 Balkanski et al. Aug 1994 A
5347310 Yamada et al. Sep 1994 A
5355450 Garmon et al. Oct 1994 A
5369505 Watanabe et al. Nov 1994 A
RE34824 Morrison et al. Jan 1995 E
5379356 Purcell et al. Jan 1995 A
5388197 Rayner Feb 1995 A
5414796 Jacobs et al. May 1995 A
5577190 Peters Nov 1996 A
5600373 Chui et al. Feb 1997 A
5825970 Kim Oct 1998 A
6023531 Peters Feb 2000 A
6046773 Martens et al. Apr 2000 A
6118444 Garmon et al. Sep 2000 A
6249280 Garmon et al. Jun 2001 B1
6489969 Garmon et al. Dec 2002 B1
6553142 Peters Apr 2003 B2
Foreign Referenced Citations (7)
Number Date Country
0 323 362 Jul 1989 EP
0 347 330 Dec 1989 EP
0 469 835 Feb 1992 EP
2 597 282 Oct 1987 FR
2104180 Apr 1990 JP
WO 9114339 Sep 1991 WO
WO 9222166 Dec 1992 WO
Non-Patent Literature Citations (50)
Entry
“100Mbit/s HDTV Transmission Using a High Efficiency Codec,” Y. Yashima and K. Sawada, Signal Processing of HDTV, II, L. Chiariglione (ed), Elsevier Science Publishers B.V., 1990, pp. 579-586.
“A Chip Set Core for Image Compression” A. Artieri and O. Colavin, IEEE Transactions on Consumer Electronics, vol. 36, No. 3, Aug. 1990, pp. 395-402.
“A Complete Single-Chip Implementation of the JPEG Image Compression Standard,” M. Bolton et al., Proc. of the CICC, pp. 12.2.1-12.2.4, May 1991.
“A JPEG Still Picture Compression List” Tsugio Noda et al. 1991 Symposium on ULSI Circuits, pp. 33-34.
“Adaptive Transform Coding of HDTV Pictures,” Chantelou et al.
“An Encoder/Decoder Chip Set for the MPEG Video Standard,” Ichiro Tamitani et al., IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP-92, vol. 5 Mar. 1992, pp. 661-664.
“An Experimental Digital VCR With 40mm Drum, Single Actuator and DCT-Based Bit-Rate Reduction,” S.M.C. Borgers et al. IEEE Trans. on Consumer Electronics, vol. 34, No. 3, 1988.
“Announcing a totally new concept in the field of video post production” allegedly distributed Jan. 1992.
“C-Cube CL550, JPEG Image Compression Processor”, C-Cube MicroSystems, Preliminary Data Book, Feb. Aug. 1991, pp. 1-93.
“C-Cube CL550.TM. A Development Board for NuBus.TM.,” C-Cube Microsystems, Oct. 1990. Product Literature.
“C-Cube CL550.TM. Compression Monitor User's Manual,” Version 3.1, A Compression/Decompression Utility for Use With the C-Cube CL550 Development Board, C-Cube Microsystems, Aug. 1991, pp. 1-13.
“C-Cube Microsystems Compression Workshop,” C-Cube Microsystems, 1990.
“CD-I Full-Motion Video Encoding on a Parallel Computer,” F. Sijstermans and J. van der Meer, Communications of the ACM, vol. 34, No. 4, Apr. 1991, pp. 82-91.
“CenterStage Application Environment” Advertising material, Fluent Machines Inc.
“CL550 Engineering Samples (ES2 Revision) Bug List” C-Cube Microsystems, Product Marketing Feb., 1991.
“CL550 Errata Information,” C-Cube Product Marketing Literature, Nov. 1990.
“CL550 Reference Streams,” C-Cube MicroSystems Technical Note.
“CL550A JPEG Image Compression Processor”, C-Cube MicroSystems, Preliminary Data Book, Feb. 1990, pp. 1-36.
“Coding of Color Television Signals Using a Modified M-Transform for 34 MBit/s-Transmission,” Kiesen, et al. Frequenz, vol. 38, No. 10, Oct. 1984, with translation, pp. 1-7.
“Combined Source Channel Coding in Adaptive Transform Coding Systems for Images,” Lohscheller, H. and Goetze, M., Proceedings of the IEEE International Conference on Communications, May 1984, vol. 1, pp. 511-515.
“Compression Monitor Software (Version 2.0) User's Manual,” C-Cube Microsystems.
“Compressor/Decompressor (CODEC),” Advertising Literature Fluent Machines Inc.
“DigiCipher.TM.—All Digital Channel Compatible HDTV Broadcast System,” W. Palk IEEE Trans. on Broadcasting, vol. 36, No. 4, Dec. 1990.
“Digital Pictures, Representation and Compression,” A, N. Netravlj and B. G. Haskell, Plenum Press, New York, Jun. 1989, pp. 301-551.
“Feature Sets for Interactive Images,” A. Lippman, Communications of the ACM, vol. 34, No. 4, Apr. 1991, pp. 93-102.
“Fluent Multimedia Extending the Capabilities of DVI,” Advertising material, Fluent Machines Inc.
“FM/1 Multimedia Development System,” Advertising material, Fluent Machines Inc.
“IC801 Single-Chip P.times.64 Codec For Video Phones” Preliminary Information, InfoChip Systems Incorporated, Mar. 1992, pp. 1-12.
“Image Coding by Adaptive Block Quantization,” Tasto et al. IEEE Transactions on Communication Technology, vol. COM-19, No. 6, Dec. 1971, pp. 957-972.
“Interframe Adaptive Data Compression Technique for Images,” J.R. Jain & A.K. Jain, Signal and Image Processing Lab., Dept. of Electrical and Computer Eng., Univ. of California, Davis, Aug. 1979, pp. 1-177.
“L64735 Discrete Cosine Transform Processor,” LSI Logic Corporation, Jan. 1991.
“L64745 JPEG Coder,” LSI Logic Corporation Jan. 14, 1991, pp. 1-14.
“Monolithic Circuits Expedite Desktop Video,” D. Pryce, Electrical Design News, vol. 36, No. 22, Oct. 1991, Newton, MA, pp. 67, 69, 74 and 76.
“Multimedia Group Strategy and Media 100.TM. Backgrounder” dated Feb. 1992.
“NeXTstep; Putting JPEG to Multiple Uses,” G. Cockroft and L. Hourvitz Communications of the ACM Apr. 1991, vol. 34, No. 4, pp. 45 and 116.
“OBRAZ 1/Caracteristiques Generales,” Advertising material MACSYS (with translation).
“OBRAZ Explication succincte” Advertising material MACSYS (with translation).
“Overview of the px64 kbit/s Video Coding Standard,” M. Liou, Communications of the ACM, vol. 34, No. 4., Apr. 1991, pp. 60-63.
“Silicon Solution Merges Video, Stills and Voice,” Milt Leonard, Electronic Design Apr. 2.
Technical Notes Mar. 1990, C-Cube Microsystems, 1990.
“The C-Cube CL550 JPEG Image Compression Processor,” S.C. Purcell, IEEE Computer Society International Conference, 1991, pp. 318-323.
“The JPEG Still Picture Compression Standard,” Wallace, G.K. Communications of the Association for Computing Machinery, vol. 34, No. 4, pp. 30-34, Apr. 1991.
“Toward an Open Environment for Digital Video,” M. Liebhold and E. M. Hoffert. Communications of the ACM vol. 34, No. 4, Apr. 1991 pp. 104-112.
“Video Compression Chip Set,” LSI Logic Corporation pp. 1-16.
“Video Compression Chipset Attacks High Multimedia Price Tags,” LSI Logic Corporation.
“Signal Processing of HDTV,” Proceedings of the Second International Workshop on Signal Processing of HDTV, L'Aquila, Feb. 29th-Mar. 2, 1988, pp. 231-238. 1984.
News Release entitled “Media 100.TM.—Industry's First Online, Nonlinear Video Production System Introduced by Data Translation's Multimedia Group” dated Jan. 11, 1992.
Proceedings of the 1983 International Zurich Seminar on Digital Communications, Lohscheller, H. Mar. 1984, pp. 25-31.
U.S. Ser. No. 08/048,458.
U.S. Ser. No. 08/048,782.
Continuations (7)
Number Date Country
Parent 10/197682 Jul 2002 US
Child 10/420474 US
Parent 09/723575 Nov 2000 US
Child 10/197682 US
Parent 09/370749 Aug 1999 US
Child 09/723575 US
Parent 08/676689 Jul 1996 US
Child 09/370749 US
Parent 08/270442 Jul 1994 US
Child 08/676689 US
Parent 07/866829 Apr 1992 US
Child 08/270442 US
Parent 07/807117 Dec 1991 US
Child 08/400993 US
Continuation in Parts (1)
Number Date Country
Parent 08/400993 Mar 1994 US
Child 07/866829 US