Image compression and decompression using predictive coding and error diffusion

Information

  • Patent Grant
  • 5859931
  • Patent Number
    5,859,931
  • Date Filed
    Tuesday, May 28, 1996
  • Date Issued
    Tuesday, January 12, 1999
Abstract
An encoding/compression technique using a combination of predictive coding and run length encoding allows for efficient compression of images produced by error diffusion.
Description

BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to a lossless compression technique and an apparatus in which an image produced by error diffusion is predictively run length encoded and decoded.
2. Discussion of Related Art
Data compression systems have been used to reduce the costs associated with storing and communicating image data. However, conventional compression methods yield very little compression when the input image is halftoned by error diffusion. Error diffusion is an important technique for digital halftoning and usually generates images of superior quality, but the irregular, high frequency noise patterns it introduces make the resulting images very difficult to compress. For example, conventional run length based techniques, such as the CCITT Group 3 and Group 4 formats used in facsimile operations, are poorly suited to compressing error diffused images because they cannot efficiently handle the short run lengths that dominate such images.
SUMMARY OF THE INVENTION
It is therefore an object of this invention to compress an error diffused image without losing image information.
It is another object of this invention to use a predictive coding technique for the lossless compression of error diffused images.
It is still another object of this invention to predictively code an error diffused image using run length encoding to realize lossless image compression.
It is still a further object of this invention to predictively code and decode an original error diffused image using run length encoding/decoding.
To achieve these and other objects, the inventive method and apparatus predictively code error diffused images into run length encoded error signals which can be transmitted or otherwise sent to a complementary receiver/decoder.

BRIEF DESCRIPTION OF THE DRAWINGS
These and other aspects of the invention will become apparent from the following descriptions which illustrate a preferred embodiment of the invention when read in conjunction with the accompanying drawings in which:
FIG. 1 shows a block diagram of an illustrative apparatus using an inventive compression method of an embodiment of the invention;
FIGS. 2(A) and (B) are flow charts relating to an illustrative compression method of an embodiment of the invention;
FIG. 3 shows a block diagram of an illustrative apparatus using an inventive decompression method of an embodiment of the invention; and
FIGS. 4(A), (B) and (C) are flow charts relating to an illustrative decompression method of an embodiment of the present invention.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
Unless otherwise indicated, the term "signal" is used interchangeably herein to mean both an individual signal and a multicomponent signal having numerous individual signals.
A basic system for carrying out the compression method of the present invention is shown in FIG. 1. As shown, an image produced by error diffusion is received as the input of a compressor 2 at point 1. The image comprises a plurality of signals b(m,n) (hereafter "b"), typically of 1 or 2 bits per pixel. The compressor 2 comprises prediction circuitry 3 such as a predictor, comparison circuitry or subtraction circuitry 4, and a run length encoding means or encoder 5. The compressor 2 also includes, as a part of the subtraction circuitry 4, circuitry for generating a plurality of prediction errors 6 based on the results derived by the subtraction circuitry 4. The signal b is input into the predictor 3 and also into the subtraction circuitry 4. The predictor 3 generates a predicted signal, b*(m,n) (hereafter "b*"), based on stored values of previous quantization errors, e*(m-i,n-j), and an average of previous signals b(m-i,n-j). The signal b and the predicted signal b* are input into the subtraction circuitry 4. This circuitry generates a plurality of prediction errors, E(m,n) (hereinafter "E"), by comparing the predicted image to the error diffused image; the prediction errors are determined by subtracting the predicted image from the error diffused image.
Prediction errors are output at point 7 and input into the run length encoder 5. In this manner, only the differences or errors between the error diffused signal or image and the predicted image are transmitted to point 8.
FIGS. 2(A) and (B) are flow charts showing the operation of the compressor 2 shown in FIG. 1. Initially, an error diffused signal, b, is input into the compressor 2 at step S50. At step S100, a predicted, modified signal i*_mod(m,n) (hereinafter "i*_mod") is calculated in the predictor 3 according to the following equation:
i*_mod(m,n) = i*(m,n) + Σ a(i,j) e*(m-i,n-j)   (1)
where the * denotes an estimation or prediction and a(i,j) denotes weights used in error diffusion.
The two terms on the right side of equation (1) represent, respectively, an average of previous error diffused signals over a set of past pixels and a weighted summation of previous quantization errors. Both values are stored in a memory or storage (not shown in FIG. 1) which may be a part of, or separate from, the compressor 2.
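To make the computation of equation (1) concrete, the following is a minimal sketch in Python. The causal averaging neighborhood, the Floyd-Steinberg-style weights for a(i,j), the mid-grey start-up value, the function name, and the use of NumPy arrays for b and e* are all illustrative assumptions; the patent itself only requires an average of previous signals and the weights used in error diffusion.

```python
import numpy as np

# a(i,j): weight applied to the stored quantization error e*(m-i, n-j).
# Floyd-Steinberg-style values are used here purely as an example.
A_WEIGHTS = {(0, 1): 7 / 16, (1, -1): 3 / 16, (1, 0): 5 / 16, (1, 1): 1 / 16}

# Causal offsets (i, j) used for the average i*(m,n) of previously coded pixels.
AVG_OFFSETS = [(0, 1), (1, -1), (1, 0), (1, 1)]


def predict_i_mod(m, n, b, e_star):
    """Equation (1): i*_mod(m,n) = i*(m,n) + sum over (i,j) of a(i,j) * e*(m-i, n-j)."""
    rows, cols = b.shape

    # i*(m,n): average of previously processed halftone pixels b(m-i, n-j).
    prev = [b[m - i, n - j] for i, j in AVG_OFFSETS
            if 0 <= m - i < rows and 0 <= n - j < cols]
    i_star = float(np.mean(prev)) if prev else 0.5  # mid-grey start-up guess

    # Weighted sum of stored predicted quantization errors e*(m-i, n-j).
    diffused = sum(w * e_star[m - i, n - j]
                   for (i, j), w in A_WEIGHTS.items()
                   if 0 <= m - i < rows and 0 <= n - j < cols)
    return i_star + diffused
```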
The predicted, modified signal (or predicted continuous tone signal) i*_mod(m,n) is a continuous tone signal because it is based upon i*(m,n), an average of previous error diffused halftone digital signals, which is itself a continuous tone signal. Even though each individual previous error diffused signal is digital, the average of these signals will generally produce a value that falls between the quantization thresholds of the error diffused signal and is, therefore, a continuous tone value.
The predicted continuous tone signal i*_mod is also a continuous tone signal because it is additionally based upon a weighted sum of previous quantization errors, which are in turn derived from previous predicted continuous tone signals, as will be explained below in connection with equation (5).
Once the predicted, modified signal i*_mod is determined, the next step is to predict b* from a quantized value of i*_mod at step S200. Mathematically, the predicted signal b* is calculated as follows:
b*(m,n) = Q[i*_mod(m,n)]   (2)
where Q is the quantization operation.
In this manner the error diffused signal is predicted as b*. Because b* is a prediction of an error diffused halftone digital signal it is known as the predicted error diffused halftone digital signal.
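A minimal sketch of the quantization step of equation (2), assuming a binary (1 bit per pixel) halftone and the 0.5 threshold used in the example given later in this description; a 2 bit per pixel halftone would simply use additional thresholds.

```python
def quantize(i_mod, threshold=0.5):
    """Equation (2): b* = Q[i*_mod], mapping the predicted continuous tone
    value to a halftone level (here, a binary one)."""
    return 1 if i_mod >= threshold else 0


# The prediction error of equation (3) is then simply E = b - quantize(i_mod).
```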
Once b* is known, the prediction error E can be calculated using the equation:
E(m,n) = b(m,n) - b*(m,n).   (3)
The prediction error is calculated in the comparison circuitry 4 by subtracting the predicted image from the error diffused image. Such a comparison results in a plurality of prediction errors being generated by comparison circuitry 4 at step S300. If during such comparisons a prediction error E is determined to be zero, then the predictor 3 has predicted a correct signal. In such a case there is no need to adjust i*_mod.
On the other hand, if during such comparisons E ≠ 0, then i*_mod must be adjusted at step S510 in one of two ways:
i*_mod = minimum value in a quantization interval of b(m,n), if E(m,n) > 0;   4(a)
or
i*_mod = maximum value in a quantization interval of b(m,n), if E(m,n) < 0.   4(b)
For example, if 0.5 is used as the threshold for quantization, then a "minimum" value corresponding to equation 4(a) would be a value just greater than 0.5, e.g., 0.51 (b=1 in this case, and the quantization interval is any value greater than 0.5). Likewise, a "maximum" value corresponding to equation 4(b) would be a value just less than 0.5, e.g., 0.49 (b=0 in this case, and the quantization interval is any value less than 0.5).
The actual adjustment is carried out by adjustment circuitry which may be a part of the predictor 3.
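A minimal sketch of the adjustment of equations 4(a) and 4(b), again assuming a binary halftone with a 0.5 threshold; the values 0.51 and 0.49 follow the illustrative example above, and the small offset eps (as well as the function name) is an assumption rather than something specified by the patent.

```python
def adjust_i_mod(i_mod, E, threshold=0.5, eps=0.01):
    """Equations 4(a)/4(b): move i*_mod into the quantization interval of b(m,n)."""
    if E > 0:
        # b = 1 but b* = 0: use a minimum value of b's interval, e.g. 0.51.
        return threshold + eps
    if E < 0:
        # b = 0 but b* = 1: use a maximum value of b's interval, e.g. 0.49.
        return threshold - eps
    return i_mod  # E = 0: the prediction was correct, so no adjustment is needed
```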
In either equation 4(a) or 4(b), a new predicted quantization error, e*(m,n) (hereafter "e*"), must eventually be calculated and stored in the predictor 3 at step S600. This quantization error is calculated using the following formula:
e*(m,n) = i*_mod(m,n) - b(m,n).   (5)
As can be seen from equation (5), a present predicted quantization error, e*, is derived from the predicted, modified signal, i*_mod, and the input signal b. This present quantization error is then stored in memory in order to calculate a new predicted, modified signal when the next error diffused signal is input into the predictor 3.
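A minimal sketch of equation (5); the value returned here is what would be stored as e*(m,n) and fed back into equation (1) for later pixels. The function name is hypothetical.

```python
def quantization_error(i_mod, b_pixel):
    """Equation (5): e*(m,n) = i*_mod(m,n) - b(m,n), computed after any
    adjustment of i*_mod under equations 4(a)/4(b)."""
    return i_mod - b_pixel
```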
After the prediction errors are generated they are output to point 7 and eventually input into a run length encoder 5.
Typically, error diffused signals have short "run lengths." A run length is defined as a group of contiguous, identically coded pixels, e.g., 10 white pixels represented by binary 1s in a row. The use of the predictor 3 effectively increases the run lengths to be encoded, since correctly predicted pixels produce long runs of "no error" values.
The use of the run length encoder 5 allows these "lengthened" runs to be transmitted or otherwise output using a single code that identifies, in this instance, 10 "non-errors" (as opposed to 10 white pixels) in a row, for instance "10x" (where x = error), instead of transmitting each of the 10 "non-errors" individually, i.e., 1x, 1x, 1x, etc. Thus, in the event that no errors are generated, a continuous stream of "no errors" is input into the run length encoder 5. This continuous stream can be encoded by the run length encoder 5 and output to point 8 as a single code indicating "no errors."
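The particular run length code is left open by the description above; the sketch below assumes one simple possibility, in which each run of zero prediction errors terminated by a nonzero error E is emitted as a single (run length, error) pair, and an all-correct tail is emitted as one "no errors" code.

```python
def rle_encode_errors(errors):
    """Encode a 1-D stream of prediction errors E as (zero_run, error) pairs.

    A trailing run with no terminating error is emitted as (run, None).
    """
    codes, run = [], 0
    for e in errors:
        if e == 0:
            run += 1
        else:
            codes.append((run, e))   # e.g. 10 "non-errors" then an error -> (10, e)
            run = 0
    if run:
        codes.append((run, None))    # all-correct tail: one "no errors" code
    return codes


def rle_decode_errors(codes):
    """Inverse of rle_encode_errors: expand the pairs back into the E stream."""
    errors = []
    for run, e in codes:
        errors.extend([0] * run)
        if e is not None:
            errors.append(e)
    return errors


# Example: a perfectly predicted stretch of 10 pixels followed by one error.
assert rle_decode_errors(rle_encode_errors([0] * 10 + [1])) == [0] * 10 + [1]
```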
FIG. 3 depicts a block diagram of an apparatus according to one embodiment of the invention which receives, decodes and decompresses the run length encoded image transmitted or otherwise sent from the apparatus shown in FIG. 1.
As shown in FIG. 3, a decompressor 9 comprises run length decoding means or decoder 11, predictive coding circuitry or receiver predictor 12 and addition means or circuitry 13.
The run length decoder 11 decodes run length encoded prediction errors input from point 10 and outputs a plurality of prediction errors at point 14.
Prediction circuitry 12 inputs, at point 15, past decoded signals output from the addition circuitry 13 and outputs a plurality of predicted signals at point 16. These predicted signals are calculated within the prediction circuitry 12 by quantizing a predicted, modified signal, i*_mod. This signal, i.e., i*_mod, is in turn calculated, by circuitry that is preferably a part of the prediction circuitry 12, from an average of past decoded signals over a "neighborhood" and from previous receiver quantization errors, which are stored in a receiver memory or storage (not shown in FIG. 3), in the same manner as in equations (1) and (2). The receiver memory or storage may be a part of, or separate from, the decompressor 9.
The predicted signals from point 16 and prediction errors from point 14 are thereafter input into the addition circuitry 13 which adds the two signals together. In this manner, each predicted signal from the prediction circuitry 12 is added to each decoded prediction error from the run length decoder 11. As in the compressor 2, if at any point no prediction error is present, then the predicted signal is output as the decoded signal from the addition circuitry 13 to form an image at point 17. If an error exists, the corresponding predicted, modified signal is adjusted in the same manner as in equation (4).
A decoded image can be output to a printer 18 or other reproducing apparatus. Each decoded signal is generated from each addition of a predicted signal and a decoded prediction error.
In order to decode and correctly predict the next image or signal a present receiver quantization error is calculated within the decompressor 9. This error is then stored in the receiver memory.
Similarly, FIGS. 4(A), (B) and (C) are flow charts depicting the decompressing operation of the illustrative apparatus shown in FIG. 3. The operative steps of the decompressor (S1050 to S1600) are analogous to the operation of the compressor with the exception that in step S1200 a decoded output b'(m,n) is first calculated from a decoded error as follows:
b'(m,n) = E(m,n) + b*(m,n).
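Putting the pieces together, the following is a minimal sketch of the decompressor of FIG. 3 under the same illustrative assumptions as the compressor sketches above (binary halftone, 0.5 threshold, Floyd-Steinberg-style weights, small causal averaging neighborhood); the 2-D array E is assumed to have already been recovered by the run length decoder 11.

```python
import numpy as np

A_WEIGHTS = {(0, 1): 7 / 16, (1, -1): 3 / 16, (1, 0): 5 / 16, (1, 1): 1 / 16}
AVG_OFFSETS = [(0, 1), (1, -1), (1, 0), (1, 1)]


def decompress(E, threshold=0.5, eps=0.01):
    """Rebuild the halftone: b'(m,n) = E(m,n) + b*(m,n), mirroring the compressor."""
    rows, cols = E.shape
    b_out = np.zeros((rows, cols), dtype=int)  # decoded halftone image b'
    e_star = np.zeros((rows, cols))            # receiver quantization errors e*

    for m in range(rows):
        for n in range(cols):
            # Predicted continuous tone signal i*_mod, as in equation (1),
            # driven here by past decoded pixels rather than the original image.
            prev = [b_out[m - i, n - j] for i, j in AVG_OFFSETS
                    if 0 <= m - i < rows and 0 <= n - j < cols]
            i_mod = (float(np.mean(prev)) if prev else 0.5) + sum(
                w * e_star[m - i, n - j] for (i, j), w in A_WEIGHTS.items()
                if 0 <= m - i < rows and 0 <= n - j < cols)

            b_pred = 1 if i_mod >= threshold else 0          # equation (2)
            b_out[m, n] = E[m, n] + b_pred                   # decoded output b'

            if E[m, n] > 0:                                  # equation 4(a)
                i_mod = threshold + eps
            elif E[m, n] < 0:                                # equation 4(b)
                i_mod = threshold - eps

            e_star[m, n] = i_mod - b_out[m, n]               # equation (5)
    return b_out
```

Because the decoded pixels equal the original pixels whenever the encoder and decoder use the same predictor, the receiver's predictor state tracks the transmitter's exactly, which is what allows lossless reconstruction.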
The invention has been described with reference to a particular embodiment. Modifications and alterations will occur to others upon reading and understanding this specification. It is intended that all such modifications and alterations are included insofar as they come within the scope of the appended claims or equivalents thereof.
Claims
  • 1. A method for lossless efficient run-length encoding of error diffused halftone images, comprising:
  • inputting an error diffused halftone digital signal;
  • generating a predicted continuous tone signal from previous error diffused halftone digital signals;
  • generating a predicted error diffused halftone digital signal from the predicted continuous tone signal;
  • generating a prediction error signal from the error diffused halftone digital signal and the predicted error diffused halftone digital signal; and
  • run length encoding the prediction error signal.
  • 2. The method of claim 1, wherein the step of generating a predicted continuous tone signal comprises averaging a plurality of portions of previous error diffused halftone digital signals.
  • 3. The method of claim 2, wherein the step of generating a predicted continuous tone signal comprises:
  • averaging previous error diffused signals;
  • determining a weighted sum of previous prediction error signals; and
  • summing the average of previous error diffused signals and the weighted sum of previous prediction error signals.
  • 4. The method of claim 3, further comprising adjusting the predicted continuous tone signal based on the prediction error signal.
  • 5. The method of claim 4, wherein the adjusting step comprises:
  • replacing the predicted continuous tone signal with a minimum value of a corresponding portion of the error diffused halftone digital signal when the prediction error signal is greater than zero;
  • replacing the predicted continuous tone signal with a maximum value of the corresponding portion of the error diffused halftone digital signal if the prediction error signal is less than zero; and
  • foregoing adjusting the predicted continuous tone signal when the prediction error signal is equal to zero.
  • 6. The method of claim 4, further comprising storing the averaged error diffused halftone digital signal and the prediction error signal.
  • 7. A method for lossless decompression of a run-length encoded signal, comprising:
  • inputting a run-length encoded prediction error signal;
  • run length decoding the run-length encoded prediction error signal to form a prediction error signal;
  • generating a decoded digital signal from the prediction error signal and a predicted digital signal;
  • generating a predicted continuous tone signal from previous decoded digital signals; and
  • generating the predicted digital signal from the predicted continuous tone signal.
  • 8. The method of claim 7, wherein the step of generating the predicted continuous tone signal comprises averaging a plurality of portions of previous decoded digital signals.
  • 9. The method of claim 7, wherein the step of generating the predicted continuous tone signal comprises:
  • averaging previous decoded digital signals;
  • determining a weighted sum of previous prediction error signals; and
  • summing the average of previous decoded digital signals and the weighted sum of previous prediction error signals.
  • 10. The method of claim 9, further comprising adjusting the predicted continuous tone signal based on the prediction error signal.
  • 11. The method of claim 10, wherein the adjusting step comprises:
  • replacing the predicted continuous tone signal with a minimum value of a corresponding portion of the decoded digital signal when the prediction error signal is greater than zero;
  • replacing the predicted continuous tone signal with a maximum value of a corresponding portion of the decoded digital signal when the prediction error signal is less than zero; and
  • foregoing adjusting the predicted continuous tone signal when the prediction error signal is equal to zero.
  • 12. The method of claim 10, further comprising storing the averaged decoded digital signal and the predicted error signal.
  • 13. An apparatus for lossless compression of an error diffused halftone digital signal, comprising:
  • a prediction circuit which inputs the error diffused halftone digital signal and outputs a predicted digital signal, comprising:
  • an averaging circuit which averages a plurality of portions of previous error diffused halftone digital signals and outputs a predicted continuous tone signal, and
  • a predicted digital signal generator circuit which inputs the predicted continuous tone image signal and outputs the predicted signal;
  • a prediction error circuit which inputs the error diffused halftone digital signal and the predicted digital signal and outputs a prediction error signal; and
  • a run length encoder which inputs the prediction error signal and outputs a run length encoded prediction error signal.
Parent Case Info

This is a continuation of application Ser. No. 08/332,176 filed Oct. 31, 1994, now abandoned.

US Referenced Citations (42)
Number Name Date Kind
4149194 Holladay Apr 1979
4256401 Fujimura et al. Mar 1981
4339774 Temple Jul 1982
4559563 Joiner, Jr. Dec 1985
4563671 Lim et al. Jan 1986
4625222 Bassetti et al. Nov 1986
4633327 Roetling Dec 1986
4654721 Goertzel et al. Mar 1987
4668995 Chen et al. May 1987
4672463 Tomohisa et al. Jun 1987
4693593 Gerger Sep 1987
4700229 Herrmann et al. Oct 1987
4706260 Fedele et al. Nov 1987
4709250 Takeuchi Nov 1987
4724461 Rushing Feb 1988
4760460 Shimotohno Jul 1988
4809350 Shimoni et al. Feb 1989
4924322 Kurosawa et al. May 1990
4955065 Ulichney Sep 1990
5043809 Shikakura et al. Aug 1991
5045952 Eschbach Sep 1991
5055942 Levien Oct 1991
5086487 Katayama et al. Feb 1992
5095374 Klein et al. Mar 1992
5226094 Eschbach Jul 1993
5226096 Fan Jul 1993
5243443 Eschbach Sep 1993
5278670 Eschbach Jan 1994
5282256 Ohzawa et al. Jan 1994
5289294 Fujisawa Feb 1994
5309254 Kuwabara et al. May 1994
5309526 Pappas et al. May 1994
5317411 Yoshida May 1994
5321525 Hains Jun 1994
5323247 Parker et al. Jun 1994
5325211 Eschbach Jun 1994
5329380 Ishida Jul 1994
5351133 Blonstein Sep 1994
5359430 Zhang Oct 1994
5448656 Tanaka Sep 1995
5463703 Lin Oct 1995
5535311 Zimmerman Jul 1996
Foreign Referenced Citations (1)
Number Date Country
0 544 511 A3 Jun 1993 EPX
Continuations (1)
Number Date Country
Parent 332176 Oct 1994