Method and apparatus for providing image and video coding with iterative post-processing using a variable image model parameter

Information

  • Patent Grant
  • Patent Number
    6,353,680
  • Date Filed
    Monday, October 4, 1999
  • Date Issued
    Tuesday, March 5, 2002
Abstract
The present invention is an apparatus that comprises a memory including one or more instructions and a processor coupled to the memory. In response to the one or more instructions, the processor transforms an image to provide a transformed image and quantizes the transformed image to provide a quantized image. In response to the quantized image, the processor iteratively generates one or more step sizes using a variable image model parameter and transmits the one or more step sizes. In one embodiment, the processor transmits the one or more step sizes along with the variable image model parameter.
Description




BACKGROUND OF THE INVENTION




1. Field of the Invention




The present invention relates to the field of image and video compression. More particularly, the present invention relates to a method and apparatus for improving the quality of compressed image and video signals.




2. Background Information




With the continuing growth of digital image and video technology in areas such as video telephony, where bandwidth is a scarce commodity, providing image compression while maintaining image quality is a compelling need. A digital image on a standard 640×480 pixel display screen requires an enormous amount of data. For example, assuming one byte per pixel for a gray scale digital image, the 640×480 pixel digital image occupies about 307,200 bytes of storage. A color digital image requires three bytes per pixel, or about 921,600 bytes of storage. Today, display screens capable of displaying over 2000×2000 pixel images are readily available and require about twelve million bytes of data for a single color image. Motion video is even more demanding. The amount of data required to generate such images makes the storage, processing, and transmission of the data difficult. As a result, image compression, which reduces the amount of data required to represent a digital image, has evolved as an integral part of the storage and transmission of digital images.
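The storage figures quoted above are simple width × height × bytes-per-pixel products; a minimal sketch (an illustrative helper, not from the patent):

```python
def image_bytes(width, height, bytes_per_pixel):
    """Uncompressed storage required for a raster image."""
    return width * height * bytes_per_pixel

# 640x480 gray scale at one byte per pixel
gray = image_bytes(640, 480, 1)     # 307,200 bytes
# 640x480 color at three bytes per pixel
color = image_bytes(640, 480, 3)    # 921,600 bytes
# 2000x2000 color image
large = image_bytes(2000, 2000, 3)  # 12,000,000 bytes
```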




In particular, source coding of image data has been a very active area of research for many years. The goal is to reduce the number of bits needed to represent an image while making as few perceptible changes to the image as possible. Typically, image and video compression algorithms employ a quantization stage. The effect of the quantization stage is to add quantization noise to the reconstructed image or video. Many algorithms have been developed which can successfully compress a gray scale image to approximately 0.8 bits per pixel (“bpp”) with almost no perceptible effects. A problem arises, however, when these compression techniques are pushed beyond this rate. In implementing higher compression ratios (<0.4 bpp for gray scale), typical algorithms generate artifacts which severely degrade the perceived (visual) quality of the image. The type of artifacts generated is dependent on the compression technique and on the particular image.




Recently, iterative techniques have been described for reducing the quantization noise effects associated with image and video encoding schemes that perform quantization. One approach is described by O'ROURKE & STEVENSON in "Improved Image Decompression for Reduced Transform Coding Artifacts," IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, vol. 5, Dec. 1995, pp. 490-99. This technique requires modification of an image decoder. Namely, the image decoder first obtains an initial estimate by decompressing the image. The initial estimate is applied to an iterative post-processing filter which improves the initial estimate by successive iterations until the difference between successive estimates falls below a predetermined threshold or a maximum number of iterations is reached. Within each iteration of the post-processing filter, a step size is calculated in order to converge on an estimate. Although the iterative technique described in O'ROURKE & STEVENSON greatly reduces the noticeable artifacts which exist using standard decompression techniques, it has certain disadvantages associated with it.




First, the step size calculation apparatus for implementing such a technique is expensive and complex. Second, the entire image must be applied to the iterative post-processing filter because the step size for a particular iteration depends on the intermediate filtered image from the previous iteration. As such, the decoder cannot iteratively post-process a subsection of the image. Third, since the entire image is applied to the iterative post-processing filter, efficient exploitation of data cache locality in a computer-based implementation will not be achieved.




The parent patent application of the present invention (Ser. No. 08/885,123, filed on Jun. 30, 1997) discloses a technique which improves the approach described in O'ROURKE & STEVENSON. Specifically, the invention disclosed in the parent application removes the step size calculation apparatus from the decoder, thereby reducing its cost and complexity. In addition, the invention removes the dependencies between the step size and the intermediate filtered image. As a result, the image decoder can post-process a sub-section of the image with several consecutive iterations, which provides for more efficient exploitation of data cache locality in a computer implementation. Furthermore, the invention allows one to perform multiple iterations on a sub-section of the image without moving the sub-section of the image from the data cache, unlike the prior art where sub-sections of the image are moved in and out of the data cache during each iteration thereby slowing down the decoder.




The invention disclosed in the parent application can be further improved by using a variable image model parameter, which will decrease the smoothness of the image used in successive iterations thereby preventing some of the quality degradations due to over-smoothing. In addition, the number of iterations can be reduced, which will speed up the decoding process.




SUMMARY OF THE INVENTION




The present invention is an apparatus that comprises a memory including one or more instructions and a processor coupled to the memory. In response to the one or more instructions, the processor transforms an image to provide a transformed image and quantizes the transformed image to provide a quantized image. In response to the quantized image, the processor iteratively generates one or more step sizes using a variable image model parameter and transmits the one or more step sizes. In one embodiment, the processor transmits the one or more step sizes along with the variable image model parameter.











BRIEF DESCRIPTION OF THE DRAWINGS




The features and advantages of the present invention will become apparent from the following detailed description of the present invention in which:





FIG. 1 illustrates an exemplary embodiment of a computer system suitable for use with the present invention.

FIG. 2A illustrates an exemplary embodiment of an image encoder that is implemented in the present invention.

FIG. 2B is a flow diagram illustrating a process for implementing the image encoder of FIG. 2A using a computer system.

FIG. 2C illustrates another exemplary embodiment of an image encoder that is implemented in the present invention.

FIG. 2D is a flow diagram illustrating a process for implementing the image encoder of FIG. 2C using a computer system.

FIG. 3A is a block diagram illustrating one embodiment of the encoder filter used to calculate a sequence of step sizes.

FIG. 3B is a flow diagram illustrating an exemplary iterative approach used in calculating a sequence of step sizes using the computer system of FIG. 1.

FIG. 4A illustrates an exemplary embodiment of an image decoder that is implemented in the present invention.

FIG. 4B is a flow diagram illustrating a process for implementing the image decoder of FIG. 4A using a computer system.

FIG. 4C illustrates another exemplary embodiment of an image decoder that is implemented in the present invention.

FIG. 4D is a flow diagram illustrating a process for implementing the image decoder of FIG. 4C using a computer system.

FIG. 5A is a flow diagram illustrating the process steps performed by a decoder filter in the image decoder of FIG. 4A.

FIG. 5B is a flow diagram illustrating the process steps performed by a decoder filter in the image decoder of FIG. 4C.

FIG. 6 illustrates another exemplary embodiment of an image decoder that is implemented in the present invention.

FIG. 7 is a block diagram illustrating an exemplary video encoder utilizing the present invention.

FIG. 8 is a block diagram illustrating an exemplary video decoder utilizing the present invention.











DESCRIPTION OF THE PREFERRED EMBODIMENT




The present invention relates to a method and apparatus for improving the quality of compressed image and video signals while significantly reducing the cost of post-processing in the image or video decoder. Iterative post-processing dramatically improves the visual quality of compressed image and video signals as viewed after decompression. In particular, the present invention is directed to the encoding of image signals and the decoding of such encoded image signals.




In one embodiment of the present invention, in the image encoder, an image is encoded to generate an encoded image and the encoded image is decoded to generate a decoded image in order to generate a sequence of step sizes. An iterative post-processing filter, which uses a variable image model parameter, is applied to the decoded image to generate a filtered image and a sequence of step sizes. The encoded image, a set of values for the variable image model parameter and the sequence of step sizes are transmitted to the image decoder.




The image decoder receives the encoded image, the set of values for the variable image model parameter and the sequence of step sizes. The image decoder decodes the encoded image to generate a decoded image. A decoder filter (also known as a modified iterative post-processing filter) is applied to the decoded image to generate a filtered image using the variable image model parameter and the received sequence of step sizes. A similar process is performed in a video encoder and a video decoder. In the description that follows, a sequence of step sizes is defined as one or more step sizes.




Referring to FIG. 1, an illustrative embodiment of a computer system suitable for use with the present invention is shown. The computer system 100 comprises a plurality of processors 105_1-105_P ("P" being a positive whole number) and a main memory element 130 (e.g., dynamic random access memory "DRAM", static random access memory "SRAM", etc.) coupled together by a chipset 120. However, it is contemplated that the computer system 100 may include only one processor. The chipset 120 (hereinafter referred to as a "bridge") operates as an interface between a host bus 110 and a peripheral bus 135.




Processors 105_1-105_P may be any type of microprocessor such as the Pentium® or Pentium® Pro microprocessor manufactured by Intel Corporation of Santa Clara, Calif. Bridge 120 typically includes a cache DRAM controller ("CDC"), a Peripheral Component Interconnect ("PCI") bus controller, and a data path unit ("DPU").




The host bus 110 is a bus that can support transactions to a number of connected processors. Host bus 110 may be referred to as a parallel bus or multiprocessor bus because it supports parallel operations and multiple processors. It is contemplated that host bus 110 operates in a pipelined manner to increase efficiency. However, these features should not be construed to limit the teachings of the present invention. The present invention can be utilized if there is only one processor connected to the host bus 110.




The peripheral bus 135 provides a communication path between the processors 105_1-105_P or main memory element 130 and a plurality of peripheral devices 150_1-150_N ("N" being a positive whole number). In particular, the peripheral devices may include a secondary storage controller (e.g., a hard disk controller) coupled to a hard disk and a video controller coupled to a monitor (not shown). The peripheral bus 135 may include a PCI bus or any other type of bus architecture.




The expansion bus 155 provides a communication path between the peripheral bus 135 and a plurality of expansion peripheral devices 160_1-160_M ("M" being a positive whole number). The expansion bus 155 may include an Industry Standard Architecture ("ISA") bus or an Extended Industry Standard Architecture ("EISA") bus. However, it is apparent to one skilled in the art that the architecture of the computer system 100 of FIG. 1 is not critical in practicing the present invention.




1. Still Image Implementation





FIG. 2A illustrates an exemplary embodiment of an image encoder 200 that is implemented in the present invention. Referring to FIG. 2A, the image encoder 200 applies an original image x̄ to a forward transformer 210 which produces the transform coefficients Hx̄. By way of example, if the forward transformer 210 performs a block transform, then the forward transformer 210 breaks up the original image x̄ into a plurality of subimages as part of the operation of the transform. In one embodiment, the forward transformer 210 performs a slant transform. However, it is apparent to one of ordinary skill in the art that many other transformation techniques, such as a Haar transformation technique or a Discrete Cosine Transform ("DCT") technique, may be used in lieu thereof. The transformed image is applied to a quantizer 220. Quantization partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The output of the quantizer 220 is represented by γ, which can be written as

γ = Q[Hx̄]  (1)
The quantized image γ is applied to a symbol encoder 230 and an encoder filter 240. The symbol encoder 230 further compresses the image. Typically, the symbol encoder 230 provides Huffman and/or run-length coding. The output of the symbol encoder 230 is the compressed image, represented by Y. The encoder filter 240 decodes the image and applies an iterative post-processing filter to the decoded image to generate a sequence of step sizes α_0, . . . , α_k using a variable image model parameter. The encoder filter 240 then applies the sequence of step sizes α_0, . . . , α_k to a bit stream formatter 250. Similarly, the symbol encoder 230 applies the compressed image, Y, directly to the bit stream formatter 250. The bit stream formatter 250 formats the data according to a predetermined protocol for storage and/or transmission to an image decoder.
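The cell-to-reconstruction-point mapping performed by the quantizer can be sketched with a uniform scalar quantizer. This is a simplification for illustration; the patent does not fix a particular quantizer, and the step value of 10 here is an arbitrary assumption:

```python
def quantize(coeff, step=10.0):
    """Map a transform coefficient to the index of its partition cell."""
    return round(coeff / step)

def dequantize(index, step=10.0):
    """Map a cell index back to the cell's representative reconstruction point."""
    return index * step

def cell_bounds(index, step=10.0):
    """Quantization boundaries (l(i), h(i)) of cell i."""
    return ((index - 0.5) * step, (index + 0.5) * step)

coeffs = [3.2, -17.8, 42.1]
indices = [quantize(c) for c in coeffs]    # [0, -2, 4]
recon = [dequantize(i) for i in indices]   # [0.0, -20.0, 40.0]
```

The `cell_bounds` helper foreshadows the quantization boundaries (l(i), h(i)) that later define the constraint space used by the estimate projector.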





FIG. 2B is a flow diagram illustrating a process S260 for implementing the image encoder 200 of FIG. 2A using a computer system such as, for example, the computer system 100 of FIG. 1. In one embodiment, the process S260 is implemented using one or more instructions located in memory (e.g., memory 130 of FIG. 1) and executed by a processor (e.g., processor 105_1). Referring to FIG. 2B, the process S260 commences with block S265 where a forward transform is performed, and at block S270, the transformed image is quantized. At block S275, one or more step sizes are generated iteratively. At block S280, the one or more step sizes are transmitted with the compressed image for later processing by an image decoder.





FIG. 2C illustrates another exemplary embodiment of an image encoder 281 that is implemented in the present invention. Referring to FIG. 2C, the image encoder 281 applies an original image x̄ to a forward transformer 283 which operates similarly to the forward transformer 210 of FIG. 2A. The transformed image is applied to a quantizer 285. The quantizer 285 is identical to the quantizer 220 of FIG. 2A. The quantized image γ is applied to a symbol encoder 287. The symbol encoder 287, which is identical to the symbol encoder 230 of FIG. 2A, further compresses the image. The symbol encoder 287 applies the compressed image, Y, directly to a bit stream formatter 289. The bit stream formatter 289 formats the data according to a predetermined protocol for storage and/or transmission to an image decoder.




In one embodiment, the bit stream formatter 289 also formats a sequence of image model parameters T which are then transmitted to an image decoder along with the compressed image. The image model parameters are used by the image decoder to control the amount of smoothness of the image model. However, it should be apparent to one of ordinary skill in the art that the image model parameters do not need to be transmitted to the image decoder if they reside in the image decoder permanently.





FIG. 2D is a flow diagram illustrating a process S290 for implementing the image encoder 281 of FIG. 2C using a computer system such as, for example, the computer system 100 of FIG. 1. In one embodiment, the process S290 is implemented using one or more instructions located in memory (e.g., memory 130 of FIG. 1) and executed by a processor (e.g., processor 105_1). Referring to FIG. 2D, the process S290 commences with block S292 where a forward transform is performed, and at block S294, the transformed image is quantized. At block S296, the image is compressed. At block S298, the sequence of image model parameters is transmitted with the compressed image for later processing by an image decoder. However, the image model parameters do not need to be transmitted to an image decoder if they reside in the image decoder permanently.




The decompression techniques implemented in the present invention will now be described in detail. To decompress the compressed image representation, a maximum a posteriori (“MAP”) technique is used. The decompressed full resolution image is represented by z. As described in O'ROURKE & STEVENSON, for MAP estimation, the decompressed image estimate {circumflex over (z)} is given by











ẑ = arg min_{z∈Z} { −log Pr(z) },  (2)

where Z is the set of images which compress to γ and Pr(z) is the a priori probability of a particular image z. Pr(z) depends on the image model. For a Huber Markov Random Field ("HMRF") image model, an objective function can be expressed as










F(z) = Σ_{m=1}^{M} Σ_{n=1}^{N} Σ_{k,l∈N_{m,n}} ρ_T(z_{m,n} − z_{k,l}).  (3)

N_{m,n} represents the neighborhood of pixels around the pixel located at (m,n). In a preferred embodiment, the neighborhood includes the four nearest neighbors of the pixel located at (m,n). However, in another embodiment, the neighborhood includes the eight nearest neighbors of the pixel located at (m,n). The image height is M while the image width is N. The function ρ_T( ) is given by











ρ_T(u) = u²,  |u| ≤ T,
ρ_T(u) = T² + 2T(|u| − T),  |u| > T.  (4)

The HMRF image model assumes that the image should be smooth but uses the function ρ_T( ) to allow some discontinuities. The parameter T is a variable image model parameter that controls the amount of allowable discontinuity. A larger value of T results in a smoother image estimate. A smaller value of T requires more iterations for obtaining an improved estimate of the original image.
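The Huber minimax function of equation (4), together with the first and second derivatives used later in the gradient and step-size calculations, can be written directly; a minimal sketch:

```python
def rho(u, T):
    """Huber minimax function, equation (4)."""
    if abs(u) <= T:
        return u * u
    return T * T + 2.0 * T * (abs(u) - T)

def rho_prime(u, T):
    """First derivative: 2u inside the threshold, +-2T outside."""
    if abs(u) <= T:
        return 2.0 * u
    return 2.0 * T if u > 0 else -2.0 * T

def rho_second(u, T):
    """Second derivative: 2 inside the threshold, 0 outside."""
    return 2.0 if abs(u) <= T else 0.0
```

The quadratic region smooths small pixel differences, while the linear region beyond T penalizes large differences (edges) less severely, which is what allows discontinuities.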




Based on the HMRF image model, the MAP estimation ẑ is the image estimate which minimizes the objective function while still being within the constraint space Z. The image estimate can be written as










ẑ = arg min_{z∈Z} Σ_{m=1}^{M} Σ_{n=1}^{N} Σ_{k,l∈N_{m,n}} ρ_T(z_{m,n} − z_{k,l}).  (5)

The choice of image model thus results in a convex constrained optimization which can be solved using iterative techniques.





FIG. 3A is a block diagram illustrating one embodiment of the encoder filter 300 used to calculate a sequence of step sizes. The encoder filter 300 may be used in place of the encoder filter 240 of FIG. 2A. Each block in FIG. 3A represents a circuit. FIG. 3B is a flow diagram illustrating an exemplary iterative approach used in calculating a sequence of step sizes α_0, . . . , α_k using a computer system such as the computer system 100 of FIG. 1 (using one or more processors). In one embodiment, the process S350 is implemented using one or more instructions located in memory (e.g., memory 130 of FIG. 1) and executed by a processor (e.g., processor 105_1). An initial estimate z̄^(0) of the original image x̄ is formed and improved through successive iterations until the difference between z̄^(k) and z̄^(k+1) is below a given threshold ε or until the number of iterations has reached a maximum, k_max. The rate of convergence of the iteration is affected by the choice of the initial estimate. A better initial estimate will result in faster convergence.
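The iteration control described above (stop when successive estimates differ by less than ε or when k_max is reached) can be sketched as a generic loop; `improve` is a placeholder standing in for one gradient-projection iteration, not part of the patent:

```python
def iterate(z0, improve, eps, k_max):
    """Refine an initial estimate until successive estimates differ by
    less than eps, or until k_max iterations have been performed."""
    z = z0
    for k in range(k_max):
        z_next = improve(z, k)
        if abs(z_next - z) < eps:
            return z_next, k + 1
        z = z_next
    return z, k_max

# Toy example: each "iteration" halves the distance to 1.0.
est, iters = iterate(0.0, lambda z, k: (z + 1.0) / 2.0, 1e-3, 20)
```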




The computer system 100 of FIG. 1 (using one or more processors 105_1-105_P) or the encoder filter 300 of FIG. 3A may be used to calculate the sequence of step sizes α_0, . . . , α_k. Therefore, each step of the process S350 of FIG. 3B will also be discussed with reference to the encoder filter 300. Referring to FIGS. 3A and 3B, the process S350 begins at initialization (Step S305). During initialization, k is set to zero and the initial estimate is formed as expressed below:

z̄^(0) = H^(−1) Q^(−1)[γ]  (6)

That is, the initial estimate is determined by applying γ to a de-quantizer Q^(−1) and an inverse transformer H^(−1) (i.e., the inverse of blocks 210 and 220 of FIG. 2A). Moreover, during initialization, the quantization boundaries (l̄, h̄) for all of the transform coefficients are stored in a temporary buffer (e.g., buffer 335) or transmitted directly to an estimate projector 340 (Step S340). The quantization boundaries (l̄, h̄) for the transform coefficients define the constraint space Z. In the encoder filter 300, these operations are performed by an initial estimator 305. Given the estimate at the kth iteration, z̄^(k), in the preferred method, the gradient projection method is used to find the estimate at the next iteration, z̄^(k+1).




The initial estimator 305 also sets a value of the variable image model parameter T for each iteration. The variable image model parameter T controls the amount of smoothing which is performed on edges of the image. During initialization, a sequence of image model parameters is set for subsequent use in iterations. In one embodiment, the sequence of image model parameters is monotonic and non-increasing. That is, the first iterations use a larger value of the image model parameter, which results in more smoothing and reduces the number of required iterations. The later iterations use a smaller value of the image model parameter to avoid the problem of an overly smooth post-processed image. In one embodiment, values for the variable image model parameter are predefined and are permanently stored in the image encoder. Whether these values are predefined or defined by the initial estimator 305, a separate buffer can be set for storing a set of values for the variable image model parameter. Alternatively, the set of values for the image model parameter can be stored in the buffer 335.
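A monotonic, non-increasing sequence of image model parameters as described above might be generated linearly; the start and end values here are arbitrary assumptions for illustration:

```python
def t_schedule(t_start, t_end, k_max):
    """Monotonic non-increasing image model parameters T(0) >= ... >= T(k_max-1):
    larger T early (more smoothing, fewer iterations needed), smaller T late
    (avoids an overly smooth post-processed image)."""
    if k_max == 1:
        return [float(t_start)]
    step = (t_start - t_end) / (k_max - 1)
    return [t_start - step * k for k in range(k_max)]

ts = t_schedule(8.0, 2.0, 6)  # linearly decreasing from 8.0 down to 2.0
```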




At Step S310, the gradient of the objective function F(z) is used to find the steepest direction ḡ^(k) toward the minimum, as provided by the following expression:

ḡ^(k) = Σ_{c∈C} ρ′_T(d_c^t z^(k)) d_c,  (7)


where ρ′_T(u) is the first derivative of the Huber minimax function and {d_c^t z}_{c∈C} = {z_{m,n} − z_{k,l}}_{k,l∈N_{m,n}, 1≤m≤M, 1≤n≤N}. d_c^t z is an operator that performs the difference between neighboring pixels and C is the set of neighboring pixel pairs. The image model parameter T is equal to T^(k). By using a variable image model parameter, the smoothness of the image used in successive iterations is decreased, thereby preventing some of the quality degradations due to over-smoothing. The above calculation of the steepest direction ḡ^(k) toward the minimum is performed by the direction calculator 310 of FIG. 3A using image model parameters provided by the initial estimator 305, as shown by feedback arrow 308.
At Step S315, the step size α^(k) is calculated using the equation

α^(k) = (ḡ^(k)t ḡ^(k)) / (ḡ^(k)t (Σ_{c∈C} ρ″_T(d_c^t z^(k)) d_c d_c^t) ḡ^(k)),  (8)


where ρ″_T( ) is the second derivative of ρ_T( ). The value of the objective function may increase if the step size is too large. To avoid this potential problem, the value of α^(k) is adjusted in an iterative manner, e.g., by dividing by two until the step size is small enough so that the value of the objective function is decreased. This process is described in detail in the following section. At Step S320, the process computes the updated estimate, w̄^(k+1), which is expressed as follows:

w̄^(k+1) = z̄^(k) − α^(k) ḡ^(k).  (9)

This is performed by the estimate updator block 320 of FIG. 3A.
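For a one-dimensional signal with nearest-neighbor pairs (a deliberate simplification of the two-dimensional neighborhood N_{m,n}), equations (7) through (9) reduce to a few lines; `rho_prime` and `rho_second` are the Huber derivatives, and the example below performs one smoothing step:

```python
def rho_prime(u, T):
    return 2.0 * u if abs(u) <= T else (2.0 * T if u > 0 else -2.0 * T)

def rho_second(u, T):
    return 2.0 if abs(u) <= T else 0.0

def gradient(z, T):
    """Equation (7): accumulate rho'_T of each neighboring difference.
    Each pair (c, c+1) contributes +rho' at c and -rho' at c+1."""
    g = [0.0] * len(z)
    for c in range(len(z) - 1):
        d = rho_prime(z[c] - z[c + 1], T)
        g[c] += d
        g[c + 1] -= d
    return g

def step_size(z, g, T):
    """Equation (8): g^t g over the curvature term g^t (sum rho''_T d_c d_c^t) g,
    where g^t d_c reduces to g[c] - g[c+1] for a pair difference operator."""
    num = sum(x * x for x in g)
    den = sum(rho_second(z[c] - z[c + 1], T) * (g[c] - g[c + 1]) ** 2
              for c in range(len(z) - 1))
    return num / den if den else 0.0

def update(z, g, alpha):
    """Equation (9): w = z - alpha * g."""
    return [zi - alpha * gi for zi, gi in zip(z, g)]

z = [0.0, 4.0, 0.0]
g = gradient(z, 10.0)        # [-8.0, 16.0, -8.0]
a = step_size(z, g, 10.0)
w = update(z, g, a)          # one smoothing step toward a flat signal
```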




Continuing to refer to FIGS. 3A and 3B, at Step S325, a determination is made as to whether the step size is acceptable. This determination is satisfied by the following expression:

−log Pr(w̄^(k+1)) ≤ −log Pr(z̄^(k))  (10)

That is, equation (10) determines whether the objective value using the current step size is less than the objective value of the previous iteration. In another embodiment, the expression −log Pr(P_Z(w̄^(k+1))) ≤ −log Pr(z̄^(k)) may be used to determine whether a step size is acceptable. P_Z is a projection operator and will be described below. If the step size is too large (i.e., equation (10) is not satisfied), the process proceeds to Step S330 where the step size is reduced, e.g., by the equation

α^(k) = α^(k)/2.

Then the process jumps back to Step S320. This process may continue until an acceptable step size is obtained (i.e., when equation (10) is satisfied). In the encoder filter 300 of FIG. 3A, these operations are performed by the step size updator 325 and the feedback loop 330. Thereafter, the process proceeds to Step S335 where the step size, α^(k), is stored in the buffer 335 of FIG. 3A.
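The acceptance test of equation (10) combined with the halving rule of Step S330 is a simple backtracking loop; `objective` below stands in for −log Pr(·), and the toy quadratic is only an illustration:

```python
def backtrack(z, g, alpha, objective, update):
    """Halve alpha until the objective at the updated estimate does not
    exceed the objective at the current estimate (equation (10))."""
    base = objective(z)
    w = update(z, g, alpha)
    while objective(w) > base:
        alpha /= 2.0          # Step S330
        w = update(z, g, alpha)
    return alpha, w

# Toy 1-D objective f(z) = z^2 with gradient 2z; start at z = 1
# with a deliberately oversized step size.
alpha, w = backtrack(1.0, 2.0, 8.0,
                     lambda z: z * z,
                     lambda z, g, a: z - a * g)
```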




Since the updated estimate w̄^(k+1) may fall outside the constraint space Z, at Step S340, w̄^(k+1) is projected onto the constraint space Z to give the image estimate at the (k+1)th iteration

z̄^(k+1) = P_Z(w̄^(k+1))  (11)

In projecting the image w̄^(k+1) onto the constraint space Z, the point z̄^(k+1) ∈ Z for which |z̄^(k+1) − w̄^(k+1)| is a minimum is found. Both w̄^(k+1) and z̄^(k+1) are in the image domain while the quantization boundaries (the constraint space Z) are expressed in the transform domain. As part of the projection operator P_Z, a forward transform is applied to the updated estimate w̄^(k+1). Let Hw̄^(k+1) represent the transformed image and let β represent a transform coefficient of Hw̄^(k+1). Moreover, let i represent a quantization cell having boundaries l(i) and h(i), which contains γ, where γ is the projection of β onto the constraint space. The quantization boundaries (l̄, h̄) were calculated at the initialization Step S305 and provided to the estimate projector 340. If β falls within the range l(i) to h(i), then γ = β (i.e., β is within the constraint space) and no projection is required. However, if β is less than l(i), then γ is equal to l(i). Symmetrically, if β is greater than h(i), then γ is equal to h(i). Once all of the transform coefficients have been projected to the constraint space in the transform domain, Hw̄^(k+1) in essence becomes Hz̄^(k+1). Thereafter, an inverse transform is applied to Hz̄^(k+1), which yields the left side of equation (11), z̄^(k+1). This is performed by the estimate projector 340 of FIG. 3A.
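The per-coefficient projection described above is a clamp to the quantization cell. A minimal sketch, assuming the boundaries l(i) and h(i) are available for each coefficient:

```python
def project_coefficient(beta, low, high):
    """Project transform coefficient beta onto its quantization cell [low, high]:
    pass through if inside, otherwise clamp to the nearest boundary."""
    if beta < low:
        return low
    if beta > high:
        return high
    return beta

def project(coeffs, bounds):
    """Apply the projection to every transform coefficient of Hw."""
    return [project_coefficient(b, lo, hi) for b, (lo, hi) in zip(coeffs, bounds)]

# Hypothetical coefficients and per-coefficient cell boundaries.
clipped = project([3.0, -9.0, 12.0],
                  [(-5.0, 5.0), (-5.0, 5.0), (5.0, 15.0)])
# [3.0, -5.0, 12.0]
```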




The process proceeds to Step S345 where a determination is made as to whether to perform any more iterations. That is, if ||z̄^(k+1) − z̄^(k)|| < ε_stop or k+1 = k_max, then the process is terminated, where k_max is set at a predetermined value during initialization. A typical value for ε_stop is 1/1000 of ||z̄^(0)||, and k_max ranges between 1 and 10 iterations, with 6 being typical. However, it is contemplated that k_max may be any positive finite value. If neither of the conditions is true, then the process proceeds to Step S350. At Step S350, k is incremented and the process proceeds back to Step S310. The process performs Steps S310-S350 until at least one criterion among the criteria specified in Step S345 is met. These steps are performed by the comparator 345 and the feedback loop 350 of FIG. 3A. Once the sequence of step sizes is computed and stored in the buffer 335 of FIG. 3A, it is fed into the bit stream formatter 250 of FIG. 2A. The bit stream formatter 250 formats the sequence of step sizes, a set of values for the variable image model parameter and the compressed image for storage and/or transmission to the decoder, depending on the application. By way of example, for applications such as viewing an image stored on a CD-ROM encyclopedia, the encoded image, the set of values for the variable image model parameter and the sequence of step sizes are stored on a CD-ROM. By way of another example, for a two-way video conferencing application, the images or video, the set of values for the variable image model parameter and the sequence of step sizes are immediately transmitted over a communication medium. However, if values for the variable image model parameter are predefined and permanently stored in both the encoder and the decoder, they do not need to be transmitted to the decoder. Only the sequence of step sizes and the compressed image will then be stored and/or transmitted to the decoder, depending on the application.




It should be noted that the image encoder 281 of FIG. 2C does not use an encoder filter for performing the step size calculation described above. Instead, the step size calculation is performed by an image decoder during the decoding process.





FIG. 4A illustrates an exemplary embodiment of an image decoder 400 that is implemented in the present invention. In one embodiment, the image decoder 400 is implemented in the computer system 100 of FIG. 1 using only one processor 105_1. However, it is contemplated that the image decoder 400 may be implemented in the computer system 100 of FIG. 1 using a plurality of processors 105_1-105_P. Referring to FIG. 4A, the image decoder 400 includes a bit stream parser 410 which performs error correction, if necessary, and separates the encoded image Y, the set of values for the variable image model parameter and the sequence of step sizes α_0, . . . ,α_k. The encoded image Y is applied to a symbol decoder 420 which performs the symbol decoding (e.g., Huffman, run-length). The sequence of step sizes is transferred to a buffer 460. The set of values for the variable image model parameter T^(k) can also be transmitted to the buffer 460 or stored in a separate buffer. The buffer 460 may be a separate buffer or contained within the memory 130 of FIG. 1.




The quantized image γ is then applied to a de-quantizer 430 for de-quantization. The quantization boundaries ({overscore (l)},{overscore (h)}) are transferred to the buffer 460 for later use by an estimate projector 476. Thereafter, the de-quantized image is applied to an inverse transformer 440 which provides the decoded image, {overscore (z)}^(0). The decoded image is also stored in a buffer 450, typically for temporary storage. In one embodiment, the buffers 450 and 460 are the same buffer. The buffer 450 is coupled to a decoder filter 470. The decoded image {overscore (z)}^(0), the set of values for the variable image model parameter T^(k) and the sequence of step sizes are provided to the decoder filter 470, and the quantization boundaries ({overscore (l)},{overscore (h)}) are provided to the estimate projector 476 for generating a filtered image.





FIG. 4B is a flow diagram illustrating a process S482 for implementing the image decoder 400 of FIG. 4A using a computer system such as, for example, the computer system 100 of FIG. 1. In one embodiment, the process S482 is implemented using one or more instructions located in memory (e.g., memory 130 of FIG. 1) and executed by a processor (e.g., processor 105_1). Referring to FIG. 4B, the process S482 commences with block S484 where the compressed image and the one or more step sizes are received. When the values for the variable image model parameter T^(k) are transmitted to the image decoder, they are received along with the compressed image and the step sizes. At block S486, the compressed image is decoded, and, at block S488, the image is de-quantized. Thereafter, at block S490, the image is inverse transformed to provide a decoded image. At block S492, the decoded image is filtered using the one or more step sizes and the variable image model parameter to provide a filtered image.
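The de-quantize and inverse-transform blocks (S488, S490) can be sketched with a toy scalar quantizer and an orthonormal 2-D DCT. This is illustrative only: the patent does not fix a particular transform or quantizer, and `decode_block`, `q_step`, and the round-trip setup below are assumptions.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis; C @ x applies the 1-D forward transform.
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def decode_block(symbols, q_step, C):
    """Blocks S488-S490 in miniature: entropy decoding (S486) is assumed
    done, so `symbols` are quantizer indices.  De-quantize by scaling,
    then apply the 2-D inverse transform."""
    coeffs = symbols * q_step                 # S488: de-quantization
    return C.T @ coeffs @ C                   # S490: 2-D inverse DCT

# Round trip: forward transform + quantize at the encoder, then decode.
C = dct_matrix(8)
x = np.random.default_rng(0).uniform(0, 255, (8, 8))
q = 16.0
symbols = np.round((C @ x @ C.T) / q)
z0 = decode_block(symbols, q, C)              # decoded image z^(0)
```

The residual error between `z0` and `x` is exactly the quantization noise the subsequent decoder filter is meant to reduce.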





FIG. 4C illustrates another exemplary embodiment of an image decoder 4000 that is implemented in the present invention. In one embodiment, the image decoder 4000 is implemented in the computer system 100 of FIG. 1 using only one processor 105_1. However, it is contemplated that the image decoder 4000 may be implemented in the computer system 100 of FIG. 1 using a plurality of processors 105_1-105_P. Referring to FIG. 4C, the image decoder 4000 includes a bit stream parser 4050 which performs error correction, if necessary, and separates the encoded image Y and the set of values for the variable image model parameter if the parameters are transmitted to the decoder. The encoded image Y is applied to a symbol decoder 4070 which performs the symbol decoding (e.g., Huffman, run-length). The set of values for the variable image model parameter T^(k) can be transmitted to the buffer 4290 or can permanently reside there. The buffer 4290 may be a separate buffer or contained within the memory 130 of FIG. 1.




The quantized image γ is then applied to a de-quantizer 4090 for de-quantization. The quantization boundaries ({overscore (l)},{overscore (h)}) are transferred to the buffer 4290 for later use by an estimate projector 4240. Thereafter, the de-quantized image is applied to an inverse transformer 4100 which provides the decoded image, {overscore (z)}^(0). The decoded image is also stored in a buffer 4120, typically for temporary storage. In one embodiment, the buffers 4120 and 4290 are the same buffer. The buffer 4120 is coupled to a decoder filter 5000. The decoded image {overscore (z)}^(0) and the set of values for the variable image model parameter T^(k) are provided to the decoder filter 5000, and the quantization boundaries ({overscore (l)},{overscore (h)}) are provided to the estimate projector 4240 for generating a filtered image.





FIG. 4D is a flow diagram illustrating a process S4500 for implementing the image decoder 4000 of FIG. 4C using a computer system such as, for example, the computer system 100 of FIG. 1. In one embodiment, the process S4500 is implemented using one or more instructions located in memory (e.g., memory 130 of FIG. 1) and executed by a processor (e.g., processor 105_1). Referring to FIG. 4D, the process S4500 commences with block S4520 where the compressed image is received. If the values for the variable image model parameter T^(k) are transmitted to the image decoder, they are received along with the compressed image. At block S4540, the compressed image is decoded, and, at block S4560, the image is de-quantized. Thereafter, at block S4580, the image is inverse transformed to provide a decoded image. At block S4600, the decoded image is filtered using the variable image model parameter T^(k) to provide a filtered image.





FIG. 5A is a flow diagram illustrating the process steps performed by a decoder filter. The process S500 may be performed in software using the computer system 100 of FIG. 1 or by the decoder filter 470. In one embodiment, the process S500 is implemented using one or more instructions located in memory (e.g., memory 130 of FIG. 1) and executed by a processor (e.g., processor 105_1). Referring to FIGS. 4A and 5A, the process S500 begins at the initialization Step S510, where k is set to zero and k_max is set to the number of step sizes. These values may be stored in the buffer 460. At Step S520, the process calculates the gradient, {overscore (g)}^(k), using equation (7) similar to Step 310 of FIG. 3B. In calculating the gradient {overscore (g)}^(k), a corresponding image model parameter is used for each iteration. The calculation of the gradient {overscore (g)}^(k) is performed by a direction calculator 472 of FIG. 4A. At Step S530, the process computes the updated estimate, {overscore (w)}^(k+1), using equation (9) similar to Step 320 of FIG. 3B. In the embodiment of FIG. 4A, this is performed by an estimate updator 474. The updated estimate, {overscore (w)}^(k+1), is easily obtained since {overscore (z)}^(k) is known, {overscore (g)}^(k) is calculated at Step S520, and α^(k) is transmitted from the image encoder or otherwise provided to the image decoder and temporarily stored within the buffer 460. At Step S540, {overscore (w)}^(k+1) is projected onto a constraint space z to give the image estimate at the (k+1)th iteration using equation (11). This Step is identical to Step S340 of FIG. 3B. In the embodiment of FIG. 4A, this is performed by an estimate projector 476.




At Step S550, a determination is made as to whether the stopping criterion is met, that is, whether k+1=k_max. If k+1≠k_max, then the process proceeds to Step S560, where k is incremented. The process Steps S520-S560 are performed until k+1=k_max. In the embodiment of FIG. 4A, this is performed by a comparator 478 and a feedback loop 480. Thus, the process S500 generates a filtered image {overscore (z)} using k_max iterations.
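The loop of Steps S520-S560 can be sketched as a projected gradient descent driven by the transmitted step sizes. Several details here are simplifying assumptions: a 4-neighbour Huber-Markov smoothness prior stands in for equation (7), and clipping pixel values stands in for the patent's projection onto the quantization constraint set; the function names are illustrative.

```python
import numpy as np

def huber_deriv(d, T):
    # Derivative of the Huber function: quadratic inside +/-T, linear outside.
    return np.where(np.abs(d) <= T, 2.0 * d, 2.0 * T * np.sign(d))

def gradient(z, T):
    """Gradient g^(k) of a Huber-MRF smoothness prior (Step S520, sketch).
    T is the variable image model parameter and may differ per iteration."""
    g = np.zeros_like(z)
    for axis in (0, 1):
        d = np.diff(z, axis=axis)            # neighbour differences
        e = huber_deriv(d, T)
        pad_hi = [(0, 0), (0, 0)]; pad_hi[axis] = (1, 0)
        pad_lo = [(0, 0), (0, 0)]; pad_lo[axis] = (0, 1)
        g += np.pad(e, pad_hi)               # contribution to the leading pixel
        g -= np.pad(e, pad_lo)               # contribution to the trailing pixel
    return g

def filter_image(z0, alphas, Ts):
    """Decoder-filter loop (FIG. 5A, sketch): one gradient step per
    transmitted step size alpha^(k); clipping replaces the quantization-
    constraint projection of Step S540 for brevity."""
    z = z0.copy()
    for alpha, T in zip(alphas, Ts):
        w = z - alpha * gradient(z, T)       # S530: updated estimate
        z = np.clip(w, 0.0, 255.0)           # S540: (simplified) projection
    return z
```

Because the step sizes arrive with the bit stream, the decoder performs no line search here; each iteration is a single gradient step and projection.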





FIG. 5B is a flow diagram illustrating the process steps performed by a decoder filter. The process S570 may be performed in software using the computer system 100 of FIG. 1 or by the decoder filter 5000 of FIG. 4C. The process S570 follows an exemplary iterative approach used in calculating a sequence of step sizes α_0, . . . ,α_k using a computer system such as the computer system 100 of FIG. 1 (using one or more processors). This exemplary iterative approach is similar to the iterative approach used by the encoder filter 300 of FIG. 3A. In one embodiment, the process S570 is implemented using one or more instructions located in memory (e.g., memory 130 of FIG. 1) and executed by a processor (e.g., processor 105_1). An initial estimate {overscore (z)}^(0) of the original image {overscore (x)} is improved through successive iterations until the difference between {overscore (z)}^(k) and {overscore (z)}^(k+1) is below a given threshold ε or until the number of iterations has reached a maximum, k_max.




The computer system 100 of FIG. 1 (using one or more processors 105_1-105_P) or the decoder filter 5000 of FIG. 4C may be used to calculate the sequence of step sizes α_0, . . . ,α_k. Referring to FIGS. 4C and 5B, the process S570 begins at initialization (Step S572). During initialization, k is set to zero and the initial estimate is formed using equation (6) similar to Step 305 of FIG. 3B. In one embodiment, a value of the variable image model parameter T for each iteration is set during initialization. In another embodiment, values for the variable image model parameter T are either permanently stored in the image decoder 4000 or transmitted to the image decoder 4000 from the image encoder 281 of FIG. 2C. The values for the variable image model parameter T can be stored in the buffer 4290.




At Step S574, the process calculates the gradient, {overscore (g)}^(k), using equation (7) similar to Step 310 of FIG. 3B. In calculating the gradient {overscore (g)}^(k), a corresponding image model parameter is used for each iteration. The calculation of the gradient {overscore (g)}^(k) is performed by a direction calculator 4140 of FIG. 4C. At Step S576, the step size α^(k) is calculated using equation (8) similar to Step 315 of FIG. 3B. At Step S578, the process computes the updated estimate, {overscore (w)}^(k+1), using equation (9) similar to Step 320 of FIG. 3B. In the embodiment of FIG. 4C, this is performed by an estimate updator 4180.




At Step S580, a determination is made as to whether the step size is acceptable. This determination is made using equation (10) similar to Step 325 of FIG. 3B. If the step size is too large (i.e., equation (10) is not satisfied), the process proceeds to Step S582 where the step size is reduced, e.g., by the equation α^(k)=α^(k)/2. Then the process jumps back to Step S578. This process may continue until an acceptable step size is obtained (i.e., when equation (10) is satisfied). In the decoder filter 5000 of FIG. 4C, these operations are performed by the step size updator 4200 and the feedback loop 4220.




At Step S584, {overscore (w)}^(k+1) is projected onto the constraint space z to give the image estimate at the (k+1)th iteration using equation (11). This Step is identical to Step 340 of FIG. 3B and is performed by an estimate projector 4240 of FIG. 4C. At Step S586, a determination is made as to whether the stopping criterion is met. That is, whether ∥{overscore (z)}^(k+1)−{overscore (z)}^(k)∥<ε_stop or k+1=k_max. If ∥{overscore (z)}^(k+1)−{overscore (z)}^(k)∥<ε_stop or k+1=k_max, then the process terminates. Otherwise, the process proceeds to Step S588, where k is incremented. The process Steps S574-S588 are performed until at least one of the stopping criteria is met. In the embodiment of FIG. 4C, this is performed by a comparator 4260 and a feedback loop 4280. Thus, the process S570 generates a filtered image {overscore (z)} using at most k_max iterations.




The advantage of the present invention is the use of a variable image model parameter. In the present invention, iterative post-processing uses an image model parameter which may change for each iteration. As a result, the number of required iterations is reduced, which speeds up the decoding process. The use of the variable image model parameter also decreases the smoothness of the image model used in successive iterations, thereby improving the visual quality of the post-processed image.





FIG. 6 illustrates another exemplary embodiment of an image decoder 600 that is implemented in the present invention. Similar to the image decoder 400 of FIG. 4A, the image decoder 600 includes a bit stream parser 610, a symbol decoder 620, a de-quantizer 630, and an inverse transformer 640. Since all of the step sizes are available at the same time, the present invention can take advantage of post-processing two or more sub-sections of the image simultaneously by using a multi-processor system, such as the one shown in FIG. 1. The decoded image, {overscore (z)}^(0), is broken into sub-sections {overscore (z)}_1^(0), . . . ,{overscore (z)}_n^(0) (where n is the number of sub-sections). The sub-sections {overscore (z)}_1^(0), . . . ,{overscore (z)}_n^(0) are simultaneously applied to a plurality of buffers 650_1-650_P for local storage (typically n>>p). A plurality of the sub-sections {overscore (z)}_1^(0), . . . ,{overscore (z)}_n^(0) are applied to a plurality of decoder filters 660_1-660_P. The process of each of the decoder filters 660_1-660_P may be performed by the plurality of microprocessors 105_1-105_P of FIG. 1. In one embodiment, the operation of each decoder filter 660_1-660_P is the same as the operation of the decoder filter 470 of FIG. 4A. The output of each decoder filter 660_1-660_P is a corresponding filtered sub-section of the image {overscore (z)}_1^(k), . . . ,{overscore (z)}_p^(k). The filtered sub-sections, {overscore (z)}_1^(k), . . . ,{overscore (z)}_p^(k), are applied to a sub-image composer 670 for merging the sub-sections together and providing a filtered image {overscore (z)}.




By way of an example, given an image size of 640×480 pixels, the image can be split into 4800 sub-sections, each being 8×8 pixels. In the computer system 100 of FIG. 1 having 4 processors, the filtered image can be generated about 4 times as fast as in a computer system having only one processor. By way of another example, the image can be split into 12 sub-sections, each being 160×160 pixels.
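The split-and-recompose bookkeeping behind FIG. 6 and the examples above can be sketched as follows. The helper names are illustrative; each returned block could be handed to a separate processor and filtered independently before the sub-image composer reassembles them in order.

```python
import numpy as np

def split_into_blocks(z, bs):
    """Split a decoded image into bs x bs sub-sections (FIG. 6, sketch)
    in row-major order, so several processors can post-process them
    independently."""
    h, w = z.shape
    return [z[r:r + bs, c:c + bs]
            for r in range(0, h, bs) for c in range(0, w, bs)]

def compose(blocks, shape, bs):
    """Sub-image composer 670 (sketch): reassemble the filtered
    sub-sections, in the same row-major order, into one image."""
    h, w = shape
    out = np.empty(shape)
    it = iter(blocks)
    for r in range(0, h, bs):
        for c in range(0, w, bs):
            out[r:r + bs, c:c + bs] = next(it)
    return out
```

For a 640×480 image, an 8×8 block size yields the 4800 sub-sections of the first example, and 160×160 yields the 12 of the second.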




2. Motion Picture Implementation




The motion picture implementation deals with the compression of a sequence of still images (frames) including audio information. In the discussion that follows, the video encoder and video decoder of the present invention will be discussed without reference to the compression or synchronization of the audio information to the sequence of frames. For encoding purposes, each frame is typically subdivided into N×N pixel regions (e.g., 16×16), hereinafter called blocks. One or more blocks grouped together constitute a macroblock.





FIG. 7 is a block diagram illustrating an exemplary video encoder 700 utilizing the present invention. If a frame is compressed without reference to neighboring frames in the sequence, it is called an Intra-coded frame. On the other hand, if a frame is compressed with reference to other frames, it is called a Predicted frame. Referring to FIG. 7, a frame {overscore (x)}^(t−1) is applied to a summer 705. Assuming that the frame {overscore (x)}^(t−1) is an Intra-coded frame (e.g., the first frame in the sequence), the compression of the frame {overscore (x)}^(t−1) is similar to the compression of a still image as described by the image encoder 200 of FIG. 2A. That is, blocks 710, 715, 720, and 745 of FIG. 7 correspond to blocks 210, 220, 230, and 250 of FIG. 2A, respectively. Moreover, blocks 725 and 730 correspond to the initial estimator 305 of FIG. 3A and block 740 corresponds to the rest of the blocks in FIG. 3A. Thus, the image encoder 200 of FIG. 2A is a special case of the video encoder 700 of FIG. 7.




Referring back to FIG. 7, the output of the quantizer 715, γ, is also applied to a de-quantizer 725 and an inverse transformer 730. The output of the inverse transformer 730 is applied to a summer 735. However, since the frame {overscore (x)}^(t−1) is an Intra-coded frame, nothing is added to it at the summer 735. In addition to being applied to the encoder filter 740, the frame {overscore (x)}^(t−1) is also stored in a buffer 750 for use as a reference frame by the video encoder 700 in encoding the next frame in the sequence.




Thereafter, the next frame, hereinafter called the current frame {overscore (x)}^(t), is applied to both the summer 705 and a motion estimator 755. For the sake of clarity, it is assumed that the current frame {overscore (x)}^(t) is a Predicted frame and is coded with respect to the reference frame {overscore (x)}^(t−1). The motion estimator 755 also receives the reference frame {overscore (x)}^(t−1) from the buffer 750. The motion estimator 755 estimates the motion displacement of the current frame {overscore (x)}^(t) using information from the reference frame {overscore (x)}^(t−1). Accordingly, the motion estimator 755 generates a motion vector {{overscore (υ)}} for each block or macroblock of the current frame. In another embodiment, the motion estimator 755 may use more than one reference frame to produce the motion vectors {{overscore (υ)}}.
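One common way to realise the motion estimator 755 is exhaustive block matching; the patent does not fix a particular search, so the window size, the sum-of-absolute-differences criterion, and the function below are all assumptions for illustration.

```python
import numpy as np

def motion_vector(cur_block, ref, top, left, search=4):
    """Exhaustive block-matching (sketch of motion estimator 755).
    Returns the (dy, dx) displacement within +/-search that minimises
    the sum of absolute differences (SAD) between `cur_block`, located
    at (top, left) in the current frame, and the reference frame."""
    bs = cur_block.shape[0]
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            r, c = top + dy, left + dx
            if r < 0 or c < 0 or r + bs > ref.shape[0] or c + bs > ref.shape[1]:
                continue                      # skip out-of-bounds candidates
            sad = np.abs(cur_block - ref[r:r + bs, c:c + bs]).sum()
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv
```

Production encoders replace the exhaustive scan with hierarchical or diamond searches, but the output, one vector per block or macroblock, is the same.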




The motion vectors {{overscore (υ)}} for the current frame are then applied to a motion compensator 760. The motion compensator 760 also receives the reference frame {overscore (x)}^(t−1) from the buffer 750. The motion compensator 760 then combines the reference frame {overscore (x)}^(t−1) and the motion vectors {{overscore (υ)}} for the current frame to produce a motion compensation prediction. In another embodiment, the motion compensator 760 may use more than one reference frame to produce a motion compensation prediction. The motion compensation prediction is then applied to the summer 705 and is subtracted from the current frame {overscore (x)}^(t). The output of the summer is a residual frame {overscore (x)}_δ^(t) (the residual frame represents the difference between the motion compensation prediction and the current frame), which is compressed in the usual fashion of a still image as described below. All of the motion vectors {{overscore (υ)}} for the current frame are also applied to a bit stream formatter 745 for transmission and/or storage.
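The motion compensator 760 and summer 705 can be sketched together: build the prediction block by block from the reference frame and the motion vectors, then subtract it from the current frame. For brevity this sketch assumes displaced blocks stay in bounds; the helper names are illustrative.

```python
import numpy as np

def predict(ref, mvs, bs):
    """Motion compensator 760 (sketch): for each bs x bs block, copy the
    reference block displaced by that block's motion vector (dy, dx).
    mvs[i][j] is the vector for block row i, block column j."""
    pred = np.empty_like(ref)
    h, w = ref.shape
    for i, r in enumerate(range(0, h, bs)):
        for j, c in enumerate(range(0, w, bs)):
            dy, dx = mvs[i][j]
            pred[r:r + bs, c:c + bs] = ref[r + dy:r + dy + bs, c + dx:c + dx + bs]
    return pred

def residual(cur, ref, mvs, bs):
    # Summer 705: residual frame = current frame minus the prediction.
    return cur - predict(ref, mvs, bs)
```

The residual is what then enters the still-image pipeline (transform, quantize, symbol encode) in place of a raw frame.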




The residual frame {overscore (x)}_δ^(t) is then applied to a forward transformer 710 and a quantizer 715. The output of the quantizer 715 is applied to the symbol encoder 720 which typically performs Huffman and/or run-length coding similar to the symbol encoder 230 of FIG. 2A. The output of the symbol encoder 720 is then applied to the bit stream formatter 745. The output of the quantizer 715 is also applied to a de-quantizer 725 and an inverse transformer 730. The output of the inverse transformer 730 is a reconstructed residual frame {overscore (x)}_μ^(t) (the residual frame has some loss due to quantization by the quantizer 715). The reconstructed residual frame {overscore (x)}_μ^(t) is then added to the motion compensation prediction at the summer 735. The output of the summer 735 is stored in the buffer 750 for use as a reference frame in encoding the next frame in the sequence, and so on.
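The encoder's local reconstruction (de-quantizer 725, inverse path, summer 735) mirrors what the decoder will compute, so both sides use identical reference frames. A minimal sketch, with a plain scalar quantizer and no transform (both assumptions for brevity):

```python
import numpy as np

def reconstruct(pred, residual, q):
    """Local decode at the encoder (sketch of quantizer 715, de-quantizer
    725, and summer 735, without the transform step): quantize the
    residual, de-quantize it, and add it back to the prediction.  The
    result is the reference frame the decoder will also hold."""
    symbols = np.round(residual / q)       # quantizer 715 (toy version)
    rec_residual = symbols * q             # de-quantizer 725
    return pred + rec_residual             # summer 735
```

The only loss is the residual's quantization error, bounded per pixel by half the quantizer step, which is exactly the noise the encoder and decoder filters are designed to suppress.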




The output of the summer 735 is also applied to an encoder filter 740 for calculating a sequence of step sizes α_0, . . . ,α_k of the current frame {overscore (x)}^(t). The encoder filter 740 is similar to the encoder filter 300 of FIG. 3A with two differences. First, the encoder filter 740 does not include the initial estimator 305 of FIG. 3A because the operation of the initial estimator for the video encoder 700 is performed by the de-quantizer 725, the inverse transformer 730, and the summer 735. Second, the estimate projector of the encoder filter 740 is a modified variation of the estimate projector 476 of FIG. 4A. Namely, as part of the projection operation, the motion compensation prediction is subtracted from the image before the forward transform and added back to the image after the inverse transform. The de-quantizer 725 provides the quantization boundaries ({overscore (l)},{overscore (h)}) to the encoder filter 740. The sequence of step sizes for the current image {overscore (x)}^(t) is calculated by the encoder filter 740 using variable image model parameters and applied to the bit stream formatter 745. The output of the bit stream formatter 745 for the current frame is the compressed (residual) image Y, the sequence of step sizes α_0, . . . ,α_k, the image model parameters, and the motion vectors {{overscore (υ)}}.





FIG. 8 is a block diagram illustrating an exemplary video decoder 800 utilizing the present invention. Referring to FIG. 8, the video decoder 800 includes a bit stream parser 805 which receives an encoded frame Y, the sequence of step sizes α_0, . . . ,α_k, the image model parameters, and the motion vectors {{overscore (υ)}} (if any) for the encoded frame, performs error correction, and separates the encoded frame, the sequence of step sizes, and the motion vectors (if any). It must be noted that the quantization tables used by the quantizer 715 of FIG. 7 may be transmitted in the bit stream, or the video encoder 700 of FIG. 7 may transmit a predetermined code for each frame (or block) to the video decoder to specify the quantization table to be used. The encoded frame Y is then applied to a symbol decoder 810, a de-quantizer 815, and an inverse transformer 820 (similar to blocks 420, 430, and 440 of the image decoder 400 of FIG. 4A).




If the frame that is being decoded is an Intra-coded frame, then no motion vectors are transmitted from the video encoder 700 to the video decoder 800. In this case, a motion compensator 830 is not used. The bit stream parser 805 further applies the sequence of step sizes α_0, . . . ,α_k to a decoder filter 835. The decoder filter 835 also receives the decoded frame from the inverse transformer 820 and generates a filtered frame (similar to the decoder filter 470 of FIG. 4A). The output of the decoder filter 835 is also applied to a buffer 840 for use in decoding the next frame.




Conversely, if the frame being decoded is a Predicted frame, such as the current frame, then the motion vectors {{overscore (υ)}} are applied to the motion compensator 830. The motion compensator 830 also receives the reference frame from the buffer 840. The output of the motion compensator 830 is a motion compensation prediction. The motion compensation prediction is added to the reconstructed residual frame {overscore (x)}_μ^(t) at a summer 825. The output of the summer 825 is applied to the buffer 840 for the next frame. The output of the summer 825 is also applied to the decoder filter 835, which is similar to the decoder filter 470 of FIG. 4A except that the estimate projector of the decoder filter 835 is the same as the estimate projector of the encoder filter 740. The decoder filter 835 generates a filtered image that is transmitted to a monitor for display (e.g., for a video conference application). While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that this invention is not to be limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those ordinarily skilled in the art.



Claims
  • 1. An apparatus, comprising:a memory including one or more instructions; and a processor coupled to said memory, said processor, in response to the one or more instructions, to transform an image to provide a transformed image, quantize the transformed image to provide a quantized image, produce an image estimate from the quantized image; calculate a direction of said image estimate using a variable image model parameter; calculate a step size of said image estimate; update the image estimate based on said direction and said step size; and adjust the step size until the updated image estimate is within an acceptable range.
  • 2. The apparatus of claim 1 wherein said processor, in response to the one or more instructions, further to:project said image estimate onto a constraint space, and compare said image estimate with a previous image estimate.
  • 3. The apparatus of claim 1 wherein said processor is further to transmit the one or more step sizes along with the variable image model parameter.
  • 4. The apparatus of claim 1 wherein said processor, in response to the one or more instructions, to further compress the quantized image to provide a compressed image, and to transmit the compressed image, the one or more step sizes and the variable image model parameter for subsequent reconstruction of the image in a decoder.
  • 5. The apparatus of claim 1 wherein said processor, in response to the one or more instructions, to further compress the quantized image to provide a compressed image, and to save the compressed image, the one or more step sizes, and the variable image model parameter on a computer readable medium.
  • 6. The apparatus of claim 1 wherein said processor, in response to the one or more instructions, to provide the transformed image by performing one of a slant transform, a Haar transform, and a discrete cosine transform.
  • 7. The apparatus of claim 1 wherein the image model parameter is a threshold parameter in a Huber Markov Random Field image model.
  • 8. An apparatus, comprising:a memory including one or more instructions; and a processor coupled to said memory, said processor, in response to the one or more instructions, to receive a coded image and one or more step sizes where said one or more step sizes are generated at an encoder, dequantize and inverse transform the coded image to provide an image estimate, and improve on the image estimate to provide a filtered image, the improving on the image estimate including improving on one or more sub-sections of said image estimate by, for each subsection of said image estimate and for each step size, calculating a direction of said sub-section of said image estimate using a variable image model parameter, updating said sub-section of said image estimate based on said direction and one of the one or more step sizes, and projecting said sub-section of said image estimate onto a constraint space.
  • 9. The apparatus of claim 8 wherein the image model parameter is a threshold parameter in a Huber Markov Random Field image model.
  • 10. An encoding apparatus, comprising:a forward transformer to receive an image and transform said image into a transformed image; a quantizer coupled to the forward transformer, said quantizer to receive said transformed image and provide a quantized image; and an encoder filter coupled to the quantizer, said encoder filter to receive said quantized image and iteratively generate one or more step sizes using a variable image model parameter in response to the quantized image, the one or more step sizes used by a decoding post-processing apparatus to scale a direction of an image estimate, said encoder filter comprising: an initial estimator to receive said quantized image and produce an image estimate, a direction calculator to calculate the direction of said image estimate using the variable image model parameter, a step size calculator to calculate a step size of said image estimate, an estimate updator to update said image estimate based on said direction and said step size, and a step size updator to update said step size of said image estimate until the updated image estimate is within an acceptable range.
  • 11. The encoding apparatus of claim 10 wherein said encoder filter further includes:an estimate projector to project said image estimate onto a constraint space.
  • 12. The encoding apparatus of claim 10 wherein said forward transformer performs one of a slant transform, a Haar transform, and a discrete cosine transform.
  • 13. The encoding apparatus of claim 11 wherein the direction calculator, the step size calculator, the estimate updator, the step size updator, and the estimate projector are iteratively performed until a difference between successive image estimates falls below a predetermined threshold, or a predetermined maximum number of iterations is reached.
  • 14. The apparatus of claim 10 wherein the image model parameter is a threshold parameter in a Huber Markov Random Field image model.
  • 15. An encoding method comprising:transforming an image for providing a transformed image; quantizing said transformed image for providing a quantized image; generating one or more step sizes using a variable image model parameter in response to the quantized image, the generating one or more step sizes comprising: decoding said quantized image for providing an image estimate; calculating a direction of an objective function of said image estimate using the variable image model parameter; calculating a step size of said image estimate; and updating said image estimate based on said direction and said step size until the updated image estimate is within an acceptable range.
  • 16. The method of claim 15 wherein generating one or more step sizes further comprises:comparing successive image estimates.
  • 17. The method of claim 15 further comprising storing said quantized image and said variable image model parameter in the buffer.
  • 18. The method of claim 15 wherein the image model parameter is a threshold parameter in a Huber Markov Random Field image model.
  • 19. A decoding method comprising:receiving an encoded image and one or more step sizes where said one or more step sizes are generated prior to receipt; dequantizing said encoded image to produce a dequantized image; applying said dequantized image to an inverse transformer for providing an image estimate; and responsive to the one or more step sizes, improving on said image estimate to reduce quantization noise of the image estimate and provide a filtered image, the improving on said image estimate including improving on one or more sub-sections of said image estimate by, for each subsection of said image estimate and each step size, calculating a direction of said sub-section of said image estimate using a variable image model parameter, updating said subsection of said image estimate using said direction and one of said one or more step sizes, and projecting said sub-section of said image estimate onto a constraint space to provide the filtered image.
  • 20. The method of claim 19 further comprising receiving the variable image model parameter along with the encoded image and the one or more step sizes.
  • 21. A computer program product, comprising: a computer usable medium having computer program code embodied therein to encode an image to provide an encoded image; computer readable program code to iteratively generate one or more step sizes using a variable image model parameter in response to the encoded image, comprising: computer readable program code to produce an image estimate from the encoded image; computer readable program code to calculate the direction of said image estimate using the variable image model parameter; computer readable program code to calculate a step size of said image estimate; computer readable program code to update the image estimate based on said direction and said step size; and computer readable program code to adjust the step size until the updated image estimate is within an acceptable range.
  • 22. The computer program product of claim 21 wherein said computer program code to encode said image to provide an encoded image comprises: computer readable program code to transform an image for providing a transformed image; and computer readable program code to quantize said transformed image for providing said encoded image.
  • 23. The computer program product of claim 21 wherein the computer readable program code to iteratively generate one or more step sizes further comprises: computer readable program code to project said image estimate onto a constraint space; and computer readable program code to perform the aforementioned computer readable program code until a difference between successive image estimates is below a predetermined threshold, or a predetermined number of iterations are performed.
  • 24. The computer program product of claim 21 wherein the image model parameter is a threshold parameter in a Huber Markov Random Field image model.
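The iterative post-processing loop the claims describe, gradient-descent updates on a Huber-Markov random field objective followed by projection onto the quantization constraint space and a comparison of successive image estimates, can be sketched as follows. This is an illustrative simplification, not the patented method: the threshold `T`, the fixed step size, the 4-connected neighborhood, and the per-pixel projection are all assumptions made here for brevity (the claims compute per-iteration step sizes at the encoder and transmit them, and constrain transform coefficients rather than pixels).

```python
import numpy as np

def huber_deriv(u, T):
    # Derivative of the Huber function: quadratic near zero (smoothing),
    # linear beyond the threshold T (edge-preserving).
    return np.where(np.abs(u) <= T, 2.0 * u, 2.0 * T * np.sign(u))

def hmrf_gradient(x, T):
    # Gradient of a first-order Huber-MRF smoothness objective: sum of
    # Huber derivatives of differences with 4-connected neighbors.
    g = np.zeros_like(x)
    for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
        g += huber_deriv(x - np.roll(x, shift, axis=axis), T)
    return g

def project(x, x_quantized, q):
    # Project onto the constraint space: each value must remain inside its
    # original quantization bin of width q (applied per pixel here for
    # simplicity; the claims apply it to transform coefficients).
    return np.clip(x, x_quantized - q / 2.0, x_quantized + q / 2.0)

def post_process(x_quantized, q, T=4.0, step=0.1, iters=20, tol=1e-3):
    # Iteratively refine the decoded image estimate: compute the direction
    # of the objective, update, project, and stop when successive image
    # estimates are within an acceptable range (tol) or iters is reached.
    x = x_quantized.astype(float).copy()
    for _ in range(iters):
        g = hmrf_gradient(x, T)                        # direction
        x_new = project(x - step * g, x_quantized, q)  # update + project
        if np.max(np.abs(x_new - x)) < tol:            # compare estimates
            x = x_new
            break
        x = x_new
    return x
```

In this sketch the small quadratic region of the Huber function smooths low-amplitude blocking noise while the linear region limits the penalty on large differences, so genuine edges are preserved; the projection step guarantees the refined estimate remains consistent with the transmitted quantized data.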
Parent Case Info

This application is a continuation-in-part of application Ser. No. 08/885,123 filed by Thomas P. O'Rourke on Jun. 30, 1997.

US Referenced Citations (6)
Number Name Date Kind
4862173 Nishitani Aug 1989 A
5491515 Suzuki Feb 1996 A
5576765 Cheney et al. Nov 1996 A
5610657 Zhang Mar 1997 A
5684534 Harney et al. Nov 1997 A
5754700 Kuzma May 1998 A
Non-Patent Literature Citations (8)
Entry
O'Rourke et al., “Improved Image Decompression for Reduced Transform Coding Artifacts”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 5, No. 6, Dec. 1995.*
“Improved Image Decompression for Reduced Transform Coding Artifacts”, Thomas P. O'Rourke and Robert L. Stevenson, IEEE Transactions on Circuits and Systems for Video Technology, vol. 5, No. 6, Dec. 1995, pp. 490-499.
“Reduction of Coding Artifacts in Transform Image Coding”, Robert L. Stevenson, IEEE, 1993, pp. V401-404.
“Iterative Procedures for Reduction of Blocking Effects in Transform Image Coding”, Avideh Zakhor, IEEE Transactions on Circuits and Systems for Video Technology, vol. 2, No. 1, Mar. 1992, pp. 91-95.
“Comments on Iterative Procedures for Reduction of Blocking Effects in Transform Image Coding”, Stanley J. Reeves and Steven L. Eddins, IEEE Transactions on Circuits and Systems for Video Technology, vol. 3, No. 6, Dec. 1993, pp. 439-440.
“Regularized Reconstruction to Reduce Blocking Artifacts of Block Discrete Cosine Transform Compressed Images”, Yongyi Yang, Nikolas P. Galatsanos and Aggelos K. Katsaggelos, IEEE Transactions on Circuits and Systems for Video Technology, vol. 3, No. 6, Dec. 1993, pp. 421-432.
“Improved Definition Image Expansion”, Richard R. Schultz and Robert L. Stevenson, Laboratory for Image and Signal Analysis, Department of Electrical Engineering, University of Notre Dame, IEEE 1992, pp. III-173-III-176.
Zakhor, “Iterative Procedures for Reduction of Blocking Effects in Transform Image Coding”, IEEE, vol. 2, No. 1, Mar. 1992.
Continuation in Parts (1)
Number Date Country
Parent 08/885123 Jun 1997 US
Child 09/410943 US