Method of compressing digital images

Abstract
A method is provided for compressing a digital image that is made up of a matrix of elements, with each element including a plurality of digital components of different types for representing a pixel. The method includes splitting the digital image into a plurality of blocks, calculating for each block a group of DCT coefficients for the components of each type, and quantizing the DCT coefficients of each block using a corresponding quantization table scaled by a gain factor for achieving a target compression factor. The method further includes determining at least one energy measure of the digital image, and estimating the gain factor as a function of the at least one energy measure. The function is determined experimentally according to the target compression factor.
Description


FIELD OF THE INVENTION

[0001] The present invention relates to the field of integrated circuits, and more particularly, to the compression of digital images.



BACKGROUND OF THE INVENTION

[0002] Digital images are commonly used in several applications such as, for example, in digital still cameras (DSC). A digital image includes a matrix of elements, commonly referred to as a bit map. Each element of the matrix, which represents an elemental area of the image (a pixel or pel), is formed by several digital values indicating corresponding components of the pixel.


[0003] Digital images are typically subjected to a compression process to increase the number of digital images which can be stored simultaneously, for example in a memory of the camera. Moreover, compression makes the transmission of digital images, for example over the Internet, easier and less time consuming. A compression method commonly used in standard applications is the JPEG (Joint Photographic Experts Group) algorithm, described in CCITT T.81, 1992.


[0004] In the JPEG algorithm, 8×8 pixel blocks are extracted from the digital image. Discrete cosine transform (DCT) coefficients are then calculated for the components of each block. The DCT coefficients are rounded off using corresponding quantization tables. The quantized DCT coefficients are encoded to obtain a compressed digital image, from which the corresponding original digital image may be later extracted by a decompression process.


[0005] In some applications, it is necessary to provide a substantially constant memory requirement for each compressed digital image, i.e., a compression factor control (CF-CTRL). This problem is particularly felt in digital still cameras, where it must be ensured that a minimum number of compressed digital images can be stored in the memory of the camera, so that a minimum number of photos can be taken.


[0006] The compression factor control is quite difficult in algorithms, such as the JPEG, wherein the size of the compressed digital image depends on the content of the corresponding original digital image. Generally, the compression factor is controlled by scaling the quantization tables using a multiplier coefficient (gain factor). The gain factor needed to obtain a target compression factor is determined using iterative methods: the compression process is executed several times (at least twice), and the gain factor is modified according to the result of the preceding pass, until the compressed digital image has a size that meets the target compression factor.


[0007] Current methods require a high computation time and are therefore quite slow. Moreover, these known methods involve considerable power consumption. This drawback is particularly acute when the compression method is implemented in a digital still camera or another portable device powered by batteries.



SUMMARY OF THE INVENTION

[0008] In view of the foregoing background, it is an object of the present invention to overcome the above mentioned drawbacks.


[0009] This and other objects, advantages and features in accordance with the present invention are provided by a method of compressing a digital image that includes a matrix of elements, with each element including a plurality of digital components of a different type for representing a pixel. The method may comprise splitting the digital image into a plurality of blocks, and calculating for each block a group of DCT coefficients for the components of each type, and quantizing the DCT coefficients of each block using a corresponding quantization table scaled by a gain factor for achieving a target compression factor.


[0010] The method may further comprise determining at least one energy measure of the digital image, and estimating the gain factor as a function of the at least one energy measure. The function may be determined experimentally according to the target compression factor.


[0011] Moreover, the present invention also provides a corresponding device for compressing a digital image, and a digital still camera comprising this device.







BRIEF DESCRIPTION OF THE DRAWINGS

[0012] Further features and advantages according to the present invention will be made clear by the following description of a preferred embodiment thereof, given purely by way of a non-restrictive example, with reference to the attached figures, in which:


[0013]
FIG. 1 is a schematic block diagram of a digital still camera for implementing the compression method according to the present invention;


[0014]
FIGS. 2a and 2b are plots respectively illustrating an example of the relation between the energy and the number of bits required to encode the AC coefficients, and an example of the relation between the basic compression factor and the gain factor;


[0015]
FIG. 3 is a schematic block diagram of an energy unit of the digital still camera according to the present invention;


[0016]
FIGS. 4a-4b are flow charts illustrating the compression method according to the present invention; and


[0017]
FIGS. 4c-4d are flow charts illustrating an alternative embodiment of the compression method according to the present invention.







DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

[0018] With reference in particular to FIG. 1, a digital still camera (DSC) 100 is illustrated for taking digital images representative of real scenes. A digital image is formed by a matrix with N rows and M columns (for example, 640 rows by 480 columns). Each element of the matrix includes several digital values (for example, three values each one of 8 bits, ranging from 0 to 255) representative of respective optical components of a pixel.


[0019] The camera 100 includes an image-acquisition unit 105 formed by a diaphragm and a set of lenses for transmitting the light corresponding to the image of the real scene to a sensor unit (SENS) 110. The sensor unit 110 is typically formed by a charge-coupled device (CCD). A CCD is an integrated circuit which contains a matrix of light-sensitive cells. Each light-sensitive cell generates a voltage, the intensity of which is proportional to the exposure of the light-sensitive cell. The generated voltage is supplied to an analog/digital converter, which produces a corresponding digital value.


[0020] To reduce the number of light-sensitive cells, the sensor unit 110 does not detect all the components for every pixel. Typically, only one light-sensitive cell is provided for each pixel. The CCD is covered by a color filter that includes a matrix of filter elements, each one associated with a corresponding light-sensitive cell of the CCD. Each filter element transmits (absorbing only a minimal portion of it) the luminous radiation belonging to the wavelength of red, green or blue light, while substantially absorbing the others. This is done to detect a red color component (R), a green color component (G), or a blue color component (B) for each pixel.


[0021] In particular, the filter may be of the Bayer type as described in U.S. Pat. No. 3,971,065, in which only the G component is detected for a half of the pixels, in a chessboard-like arrangement. The R component or the B component is detected for the other half of the pixels, in respective alternate rows, as shown in the following Table 1:
TABLE 1
G R G R G R G R G
B G B G B G B G B
G R G R G R G R G
B G B G B G B G B


[0022] An incomplete digital image SImg, in which each element includes a single color component (R, G or B), is output by the sensor unit 110.


[0023] The camera 100 includes a control unit 115 formed by several blocks which are connected in parallel to a communication bus 120. Particularly, a pre-processing unit (PRE_PROC) 125 receives the incomplete digital image SImg. The pre-processing unit 125 determines various parameters of the incomplete digital image SImg, such as a high-frequency content and an average luminosity. These parameters are used to automatically control a focus (auto-focus) and an exposure (auto-exposure) by corresponding control signals Sc which are supplied to the acquisition unit 105. The pre-processing unit 125 also modifies the incomplete digital image SImg, for example, by applying a white-balance algorithm which corrects the color shift of the light towards red (reddish) or towards blue (bluish) based upon the color temperature of the light source. A corresponding incomplete digital image BImg is output by the pre-processing unit 125 and sent onto the bus 120.


[0024] The incomplete digital image BImg is received by an image-processing unit (IPU) 130. The image-processing unit 130 interpolates the missing color components in each element of the incomplete digital image BImg to obtain a corresponding digital image RGB wherein each pixel is represented by the R component, the G component and the B component. The digital image RGB is then processed to improve image quality. For example, the image quality may be improved by correcting exposure problems such as back-lighting or excessive front illumination, reducing the noise introduced by the CCD, correcting alterations of a selected color tone, applying special effects (such as a mist effect), or compensating the loss of sharpness due to a gamma-correction function (typically applied by a television set). Moreover, the digital image can be enlarged, a desired portion of the image can be zoomed, or the ratio of its dimensions can be changed, for example, from 4:3 to 16:9, and the like.


[0025] The digital image RGB is then converted into a corresponding digital image YUV in a luminance/chrominance space. Each pixel of the digital image YUV is represented by a luminance component Y (providing information about the brightness), and two chrominance components Cu and Cv (providing information about the hue). The Y, Cu, Cv components are calculated from the respective R, G, B components by applying, for example, the following equations:




Y = 0.299·R + 0.587·G + 0.114·B

Cu = −0.1687·R − 0.3313·G + 0.5·B + 128

Cv = 0.5·R − 0.4187·G − 0.0813·B + 128
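By way of illustration only, the conversion above can be sketched in a few lines of code; the clipping of the results to the 0 . . . 255 range is an assumption, since the text does not specify how out-of-range values are handled.

```python
# Illustrative sketch of the Y, Cu, Cv conversion given above (not part of the
# original text). Clipping to 0..255 is an assumption.
def rgb_to_ycucv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cu = -0.1687 * r - 0.3313 * g + 0.5 * b + 128
    cv = 0.5 * r - 0.4187 * g - 0.0813 * b + 128
    clip = lambda v: max(0, min(255, int(round(v))))
    return clip(y), clip(cu), clip(cv)

print(rgb_to_ycucv(200, 120, 40))  # -> (135, 75, 175) for a warm, orange-like pixel
```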



[0026] This allows the chrominance information to be easily identified, so that more chrominance information than luminance information can be discarded during the following compression process, since the human eye is more sensitive to luminance than to chrominance. The digital image YUV is sent to the bus 120.


[0027] A compression unit 135 is also connected to the bus 120. The compression unit 135 receives the digital image YUV and outputs a corresponding digital image JImg compressed by applying a JPEG algorithm. The compression unit 135 includes a discrete cosine transform (DCT) unit 145, which receives the digital image YUV. Each component of the digital image YUV is shifted from the range 0 . . . 255 to the range −128 . . . +127, to normalize the result of the operation. The digital image YUV is then split into several blocks of 8×8 pixels (640×480/64=4800 blocks in the example). Each block of Y components BLy, each block of Cu components BLu, and each block of Cv components BLv is translated into a group of DCT coefficients DCTy, a group of DCT coefficients DCTu, and a group of DCT coefficients DCTv, respectively, representing the spatial frequency content of the corresponding components. The DCT coefficients DCTy,u,v[h,k] (with h,k=0 . . . 7) are calculated using the following formula:
$$\mathrm{DCT}_{y,u,v}[h,k]=\frac{1}{4}\,D_h\,D_k\sum_{x=0}^{7}\sum_{y=0}^{7} BL_{y,u,v}[x,y]\,\cos\frac{(2x+1)h\pi}{16}\,\cos\frac{(2y+1)k\pi}{16}$$


[0028] wherein Dh, Dk = 1/√2 for h,k = 0 and Dh, Dk = 1 otherwise. The first DCT coefficient of each group is referred to as the DC coefficient, and it is proportional to the average of the components of the block, whereas the other DCT coefficients are referred to as AC coefficients.
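As a purely illustrative sketch, the formula above can be transcribed directly as follows; real implementations use fast DCT algorithms rather than this straightforward four-fold loop.

```python
import numpy as np

def dct_8x8(block):
    """2-D DCT of an 8x8 block of level-shifted samples, as in the formula above."""
    coeffs = np.zeros((8, 8))
    for h in range(8):
        for k in range(8):
            d_h = 1 / np.sqrt(2) if h == 0 else 1.0
            d_k = 1 / np.sqrt(2) if k == 0 else 1.0
            acc = 0.0
            for x in range(8):
                for y in range(8):
                    acc += (block[x, y]
                            * np.cos((2 * x + 1) * h * np.pi / 16)
                            * np.cos((2 * y + 1) * k * np.pi / 16))
            coeffs[h, k] = 0.25 * d_h * d_k * acc
    return coeffs

# A flat block of value 16 gives a DC coefficient of 8*16 = 128 and zero AC terms.
flat = np.full((8, 8), 16.0)
print(round(dct_8x8(flat)[0, 0], 3))  # -> 128.0
```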


[0029] The groups of DCT coefficients DCTy,u,v are directly provided to a quantizer (QUANT) 150, which also receives from the bus 120 a scaled quantization table for each type of component. Typically, a scaled quantization table SQy is used for the Y components and a scaled quantization table SQuv is used for both the Cu components and the Cv components. Each scaled quantization table includes an 8×8 matrix of quantization constants. The DCT coefficients of each group are divided by the corresponding quantization constants and are rounded off to the nearest integer. As a consequence, smaller and unimportant DCT coefficients disappear and larger DCT coefficients lose unnecessary precision. The quantization process generates corresponding groups of quantized DCT coefficients QDCTy for the Y component, groups of quantized DCT coefficients QDCTu for the Cu component, and groups of quantized DCT coefficients QDCTv for the Cv component.


[0030] These values drastically reduce the amount of information required to represent the digital image. The JPEG algorithm is therefore a lossy compression method, wherein some information about the original image is lost during the compression process. However, no image degradation is usually visible to the human eye at normal magnification in the corresponding de-compressed digital image for a compression ratio ranging from 10:1 to 20:1. The compression ratio is defined as the ratio between the number of bits required to represent the digital image YUV and the number of bits required to represent the compressed digital image JImg.


[0031] Each scaled quantization table SQy,SQuv is obtained by multiplying a corresponding quantization table Qy,Quv by a gain factor G (determined as set out in the following), that is, SQy=G·Qy and SQuv=G·Quv. The gain factor G is used to obtain a desired target compression factor bpt of the JPEG algorithm, defined as the ratio between the number of bits of the compressed digital image JImg and the number of pixels. Particularly, if the gain factor G is greater than 1, the compression factor is reduced compared to the one provided by the quantization tables Qy,Quv, whereas if the gain factor G is less than 1 the compression factor is increased.
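A minimal sketch combining the scaling of the quantization tables by the gain factor (SQ = G·Q) with the division and rounding performed by the quantizer 150 is given below; the function name and the test values are illustrative only.

```python
import numpy as np

def quantize_block(dct_coeffs, q_table, gain):
    """Quantizer sketch: divide each DCT coefficient by the corresponding
    constant of the quantization table scaled by the gain factor (SQ = G*Q),
    then round to the nearest integer."""
    scaled_q = gain * np.asarray(q_table, dtype=float)
    return np.rint(np.asarray(dct_coeffs, dtype=float) / scaled_q).astype(int)

# Example: with a larger gain factor, more coefficients are rounded to zero.
coeffs = np.array([[120.0, -31.0], [12.0, 3.0]])
table = np.array([[1.0, 11.0], [12.0, 12.0]])
print(quantize_block(coeffs, table, gain=1.0))
print(quantize_block(coeffs, table, gain=2.0))
```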


[0032] The quantization tables Qy,Quv are defined so as to discard more chrominance information than luminance information. For example, the quantization table Qy (Table 2) is:
TABLE 2
 1  11  10  16  24  40  51  61
12  12  14  19  26  58  60  55
14  13  16  24  40  57  69  56
14  17  22  29  51  87  80  62
18  22  37  56  68 109 103  77
24  35  55  64  81 104 113  92
49  64  78  87 103 121 120 101
72  92  95  98 112 100 103  99


[0033] and the quantization table Quv (Table 3) is:
TABLE 3
 1  18  24  47  99  99  99  99
18  21  26  66  99  99  99  99
24  26  56  99  99  99  99  99
47  66  99  99  99  99  99  99
99  66  99  99  99  99  99  99
99  66  99  99  99  99  99  99
99  66  99  99  99  99  99  99
99  66  99  99  99  99  99  99


[0034] Preferably, the quantization constants for the DC coefficients are equal to 1 in both cases. This is done so as not to lose any information about the mean content of each block, and thus to avoid the so-called “block-effect”, wherein a contrast is perceivable between the blocks of the de-compressed image.


[0035] The groups of quantized DCT coefficients QDCTy,u,v are directly provided to a zig-zag unit (ZZ) 155. The zig-zag unit 155 modifies and reorders the quantized DCT coefficients to obtain a single vector ZZ of digital values. Each quantized DC coefficient (the first coefficient of each group) is represented as the difference from the quantized DC coefficient of the previous group. The quantized AC coefficients are arranged in a zig-zag order so that quantized AC coefficients representing low frequencies are moved to the beginning of the group, and quantized AC coefficients representing high frequencies are moved to the end of the group. Since the quantized AC coefficients representing high frequencies are more likely to be zeros, this increases the probability of having longer sequences of zeros in the vector ZZ, which requires a lower number of bits in a run length encoding scheme.
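The reordering can be illustrated with the short sketch below; it builds the standard 8×8 zig-zag scan and represents the DC coefficient of each block as a difference from the DC coefficient of the previous block.

```python
def zigzag_order():
    """Standard 8x8 zig-zag scan: positions sorted from low to high frequency."""
    order = []
    for d in range(15):                              # anti-diagonals r + c = 0 .. 14
        cells = [(r, d - r) for r in range(8) if 0 <= d - r < 8]
        order.extend(cells if d % 2 else reversed(cells))
    return order

def block_to_zz(qdct, prev_dc):
    """Values of one quantized block in the order used for the vector ZZ:
    the DC difference first, then the AC coefficients from low to high frequency.
    Returns the values and the block's DC coefficient (for the next block)."""
    flat = [qdct[r][c] for r, c in zigzag_order()]
    return [flat[0] - prev_dc] + flat[1:], flat[0]
```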


[0036] The vector ZZ is directly provided to an encoder (ENC) 160, which also receives one or more encoding tables HT from the bus 120. Each value of the vector ZZ is encoded using a Huffman scheme, wherein the value is represented by a variable number of bits which is inversely proportional to its statistical frequency of use. The encoder 160 then generates the corresponding compressed digital image JImg (which is sent to the bus 120). The compressed digital image JImg is typically formed by a header followed by the encoded values. If the last encoded value associated with a block is equal to 00, it must be followed by a (variable-length) End of Block (EOB) control word. Moreover, if an encoded value is equal to a further control word FF (used as a marker), this value must be followed by a 00 value.


[0037] The control unit 115 also includes a working memory 165, typically an SDRAM (Synchronous Dynamic Random Access Memory), and a microprocessor (μP) 170, which controls the operation of the device. Several peripheral units are further connected to the bus 120 by a respective interface. Particularly, a non-volatile memory 175, typically a flash EEPROM, stores the quantization tables Qy,Quv, the encoding tables HT, and a control program for the microprocessor 170. A memory card (MEM_CARD) 180 is used to store the compressed digital images JImg. The memory card 180 has a capacity of a few Mbytes, and can store several tens of compressed digital images JImg. Finally, the camera 100 includes an input/output (I/O) unit 185 that includes, for example, a series of push-buttons for enabling the user to select various functions of the camera 100. These push-buttons may include an on/off button, an image quality selection button, a shot button, and a zoom control button. The camera 100 also includes a liquid-crystal display (LCD) for supplying data on the operative state of the camera 100 to the user.


[0038] Similar considerations apply if the camera has a different architecture or includes different units, such as equivalent communication means, a CMOS sensor, a view-finder or an interface for connection to a personal computer (PC) and a television set, if another color filter (not with a Bayer pattern) is used, if the compressed digital images are directly sent outside the camera (without being stored onto the memory card), and so on.


[0039] Similar considerations also apply if the digital image is converted into another space (not a luminance/chrominance space), if the digital image RGB is directly compressed (without being converted), if the digital image YUV is manipulated to down-sample the Cu,Cv components by averaging groups of pixels together to eliminate further information without sacrificing overall image quality, or if no elaboration of the digital image is performed. Similarly, one or more different quantization tables may be used, arithmetic encoding schemes may be used, or a different compression algorithm may be used, such as a progressive JPEG. Moreover, the compression method of the present invention lends itself to be implemented even in a different apparatus, such as a portable scanner, a computer in which graphic applications are provided, and the like.


[0040] In the camera 100, in addition to the known structure described above, an energy unit (ENRG) 190 which receives the digital image YUV from the bus 120 is included. The energy unit 190 determines, as described in detail below, an energy measure Ey, Eu and Ev for each type of component (Y, Cu and Cv, respectively) of the digital image YUV. In other words, values indicative of the high-frequency content of each type of component of the digital image YUV are determined. A total energy measure E=Ey+Eu+Ev is then calculated and sent to the bus 120.


[0041] The gain factor G for obtaining the target compression factor bpt is a function of one or more energy measures of the digital image YUV, the total energy measure E in the example. The function depends on the target compression factor bpt in addition to the characteristics of the camera 100, such as the dimension of the CCD, the size of the digital image, and the quantization tables used. The function may be determined a priori by a statistical analysis.


[0042] More generally, as described in detail in the following, the present invention includes the steps of determining at least one energy measure of the digital image, and estimating the gain factor as a function of the at least one energy measure. The function is determined experimentally according to the target compression factor.
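Purely as an illustration of this single-pass structure, the flow can be sketched as follows; the three callables stand for the energy unit, the experimentally determined function, and the JPEG pipeline, and their names are placeholders.

```python
def compress_one_pass(image_yuv, target_bpp, measure_energy, gain_from_energy, jpeg_encode):
    """One-pass rate control sketch: measure the image energy, map it to a gain
    factor for the given target compression factor, then compress only once."""
    energy = measure_energy(image_yuv)            # e.g. the energy unit 190
    gain = gain_from_energy(energy, target_bpp)   # experimentally determined function
    return jpeg_encode(image_yuv, gain)           # single JPEG pass with scaled tables
```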


[0043] The method of the invention is very fast, in that the operations performed by the compression unit (i.e., the compression of the digital image) are executed only once. Since it drastically reduces the power consumption, the approach according to the present invention is particularly advantageous in portable devices supplied by batteries, even if different applications are not excluded.


[0044] These results are achieved with a low error between the target compression factor bpt and the compression factor bpa actually obtained, defined as (bpt−bpa)/bpt and on the order of a few percentage points. Experimental results on the illustrated camera provided a mean error of −0.6%, with 68% of the errors falling within ±6% and 82% within ±10%. A negative error is more critical than a positive one, because in that case the size of the compressed digital image is bigger than the target one.


[0045] In a preferred embodiment of the present invention, a first estimate is made of the number of bits ACbits required to encode (in the compressed digital image JImg) the AC coefficients of all the groups, quantized using the tables Qy,Quv scaled by a pre-set factor S. The number ACbits is estimated as a function of the one or more energy measures; the function is determined a priori by a statistical analysis.


[0046] For example, FIG. 2a shows a relation between the total energy measure E and the number ACbits for a camera having a CCD with 1 million light-sensitive cells and for images of 640×480 pixels, with a factor S=0.2 and a target compression factor bpt=2 bit/pel. This relation can be interpolated as a linear function. In other words, the number ACbits can be estimated using the relation ACbits=a·E+b. The parameters a and b depend on the characteristics of the camera 100 and the target compression factor bpt.
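A trivial sketch of this estimate follows; the parameter values shown are hypothetical, the parameters a and b being obtained by an offline statistical analysis for a given camera, image size and target compression factor.

```python
def estimate_ac_bits(total_energy, a, b):
    """Linear estimate of the bits needed for the AC coefficients: ACbits = a*E + b.
    The parameters a and b are assumed to come from an offline calibration."""
    return a * total_energy + b

# Hypothetical parameter values, for illustration only.
print(estimate_ac_bits(total_energy=1.5e6, a=0.25, b=120000))
```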


[0047] The DC coefficient DCy,u,v of each group is equal to the mean value of the components of the respective block BLy,u,v, that is:
$$DC_{y,u,v}=\frac{1}{64}\sum_{h=0}^{7}\sum_{k=0}^{7} BL_{y,u,v}[h,k]$$


[0048] The quantized DC coefficients QDCy,u,v are calculated by dividing the DC coefficients DCy,u,v by the corresponding quantization constants and rounding off the result to the nearest integer. In the example, wherein the quantization constants for the DC coefficients are equal to 1, the quantized DC coefficients QDCy,u,v are the integer part of the respective DC coefficients DCy,u,v, that is, QDCy,u,v=INT [DCy,u,v].


[0049] Each quantized DC coefficient is represented in the vector ZZ as the difference DiffQDCy,u,v from the quantized DC coefficient of the previous group. The number of bits DiffDCbits required to encode (in the compressed digital image JImg) each DiffQDCy,u,v value is defined in the JPEG standard by the following table JTy (Table 4), for the Y components:
TABLE 4
DiffQDCy                              DiffDCbits
0                                     2
−1 . . . +1                           4
−3 . . . −2, +2 . . . +3              5
−7 . . . −4, +4 . . . +7              6
−15 . . . −8, +8 . . . +15            7
−31 . . . −16, +16 . . . +31          8
−63 . . . −32, +32 . . . +63          10
−127 . . . −64, +64 . . . +127        12
−255 . . . −128, +128 . . . +255      14
−511 . . . −256, +256 . . . +511      16
−1023 . . . −512, +512 . . . +1023    18
−2047 . . . −1024, +1024 . . . +2047  20


[0050] and by the following table JTuv (Table 5), for the Cu,Cv components:
TABLE 5
DiffQDCu,v                            DiffDCbits
0                                     2
−1 . . . +1                           3
−3 . . . −2, +2 . . . +3              4
−7 . . . −4, +4 . . . +7              6
−15 . . . −8, +8 . . . +15            8
−31 . . . −16, +16 . . . +31          10
−63 . . . −32, +32 . . . +63          12
−127 . . . −64, +64 . . . +127        14
−255 . . . −128, +128 . . . +255      16
−511 . . . −256, +256 . . . +511      18
−1023 . . . −512, +512 . . . +1023    20
−2047 . . . −1024, +1024 . . . +2047  22


[0051] The number of bits DCbits required to encode in the compressed digital image JImg the quantized DC coefficients of all the groups is then calculated by summing the values DiffDCbits.
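The per-difference lookup and the summation can be sketched as follows; the bit counts are those of Tables 4 and 5 above, and the function names are illustrative only.

```python
# (magnitude upper bound, bits) pairs taken from Tables 4 and 5 above.
DIFF_DC_BITS_Y = [(0, 2), (1, 4), (3, 5), (7, 6), (15, 7), (31, 8), (63, 10),
                  (127, 12), (255, 14), (511, 16), (1023, 18), (2047, 20)]
DIFF_DC_BITS_UV = [(0, 2), (1, 3), (3, 4), (7, 6), (15, 8), (31, 10), (63, 12),
                   (127, 14), (255, 16), (511, 18), (1023, 20), (2047, 22)]

def diff_dc_bits(diff, table):
    """Bits needed to encode one quantized DC difference."""
    magnitude = abs(diff)
    for limit, bits in table:
        if magnitude <= limit:
            return bits
    raise ValueError("DC difference out of range")

def total_dc_bits(diffs_y, diffs_u, diffs_v):
    """DCbits: sum of the per-difference bit counts over all the blocks."""
    return (sum(diff_dc_bits(d, DIFF_DC_BITS_Y) for d in diffs_y)
            + sum(diff_dc_bits(d, DIFF_DC_BITS_UV) for d in diffs_u)
            + sum(diff_dc_bits(d, DIFF_DC_BITS_UV) for d in diffs_v))
```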


[0052] The number of bits HDbits required to represent the header of the compressed digital image JImg is fixed. The number of EOB control words is assumed equal to the number of blocks, and the number of bits CTbits required to represent the EOB control words is then estimated by the formula 8·N·M/64=N·M/8. The number of bits required by the values 00 following the encoded values equal to the marker FF cannot be estimated a priori, and it is set to a default value, which is preferably 0.


[0053] Therefore, it is possible to estimate a basic compression factor bpb obtained using the quantization tables Qy,Quv scaled by the factor S. The basic compression factor bpb is estimated by applying the relation bpb = (HDbits+DCbits+ACbits+CTbits)/(N·M).


[0054] The gain factor G for obtaining the target compression factor bpt is then estimated as a function of the basic compression factor bpb, determined a priori by a statistical analysis. For example, FIG. 2b shows a relation between the basic compression factor bpb and the gain factor G for obtaining a compression factor of 2 bit/pel for the same camera. This relation can be interpolated as a quadratic function. In other words, the gain factor G can be estimated using the relation G = C2·bpb² + C1·bpb + C0. The parameters C2, C1 and C0 depend on the characteristics of the camera 100 and the target compression factor bpt.
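The last two steps can be condensed into a small sketch; the numeric values of the parameters C2, C1, C0 and of the bit counts are hypothetical, the parameters being determined a priori by a statistical analysis.

```python
def estimate_gain(ac_bits, dc_bits, hd_bits, ct_bits, n_rows, m_cols, c2, c1, c0):
    """Estimate the gain factor G from the basic compression factor bpb:
    bpb = (HDbits + DCbits + ACbits + CTbits) / (N*M)
    G   = C2*bpb**2 + C1*bpb + C0
    """
    bpb = (hd_bits + dc_bits + ac_bits + ct_bits) / (n_rows * m_cols)
    return c2 * bpb ** 2 + c1 * bpb + c0

# Hypothetical numbers for a 640x480 image, for illustration only.
print(estimate_gain(ac_bits=500000, dc_bits=60000, hd_bits=600, ct_bits=38400,
                    n_rows=480, m_cols=640, c2=0.05, c1=0.3, c0=0.1))
```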


[0055] This approach is particularly straightforward and provides good accuracy. The parameters a,b and the tables JTy,JTuv are stored in the EEPROM 175. Preferably, two or more sets of parameters C2,C1,C0, each one associated with a different value of the target compression factor bpt and with a different size of the digital image, are determined a priori by a statistical analysis. A look-up table, wherein each row addressable by the value of the target compression factor bpt contains the respective parameters C2,C1,C0, is also stored in the EEPROM 175. This feature allows different compression factors to be easily selected by the user.


[0056] Advantageously, the factor S is determined a priori by a statistical analysis to further reduce the error between the target compression factor bpt and the actual compression factor bpa. Experimental results have shown that the factor S, which minimizes the error, also depends on the target compression factor bpt in addition to the characteristics of the camera 100.


[0057] Similar considerations apply if the compressed digital image has a different format, if the number CTbits and the number of bits required by the values 00 following the encoded values equal to the marker FF are set to different values (such as some tens of bits), and the like. Alternatively, the gain factor may be estimated directly from the energy measures, the ACbits/E and G/bpb relations may be interpolated with different functions (such as a logarithmic function), the look-up table may be stored elsewhere or a different memory structure may be used, only one set of parameters C2,C1,C0 may be stored, the linear and quadratic functions may be implemented in software, the factor S may be set to a constant value (even equal to 1) irrespective of the target compression factor bpt, and the like.


[0058] Considering now FIG. 3, the energy unit 190 includes a demultiplexer 310 with one input and three outputs. The demultiplexer 310 receives the digital image YUV and transfers the components of each type to a respective output according to a selection command not shown in the figure. As a consequence, the digital image YUV is split into a luminance component image IMGy, a Cu chrominance component image IMGu, and a Cv chrominance component image IMGv.


[0059] The component images IMGy, IMGu and IMGv are supplied to a buffer (BFR) 315y, 315u, and 315v, respectively. A Sobel filter (SBL) 320y, 320u and 320v is also provided for each type of component. The Sobel filter 320y,320u,320v receives the component image IMGy,IMGu,IMGv directly from the demultiplexer 310, and the component image IMGy,IMGu,IMGv output by the buffer 315y,315u,315v at respective inputs. An output of each Sobel filter 320y, 320u and 320v is provided to a respective accumulator 325y, 325u and 325v, which outputs the corresponding energy measure Ey, Eu and Ev. The energy measures Ey,Eu,Ev are supplied to a summing node 330, which outputs the total energy measure E.


[0060] Each Sobel filter 320y,320u,320v calculates a horizontal Sobel image SHy,SHu,SHv and a vertical Sobel image SVy,SVu,SVv by a convolution of the component image IMGy,IMGu,IMGv with a horizontal mask Mh and a vertical mask Mv, respectively. The horizontal mask Mh used to detect horizontal outlines of the image is:
−1  −2  −1
 0   0   0
 1   2   1


[0061] The vertical mask Mv used to detect vertical outlines of the image is:
−1   0   1
−2   0   2
−1   0   1


[0062] In other words, each element of the horizontal Sobel images SHy,u,v[i,j] and each element of the vertical Sobel images SVy,u,v[i,j] (with i=0 . . . N−1 and j=0 . . . M−1) are calculated by the following formulas:
$$SH_{y,u,v}[i,j]=\sum_{a=-1}^{+1}\sum_{b=-1}^{+1} IMG_{y,u,v}[i+a,j+b]\cdot M_h[a,b]$$
$$SV_{y,u,v}[i,j]=\sum_{a=-1}^{+1}\sum_{b=-1}^{+1} IMG_{y,u,v}[i+a,j+b]\cdot M_v[a,b]$$


[0063] Each Sobel filter 320y,320u,320v then calculates a total Sobel image Sy,Su,Sv defined by the formula:


[0064] Sy=SHy+α·SVy


[0065] Su=SHu+SVu


[0066] Sv=SHv+SVv


[0067] wherein the parameter α is used to compensate for the asymmetry of the quantization table Qy along the horizontal and vertical directions (for example, α=0.6). The accumulator 325y,325u,325v sums the absolute values of the elements of the respective total Sobel image and sets the energy measure Ey,Eu,Ev equal to this sum. In other words:
$$E_{y,u,v}=\sum_{i=0}^{N-1}\sum_{j=0}^{M-1}\left|S_{y,u,v}[i,j]\right|$$


[0068] The summing node 330 then calculates the total energy measure E=Ey+Eu+Ev.
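The whole energy computation can be sketched as follows; the border handling is an assumption, since the formulas above do not specify how pixels outside the image are treated, and here the border pixels are simply skipped.

```python
import numpy as np

MH = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])   # horizontal mask Mh
MV = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])   # vertical mask Mv

def energy_measure(component, alpha=1.0):
    """Energy of one component image: sum of |SH + alpha*SV| over the pixels.
    alpha is applied only to the Y component (e.g. alpha = 0.6)."""
    n, m = component.shape
    sh = np.zeros((n, m))
    sv = np.zeros((n, m))
    for i in range(1, n - 1):                         # borders skipped (assumption)
        for j in range(1, m - 1):
            window = component[i - 1:i + 2, j - 1:j + 2]
            sh[i, j] = np.sum(window * MH)
            sv[i, j] = np.sum(window * MV)
    return np.sum(np.abs(sh + alpha * sv))

# Total energy E = Ey + Eu + Ev, with the correction factor applied to Y only.
y = np.random.randint(0, 256, (16, 16)).astype(float)
u = np.random.randint(0, 256, (16, 16)).astype(float)
v = np.random.randint(0, 256, (16, 16)).astype(float)
print(energy_measure(y, alpha=0.6) + energy_measure(u) + energy_measure(v))
```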


[0069] This approach provides good accuracy without requiring an excessively long computing time. However, the approach of the present invention can also be implemented without any compensation parameter, using different masks, using a different method for estimating the energy measures (such as activity measures, Laplacian or other high-pass filters), using several energy measures (such as the energy measures Ey, Eu and Ev separately), and the like.


[0070] To explain the operation of the camera, reference is made to FIGS. 4a-4b together with FIG. 1. When the camera 100 is switched on by the user via the on/off button, the microprocessor 170 runs the control program stored in the EEPROM 175. A method 400 corresponding to this control program starts at block 405 and then passes to block 410, wherein the user selects the desired quality of the image, such as low or high, by acting on the corresponding button. The microprocessor 170 determines and stores in the SDRAM 165 the target compression factor bpt corresponding to the selected image quality, for example, 1 bit/pel for the low quality and 2 bit/pel for the high quality.


[0071] The method checks at block 415 if the shot button has been partially pressed to focus the image. If not, the method returns to block 410. As soon as the user partially presses the shot button, the method proceeds to block 420, wherein the incomplete digital image SImg is acquired by the sensor unit 110. The diaphragm is always open and the light is focused by the lenses, through the Bayer filter, onto the CCD. The pre-processing unit 125 then controls the acquisition unit 105 by the control signals Sc according to the content of the incomplete digital image SImg.


[0072] The method checks again the status of the shot button at block 425. If the shot button has been released, the method returns to block 410, whereas if the shot button has been completely pressed to take a photo the method continues to block 430. On the other hand, if no action is performed by the user, the method stays in block 425 in an idle loop.


[0073] Considering now block 430, the incomplete digital image SImg is acquired by the sensor unit 110 and modified by the pre-processing unit 125. The corresponding incomplete digital image BImg is stored onto the SDRAM 165. The method then passes to block 435, wherein the incomplete digital image BImg is read from the SDRAM 165 and provided to the image-processing unit 130. The image-processing unit 130 interpolates the missing color components in each element of the incomplete digital image BImg to obtain the corresponding digital image RGB, and modifies the digital image RGB to improve the image quality. The digital image RGB is then converted into the corresponding digital image YUV, which is sent to the bus 120.


[0074] At this point, the method splits into two branches which are executed concurrently. A first branch includes block 438, and a second branch includes blocks 440-450. The two branches are joined at block 455.


[0075] Considering now block 438, the digital image YUV is stored in the SDRAM 165. At the same time, the digital image YUV is also received by the energy unit 190 at block 440. The energy unit 190 estimates the total energy measure E, which is sent to the bus 120. The method proceeds to block 441, wherein the microprocessor 170 receives the total energy measure E and estimates the number ACbits required to encode the AC coefficients using the parameters a, b read from the EEPROM 175. Continuing to block 442, the microprocessor 170 calculates the number DCbits required to encode the DC coefficients using the tables JTy,JTuv read from the EEPROM 175.


[0076] The method passes to block 443, wherein the microprocessor 170 calculates the number HDbits required to represent the header of the compressed digital image JImg, and the number CTbits required to represent the EOB control words. The microprocessor 170 then calculates, at block 445, the basic compression factor bpb by dividing the sum of the numbers ACbits, DCbits, HDbits and CTbits by the number of pixels of the digital image YUV (N·M). Continuing now to block 450, the microprocessor 170 reads the parameters C2,C1,C0 associated with the target compression factor bpt from the EEPROM 175, addressing the look-up table by the value of the target compression factor bpt. The microprocessor 170 then estimates the gain factor G for obtaining the target compression factor bpt using the read parameters C2,C1,C0.


[0077] Once both branches are complete, the method passes to block 455, wherein the digital image YUV is read from the SDRAM 165 and is provided to the DCT unit 145, which calculates the groups of DCT coefficients DCTy,u,v. Proceeding to block 460, the microprocessor 170 reads the quantization tables Qy,Quv from the EEPROM 175 and calculates the scaled quantization tables SQy,SQuv by multiplying the respective quantization tables Qy,Quv by the gain factor G. Continuing to block 465, the groups of DCT coefficients DCTy,u,v and the scaled quantization tables SQy,SQuv are provided to the quantizer 150, which generates the corresponding groups of quantized DCT coefficients QDCTy,u,v.


[0078] The method proceeds to block 470, wherein the quantized DCT coefficients QDCTy,u,v are transformed into the vector ZZ by the zig-zag unit 155. The vector ZZ is provided to the encoder 160 at block 475, which generates the corresponding compressed digital image JImg. The compressed digital image JImg is then stored in the SDRAM 165. Continuing to block 480, the compressed digital image JImg is read from the SDRAM 165 and sent to the memory card 180.


[0079] The method checks at block 485 if a stop condition has occurred, for example, if the user has switched off the camera 100 via the on/off button, or if the memory card 180 is full. If not, the method returns to block 410. Otherwise, the method ends at block 490.


[0080] The preferred embodiment of the present invention described above, with the energy measure function being implemented in hardware and the basic compression factor and gain factor estimation function being implemented in software, is a good trade-off between speed and flexibility.


[0081] Similar considerations apply if the program executes a different equivalent method, for example with error routines, with sequential processes, and the like. In any case, the method of the present invention lends itself to be carried out even with all the functions completely implemented in hardware or in software.


[0082] In an alternative embodiment of the present invention, as shown in FIGS. 4c-4d together with FIG. 1, the microprocessor 170 executes a method 400a which starts at block 405 and then proceeds through block 430 as described above.


[0083] In this case, however, the method passes sequentially to block 435, wherein the digital image YUV, provided by the image-processing unit 130, is sent to the bus 120. The method then executes the operations defined by blocks 440-450, wherein the energy unit 190 estimates the total energy measure E, and the microprocessor 170 estimates the number ACbits, calculates the number DCbits, calculates the numbers HDbits and CTbits, calculates the basic compression factor bpb, and estimates the gain factor G.


[0084] At this point, the method continues to block 435a, wherein the incomplete digital image BImg is again read from the SDRAM 165 and processed by the image-processing unit 130 to obtain the digital image YUV, which is sent onto the bus 120 (as at block 435). The method then passes to block 455 and proceeds as in the method shown in FIGS. 4a-4b.


[0085] The approach described above provides the advantage of not requiring the digital image YUV to be stored in the SDRAM 165. This result is obtained with the trade-off of processing the incomplete digital image BImg twice by the image-processing unit 130. However, in many structures the time and the power consumption required for storing and reading the digital image YUV are higher than those required for processing the incomplete digital image BImg. Moreover, the approach described above is particularly advantageous when there are memory constraints in the camera.


[0086] Naturally, in order to satisfy local and specific requirements, a person skilled in the art may apply to the approach described above many modifications and alternatives all of which, however, are included within the scope of protection of the invention as defined by the following claims.


Claims
  • 1. A method (400) of compressing a digital image including a matrix of elements each one consisting of a plurality of digital components of different type representing a pixel, the method comprising the steps of: splitting (455) the digital image into a plurality of blocks and calculating, for each block, a group of DCT coefficients for the components of each type, quantizing (460-465) the DCT coefficients of each block using a corresponding quantization table scaled by a gain factor for achieving a target compression factor, characterized by the steps of determining (440) at least one energy measure of the digital image, estimating (441-450) the gain factor as a function of the at least one energy measure, the function being determined experimentally according to the target compression factor.
  • 2. The method (400) according to claim 1, wherein each group of DCT coefficients consists of a DC coefficient and a plurality of AC coefficients, the step (441-450) of estimating the gain factor including the steps of: estimating (441) a first number of bits required to encode the AC coefficients of all the blocks using the quantization tables scaled by a pre-set factor as a first function of the at least one energy measure, the first function being determined experimentally according to the target compression factor, calculating (442) a second number of bits required to encode the DC coefficients of all the blocks using the quantization tables scaled by the pre-set factor, estimating (443-445) a basic compression factor provided by the quantization tables scaled by the pre-set factor according to the first number of bits and the second number of bits, and estimating (450) the gain factor as a second function of the basic compression factor, the second function being determined experimentally according to the target compression factor.
  • 3. The method (400) according to claim 2, wherein the first function is a linear function and the second function is a quadratic function.
  • 4. The method (400) according to claim 2 or 3, wherein the step of estimating (443-445) the basic compression factor includes the steps of: estimating (443) a third number of bits, required to encode control values, according to the number of elements of the digital image, calculating (445) the basic compression factor dividing the sum of the first, second and third number of bits by the number of elements of the digital image.
  • 5. The method (400) according to any claim from 2 to 4, further comprising the steps of: storing a plurality of sets of parameters representing the second function, each set of parameters being associated with a corresponding value of the target compression factor, selecting (410) an image quality and determining a current value of the target compression factor as a function of the selected image quality, reading (450) the parameters associated with the current value of the target compression factor and estimating the gain factor using the read parameters.
  • 6. The method (400) according to any claim from 2 to 5, wherein the pre-set factor is determined experimentally according to the target compression factor.
  • 7. The method (400) according to any claim from 1 to 6, wherein each element of the digital image consists of a luminance component, a first chrominance component, and a second chrominance component.
  • 8. The method (400) according to claim 7, wherein the at least one energy measure consists of a total energy measure equal to the sum of an energy measure of the luminance components, an energy measure of the first chrominance components and an energy measure of the second chrominance components.
  • 9. The method (400) according to claim 7 or 8, wherein the step (440) of determining the at least one energy measure comprises, for each type of component, the steps of: calculating a horizontal Sobel image and a vertical Sobel image by means of a convolution of the elements of the digital image consisting of said type of component with a horizontal mask and a vertical mask, respectively, calculating a total Sobel image by summing the horizontal Sobel image and the vertical Sobel image, and summing the absolute value of each element of the total Sobel image.
  • 10. The method (400) according to claim 9, wherein at least one quantization table is asymmetric along a horizontal direction and a vertical direction, the method further comprising the steps of: multiplying the Sobel image associated with the at least one quantization table by a correction factor for compensating the asymmetry of the corresponding quantization table.
  • 11. The method (400) according to any claim from 1 to 10, further comprising the steps of: providing (410-430) an incomplete digital image wherein at least one component is missing in each element, obtaining (435) the digital image from the incomplete digital image, storing (438) the digital image onto a working memory and concurrently performing the steps of determining (440) the at least one energy measure and estimating (441-450) the gain factor, reading (455-465) the digital image from the working memory for performing the steps of splitting (455) the digital image and quantizing (460-465) the DCT coefficients.
  • 12. The method (400a) according to any claim from 1 to 10, further comprising the steps of: providing (410-430) an incomplete digital image wherein at least one component is missing in each element, obtaining (435) the digital image from the incomplete digital image for performing the steps of determining (440) the at least one energy measure and estimating (441-450) the gain factor, obtaining (435a) the digital image from the incomplete digital image again for performing the steps of splitting (455) the digital image and quantizing (460-465) the DCT coefficients.
  • 13. A device (115) for compressing a digital image including a matrix of elements each one consisting of a plurality of digital components of different type representing a pixel, the device (115) comprising means (145) for splitting the digital image into a plurality of blocks and calculating, for each block, a group of DCT coefficients for the components of each type, means (150) for quantizing the DCT coefficients of each block using a corresponding quantization table scaled by a gain factor for achieving a target compression factor, characterized in that the device (115) further includes means (190) for determining at least one energy measure of the digital image, and means (170) for estimating the gain factor as a function of the at least one energy measure, the function being determined experimentally according to the target compression factor.
  • 14. The device (115) according to claim 13, further comprising a compression unit (135) comprising the means (145) for splitting the digital image and calculating the DCT coefficients and the means (150) for quantizing the DCT coefficients, a memory unit (175) for storing the quantization tables, an energy unit (190) including the means for determining the at least one energy measure, a processor unit (170) for controlling the device (115), communication means (120) for connecting the compression unit, the memory unit, the energy unit and the processor unit therebetween, the processor unit (170) estimating the gain factor under the control of a program stored onto the memory unit (175).
  • 15. A digital still camera (100) comprising means (105-130) for providing the digital image and the device (115) of claim 13 or 14.
Priority Claims (1)
Number Date Country Kind
00202436.2 Jul 2000 EP