This application is based on and claims priority under 35 U.S.C. §119 from Japanese Patent Application No. 2006-227554 filed Aug. 24, 2006.
1. Technical Field
The invention relates to an image processing system, an image compression system, an image editing system, a computer readable medium, a computer data signal, and an image processing apparatus.
2. Related Art
There is an image compression method, such as the mixed raster content (MRC) method specified in ITU-T Recommendation T.44.
This is a method for solving a problem that distortion of text or line drawing part increases when an image including the text or line drawing part is compressed by means of transform coding, such as JPEG.
In the MRC method, text or line drawing part is extracted from an input image. Moreover, color data of the extracted text or line drawing part is extracted. In addition, the text or line drawing part is removed from the input image. The text or line drawing part is coded as binary data by using a binary data compression method, such as MH, MMR, or JBIG. The color data is coded by using a multi-level image compression method, such as JPEG. In addition, a background image that remains after removing the text or line drawing part from the input image is coded by the multi-level image compression method, such as JPEG.
By performing the multi-plane compression described above, it becomes possible to keep the image quality of the text or line drawing part high even in the case of a high compression rate.
Alternatively, it is possible to set the text or line drawing part as binary data that is different for each color, without using the MRC format described above.
For example, a black text or line drawing part is extracted from an input image so as to create binary data. Further, a red text or line drawing part is extracted from the input image so as to create binary data. Furthermore, multilevel image data from which the black and red text or line drawing parts have been removed is created from the input image. The binary data created by extracting the black text or line drawing part is compressed by a binary data compression method, and information indicating black is attached as the color of that plane. In the same manner, the binary data created by extracting the red text or line drawing part is compressed by the binary data compression method, and information indicating red is attached as the color of that plane. In addition, a background image that remains after removing the text or line drawing parts from the input image is coded by a multi-level image compression method, such as JPEG.
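The per-color plane separation described above can be sketched in a few lines. The following Python is an illustrative toy only: the function and variable names are ours, and exact color matching stands in for real text/line-drawing detection.

```python
import numpy as np

def separate_planes(image, colors):
    """Split an RGB image into per-color binary planes plus a background.

    image  : (H, W, 3) uint8 array
    colors : list of (r, g, b) tuples to extract as binary planes
    Returns (planes, background): each plane is its color tag plus a
    (H, W) bool mask; extracted pixels in the background are zeroed here
    as a placeholder (compensating them properly is the topic of this
    document).
    """
    background = image.copy()
    planes = []
    for color in colors:
        # exact color match stands in for real text/line-drawing detection
        mask = np.all(image == np.array(color, dtype=np.uint8), axis=-1)
        planes.append((color, mask))
        background[mask] = 0
    return planes, background

# Tiny usage example: a white page with one black and one red "text" pixel
img = np.full((4, 4, 3), 255, dtype=np.uint8)
img[1, 1] = (0, 0, 0)      # black text pixel
img[2, 2] = (255, 0, 0)    # red text pixel
planes, bg = separate_planes(img, [(0, 0, 0), (255, 0, 0)])
```

Each binary plane would then go to a binary coder (MMR, JBIG, etc.) with its color tag, while the background goes to a multi-level coder such as JPEG.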
As shown in
In the case when the background image is created as described above, high-frequency components are generated if the text or line drawing part is simply removed. As a result, the created image becomes unsuitable for transform coding, such as JPEG. By compensating for the extracted text or line drawing part, the compression efficiency may be improved. A compensated pixel value is arbitrary because the compensated pixel value is overwritten by the text and line drawing part at the time of decoding. Accordingly, various kinds of compensation methods have been proposed.
According to an aspect of the invention, an image processing system includes an identification unit, a region division unit and a pixel value change unit. The identification unit identifies a second pixel among an image including a first pixel and the second pixel. A pixel value of the first pixel is not to be changed. A pixel value of the second pixel is to be changed. The region division unit divides the image into regions. The pixel value change unit changes the value of the second pixel to a predetermined value through a filtering process when the second pixel identified by the identification unit is included within the regions obtained by the region division unit.
Exemplary embodiments of the invention will be described in detail below with reference to the accompanying drawings, wherein:
First, a basic method will be described to facilitate understanding of exemplary embodiments.
In this exemplary embodiment, compensation of a background image excluding text and line drawing part is efficiently performed in accordance with a subsequent-stage compression method.
This exemplary embodiment compensates sequentially without causing a pixel value gap at the time of compensation. Moreover, this exemplary embodiment relates to compensation for an extracted part of text or line drawing part, for example, in a method of performing image coding by extracting the text or line drawing part from an input image and then separating the extracted text or line drawing part into a binary image coding plane and a multilevel image coding plane.
Hereinafter, the exemplary embodiments of the invention will be described with reference to the accompanying drawings.
1. Configuration View of Modules
In general, the modules refer to components, such as software and hardware, which can be logically separated. That is, the modules in this exemplary embodiment refer to modules in the hardware configuration as well as modules in a program. Accordingly, in this exemplary embodiment, a program, an apparatus, a system, and a method will also be explained. Furthermore, modules correspond to functions in an approximately one-to-one manner. However, in actuality, one module may be realized by one program or a plurality of modules may be realized by one program. Alternatively, one module may be realized by a plurality of programs. Furthermore, a plurality of modules may be executed by a computer or one module may be executed by a plurality of computers in a distributed or parallel environment. In addition, ‘connection’ referred hereinafter includes physical connection and logical connection.
In addition, a system may be realized by connecting a plurality of computers, hardware, apparatuses, and the like to one another in a network or may be realized by a computer, hardware, an apparatus, and the like.
In this exemplary embodiment, an image input module 11, a first pixel/second pixel identification information input module 12, a region division module 13, a second pixel identification module 14, a pixel value change module 15, a compression module 16, a compression image storage module 17, an image editing module 18, and an output module 19 are provided as shown in
The image input module 11 is connected to the region division module 13 and is input with an image. Here, the image includes a binary image, a multilevel image (color image), and the like. In addition, the image may be a single image or plural images.
The first pixel/second pixel identification information input module 12 is connected to the region division module 13 and is input with information used to identify pixels within the image input by the image input module 11 as either a first pixel whose pixel value is not to be changed or a second pixel whose pixel value is to be changed. The information used to identify corresponds to the image input by the image input module 11. Specifically, the first pixel corresponds to a natural image or the like, and the second pixel corresponds to a text/figure or the like. Moreover, the information used to identify is information used to separate the image, which is input by the image input module 11, into the first and second pixels.
The region division module 13 is connected to the image input module 11, the first pixel/second pixel identification information input module 12, and the second pixel identification module 14. The region division module 13 divides the image, which is input by the image input module 11, into regions. The divided regions are transmitted to the second pixel identification module 14 together with the identification information input by the first pixel/second pixel identification information input module 12. Specifically, the image input by the image input module 11 is divided into blocks (rectangles). Hereinafter, an example of dividing an image into blocks will be mainly described.
The second pixel identification module 14 is connected to the region division module 13 and the pixel value change module 15. The second pixel identification module 14 identifies a second pixel among the image (image including a first pixel and a second pixel) divided by the region division module 13. A result of the identification is transmitted to the pixel value change module 15.
The pixel value change module 15 is connected to the second pixel identification module 14 and is also connected to the compression module 16 or the image editing module 18. In the case when the second pixel identified by the second pixel identification module 14 is included within a region divided by the region division module 13, a value of the second pixel is changed through a filtering process so as to be a predetermined value. Changing the value of the second pixel through the filtering process so as to be a predetermined value includes, for example, changing the value of the second pixel in a direction in which a value after the filtering process using a high pass filter (wide band pass filter) becomes zero, and changing the value of the second pixel in a direction in which a value of power after the filtering process using the high pass filter becomes a minimum. The direction in which the value becomes zero includes '0' itself and values approximate to '0'. The direction in which the value of power becomes a minimum includes the minimum value of the power at that time and values approximate to the minimum value. In addition, the process of changing the value of the second pixel in the direction in which the value of power becomes minimal is performed in the case when a predetermined number or more of consecutive second pixels identified by the second pixel identification module 14 are included within a region divided by the region division module 13.
Further, in the process by the pixel value change module 15, for example, pixel values are sequentially changed. Specifically, a value of a next pixel is changed on the basis of a pixel for which the changing process has been completed.
Furthermore, in the case when a negative value is included in a filter coefficient used when a value of a pixel is changed by the pixel value change module 15, the filter coefficient is changed in the direction in which the filter coefficient becomes zero. Alternatively, the filter coefficient is changed to be equal to or larger than zero.
In addition, an order of the pixel value change using the pixel value change module 15 may start from a plurality of different directions within an image. Specifically, a starting point for raster scan may be an upper left end or a lower right end, and the direction of raster scan may be a horizontal direction or a vertical direction. Details will be described later.
The image input module 11 to the pixel value change module 15 will now be described more specifically.
In this exemplary embodiment, an image is input by the image input module 11 and identification information is input by the first pixel/second pixel identification information input module 12. Hereinafter, an explanation will be made by using compensation information as a main example of the identification information.
For example, the compensation information is binary data; that is, '1' indicates the extracted text and line drawing part, and '0' indicates the other parts. The size of the binary data is the same as that of the input image. In this exemplary embodiment, a process of properly setting (that is, compensating for) a pixel value of a part in which the compensation information is '1' is performed. Thus, it is possible to stably increase a compression rate of an image including a compensated image, for example.
First, a region including compensated pixels is extracted by the region division module 13. An explanation on the region will now be made by using a pixel block (rectangle) as an example.
Then, the second pixel identification module 14 determines as to whether or not a compensated pixel exists in a pixel block. If the second pixel identification module 14 determines that a compensated pixel exists in the pixel block, the pixel value change module 15 calculates a value of the compensated pixel.
The size or location of the pixel block varies. In addition, there are various kinds of methods of calculating the compensated pixel value.
(1) In the case of performing JPEG compression, an 8×8 pixel block, which is the same size as used in JPEG, is extracted.
(1-1) Further, a compensated pixel value that causes the zero run of DCT coefficients to continue is calculated. This calculation process is an example of a process of changing a compensated pixel value in the direction in which a value after the filtering process becomes zero.
(1-2) Alternatively, a compensated pixel value at which the norm of the DCT coefficients is a minimum is calculated. This calculation process is an example of a process of changing a compensated pixel value in the direction in which a value of power after a filtering process becomes a minimum.
(2) In the case of performing JPEG2000 compression, one of the following methods is used.
(2-1) Compensation is performed for each pixel in the order of raster scan. The pixel block size is assumed to be the size of a high pass filter. A compensated pixel value that causes an output value of the high pass filter to be zero is used. This calculation process is an example of a process of changing a compensated pixel value in the direction in which a value after a filtering process becomes zero.
(2-2) A pixel block (first predetermined region) having a predetermined size is extracted with a first compensated pixel as an upper left end, and a pixel value within the pixel block is determined such that a power value of an output of a high pass filter within the block becomes a minimum. This calculation process is an example of a process of changing a compensated pixel value in the direction in which the value of power after the filtering process becomes a minimum.
(2-3) A pixel block (second predetermined region) having a predetermined size is extracted with the first compensated pixel as the upper left end. Then, when a non-compensated pixel does not exist in the block, a compensated pixel value that causes an output value of the high pass filter to be zero is calculated for each pixel. When a non-compensated pixel exists, a pixel value within the first predetermined region is determined so that the power value of an output of the high pass filter in the first predetermined region becomes a minimum.
The compression module 16 is connected to the pixel value change module 15 and the compression image storage module 17. The compression module 16 compresses an image including pixels changed by the pixel value change module 15.
The compression image storage module 17 is connected to the compression module 16 and stores an image compressed by the compression module 16.
The image editing module 18 is connected to the pixel value change module 15 and the output module 19. The image editing module 18 edits an image including pixels changed by the pixel value change module 15.
The output module 19 is connected to the image editing module 18 and outputs an image edited by the image editing module 18. Specifically, the output module 19 may be a display or a printer, for example.
2. Basic Method
An explanation will be made to facilitate understanding of the basic concept of the process performed by the pixel value change module 15. The object of the basic concept is to improve the compression rate, and there are the following three concepts.
(First Concept)
A pixel value is set such that the number of transform coefficients that are zero is made as large as possible.
(Second Concept)
A power value of a transform coefficient output value is made as small as possible.
(Third Concept)
The third concept is a combination of the first concept and the second concept.
3. Case of Applying the First Concept to JPEG
In the case of applying the first concept to JPEG, a compensated pixel value is set such that the zero run length of a DCT coefficient in the order of zigzag scan becomes large.
3.1. Case of Applying the First Concept to Two-Dimensional DCT
Here, it is assumed that an 8×8 pixel block is expressed as a 64-dimensional pixel value vector x. The DCT may then be considered as a 'transform to obtain the 64-dimensional transform coefficient vector c from the 64-dimensional pixel value vector x by using the 64×64 DCT matrix D'. Such a relationship may be expressed by the following Expression 1.
c=Dx (1)
For example, let (i, j) be the pixel location within the 8×8 block, (u, v) be a two-dimensional frequency, Svu be a transform coefficient, and sji be a pixel value. Here, it is assumed that (i, j) and (u, v) indicate (column number, row number). The 8×8 DCT transform is expressed by the following Expression 2.
Inverse transform is expressed by the following Expression 3.
Here, conditions of the following Expression 4 are applied.
Cu, Cv = 1/√2 for u, v = 0
Cu, Cv = 1 otherwise (4)
Further, assuming that components within the vector x are arranged in the order of (i0, j0), (i1, j1), . . . , (i63, j63) and that components within the vector c are arranged in the order of (u0, v0), (u1, v1), . . . , (u63, v63), a component Dts in the 't' row and 's' column of the matrix D may be obtained by Expression 5.
In the same manner, D−1ts, which is a component of matrix D−1, may be obtained by Expression 6.
Here, the order of (u0, v0), (u1, v1), . . . , (u63, v63) is a zigzag order from a low-frequency component to a high-frequency component. In addition, it is assumed that, among the 64 pixels of the image block, a pixel for which compensation is not to be performed will be referred to as an 'A' pixel and a pixel for which compensation is to be performed will be referred to as a 'B' pixel. The order of (i0, j0), (i1, j1), . . . , (i63, j63) is set such that the 'A' pixels are arranged first and the 'B' pixels are arranged last.
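The matrix D with zigzag-ordered rows can be sketched as follows. This is an illustrative reconstruction based on the standard orthonormal 8×8 DCT (the helper names are ours, and pixels are kept in raster order rather than the A-first/B-last order described above).

```python
import numpy as np

def zigzag_indices(n=8):
    # (u, v) pairs in JPEG zigzag order, low frequency first: within each
    # anti-diagonal d = u + v, direction alternates with the parity of d
    return sorted(((u, v) for u in range(n) for v in range(n)),
                  key=lambda t: (t[0] + t[1],
                                 t[0] if (t[0] + t[1]) % 2 else -t[0]))

def dct_matrix_2d(n=8):
    # D[t, s]: zigzag frequency index t, raster pixel index s = j*n + i
    def c(k):
        return 1.0 / np.sqrt(2.0) if k == 0 else 1.0
    D = np.zeros((n * n, n * n))
    for t, (u, v) in enumerate(zigzag_indices(n)):
        for s in range(n * n):
            i, j = s % n, s // n   # (column, row) of pixel s
            D[t, s] = (2.0 / n) * c(u) * c(v) \
                * np.cos((2 * i + 1) * u * np.pi / (2 * n)) \
                * np.cos((2 * j + 1) * v * np.pi / (2 * n))
    return D

D = dct_matrix_2d()
# D is orthonormal, so D @ D.T is the identity (Expression 6: D^-1 = D^T)
print(np.allclose(D @ D.T, np.eye(64)))  # True
```

Because D is orthonormal, the inverse transform of Expression 6 is simply the transpose of D.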
Hereinafter, the dimension of the vector x is assumed to be 'N' for the purpose of general description. In this case, N=64.
It is now assumed that the number of 'A' pixels is 'N−b' and the number of 'B' pixels is 'b'. Here, pixel values of the 'B' pixels, which are compensated pixels, are to be obtained. Therefore, Expression 1 is regarded as simultaneous equations having the pixel values of the 'b' B pixels as unknowns.
In this case, it is preferable to set 'N' or fewer unknowns because Expression 1 is simultaneous equations with 'N' equations. In addition, since the compression rate is generally high when DCT coefficients are zero, the last 'b' DCT coefficients in the zigzag order are assumed to be '0'.
Under the conditions described above, Expression 1 becomes the following Expression 7.
Vector xA denotes a vector indicating a known non-compensated pixel, vector xB denotes a vector indicating an unknown compensated pixel, and vector c0 denotes ‘N-b’ DCT coefficients starting from a low-frequency component in the order of zigzag scan.
Since Expression 7 is a linear system with 'N' unknowns (the 'b' components of the vector xB and the 'N−b' components of the vector c0), the unknown vector xB can be obtained.
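The solve suggested by Expression 7 can be sketched numerically: the last 'b' rows of the zigzag-ordered system give 'b' equations in the 'b' unknown B-pixel values. This is an illustrative sketch with names of our own choosing; note that the b×b subsystem can be singular for some pixel/coefficient configurations, in which case a robust implementation would fall back to least squares.

```python
import numpy as np

def dct1(n=8):
    # n-point orthonormal DCT-II matrix
    D = np.array([[np.cos((2 * i + 1) * u * np.pi / (2 * n)) for i in range(n)]
                  for u in range(n)]) * np.sqrt(2.0 / n)
    D[0] /= np.sqrt(2.0)
    return D

def zigzag(n=8):
    # (u, v) pairs from low to high frequency, JPEG zigzag order
    return sorted(((u, v) for u in range(n) for v in range(n)),
                  key=lambda t: (t[0] + t[1],
                                 t[0] if (t[0] + t[1]) % 2 else -t[0]))

n = 8
D1 = dct1(n)
# row-major flatten: vec(D1 @ X @ D1.T) == kron(D1, D1) @ vec(X);
# then reorder coefficient rows into zigzag order
Dz = np.kron(D1, D1)[[u * n + v for u, v in zigzag(n)]]

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 255.0, n * n)   # toy block, row-major index j*n + i
is_b = np.zeros(n * n, bool)
is_b[[9, 10]] = True                 # two B pixels to compensate (illustrative)
b = int(is_b.sum())

# last b rows of Expression 7: 0 = M_B xB + M_A xA, solved for xB
M = Dz[-b:]
x[is_b] = np.linalg.solve(M[:, is_b], -M[:, ~is_b] @ x[~is_b])

print(np.allclose(Dz[-b:] @ x, 0.0))  # True: last b zigzag coefficients vanish
```

After compensation, the last 'b' zigzag-ordered coefficients are zero, which extends the trailing zero run that entropy coding exploits.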
3.2. Case of Applying the First Concept to One-Dimensional DCT
The same method may be applied to one-dimensional DCT.
Here, a compensated pixel value that causes high-frequency components of DCT coefficients in the horizontal direction to be zero will be obtained.
First, an image is divided into blocks of 1 row×8 columns.
Pixel blocks generated by the division are indicated by a column vector x.
DCT coefficients of the vector x are assumed to be a column vector c.
Here, a DCT transform matrix D of 8×8 is considered. Dts, which denotes a component in the 't' row and 's' column of the matrix D, may be obtained by Expression 8.
The same process described above may be performed for the matrix D and the vector x.
4. Case of Applying the Second Concept to JPEG
Vector xB that causes the norm of the DCT coefficients to be a minimum is obtained (refer to Expression 9).
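A hedged sketch of this minimum-norm concept follows. Because an orthonormal DCT preserves the vector norm, minimizing the norm of all coefficients would merely drive the compensated pixels toward zero, so the sketch reads the norm as the AC-coefficient energy (DC excluded); this reading is our assumption, not a statement from the original text. For an orthonormal DCT, the resulting minimizer turns out to be the mean of the 'A' pixels.

```python
import numpy as np

def dct1(n=8):
    # n-point orthonormal DCT-II matrix
    D = np.array([[np.cos((2 * i + 1) * u * np.pi / (2 * n)) for i in range(n)]
                  for u in range(n)]) * np.sqrt(2.0 / n)
    D[0] /= np.sqrt(2.0)
    return D

n = 8
D2 = np.kron(dct1(n), dct1(n))   # 64x64; row 0 is the DC basis vector
x = np.arange(64, dtype=float)   # toy block
is_b = np.zeros(64, bool)
is_b[[9, 10, 20, 29]] = True     # pixels to compensate (illustrative)

# minimize || D2[1:] @ x ||^2 over the B pixels, i.e. the AC energy,
# as a linear least-squares problem in the unknown B-pixel values
M = D2[1:]
xB, *_ = np.linalg.lstsq(M[:, is_b], -M[:, ~is_b] @ x[~is_b], rcond=None)
x[is_b] = xB

# for an orthonormal DCT the minimizer is simply the mean of the A pixels
print(np.allclose(xB, x[~is_b].mean()))  # True
```

The closed form follows because the AC energy equals the total pixel energy minus the squared DC term, whose gradient vanishes when every unknown pixel equals the overall mean.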
5. Supplement
In the above, when the zero run length or norm of a DCT coefficient is measured, it may be possible to measure the zero run length or norm of a vector after quantization.
6. Case of Applying the First and Second Concepts to JPEG2000
JPEG2000 can perform lossless compression. Accordingly, it may be possible to adopt a compensation technique for improving a lossless compression rate. Hereinafter, it is assumed that a pixel for which compensation is not to be performed will be referred to as an ‘A’ pixel and a pixel for which compensation is to be performed will be referred to as a ‘B’ pixel.
Let p(x, y) be a compensated pixel value. In general, it is considered optimal to adopt a value of p(x, y) that causes a power value of high-frequency components in DWT (discrete wavelet transform) to be a minimum. For example, it is preferable to obtain p(x, y) that causes a result obtained by partially differentiating a sum of power values of high-frequency components to be zero. However, this approach is not practical because the amount of calculation becomes vast when the number of compensated pixels is large. In addition, the minimum power norm (corresponding to the second concept) is not necessarily directly linked to improvement of the lossless compression rate. Therefore, in this exemplary embodiment, the following method is used.
(1) A compensated pixel region is limited and pixels in the limited region are compensated by using the local minimum power norm. Thus, a processing load is reduced.
(2) In addition, in order to increase the lossless compression rate, a compensated pixel value that causes an output of a filtering process of a high pass filter to be zero is selected if possible.
6.1. Ideal Compensation
Before describing the exemplary embodiment in detail, ideal compensation will be described.
Hereinafter, let p(x, y) be a pixel value. Here, ‘x’ denotes an index indicating a row and ‘y’ denotes an index indicating a column. Moreover, it is assumed that a pixel for which compensation is not to be performed will be referred to as an ‘A’ pixel and a pixel for which compensation is to be performed will be referred to as a ‘B’ pixel. In addition, let UB be a set of ‘B’ pixels corresponding to the position (x, y).
An input image is divided into plural frequency regions of 1HH, 1HL, 1LH, 2HH, 2HL, 2LH, . . . , xLL by performing DWT (discrete wavelet transform).
A value of each frequency region is a filter output of an input pixel value. Accordingly, the value of each frequency region becomes a function of p(x, y). At this time, a sum of power values in respective frequency bands may be made as small as possible in order to increase the compression rate of JPEG2000. Thus, an evaluation function shown in the following (Expression 10) is defined.
Here, Ec denotes the sum of power in a frequency band c, and wc denotes a weighting coefficient in each frequency band. 'c' denotes a number attached to the respective frequency bands of 1HH, 1HL, 1LH, 2HH, 2HL, 2LH, . . . , xLL. In general, wc is high for high-frequency components. In addition, wc is higher in the case of HH than in the case of HL or LH.
In this case, p(x, y) that makes the value of Etotal (refer to Expression 10) as small as possible may be obtained. That is, the simultaneous equations shown in Expression 11 may be solved.
The equation in Expression 11 is simultaneous equations with |UB| unknowns (where, the number of ‘B’ pixels=|UB|). Therefore, when the number of ‘B’ pixels is large, it is not practical to solve Expression 11.
Furthermore, making the sum of power values small as described above may achieve optimization for lossy compression. However, it may not achieve optimization for lossless compression. In lossless compression, a case in which equal pixel values continue, even if the power value is high, may be more desirable than a case in which there are many data values with small power.
6.2. Explanation on Basic Concept
Based on the above, a method of calculating a compensated pixel using the two norms shown below (corresponding to the first and second concepts) and a hybrid method (corresponding to the third concept) will be described.
(1) Output Value Zero Norm (Corresponding to the First Concept)
In the DWT which is a transform method of JPEG2000, an input image is divided into two images using a low pass filter and high pass filter. If it is possible to make an output of the high pass filter zero, the compression rate (particularly, lossless compression rate) can be increased.
(2) Local Power Minimum Norm (Corresponding to the Second Concept)
By performing the process of minimizing a power value locally, rather than for the entire image, it is possible to reduce the processing load.
(3) Hybrid Method (Corresponding to the Third Concept)
This is a method using advantages of both the output value zero norm and the local power minimum norm.
Hereinafter, a basic technique of the proposed methods will be described.
6.3. Definition
Definitions used in the following explanation are given below.
6.3.1. Input Pixel Value
In the one-dimensional case, let p(x) be a value of an input pixel. In the two-dimensional case, let p(x, y) be a value of an input pixel.
Moreover, it is assumed that a pixel (including a pixel for which compensation has been completed) for which compensation is not to be performed will be referred to as an ‘A’ pixel and a pixel for which compensation is to be performed will be referred to as a ‘B’ pixel.
6.3.2. One-Dimensional Filter
Let ‘N’ be the tap length of a one-dimensional high pass filter and h(i) be a coefficient thereof where i=0, 1, 2, . . . , N−1.
Then, an output of a high pass filter is assumed to be Hj in which p(j) is a right-end input of the filter. That is, Hj is expressed by the following Expression 12.
In addition, a high pass filter at the location j may be expressed as Hj.
6.3.3. Two-Dimensional Filter
In JPEG2000, a one-dimensional filter is defined. A two-dimensional filter is realized by horizontally and vertically applying two one-dimensional filters. For convenience of explanation, this horizontally and vertically separated two-dimensional filter is treated as a non-separated two-dimensional filter.
Here, horizontally and vertically separated filters are applied (the filters may be the same). In addition, let one-dimensional filters be h1(i) (where, i=0, 1, 2, . . . , N1−1) and be h2(i) (where, i=0, 1, 2, . . . , N2−1). This is expressed as a vector as shown in Expression 13.
A non-separated type two-dimensional filter kernel K (matrix of N1×N2) may be calculated by Expression 14.
Hereinafter, components within the filter kernel K will be referred to as K(i, j) (where i=0, 1, 2, . . . , N1−1 and j=0, 1, 2, . . . , N2−1).
In addition, let Hij be an output of a filter in which a pixel (i, j) is a lower-right-end input. Hij may be expressed by the following Expression 15.
In addition, a filter at the position (i, j) may be expressed as Hij.
In addition, it is possible to define a two-dimensional FIR filter to which an input image is input, for 'x' equal to or larger than '1', so as to obtain the outputs of xHH, xHL, xLH, and xLL. This becomes a filter with a large tap number, generated by composing the coefficients of the first-stage filter described so far. In the case of obtaining an output at the second or a later stage, it is preferable to use this two-dimensional filter.
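The kernel construction of Expressions 13 to 15 can be sketched directly: the non-separated kernel K is the outer product of the vertical filter h1 and the horizontal filter h2, and Hij is its inner product with the window whose lower-right input is pixel (i, j). The coefficients below are an illustrative simple high-pass pair, not the JPEG2000 filters.

```python
import numpy as np

h1 = np.array([-0.5, 1.0, -0.5])   # vertical one-dimensional filter, N1 = 3
h2 = np.array([-0.5, 1.0, -0.5])   # horizontal one-dimensional filter, N2 = 3
K = np.outer(h1, h2)               # Expression 14: K(i, j) = h1(i) * h2(j)

def H(p, i, j):
    # Expression 15: filter output with p(i, j) as the lower-right-end input
    n1, n2 = K.shape
    window = p[i - n1 + 1:i + 1, j - n2 + 1:j + 1]
    return float((K * window).sum())

p = np.ones((6, 6))                # a constant (flat) image
print(H(p, 3, 3))                  # 0.0: high-pass output of a flat image is zero
```

Because both one-dimensional filters have zero sum, the kernel K also sums to zero, so any constant region produces a zero output.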
6.4. Output Value Zero Norm
A method of calculating a compensated pixel value using the output value zero norm will now be described.
In this norm, a process of determining a compensated pixel value is performed in the raster order for the purpose of making an output of a high pass filter zero.
6.4.1. Case of One Dimension
It is assumed that p(x) is a ‘B’ pixel, and p(x) will now be compensated (refer to
Here, p(x) is defined such that an output of a high pass filter in which p(x) is a right-end input becomes zero. That is, p(x) may be defined as shown in Expression 16.
p(x) may be obtained by the following Expression 17.
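The raster-order compensation of Expressions 16 and 17 can be sketched as follows. Since p(x) is the right-end input and all pixels to its left are already known, setting Hx = 0 leaves a single unknown; the 3-tap coefficients are illustrative, not the JPEG2000 filter, and h(N−1) must be nonzero for the solve.

```python
import numpy as np

h = np.array([-0.5, 1.0, -0.5])    # illustrative taps; h[-1] must be nonzero
N = len(h)

def compensate(p, is_b):
    p = p.astype(float).copy()
    for x in range(len(p)):
        # pixels too close to the left edge are left as-is in this toy
        if is_b[x] and x >= N - 1:
            # Expression 17: p(x) = -(1/h(N-1)) * sum_{i<N-1} h(i) p(x-N+1+i)
            p[x] = -(h[:-1] @ p[x - N + 1:x]) / h[-1]
    return p

p = np.array([10.0, 12.0, 0.0, 0.0, 14.0, 16.0])
is_b = np.array([0, 0, 1, 1, 0, 0], bool)
out = compensate(p, is_b)

# every high-pass output whose right-end input is a compensated pixel is zero
Hx = lambda q, x: float(h @ q[x - N + 1:x + 1])
print(Hx(out, 2), Hx(out, 3))  # 0.0 0.0
```

Note that the loop uses already-compensated values when computing the next pixel, matching the sequential raster-order processing described in the text.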
6.4.2. Case of Two Dimension
It is assumed that p(x, y) is a 'B' pixel, and p(x, y) will now be compensated. Since the compensation is performed in the order of raster scan, it may be assumed that pixels positioned on the left and upper sides of p(x, y) are all 'A' pixels. In the same manner as in the one-dimensional case, the only unknown value in Expression 18 is p(x, y), and accordingly, it is possible to calculate p(x, y).
Hxy=0 (18)
6.4.3. Case of Application to a Plurality of Frequency Bands
In DWT, an input image is finally divided into plural frequency bands. It is difficult to apply the output value zero norm to plural frequency bands. This is because it is not possible to obtain p(x, y) that causes values in all of the plural frequency bands to be zero.
Accordingly, it is reasonable to make a value in only one of the frequency bands zero. For example, the value of 1HH is made zero, paying attention only to 1HH.
6.5. Local Power Minimum Norm
A process of calculating a compensated pixel value based on the power minimum norm is locally performed to reduce an amount of operation and determine consecutive pixel values. In this norm, plural compensated pixel values are determined at the same time.
6.5.1. Case of One Dimension
First, a one-dimension case will be described.
As shown in
Further, let Vx be a set of j in which at least one input pixel of an N-tap filter Hj belongs to a region [Lx ∩ UB] (that is, a set of pixels that belong to the region Lx and that are ‘B’ pixels). If the range of the region Vx becomes a maximum, j=x to x+L+N−2. In addition, a range where input pixels of the filter Hj exist (where, jεVx) is assumed to be a range Wx. If the range of the region Wx becomes a maximum, j=x−N+1 to x+L+N−2 (refer to
Under the condition described above, pixel values of p (x) to p(x+L−1) are determined by using the power minimum norm. In this case, it is preferable to set the pixel values of p(x) to p(x+L−1) such that a sum of power values of Hj is smallest. First, a sum of squares of Hj (where, jεVx) is assumed to be Ex. For example, when p(x) to p(x+L−1) are ‘B’ pixels, Ex may be expressed by the following Expression 19.
Ex is a function of p(x) (xε[UB ∩ Wx]) (where, UB is a set of x that is a ‘B’ pixel). Therefore, in the same manner as described in 6.1, p(x) can be obtained by solving simultaneous equations shown in the following Expression 20.
Those obtained by solving the equations shown in Expression 20 are the pixel values p(x−N+1) to p(x+L+N−2) of the region Wx. However, those adopted as actual compensated pixel values are only the pixel values p(x), p(x+1), . . . , p(x+L−1) of the region Lx. This is because it is better to calculate the pixel values p(x−N+1) to p(x−1) and p(x+L) to p(x+L+N−2) of the region Wx−Lx by using 'A' pixels located outside Wx.
In the case of minimizing a power value with respect to the entire image, xεUB has been applied. However, in this algorithm, it is possible to reduce the amount of operations by applying xε[UB∩Wx]. In addition, since the pixel values can be changed in a stepwise manner in the region Lx, the pixel value gap described for the output value zero norm does not easily occur.
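The one-dimensional local power minimum norm of Expressions 19 and 20 can be sketched as a small least-squares problem: the rows are the filter outputs Hj over the affected positions Vx, and the unknowns are the 'B' pixel values. This is an illustrative toy (names are ours, taps are a simple high-pass, and all unknown pixels happen to lie inside Lx, so the keep-only-Lx step is trivial here).

```python
import numpy as np

h = np.array([-0.5, 1.0, -0.5])
N = len(h)
p = np.array([10.0, 12.0, 0.0, 0.0, 0.0, 16.0, 18.0, 20.0])
is_b = np.array([0, 0, 1, 1, 1, 0, 0, 0], bool)   # Lx = pixels 2..4

# Vx: positions j where the N-tap window of Hj touches a B pixel
Vx = [j for j in range(N - 1, len(p)) if is_b[j - N + 1:j + 1].any()]

# build the least-squares system: Hj = (row of A) @ unknowns - rhs
unk = list(np.flatnonzero(is_b))
A = np.zeros((len(Vx), len(unk)))
rhs = np.zeros(len(Vx))
for r, j in enumerate(Vx):
    for i in range(N):
        x = j - N + 1 + i
        if is_b[x]:
            A[r, unk.index(x)] += h[i]
        else:
            rhs[r] -= h[i] * p[x]     # known A-pixel contributions
sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
q = p.copy()
q[unk] = sol

# the minimized local power never exceeds that of the uncompensated block
E = sum(float(h @ q[j - N + 1:j + 1]) ** 2 for j in Vx)
print(E <= sum(float(h @ p[j - N + 1:j + 1]) ** 2 for j in Vx))  # True
```

Because the solve is restricted to the window Wx rather than the whole image, the system stays small regardless of how many 'B' pixels the image contains.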
6.5.2. Case of Two Dimension
A case of a two-dimensional filter may be considered in the same manner as described above. An explanation will be made referring to
(1) ‘B’ pixels within a two-dimensional region Lxy are compensated. The region Lxy has an arbitrary shape.
(2) A local power value of an output value when using a two-dimensional filter K is minimized.
(3) A set of positions (i, j) where at least one input pixel of a filter Hij belongs to the region Lxy and is a ‘B’ pixel is assumed to be Vxy.
(4) Under the condition of the position (i, j)εVxy where the filter Hij exists, a range where the input pixel (x, y) of the filter Hij exists is assumed to be Wxy.
The region Wxy is obtained by expanding the region Lxy to the maximum left and right by ‘N2−1’ pixels and expanding the region Lxy to the maximum up and down by ‘N1−1’ pixels.
It is desirable to obtain the pixel value p(x, y) (where (x, y)εUB) within the region Lxy that minimizes a variable Exy shown in the following Expression 21 under the condition described above.
That is, it is preferable to solve the equation shown in the following Expression 22.
In addition, after solving the equation shown in Expression 22, only p(x, y) (where (x, y)ε[UB ∩ Lxy]) is selected.
In the same manner as Expression 10, minimization shown in the following Expression 23 may be performed by using the weighting factor wc set for each frequency band. Exyc is Exy of the region Lxy belonging to the frequency band c.
6.5.3. Supplement
The definition of the set Vxy has been made as a “set of positions (i, j) where at least one input pixel of a filter Hij belongs to the region Lxy and is a ‘B’ pixel.” More simply, the set Vxy may be defined as a “set of positions (i, j) where at least one input pixel of the filter Hij belongs to the region Lxy.”
This is the same for the one-dimensional case.
The set Vx may be defined as a “set of ‘j’ positions that allow at least one input pixel of the Hj to belong to the region Lx.”
6.5.4. Case of Performing Sub-Sampling
Here, a case in which filtering is executed with sub-sampling, as in DWT, will be described.
Hereinbefore, the power value has been defined as expressed in Expression 19 or 21. However, in DWT, sub-sampling of 2:1 is performed. For this reason, more precisely, one of the equations shown in Expression 24 is used in Expression 19 indicating the power value Ex.
It is determined by the filtering phase when applying DWT whether ‘j’ will be an odd number or an even number. That is, when the sub-sampling phase is known, it is preferable to perform optimization by using any one of the equations.
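As an illustration of how the phase restricts the power sum, the following sketch (the kernel is the JPEG2000 reversible high-pass filter; the pixel values are hypothetical examples) evaluates Ex over even-indexed and odd-indexed placements separately:

```python
H = (-0.5, 1.0, -0.5)   # JPEG2000 reversible high-pass kernel, N = 3

def power_sum(p, j_positions):
    # Ex restricted to the given placements; the inputs of placement j
    # are p[j-2], p[j-1], p[j].
    return sum(sum(H[t] * p[j - 2 + t] for t in range(3)) ** 2
               for j in j_positions)

p = [10, 10, 12, 10, 10, 10]
E_even = power_sum(p, [2, 4])   # sub-sampling phase: j even
E_odd = power_sum(p, [3, 5])    # sub-sampling phase: j odd
```

The two phases give different sums here (2.0 versus 4.0), which is why the optimization should use only the equations of the known phase.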
More generally, consider Exy for the frequency bands nHH, nHL, and nLH in the two-dimensional case of performing n-stage DWT. In this case, the sub-sampling rate is 2^n:1. Accordingly, the sub-sampling phase becomes modulo 2^n. Expression 21 changes to the following Expression 25.
Here, mi and mj are sub-sampling phases (0 to 2^n−1). (25)
When performing the sub-sampling as described above, the conditions of the simultaneous equations may be insufficient, and accordingly, the solution may be indefinite. In this case, it is preferable either to copy pixel values from adjacent ‘A’ pixels until the solution is no longer indefinite, by using a result of processing in which the sub-sampling is not performed, or to reduce the number of unknowns by making values of a plurality of unknowns equal.
6.6. Hybrid Method
Next, a combination of the output value zero norm and the local power minimum norm is considered.
Compensation, which is basically in the order of raster scan and uses the output value zero norm, is performed. In this case, the range to which the output value zero norm is to be applied is limited.
First, a region Mx shown in
Then, switching is performed between the output value zero norm and the local power minimum norm as described below.
When an ‘A’ pixel does not exist within the region Mx, p(x) is calculated by using the output value zero norm.
When an ‘A’ pixel exists within the region Mx, a pixel value within the region Lx is calculated by using the local power minimum norm.
The pixel p(x) to be compensated next is the first ‘B’ pixel found by searching in raster order.
By performing the switching described above, an output value can be made zero if possible and a compensated value suitable even for the lossy coding can be calculated.
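The one-dimensional switching described above can be sketched as follows. This is an illustrative simplification: when an ‘A’ pixel is found within Mx, the fallback interpolates toward that pixel instead of running the full local power minimum solver, and the first two pixels are assumed to be ‘A’ pixels.

```python
def hybrid_fill(p, is_a, m_len=2):
    """Raster-order hybrid: output value zero norm when no 'A' pixel
    lies in the look-ahead region Mx, otherwise a simplified
    interpolation stand-in for the local power minimum norm."""
    n = len(p)
    for x in range(n):
        if is_a[x]:
            continue
        window = range(x + 1, min(n, x + 1 + m_len))   # region Mx (right side)
        if not any(is_a[i] for i in window):
            # output value zero norm for h = (-1/2, 1, -1/2):
            # -p(x-2)/2 + p(x-1) - p(x)/2 = 0
            p[x] = 2 * p[x - 1] - p[x - 2]
        else:
            # stand-in for the local power minimum norm
            right = next(i for i in window if is_a[i])
            p[x] = (p[x - 1] + p[right]) / 2
        is_a[x] = True   # a compensated pixel is treated as known afterwards
    return p
```

With p = [10, 12, ?, ?, ?, 8], the first gap pixel sees no ‘A’ pixel in Mx and is extrapolated; the later gap pixels see the right ‘A’ pixel and are interpolated.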
In the above, the hybrid method in the one-dimensional case has been described. However, this may be applied to a two-dimensional case in the same manner. Hereinafter, a typical algorithm will be described.
First, a region Mxy is defined at the right and lower sides of p(x, y).
(1) The next pixel to be compensated (‘B’ pixel) is searched for in raster order.
(2) The first ‘B’ pixel found is set as p(x, y).
(3) Region Mxy is evaluated.
(3-1) When an ‘A’ pixel does not exist within the region Mxy, a compensated value of p(x, y) is calculated on the basis of the output value zero norm.
(3-2) When the ‘A’ pixel exists within the region Mxy, compensation of the region Lxy including p(x, y) is performed by using the local power minimum norm.
(4) The process returns to (1).
7. Limitation of Filter Coefficient
In the filtering methods (output value zero norm and local power minimum norm) described above, a filter coefficient may be a negative value. In the case when the filter coefficient is a negative value, an output value may not be stable. For this reason, when the filter coefficient is a negative value, it is possible to make the filter coefficient zero.
In the case of making a part of the filter coefficients zero, it is preferable to normalize all of the filter coefficients such that a sum of the filter coefficients is 1.
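The limitation can be sketched in two steps: clamp negative taps to zero, then renormalize so the taps sum to 1.

```python
def limit_coeffs(h):
    """Clamp negative filter taps to zero, then normalize to sum 1."""
    clipped = [max(c, 0.0) for c in h]
    s = sum(clipped)
    return [c / s for c in clipped]
```

For the high-pass kernel (−1/2, 1, −1/2) this yields (0, 1, 0); an all-positive kernel such as (1/4, 2/4, 1/4) is unchanged.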
8. Specific Example of Compensation Method
A compensation algorithm in the case of JPEG2000 will be described in detail on the basis of that described above.
8.1. Examples of Region
Examples of regions (region Mxy, region Lxy, region Vxy, and Wxy) are shown below.
8.1.1. Region Mxy
The region Mxy is defined as shown in
8.1.2. Region Lxy
The region Lxy is defined as shown in
8.1.3. Region Vxy
The lower right position of the filter kernel K is assumed to be (i, j). The range of (i, j) where at least one input pixel of the filter kernel K belongs to the ‘B’ pixel region Lxy is assumed to be the region Vxy.
A maximum range of the region Vxy is shown in
8.1.4. Region Wxy
An existing range of input pixels of the filter Hij (where, (i, j)εVxy) is assumed to be the range Wxy.
A maximum range of the region Wxy is shown in
8.2. End Point Processing Method
At an end of an image, the filter input position may exist outside the image. In that case, end point processing, such as JPEG2000, is performed. That is, a value of a pixel outside the image is obtained by converting a pixel inside the image using a mirror image.
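The mirror-image extension can be sketched as an index map; the whole-sample symmetric reflection below is one common reading of such end point processing (an assumption of this sketch, not a quotation of the standard's pseudocode).

```python
def mirror(i, n):
    """Reflect an out-of-range index i into [0, n) by whole-sample
    symmetric extension about the image boundaries."""
    period = 2 * (n - 1)
    i = abs(i) % period
    return period - i if i >= n else i
```

For n = 5 the extended index sequence reads . . . , 2, 1, 0, 1, 2, 3, 4, 3, 2, . . . , so a filter input just outside the image reuses a pixel just inside it.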
Referring to
Here, in the case of the output value zero norm, all left and upper pixels are assumed to be ‘A’ pixels. For this reason, pixels outside the image must also be set to behave as ‘A’ pixels. Therefore, one of the following methods is adopted.
(1) In the case when a ‘B’ pixel exists within input pixels of Hxy as a result of the end point processing even if an ‘A’ pixel does not exist within the region Mxy, the compensated pixel value is calculated by using the local power minimum norm. In this case, the region Lxy is assumed to be a region for only a pixel of p(x, y).
(2) In the case when all pixels used for the operation are ‘B’ pixels, the value of p(x, y) becomes indefinite. In this case, one of the following values is adopted as p(x, y).
(2-1) Pixel value of ‘A’ pixel found first when input image is raster scanned
(2-2) Fixed value (for example, 0 or 128)
(2-3) ‘B’ pixel is maintained. Compensation is performed in another scan order after completing the scan for one screen of the image (refer to ‘use of a plurality of scan directions’ described in 8.3).
8.3. Use of a Plurality of Scan Directions
As shown in
That is, referring to direction 1 shown in
In addition, referring to direction 2 shown in
In addition, referring to direction 3 shown in
In addition, referring to direction 4 shown in
(1) Method 1 (method of compensating pixels, which cannot be compensated in the direction 1, in another direction)
(1-1) Compensation is performed in the scan direction 1.
(1-2) Image in which compensation has been completed in the direction 1 is compensated in the direction 2.
(1-3) Image in which compensation has been completed in the direction 2 is compensated in the direction 3.
(1-4) Image in which compensation has been completed in the direction 3 is compensated in the direction 4.
(2) Method 2 (method of compensating pixels in a plurality of kinds of scan directions and then finally acquiring a mean value)
(2-1) Compensation is performed in the scan directions 1 to 4.
(2-2) Final output value is created by using a mean value of the compensation results in the directions in which compensation can be performed.
8.4. Other Examples
Hereinbefore, the method of compensating pixels by performing a raster scan from upper left to lower right has been described. Next, a method that considers a plurality of raster scan directions is considered.
As shown in
‘S’ directions are selected from these eight raster scan directions, and a compensated pixel value ps(x, y) is calculated for each (where the number ‘s’ (s=1, 2, . . . , S) is assigned to each raster scan direction). It is preferable to set the mean value of ps(x, y) as the final compensated pixel value. That is, the mean value is calculated by the following Expression 26.
In addition, the number of ways of selecting the ‘S’ directions is 2^8−1.
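The counting and the averaging of Expression 26 can be illustrated directly; the mean here is the plain arithmetic mean over the S chosen scans.

```python
directions = 8
subsets = 2 ** directions - 1   # number of non-empty choices of scan directions

def final_value(ps):
    """Mean of the compensated values ps(x, y) from the S chosen scans."""
    return sum(ps) / len(ps)
```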
9. Specific Numeric Example (in the Case of JPEG2000)
Further, a calculation example when performing compensation in the one-dimensional case is shown by using a high pass (reversible) filter of DWT as an example.
In the high pass (reversible) filter of DWT of JPEG2000, h=(−1/2, 1, −1/2), N=3.
Here, it is assumed that Mx region=p(x) to p(x+2) and Lx region=p(x) to p(x+1) in the case when p(x) is a compensated pixel.
Hereinafter, it is assumed that a pixel denoted by φ is a ‘don't care’ pixel, which may be either an ‘A’ pixel or a ‘B’ pixel.
9.1. Case 1: Typical Processing
First, a case in which all pixels located at the left side of p(x) are ‘A’ pixels will be considered.
9.1.1. Case 1-1
The output value zero norm is used.
A case of the pixel arrangement shown in
From the following Expression 27, Expression 28 can be obtained. As a result, a value of p(x) can be obtained.
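Expressions 27 and 28 are not reproduced in this text; under a plausible reading, the zero norm for h = (−1/2, 1, −1/2) forces the filter output whose last input is p(x) to zero, which is a linear extrapolation from the two left ‘A’ pixels:

```python
# Hypothetical numeric check: forcing -p(x-2)/2 + p(x-1) - p(x)/2 = 0
# gives linear extrapolation from the two left neighbours.
def zero_norm(p_xm2, p_xm1):
    return 2 * p_xm1 - p_xm2

p_x = zero_norm(10, 12)            # extrapolated value of p(x)
out = -0.5 * 10 + 12 - 0.5 * p_x   # the high-pass output, forced to zero
```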
9.1.2. Case 1-2
Hereinafter, the local power minimum norm will be used up to case 1-5.
A case of the pixel arrangement shown in
Referring to
‘E’ is expressed by the following Expression 30.
A condition for minimizing ‘E’ is expressed by the following Expression 31.
When Expression 31 is solved, the following Expression 32 is obtained. Thus, values of p(x) and p(x+1) can be obtained.
9.1.3. Case 1-3
A case of the pixel arrangement shown in
‘E’ is expressed by the following Expression 33 when using Hi in Expression 29.
In this case, there are three unknown values: p(x), p(x+1), and p(x+3). The minimization of ‘E’ is expressed by the following Expression 34.
In addition, since the range of Lx is p(x) and p(x+1), only the following Expression 35 is adopted.
9.1.4. Case 1-4
A case of the pixel arrangement shown in
Referring to
‘E’ is expressed by the following Expression 37.
p(x) that minimizes ‘E’ is obtained in the following Expression 38. Thus, a value of p(x) can be obtained.
9.1.5. Case 1-5
A case of the pixel arrangement shown in
‘E’ is expressed by the following Expression 39 when using Hi in Expression 36.
The minimization of ‘E’ is expressed by the following Expression 40.
Since Lx=p(x) to p(x+1), only the following Expression 41 is adopted.
9.2. Case 2: Exceptional Processing 1 for End Point
A case when p(x−1) is an end point of an image is considered.
As for pixels located beyond the left end of the image, pixel values are obtained by mirror-image copying, and the power minimum norm is then applied.
9.2.1. Case 2-1
A case of the pixel arrangement shown in
Referring to
In this case, ‘E’ can be expressed by the following Expression 43.
Lx=p(x). Accordingly, a condition for minimizing ‘E’ is expressed by the following Expression 44.
p(x)=p(x−1) (44)
A case of the pixel arrangement shown in
Referring to
In this case, ‘E’ can be expressed by the following Expression 46.
Accordingly, the condition for minimizing ‘E’ is expressed by the following Expression 47.
A case of the pixel arrangement shown in
Expression of ‘E’ is the same as described above. The condition for minimizing ‘E’ is expressed by the following Expression 48.
A case of the pixel arrangement shown in
Referring to
‘E’ can be expressed by the following Expression 50.
p(x) that minimizes ‘E’ is obtained in the following Expression 51.
A case of the pixel arrangement shown in
Expression of ‘E’ is the same as Expression 50. p(x) that minimizes ‘E’ is expressed by the following Expression 52.
9.3. Case 3: Exceptional Processing 2 for End Point
A case when p(x) is an end point of an image is considered.
9.3.1. Case 3-1
A case of the pixel arrangement shown in
In this case, a pixel value cannot be specified. Ideally, the method described in ‘8.3 method 1’ should be adopted. However, since the processing load is large if that method is adopted, it is herein assumed that p(x)=0.
A case of the pixel arrangement shown in
Referring to
In this case, ‘E’ can be expressed by the following Expression 54.
The condition for minimizing ‘E’ is expressed by the following Expression 55.
A case of the pixel arrangement shown in
Expression of ‘E’ is the same as described above. In this case, since a condition of E=0 exists, the following Expression 56 is obtained.
p(x)=p(x+1)=p(x+2) (56)
A case of the pixel arrangement shown in
Referring to
In this case, ‘E’ can be expressed in the following Expression 58.
p(x) that minimizes ‘E’ can be expressed by the following Expression 59.
A case of the pixel arrangement shown in
Expression of ‘E’ is the same as described above. In this case, since a condition of E=0 exists, the following Expression 60 is obtained.
p(x)=p(x+1) (60)
9.4. Case of Considering Sub-Sampling
Next, a case of considering a sub-sampling phase will be described.
9.4.1. Case 1-2-1
A case of the pixel arrangement shown in
In the case when the right end of the filter is indicated by the arrow shown in
E = H1² + H3² (61)
By using Expression 61, the following Expression 62 is obtained.
9.4.2. Case 1-2-2
A case of the pixel arrangement shown in
In the case when the right end of the filter is indicated by the arrow shown in
E = H0² + H2² (63)
By using Expression 63, the following Expression 64 is obtained.
9.4.3. Case 1-3-1
A case of the pixel arrangement shown in
E = H1² + H3² (65)
In this case, the solution is indefinite. Accordingly, in this case, the following Expression 66 is obtained assuming that p(x+1)=p(x+2).
Furthermore, in the case of assuming that p(x)=p(x+1), the following Expression 67 is obtained.
9.4.4. Case 1-3-2
A case of the pixel arrangement shown in
In the case when the right end of the filter is indicated by the arrow shown in
E = H0² + H2² (68)
In this case, the following Expression 69 is obtained.
9.4.5. Case 1-4-1
A case of the pixel arrangement shown in
In the case when the right end of the filter is indicated by the arrow shown in
Accordingly, p(x) is obtained by using the following Expression 71.
9.4.6. Case 1-4-2
A case of the pixel arrangement shown in
In the case when the right end of the filter is indicated by the arrow shown in
E = H0² + H2² (72)
p(x) that minimizes ‘E’ is expressed in the following Expression 73.
9.4.7. Case 1-5-1
A case of the pixel arrangement shown in
In the case when the right end of the filter is indicated by the arrow shown in
9.4.8. Case 1-5-2
A case of the pixel arrangement shown in
In the case when the right end of the filter is indicated by the arrow shown in
E = H0² + H2² (75)
In this case, since a solution that allows E=0 exists, the following Expression 76 is obtained.
9.4.9. Case 2-2-1
A case of the pixel arrangement shown in
In the case when the right end of the filter is indicated by the arrow shown in
E = H1² + H3² (77)
In this case, p(x) and p(x+1) are expressed in the following Expression 78.
9.4.10. Case 2-2-2
A case of the pixel arrangement shown in
In the case when the right end of the filter is indicated by the arrow shown in
E = H0² + H2² (79)
In this case, p(x) and p(x+1) are expressed in the following Expression 80.
9.4.11. Case 2-3-1
A case of the pixel arrangement shown in
The solution is the same as ‘9.4.3. Case 1-3-1’.
9.4.12. Case 2-3-2
A case of the pixel arrangement shown in
In the case when the right end of the filter is indicated by the arrow shown in
E = H0² + H2² (81)
In this case, since a solution that allows E=0 exists, the following Expression 82 is obtained.
9.4.13. Case 2-4-1
A case of the pixel arrangement shown in
The solution is the same as ‘9.4.5. Case 1-4-1’.
9.4.14. Case 2-4-2
A case of the pixel arrangement shown in
In the case when the right end of the filter is indicated by the arrow shown in
E = H0² + H2² (83)
In this case, p(x) that minimizes ‘E’ is expressed in the following Expression 84.
9.4.15. Case 2-5-1
A case of the pixel arrangement shown in
The solution is the same as ‘9.4.5. Case 1-4-1’.
9.4.16. Case 2-5-2
A case of the pixel arrangement shown in
In the case when the right end of the filter is indicated by the arrow shown in
E = H0² + H2² (85)
In this case, p(x) that minimizes ‘E’ is expressed in the following Expression 86.
p(x)=p(x−1) (86)
9.4.17. Case 3-2-1
A case of the pixel arrangement shown in
In the case when the right end of the filter is indicated by the arrow shown in
E = H1² + H3² (87)
In this case, since a solution that allows E=0 exists, the following Expression 88 is obtained.
9.4.18. Case 3-2-2
A case of the pixel arrangement shown in
In the case when the right end of the filter is indicated by the arrow shown in
E = H0² + H2² (89)
In this case, the solution is indefinite. The solution that gives E=0 under the assumption p(x)=p(x+1) is adopted. That is, the following Expression 90 is obtained.
9.4.19. Case 3-3-1
A case of the pixel arrangement shown in
In the case when the right end of the filter is indicated by the arrow shown in
E = H1² + H3² (91)
In this case, the solution is indefinite. The solution that gives E=0 under the assumption p(x)=p(x+3) is adopted. That is, the following Expression 92 is obtained.
9.4.20. Case 3-3-2
A case of the pixel arrangement shown in
The solution is the same as ‘9.4.18 Case 3-2-2’.
9.4.21. Case 3-4-1
A case of the pixel arrangement shown in
In this case, preferably, the following Expression 93 is obtained.
p(x)=p(x+1) (93)
9.4.22. Case 3-4-2
A case of the pixel arrangement shown in
In this case, preferably, the following Expression 94 is obtained.
p(x)=2p(x+1)−p(x+2) (94)
9.4.23. Case 3-5-1
A case of the pixel arrangement shown in
In this case, preferably, the following Expression 95 is obtained.
p(x)=p(x+1) (95)
9.4.24. Case 3-5-2
A case of the pixel arrangement shown in
In this case, since the solution is indefinite, p(x)=p(x+2). Accordingly, preferably, the following Expression 96 is obtained.
p(x)=p(x+1) (96)
10. Test for Effect Confirmation
A test for confirming the effect of the compensation method was performed.
10.1. Details of Test
As pseudo text, a vertically striped pattern was used: two consecutive ‘A’ pixels followed by N consecutive ‘B’ pixels, repeated across the line. Values of 1 to 5 were used for N.
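The stripe pattern can be generated as follows (a sketch; ‘A’ marks pixels whose values are kept and ‘B’ marks pixels to be compensated):

```python
def stripe_mask(width, n_b):
    """Two 'A' pixels followed by n_b 'B' pixels, repeated."""
    mask = []
    while len(mask) < width:
        mask += ['A', 'A'] + ['B'] * n_b
    return mask[:width]
```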
The above-described image (N=1 to 5) is compensated by using the above cases 1-1, 1-2, and 1-4. Switching among the cases is performed by using the hybrid method described in 6.6.
Here, all filter coefficients are limited to positive values by using the method described in ‘7. Limitation of filter coefficient’.
Comparative methods are as follows.
First comparative example: compensation is performed by using an ‘A’ pixel value closest in the horizontal direction.
Second comparative example: compensation is performed by using a mean value of an ‘A’ pixel value closest to the left and an ‘A’ pixel value closest to the right.
Third comparative example: in addition to the second comparative example, a low pass filter is further provided. The filter kernel is assumed to be (¼, 2/4, and ¼).
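The second and third comparative methods can be sketched on a one-dimensional line as follows (keeping the border values unchanged in comp3 is an assumption of this sketch):

```python
def comp2(p, mask):
    """Second comparative example: each 'B' pixel becomes the mean of
    the nearest left and right 'A' values."""
    out = list(p)
    for i, m in enumerate(mask):
        if m == 'B':
            left = next(p[j] for j in range(i - 1, -1, -1) if mask[j] == 'A')
            right = next(p[j] for j in range(i + 1, len(p)) if mask[j] == 'A')
            out[i] = (left + right) / 2
    return out

def comp3(p, mask):
    """Third comparative example: comp2 followed by the (1/4, 2/4, 1/4)
    low pass filter."""
    q = comp2(p, mask)
    mid = [(q[i - 1] + 2 * q[i] + q[i + 1]) / 4 for i in range(1, len(q) - 1)]
    return [q[0]] + mid + [q[-1]]
```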
For this exemplary embodiment and the first to third comparative examples, the following lossless coding test and lossy coding test are performed.
(1) Lossless Coding Test
An amount of lossless codes is examined.
(2) Lossy Coding Test
The relationship between an amount of codes and PSNR (peak signal-to-noise ratio) [dB] is examined. Here, PSNR is calculated for only an effective pixel region (‘A’ pixel region).
PSNR is represented as a difference between the PSNR of a compensated image and that of the original image. That is, the PSNR when compressing an image compensated by method x is denoted PSNRx, and the PSNR when compressing the original image is denoted PSNRorg. The PSNR values shown below are PSNRx−PSNRorg [dB].
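The metric can be sketched as follows; the mask restricts the error to the effective ‘A’ pixel region, and a peak value of 255 is assumed.

```python
import math

def psnr(orig, recon, mask, peak=255.0):
    """PSNR over the effective ('A' pixel) region only."""
    errs = [(o - r) ** 2 for o, r, m in zip(orig, recon, mask) if m == 'A']
    mse = sum(errs) / len(errs)
    return 10 * math.log10(peak * peak / mse)
```

The reported figure is then the difference psnr(compensated image after compression) minus psnr(original image after compression).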
JPEG2000 is used as a compression method.
10.2. Test Result
10.2.1 Lossless Coding Test
A bit rate (Bit rate [bit/pixel]) when performing lossless coding is shown in
As shown in
10.2.2. Lossy Coding Test
Results of lossy coding tests are shown in
It can be seen that the compression performance of this exemplary embodiment is stable and satisfactory. The only case in which it falls below the third comparative example is N=3, and even in that case the difference is small.
Furthermore, it is noticeable that one-pass sequential processing is sufficient in this exemplary embodiment. In the first to third comparative examples, compensated pixel values cannot be calculated when the position and value of the right ‘A’ pixel are unknown; that is, two-pass processing is unavoidable. In contrast, in this exemplary embodiment, when the right ‘A’ pixel is far away, the compensated pixel value can be calculated even if its position or value is unknown. Accordingly, since one-pass processing is possible, this exemplary embodiment is advantageous in terms of processing time and memory consumption.
Hereinafter, an example of the hardware configuration of an image processing system according to this exemplary embodiment will be described with reference to
The CPU (central processing unit) 401 is a control unit that executes processing according to a computer program that describes the sequences executed by the various modules described in this exemplary embodiment, that is, the region division module 13, the second pixel identification module 14, the pixel value change module 15, the compression module 16, and the image editing module 18.
A ROM (read only memory) 402 stores programs or operation parameters used by the CPU 401. A RAM (random access memory) 403 stores programs used in execution of the CPU 401 or parameters that properly vary in the execution. Those described above are connected to each other by a host bus 404, such as a CPU bus.
The host bus 404 is connected to an external bus 406, such as a PCI (peripheral component interconnect/interface) through a bridge 405.
A keyboard 408 and a pointing device 409, such as a mouse, are input devices operated by an operator. A display 410 includes a liquid crystal display device or a CRT (cathode ray tube) and displays various kinds of information as text or image information.
A HDD (hard disk drive) 411 contains a hard disk, drives it, and records or reproduces information or programs executed by the CPU 401. The hard disk stores input images and compressed images. In addition, various kinds of computer programs, such as various data processing programs, are stored in the hard disk.
A drive 412 reads out data or programs recorded on an installed magnetic disk, optical disk, or magneto-optic disk, or on a removable recording medium 413 such as a semiconductor memory, and supplies the data or programs to the RAM 403 through an interface 407, the external bus 406, the bridge 405, and the host bus 404. The removable recording medium 413 may be used as a data recording region in the same manner as the hard disk.
A connection port 414 serves as a port for connection with externally connected equipment 415 and has connection parts, such as USB and IEEE1394. The connection port 414 is connected to the CPU 401 through the interface 407, the external bus 406, the bridge 405, and the host bus 404. A communication unit 416 is connected to a network and executes data communication processing with external apparatuses. The data reading unit 417 is, for example, a scanner and executes a process of reading a document. The data output unit 418 is, for example, a printer and executes a process of outputting document data.
The hardware configuration shown in
Further, it is possible to find the following exemplary embodiments from the above description.
[A] An image processing system includes an identification unit, a region division unit and a pixel value change unit. The identification unit identifies a second pixel among an image including a first pixel and the second pixel. A pixel value of the first pixel is not to be changed. A pixel value of the second pixel is to be changed. The region division unit divides the image into regions. The pixel value change unit changes the value of the second pixel to a predetermined value through a filtering process when the second pixel identified by the identification unit is included within the regions obtained by the region division unit. If a negative value is included in a filter coefficient and the pixel value change unit is to change the pixel value, the filter coefficient is changed in a direction in which the filter coefficient becomes zero and the filter coefficients are normalized so that a sum of the filter coefficients becomes 1.
[B] An image processing system includes an identification unit, a region division unit and a pixel value change unit. The identification unit identifies a second pixel among an image including a first pixel and the second pixel. A pixel value of the first pixel is not to be changed. A pixel value of the second pixel is to be changed. The region division unit divides the image into regions. The pixel value change unit changes the value of the second pixel to a predetermined value through a filtering process when the second pixel identified by the identification unit is included within the regions obtained by the region division unit. An order in which the pixel value change unit changes pixel values is set from a plurality of different directions within the image and the value of the second pixel is changed to a mean value thereof.
[C] An image processing system includes an identification unit, a region division unit and a pixel value change unit. The identification unit identifies a second pixel among an image including a first pixel and the second pixel. A pixel value of the first pixel is not to be changed. A pixel value of the second pixel is to be changed. The region division unit divides the image into regions. The pixel value change unit changes the value of the second pixel to a predetermined value through a filtering process when the second pixel identified by the identification unit is included within the regions obtained by the region division unit. The regions obtained by division of the region division unit are rectangular. The filtering process is discrete cosine transform.
[D] In the image processing system of [C], a pixel value of an image located at an invalid position is determined so that a zero run length within an image part after the discrete cosine transform becomes large.
[E] In the image processing system of [C], a pixel value of an image located at an invalid position is determined so that norm of an image part after the discrete cosine transform becomes small.
[F] In the image processing system of [C], the discrete cosine transform is performed for a two-dimensional object.
[G] In the image processing system of [C], the discrete cosine transform is performed for a one-dimensional object.
[H] In the image processing system of [C], the discrete cosine transform is performed under a size (8×8) used in JPEG.
[I] In the image processing system of [C], quantization is performed after the discrete cosine transform.
[J] An image processing system includes an identification unit, a region division unit and a pixel value change unit. The identification unit identifies a second pixel among an image including a first pixel and the second pixel. A pixel value of the first pixel is not to be changed. A pixel value of the second pixel is to be changed. The region division unit divides the image into regions. The pixel value change unit changes a value of the second pixel in a direction in which the value after a filtering process becomes zero when the second pixel identified by the identification unit is included within the regions obtained by the region division unit. Each of the regions obtained by the region division unit is a line including only one second pixel. A length of each region is a size of a high pass filter in JPEG2000.
[K] An image processing system includes an identification unit, a region division unit and a pixel value change unit. The identification unit identifies a second pixel among an image including a first pixel and the second pixel. A pixel value of the first pixel is not to be changed. A pixel value of the second pixel is to be changed. The region division unit divides the image into regions. The pixel value change unit changes a value of the second pixel in a direction in which a value of power after a filtering process becomes minimum when the second pixel identified by the identification unit is included within the regions obtained by the region division unit. The regions obtained by the region division unit are lines having predetermined length.
[L] An image processing system includes an identification unit, a region division unit, a first pixel value change unit and a second pixel value change unit. The identification unit identifies a second pixel among an image including a first pixel and the second pixel. A pixel value of the first pixel is not to be changed. A pixel value of the second pixel is to be changed. The region division unit divides the image into regions. The first pixel value change unit changes a value of the second pixel in a direction in which the value after a filtering process becomes zero when the second pixel identified by the identification unit is included within the regions obtained by the region division unit. The second pixel value change unit changes a value of the second pixel in a direction in which a value of power after the filtering process becomes minimum when predetermined number or more of consecutive second pixels identified by the identification unit are included within the regions obtained by the region division unit. When the predetermined number or more of consecutive second pixels exist, processing by the second pixel value change unit is performed. In other cases, processing by the first pixel value change unit is performed.
[M] In the image processing system of [J], [K], or [L], filter coefficients are limited to positive values.
[N] In the image processing system of [J], [K], or [L], the processing order of the pixel value change unit is a plurality of raster scans.
Thus, it is possible to cope with a case in which compensating cannot be performed by one raster scan.
[O] In the image processing system of [J], [K], or [L], the processing order of the pixel value change unit is a plurality of raster scans and a mean value thereof is an output value.
[P] The image processing system described in [J], [K], or [L], where the regions obtained by the region division unit are two-dimensional regions and the high pass filter is a two-dimensional filter.
In addition, the program described above may be stored in a recording medium or provided by a communication unit. In this case, for example, the program described above may be understood as the invention of a ‘computer-readable recording medium recorded with a program’.
The ‘computer-readable recording medium recorded with a program’ refers to a recording medium that is readable by a computer and on which a program is recorded, used for installation, execution, and distribution of the program.
For example, the recording medium includes, as a digital versatile disk (DVD), ‘DVD-R, DVD-RW, and DVD-RAM’, which are specifications decided by the DVD Forum, and ‘DVD+R and DVD+RW’, which are specifications decided by the DVD+RW Alliance. In addition, the recording medium includes, as a compact disc (CD), a CD read only memory (CD-ROM), a CD recordable (CD-R), and a CD rewritable (CD-RW). In addition, the recording medium includes a magneto-optic disk (MO), a flexible disk (FD), a magnetic tape, a hard disk, a read only memory (ROM), an electrically erasable programmable read only memory (EEPROM), a flash memory, and a random access memory (RAM).
Furthermore, the program, or a part of the program, may be recorded in the recording medium so as to be stored or distributed. Furthermore, the program may be transmitted through communication. For example, a local area network (LAN), a metropolitan area network (MAN), a wide area network (WAN), a wireline or wireless network used for the Internet, an intranet, or an extranet, or a combination thereof may be used as the transmission medium, or the program may be transmitted on carrier waves.
In addition, the program may be a part of another program or may be recorded in a recording medium together with a separate program.
Number | Date | Country | Kind |
---|---|---|---|
2006-227554 | Aug 2006 | JP | national |
Number | Name | Date | Kind |
---|---|---|---|
5345317 | Katsuno et al. | Sep 1994 | A |
5657085 | Katto | Aug 1997 | A |
5701367 | Koshi et al. | Dec 1997 | A |
5818970 | Ishikawa et al. | Oct 1998 | A |
6256417 | Takahashi et al. | Jul 2001 | B1 |
6724945 | Yen et al. | Apr 2004 | B1 |
20020037100 | Toda et al. | Mar 2002 | A1 |
20030123740 | Mukherjee | Jul 2003 | A1 |
20040165782 | Misawa | Aug 2004 | A1 |
20040264793 | Okubo | Dec 2004 | A1 |
20050157945 | Namizuka et al. | Jul 2005 | A1 |
20060088222 | Han et al. | Apr 2006 | A1 |
Number | Date | Country |
---|---|---|
A 7-135569 | May 1995 | JP |
B2 2611012 | Feb 1997 | JP |
A 9-084003 | Mar 1997 | JP |
B2 2720926 | Nov 1997 | JP |
A 11-32206 | Feb 1999 | JP |
B2 2910000 | Apr 1999 | JP |
B2 3083336 | Jun 2000 | JP |
B2 3122481 | Oct 2000 | JP |
B2 3231800 | Sep 2001 | JP |
A 2002-77633 | Mar 2002 | JP |
A 2002-262114 | Sep 2002 | JP |
A 2002-369011 | Dec 2002 | JP |
A 2003-18412 | Jan 2003 | JP |
A 2003-18413 | Jan 2003 | JP |
A 2003-115031 | Apr 2003 | JP |
A 2003-169221 | Jun 2003 | JP |
A 2003-219187 | Jul 2003 | JP |
A 2003-244447 | Aug 2003 | JP |
A 2004-260327 | Sep 2004 | JP |
A 2005-20227 | Jan 2005 | JP |
A 2006-19957 | Jan 2006 | JP |
A 2006-093880 | Apr 2006 | JP |
Number | Date | Country |
---|---|---|
20080050028 A1 | Feb 2008 | US |