METHOD AND APPARATUS FOR REMOVING FALSE CONTOURS

Information

  • Patent Application
  • Publication Number
    20070286515
  • Date Filed
    May 10, 2007
  • Date Published
    December 13, 2007
Abstract
A method and apparatus for removing false contours while preserving edges. In the method, a false contour area is detected from an input image, false contour direction information and false contour location information of the false contour area are generated, the false contour area is expanded, and a false contour is removed from the expanded false contour area.
Description

BRIEF DESCRIPTION OF THE DRAWINGS

The above and other features and advantages of the present invention will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings in which:



FIG. 1 is a block diagram for explaining a conventional method of removing false contours;



FIG. 2 is a block diagram of an apparatus for removing false contours according to an exemplary embodiment of the present invention;



FIG. 3 is a diagram for explaining structural elements according to an exemplary embodiment of the present invention;



FIG. 4 is a flowchart illustrating a method of removing false contours while preserving edges according to an exemplary embodiment of the present invention;



FIG. 5 is a diagram for explaining the operation of a false contour detection unit illustrated in FIG. 2;



FIG. 6 is a block diagram of a false contour removal unit illustrated in FIG. 2;



FIG. 7 is a diagram of a first order weight function for explaining the determination of a weight according to an exemplary embodiment of the present invention;



FIG. 8 is a block diagram for explaining the operation of a neural network learning unit according to an exemplary embodiment of the present invention;



FIG. 9 is a block diagram of an apparatus for removing false contours using neural networks according to an exemplary embodiment of the present invention;



FIG. 10 is a diagram for explaining a method of expanding a false contour filtering area according to an exemplary embodiment of the present invention;



FIG. 11 is a diagram for explaining an example of the method illustrated in FIG. 10; and



FIG. 12 is a flowchart illustrating a method of removing false contours using neural networks according to an exemplary embodiment of the present invention.





DETAILED DESCRIPTION OF THE INVENTION

The present invention will now be described more fully with reference to the accompanying drawings in which exemplary embodiments of the invention are shown.



FIG. 2 is a block diagram of an apparatus for removing false contours according to an exemplary embodiment of the present invention. Referring to FIG. 2, the apparatus includes a false contour detection unit 210, a false contour area expansion unit 220, and a false contour removal unit 230.


The false contour detection unit 210 includes a contour detector 212 and a false contour separator 214.


The contour detector 212 removes flat areas from an input image using a difference between the input image and an image obtained by reducing the number of bits of the input image, and detects a contour area in the input image. The contour area comprises not only a false contour area but also an edge area.


The false contour separator 214 separates a false contour area and an edge area from the contour area obtained by the contour detector 212, and generates information (hereinafter referred to as false contour direction information) indicating the direction of the false contour area and information (hereinafter referred to as false contour location information) indicating the location of the false contour area.


The operation of the false contour detection unit 210 will be described later in further detail with reference to FIG. 5.


The false contour area expansion unit 220 includes a structural element generator 224 and a calculator 226.


The structural element generator 224 generates a structural element that is needed to expand a false contour area. FIG. 3 is a diagram for illustrating structural elements according to an exemplary embodiment of the present invention. Referring to FIG. 3, the structural element generator 224 can generate various shapes of structural elements such as a circular structural element 302, oval structural elements 304 and 306, a square structural element 308, and rectangular structural elements 310 and 312.


The calculator 226 expands a false contour area, according to the size and shape of the structural element generated by the structural element generator 224, by performing a binary morphology dilation operation. If the structural element generated by the structural element generator 224 is circular, a false contour area is expanded to be as large as a circular mask by performing a binary morphology dilation operation.


Structural elements and a binary morphology dilation operation are obvious to one of ordinary skill in the art to which the present invention pertains, and thus detailed descriptions thereof will be omitted.
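As a concrete illustration only, the following NumPy sketch builds a circular structural element and applies a hand-rolled binary morphology dilation to a toy false contour map; the element shape, its radius, and the toy map are illustrative assumptions, not values prescribed by the invention.

import numpy as np

def circular_element(radius):
    # Disk-shaped structural element as a boolean mask of size (2r+1, 2r+1).
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x * x + y * y <= radius * radius

def binary_dilation(mask, element):
    # Binary morphology dilation: every set pixel of the mask stamps a copy
    # of the structural element, centered on itself, into the output.
    h, w = mask.shape
    cy, cx = element.shape[0] // 2, element.shape[1] // 2
    out = np.zeros_like(mask)
    ys, xs = np.nonzero(mask)
    for dy, dx in zip(*np.nonzero(element)):
        ny = np.clip(ys + dy - cy, 0, h - 1)   # clamp writes at the borders
        nx = np.clip(xs + dx - cx, 0, w - 1)
        out[ny, nx] = True
    return out

# Toy false contour map: one diagonal line of detected pixels.
false_contour = np.zeros((9, 9), dtype=bool)
false_contour[np.arange(9), np.arange(9)] = True
expanded = binary_dilation(false_contour, circular_element(2))

In practice one would typically call scipy.ndimage.binary_dilation rather than the hand-rolled loop; the loop is shown only to make the stamping behavior of the dilation explicit.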


The false contour removal unit 230 determines a smoothing mask weight according to the distance from a pixel in the mask to a center pixel of a false contour, determines an edge preservation mask weight according to the contrast with the center pixel of the false contour, and removes the false contour by performing filtering using the smoothing mask weight and the edge preservation mask weight.


The operation of the false contour removal unit 230 will be described later in further detail with reference to FIGS. 6 and 7. According to the present exemplary embodiment, the false contour removal unit 230 may use neural networks to remove false contours, which will be described later in further detail with reference to FIGS. 8 through 11.



FIG. 4 is a flowchart illustrating a method of removing false contours while preserving edges according to an exemplary embodiment of the present invention. Referring to FIG. 4, in operation 402, flat areas are removed from an input image, and a contour area is detected from the resulting input image. In operation 404, a false contour area and an edge area are separated from the contour area obtained in operation 402, and false contour direction information and false contour location information are generated.


In operation 406, a structural element is generated, and the false contour area is expanded according to the size of the structural element by performing a binary morphology dilation operation.


In operation 408, a smoothing mask weight is determined according to the distance from a pixel in the mask to a center pixel of the false contour area, an edge preservation mask weight is determined according to the contrast with the center pixel of the false contour area, and the false contour area is filtered using the smoothing mask weight and the edge preservation mask weight.


According to the present exemplary embodiment, in operation 408, neural networks may be used to remove a false contour, and this will be described later in further detail with reference to FIGS. 8 through 11.



FIG. 5 is a diagram for explaining the operation of the false contour detection unit 210 illustrated in FIG. 2. Referring to FIG. 5, the contour detector 212 calculates a difference between an input image I(m,n) (where m indicates a horizontal coordinate and n indicates a vertical coordinate) and an image obtained by reducing the number of bits of the input image I(m,n), and detects contour information C(m,n) as a binary image using the absolute value of the result of the calculation.
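One plausible reading of this step, sketched below in NumPy, is to quantize the input image to fewer bits and mark the pixels where the quantized image steps to a new level; flat areas produce no steps and thus drop out, leaving a binary map C(m,n) that contains both false contours and edges. The number of dropped bits is an illustrative assumption.

import numpy as np

def detect_contours(image, drop_bits=4):
    # Image with a reduced number of bits (low bits truncated).
    img = image.astype(np.int32)
    reduced = (img >> drop_bits) << drop_bits
    # The difference I - reduced(I) resets at every quantization step, so
    # level transitions of the reduced image mark the contour candidates.
    step_h = np.abs(np.diff(reduced, axis=1)) > 0
    step_v = np.abs(np.diff(reduced, axis=0)) > 0
    c = np.zeros(img.shape, dtype=np.uint8)
    c[:, 1:] |= step_h.astype(np.uint8)
    c[1:, :] |= step_v.astype(np.uint8)
    return c  # binary contour information C(m,n)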


The false contour separator 214 separates a false contour area and an edge area from the contour information C(m,n), and generates false contour direction information and false contour location information.


In detail, the false contour direction information is generated based on the contour information C(m,n) of the input image I(m,n), as indicated by Equation (1):










$$\mathrm{Contrast}_{\max}=\operatorname{Max}\!\left(\frac{\sum_{m=0}^{K-1}\sum_{n=0}^{L-2}\bigl(I(m,n)-I(m,n+1)\bigr)^{2}}{K(L-1)},\;\frac{\sum_{m=0}^{K-2}\sum_{n=0}^{L-1}\bigl(I(m,n)-I(m+1,n)\bigr)^{2}}{(K-1)L},\;\frac{\sum_{m=0}^{K-2}\sum_{n=0}^{L-2}\bigl(I(m,n)-I(m+1,n+1)\bigr)^{2}}{(K-1)(L-1)},\;\frac{\sum_{m=0}^{K-1}\sum_{n=0}^{L-1}\bigl(I(m,n)-I(m-1,n-1)\bigr)^{2}}{(K-1)(L-1)}\right)\quad(1)$$

where Contrast_max indicates a maximum contrast, K indicates the size of a mask in a horizontal direction, and L indicates the size of the mask in a vertical direction. The four components parenthesized in Equation (1) respectively indicate horizontal false contour direction information corresponding to an angle of 0°, vertical false contour direction information corresponding to an angle of 90°, diagonal false contour direction information corresponding to an angle of 135°, and opposite false contour direction information corresponding to an angle of 180°, and are represented as θ_h, θ_v, θ_d, and θ_ad, respectively. A minimum contrast Contrast_min is calculated as indicated by Equation (2):










$$\mathrm{Contrast}_{\min}=\operatorname{Min}\!\left(\frac{\sum_{m=0}^{K-1}\sum_{n=0}^{L-2}\bigl(I(m,n)-I(m,n+1)\bigr)^{2}}{K(L-1)},\;\frac{\sum_{m=0}^{K-2}\sum_{n=0}^{L-1}\bigl(I(m,n)-I(m+1,n)\bigr)^{2}}{(K-1)L},\;\frac{\sum_{m=0}^{K-2}\sum_{n=0}^{L-2}\bigl(I(m,n)-I(m+1,n+1)\bigr)^{2}}{(K-1)(L-1)},\;\frac{\sum_{m=0}^{K-1}\sum_{n=0}^{L-1}\bigl(I(m,n)-I(m-1,n-1)\bigr)^{2}}{(K-1)(L-1)}\right)\quad(2)$$

According to the present exemplary embodiment, a non-direction θ_nondir is added as a type of direction. The non-direction θ_nondir can be determined to correspond to the situation when the difference between the maximum contrast Contrast_max and the minimum contrast Contrast_min is less than a predefined threshold Th, as indicated by Equation (3):





$$\mathrm{Contrast}_{\max}-\mathrm{Contrast}_{\min}<Th\quad(3)$$


Thereafter, a false contour area and an edge area are separated from the contour information C(m,n) according to whether the maximum contrast Contrast_max (hereinafter referred to as the maximum contrast C_m(m,n)) is less than a predefined threshold T. In other words, an area where the maximum contrast C_m(m,n) is larger than the predefined threshold T is determined as an edge area, and an area where the maximum contrast C_m(m,n) is less than the predefined threshold T is determined as a false contour area. In this manner, false contour direction information θ(m,n) and false contour location information B_f(m,n) can be obtained.
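The following NumPy sketch pulls these pieces together for a single K-by-L mask: it evaluates the four directional terms of Equation (1), takes their maximum and minimum, and applies the test of Equation (3) and the threshold T. The threshold values, and the use of an anti-diagonal difference I(m,n+1)−I(m+1,n) for the fourth term (the fourth component of Equation (1) as printed indexes outside the mask), are assumptions made for the sketch.

import numpy as np

def classify_mask(I, T=40.0, Th=4.0):
    # Mean-squared differences of Equation (1) in four directions.
    I = I.astype(np.float64)
    K, L = I.shape
    terms = np.array([
        np.sum((I[:, :-1] - I[:, 1:]) ** 2) / (K * (L - 1)),           # theta_h
        np.sum((I[:-1, :] - I[1:, :]) ** 2) / ((K - 1) * L),           # theta_v
        np.sum((I[:-1, :-1] - I[1:, 1:]) ** 2) / ((K - 1) * (L - 1)),  # theta_d
        np.sum((I[:-1, 1:] - I[1:, :-1]) ** 2) / ((K - 1) * (L - 1)),  # theta_ad (assumed)
    ])
    c_max, c_min = terms.max(), terms.min()
    if c_max - c_min < Th:            # Equation (3): no dominant direction
        return ("non-direction", None)
    direction = int(terms.argmax())   # index of the maximizing direction
    return ("edge" if c_max > T else "false contour", direction)

Sliding this classification over the contour pixels of C(m,n) would yield the false contour location map B_f(m,n) and the direction map θ(m,n).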


However, the present invention is not restricted to the false contour detection method set forth herein.



FIG. 6 is a block diagram of an example of the false contour removal unit 230 illustrated in FIG. 2, i.e., a false contour removal unit 600. Referring to FIG. 6, the false contour removal unit 600 includes a weight determiner 602 and a false contour removal filter 604.


The weight determiner 602 determines a smoothing mask weight w_s according to the distance from a pixel in the mask to a center pixel of a false contour area, and an edge preservation mask weight w_ep according to the contrast between a pixel in the mask and the center pixel of the false contour area.


The weight function used to determine each of the smoothing mask weight w_s and the edge preservation mask weight w_ep may be defined by Equation (4):











$$w_1(d,D)=\begin{cases}1-\dfrac{|d|}{D}, & |d|<D\\[1mm]0, & \text{otherwise}\end{cases}\quad(4)$$
where d indicates an input variable, and D indicates the size in pixels of a mask. A brightness difference or a distance may be used as the input variable d, but the present invention is not restricted thereto.



FIG. 7 is a graph of a first order weight function for explaining the determination of a weight. Referring to FIG. 7, the width of the first order weight function is 2D+1, and a weight is determined according to the value of the input variable d.


A second order weight function can be obtained by combining two first order weight functions, as indicated by Equation (5):











$$w_2(d,D)=\begin{cases}\left(1-\dfrac{|d_1|}{D_1}\right)\left(1-\dfrac{|d_2|}{D_2}\right), & |d_1|<D_1 \text{ and } |d_2|<D_2\\[1mm]0, & \text{otherwise}\end{cases}\quad(5)$$

In this manner, an n-th order weight function can be generalized as indicated by Equation (6):











$$w_n(d,D)=\begin{cases}\displaystyle\prod_{k=1}^{n}\left(1-\dfrac{|d_k|}{D_k}\right), & |d_k|<D_k,\;k=1,2,\ldots,n\\[1mm]0, & \text{otherwise}\end{cases}\quad(6)$$

By using the n-th order weight function, a weight for an n-dimensional input variable d=(d_1, . . . , d_n) can be determined. The width of the n-th order weight function can be determined according to the mask size D=(D_1, . . . , D_n). The first order weight function can be used for determining a weight according to the contrast between areas in a black-and-white image, or for determining a weight for a moving image according to the passage of time. The second order weight function can be used for determining a weight according to a distance between areas in an image. The third order weight function can be used for determining a weight according to a difference between the colors of areas in a color image.
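A minimal implementation of the generalized weight function of Equation (6), with the first and second order functions falling out as special cases; the sample arguments are illustrative only.

import numpy as np

def w_n(d, D):
    # N-th order weight function: product of (1 - |d_k| / D_k) over all
    # components, or 0 as soon as any component leaves its window.
    d = np.atleast_1d(np.asarray(d, dtype=np.float64))
    D = np.atleast_1d(np.asarray(D, dtype=np.float64))
    if np.any(np.abs(d) >= D):
        return 0.0
    return float(np.prod(1.0 - np.abs(d) / D))

w1 = w_n(5, 16)            # first order: brightness difference 5 in a window of 16
w2 = w_n((1, -2), (3, 3))  # second order: 2D spatial offset inside a 7x7 mask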


However, the present invention is not restricted to the weight functions set forth herein.


The determination of a smoothing mask weight and an edge preservation mask weight using a weight function will hereinafter be described in further detail.


Assuming that a center pixel of a false contour area is x=(x_1, x_2) and a neighbor pixel in a smoothing mask is ξ=(ξ_1, ξ_2), a smoothing mask weight w_s can be determined using a second order weight function, as indicated by Equation (7):






$$w_s(\xi,x)=w_2(\xi-x,M)\quad(7)$$


where M=(M_1, M_2) indicates a parameter that is needed to determine the smoothing mask weight w_s. Since the width of a weight function is the same as the size of a smoothing mask, the smoothing mask weight w_s has a value of 0 outside the smoothing mask. As the size of the smoothing mask increases, false contours that are distant from each other can be more effectively removed. However, the larger the smoothing mask, the more likely it is to blur the image. Thus, the smoothing mask weight w_s needs to be determined appropriately.


An edge preservation mask weight w_ep is determined according to the contrast between the center pixel x and the neighbor pixel ξ, using a first order weight function, as indicated by Equation (8):






$$w_{ep}(\xi,x)=w_1(\Delta I,\Delta I_f),\qquad \Delta I=I(\xi)-I(x)\quad(8)$$


where ΔI indicates the contrast between the center pixel x and the neighbor pixel ξ, and ΔI_f indicates a parameter that is needed to determine the edge preservation mask weight w_ep and is determined based on a maximum contrast detected in a false contour area by a user. If ΔI is smaller than ΔI_f, the edge preservation mask considers the neighbor pixel ξ when performing filtering. However, if ΔI_f is smaller than ΔI, the edge preservation mask does not consider the neighbor pixel ξ when performing filtering. In this manner, false contours can be effectively removed while edge areas are preserved.


The brightness of each pixel of a black-and-white image is represented by a single value, and thus, a weight for a black-and-white image can be determined in the aforementioned manner. On the other hand, the brightness of each pixel of a color image is represented by three values, i.e., R, G, and B, and thus, a weight for a color image can be determined using a third order weight function, as indicated by Equation (9):






$$w_{ep}(\xi,x)=w_3(\Delta I,\Delta I_f),\qquad \Delta I=I(\xi)-I(x)\quad(9)$$


where ΔI is a color plane vector indicating the contrast between the center pixel x and the neighbor pixel ξ, and I(x) indicates the brightness of a color image. The brightness I(x) may be represented by a value on a YCbCr plane or a CIE L*a*b* plane as well as a value on an RGB plane. The scalar parameter ΔI_f of Equation (8) is replaced by a vector ΔI_f in Equation (9).


Once the smoothing mask weight w_s and the edge preservation mask weight w_ep are determined in the aforementioned manner, a false contour is removed by filtering the false contour area using a weight that is obtained by multiplying the smoothing mask weight w_s by the edge preservation mask weight w_ep and normalizing the result of the multiplication. This type of filtering is referred to as bilateral filtering, and is indicated by Equation (10):











$$\tilde{I}(x)=\begin{cases}\dfrac{\displaystyle\sum_{\xi\in N_x} I(\xi)\,w_s(\xi,x)\,w_{ep}(\xi,x)}{\displaystyle\sum_{\xi\in N_x} w_s(\xi,x)\,w_{ep}(\xi,x)}, & \tilde{B}_{fc}=1\\[3mm]I(x), & \text{otherwise}\end{cases}\quad(10)$$
where N_x indicates a mask whose center is x.
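A direct, unoptimized NumPy sketch of Equation (10) over the flagged pixels follows; the mask half-size M and the contrast parameter ΔI_f are illustrative, and the weights use the first and second order functions of Equations (7) and (8).

import numpy as np

def bilateral_false_contour_filter(I, B_fc, M=4, dI_f=8.0):
    I = I.astype(np.float64)
    out = I.copy()
    H, W = I.shape
    for y, x in zip(*np.nonzero(B_fc)):          # only pixels with B_fc = 1
        num = den = 0.0
        for dy in range(-M, M + 1):
            for dx in range(-M, M + 1):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < H and 0 <= nx < W):
                    continue
                # Smoothing weight w_s: second order function of the offset.
                ws = (1 - abs(dy) / (M + 1)) * (1 - abs(dx) / (M + 1))
                # Edge preservation weight w_ep: first order function of the
                # contrast; zero beyond dI_f, so strong edges stay untouched.
                wep = max(0.0, 1 - abs(I[ny, nx] - I[y, x]) / dI_f)
                num += I[ny, nx] * ws * wep
                den += ws * wep
        out[y, x] = num / den                    # den > 0: the center always counts
    return out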


However, the present invention is not restricted to bilateral filtering. In other words, a variety of filtering methods other than a bilateral filtering method may be used.


The removal of false contours using a bilateral filtering method has been described in detail so far. Hereinafter, the removal of false contours using neural networks will be described in detail.



FIG. 8 is a block diagram for explaining the operation of a neural network learning unit according to an exemplary embodiment of the present invention. Referring to FIG. 8, neural network learning aims to perform filtering adaptively and thus to minimize the deterioration of signal components. According to the present exemplary embodiment, neural network learning is performed using a neural network learning unit 810, which comprises eight neural networks, in consideration of a total of eight false contour directions (0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315°), in order to adaptively perform filtering according to each false contour direction.


An original input image I(m,n), an image I_f(m,n) including false contours, and false contour location information B_f(m,n) and false contour direction information θ(m,n) provided by the false contour detection unit 210 are input to the neural network learning unit 810. Pixels where a false contour is detected are represented by B_f(m,n)=1, and pixels where no false contour is detected are represented by B_f(m,n)=0. A weight can be determined by the learning of the neural network learning unit 810, as indicated by Equation (11):






W=[W(1),W(2),W(3),W(4),W(5),W(6),W(7),W(8)]  (11)


where W(k) (1 ≤ k = ⌊θ(m,n)/45⌋ + 1 ≤ 8) indicates the weight determined by the learning of the neural network learning unit 810, and ⌊α⌋ indicates the largest integer not greater than α.


Each of the eight neural networks corresponding to the respective false contour directions comprises an input layer consisting of L nodes, a hidden layer consisting of M nodes, and an output layer consisting of N nodes. An input to each of the eight neural networks is obtained from a location in the image I_f(m,n) corresponding to a pixel in a mask that comprises L pixels surrounding a pixel where a false contour is detected, and a target value is obtained from a location in the original input image I(m,n) corresponding to a center pixel of the mask.



FIG. 9 is a block diagram of an apparatus for removing false contours using neural networks according to an exemplary embodiment of the present invention. For convenience of description, a false contour detection unit 210 is illustrated, and a weight w, which is the output of a neural network learning unit 810, is illustrated instead of the neural network learning unit 810.


Referring to FIG. 9, a filtering area expansion unit 910 expands an area (hereinafter referred to as a false contour filtering area) where filtering is to be performed based on false contour direction information θ(m,n), false contour location information B_f(m,n), and contour information C(m,n). In general, the difference in the values of a pair of adjacent pixels in an area where a false contour is detected is considerable. In this case, a false contour may not be properly removed simply by varying the values of the pixels where the false contour is detected. Thus, according to the present exemplary embodiment, a false contour is removed by expanding a false contour filtering area so that not only the pixels where a false contour is detected but also the pixels adjacent to them can be filtered.



FIG. 10 is a diagram for explaining a method of expanding a false contour filtering area according to an exemplary embodiment of the present invention. Referring to FIG. 10, the expansion of a false contour filtering area is performed in a direction perpendicular to the direction of a false contour. The amount by which a false contour filtering area is expanded, i.e., an expansion distance E, may be up to 10 (E=10). When a false contour or an edge is encountered during the expansion of a false contour filtering area, the false contour filtering area is not expanded any further.


The expansion distance may be 10 or greater in another exemplary embodiment.


Referring to FIG. 9, the filtering area expansion unit 910 outputs filtering area expansion information Bd(m,n) indicating both pixels to be incorporated into an expanded false contour filtering area and pixels not to be incorporated into the expanded false contour filtering area. Pixels to be incorporated into an expanded false contour filtering area are represented by the equation Bd(m,n)=1, and pixels not to be incorporated into an expanded false contour filtering area are represented by the equation Bd(m,n)=0.
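A sketch of the expansion pass described above; the encoding of θ(m,n) as an angle in degrees quantized to multiples of 45°, and the table of perpendicular steps, are assumptions of the sketch.

import numpy as np

def expand_filtering_area(B_f, C, theta, E=10):
    # Step from each false contour pixel up to E pixels in both directions
    # perpendicular to the contour, stopping at any other contour or edge
    # pixel (C = 1). Returns the expansion map B_d.
    perp = {0: (1, 0), 45: (1, 1), 90: (0, 1), 135: (1, -1)}  # (dy, dx) steps
    H, W = B_f.shape
    B_d = np.zeros_like(B_f)
    for y, x in zip(*np.nonzero(B_f)):
        dy, dx = perp[int(theta[y, x]) % 180]
        for sign in (+1, -1):                    # both sides of the contour
            for step in range(1, E + 1):
                ny, nx = y + sign * dy * step, x + sign * dx * step
                if not (0 <= ny < H and 0 <= nx < W) or C[ny, nx]:
                    break                        # border, contour, or edge: stop
                B_d[ny, nx] = 1
    return B_d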



FIG. 11 is a diagram for explaining an example of the method illustrated in FIG. 10. Referring to FIG. 11, (a) illustrates the situation when the expansion of a false contour filtering area in a direction perpendicular to a horizontal false contour is stopped after an encounter with a false contour adjacent to the horizontal false contour; (b) illustrates the situation when the expansion of a false contour filtering area in a direction perpendicular to a vertical false contour is stopped after an encounter with an edge adjacent to the vertical false contour; (c) illustrates pixels that are processed more than once during the expansion of a false contour filtering area; (d) illustrates a situation similar to that illustrated in (b), i.e., the expansion of a false contour filtering area is stopped after an encounter with an edge; (e) illustrates a situation similar to that illustrated in (a), i.e., the expansion of a false contour filtering area is stopped after an encounter with a false contour; and (f) illustrates the situation when the false contour filtering area is expanded to its maximum (E=10) and the expansion is then stopped. In a case where pixels are processed more than once during the expansion of a false contour filtering area, as indicated by (c) of FIG. 11, the values of the pixels incorporated into the expanded false contour filtering area can be calculated according to the direction of the false contour, as indicated by Equations (12) through (15):











False Contour Direction: k = 1

$$\hat{I}(m+i,n)=\begin{cases}\dfrac{r}{r+1}\,\hat{I}(m+i,n)+\dfrac{1}{r+1}\left\{\dfrac{d_1}{D_1}\bigl(I_f(m+X,n)-I_f(m,n)\bigr)+I_f(m,n)\right\}, & \text{if } B_d(m+i,n)=1\ \&\ I_f(m+1,n)>I_f(m,n)\\[2mm]\dfrac{r}{r+1}\,\hat{I}(m+i,n)+\dfrac{1}{r+1}\left\{\dfrac{d_2}{D_2}\bigl(I_f(m-X,n)-I_f(m,n)\bigr)+I_f(m,n)\right\}, & \text{if } B_d(m+i,n)=1\ \&\ I_f(m-1,n)>I_f(m,n)\\[2mm]I_f(m+i,n), & \text{otherwise}\end{cases}$$

False Contour Direction: k = 5

$$\hat{I}(m+i,n)=\begin{cases}\dfrac{r}{r+1}\,\hat{I}(m+i,n)+\dfrac{1}{r+1}\left\{\dfrac{d_1}{D_1}\bigl(I_f(m-X,n)-I_f(m,n)\bigr)+I_f(m,n)\right\}, & \text{if } B_d(m+i,n)=1\ \&\ I_f(m-1,n)>I_f(m,n)\\[2mm]\dfrac{r}{r+1}\,\hat{I}(m+i,n)+\dfrac{1}{r+1}\left\{\dfrac{d_2}{D_2}\bigl(I_f(m+X,n)-I_f(m,n)\bigr)+I_f(m,n)\right\}, & \text{if } B_d(m+i,n)=1\ \&\ I_f(m+1,n)>I_f(m,n)\\[2mm]I_f(m+i,n), & \text{otherwise}\end{cases}\quad(12)$$

False Contour Direction: k = 2

$$\hat{I}(m+i,n+j)=\begin{cases}\dfrac{r}{r+1}\,\hat{I}(m+i,n+j)+\dfrac{1}{r+1}\left\{\dfrac{d_1}{D_1}\bigl(I_f(m+X,n+Y)-I_f(m,n)\bigr)+I_f(m,n)\right\}, & \text{if } B_d(m+i,n+j)=1\ \&\ I_f(m+1,n+1)>I_f(m,n)\\[2mm]\dfrac{r}{r+1}\,\hat{I}(m+i,n+j)+\dfrac{1}{r+1}\left\{\dfrac{d_2}{D_2}\bigl(I_f(m-X,n-Y)-I_f(m,n)\bigr)+I_f(m,n)\right\}, & \text{if } B_d(m+i,n+j)=1\ \&\ I_f(m-1,n-1)>I_f(m,n)\\[2mm]I_f(m+i,n+j), & \text{otherwise}\end{cases}$$

False Contour Direction: k = 6

$$\hat{I}(m+i,n+j)=\begin{cases}\dfrac{r}{r+1}\,\hat{I}(m+i,n+j)+\dfrac{1}{r+1}\left\{\dfrac{d_1}{D_1}\bigl(I_f(m-X,n-Y)-I_f(m,n)\bigr)+I_f(m,n)\right\}, & \text{if } B_d(m+i,n+j)=1\ \&\ I_f(m-1,n-1)>I_f(m,n)\\[2mm]\dfrac{r}{r+1}\,\hat{I}(m+i,n+j)+\dfrac{1}{r+1}\left\{\dfrac{d_2}{D_2}\bigl(I_f(m+X,n+Y)-I_f(m,n)\bigr)+I_f(m,n)\right\}, & \text{if } B_d(m+i,n+j)=1\ \&\ I_f(m+1,n+1)>I_f(m,n)\\[2mm]I_f(m+i,n+j), & \text{otherwise}\end{cases}\quad(13)$$

False Contour Direction: k = 3

$$\hat{I}(m,n+j)=\begin{cases}\dfrac{r}{r+1}\,\hat{I}(m,n+j)+\dfrac{1}{r+1}\left\{\dfrac{d_1}{D_1}\bigl(I_f(m,n+Y)-I_f(m,n)\bigr)+I_f(m,n)\right\}, & \text{if } B_d(m,n+j)=1\ \&\ I_f(m,n+1)>I_f(m,n)\\[2mm]\dfrac{r}{r+1}\,\hat{I}(m,n+j)+\dfrac{1}{r+1}\left\{\dfrac{d_2}{D_2}\bigl(I_f(m,n-Y)-I_f(m,n)\bigr)+I_f(m,n)\right\}, & \text{if } B_d(m,n+j)=1\ \&\ I_f(m,n-1)>I_f(m,n)\\[2mm]I_f(m,n+j), & \text{otherwise}\end{cases}$$

False Contour Direction: k = 7

$$\hat{I}(m,n+j)=\begin{cases}\dfrac{r}{r+1}\,\hat{I}(m,n+j)+\dfrac{1}{r+1}\left\{\dfrac{d_1}{D_1}\bigl(I_f(m,n-Y)-I_f(m,n)\bigr)+I_f(m,n)\right\}, & \text{if } B_d(m,n+j)=1\ \&\ I_f(m,n-1)>I_f(m,n)\\[2mm]\dfrac{r}{r+1}\,\hat{I}(m,n+j)+\dfrac{1}{r+1}\left\{\dfrac{d_2}{D_2}\bigl(I_f(m,n+Y)-I_f(m,n)\bigr)+I_f(m,n)\right\}, & \text{if } B_d(m,n+j)=1\ \&\ I_f(m,n+1)>I_f(m,n)\\[2mm]I_f(m,n+j), & \text{otherwise}\end{cases}\quad(14)$$

False Contour Direction: k = 4

$$\hat{I}(m+i,n+j)=\begin{cases}\dfrac{r}{r+1}\,\hat{I}(m+i,n+j)+\dfrac{1}{r+1}\left\{\dfrac{d_1}{D_1}\bigl(I_f(m-X,n+Y)-I_f(m,n)\bigr)+I_f(m,n)\right\}, & \text{if } B_d(m+i,n+j)=1\ \&\ I_f(m-1,n+1)>I_f(m,n)\\[2mm]\dfrac{r}{r+1}\,\hat{I}(m+i,n+j)+\dfrac{1}{r+1}\left\{\dfrac{d_2}{D_2}\bigl(I_f(m+X,n-Y)-I_f(m,n)\bigr)+I_f(m,n)\right\}, & \text{if } B_d(m+i,n+j)=1\ \&\ I_f(m+1,n-1)>I_f(m,n)\\[2mm]I_f(m+i,n+j), & \text{otherwise}\end{cases}$$

False Contour Direction: k = 8

$$\hat{I}(m+i,n+j)=\begin{cases}\dfrac{r}{r+1}\,\hat{I}(m+i,n+j)+\dfrac{1}{r+1}\left\{\dfrac{d_1}{D_1}\bigl(I_f(m+X,n-Y)-I_f(m,n)\bigr)+I_f(m,n)\right\}, & \text{if } B_d(m+i,n+j)=1\ \&\ I_f(m+1,n-1)>I_f(m,n)\\[2mm]\dfrac{r}{r+1}\,\hat{I}(m+i,n+j)+\dfrac{1}{r+1}\left\{\dfrac{d_2}{D_2}\bigl(I_f(m-X,n+Y)-I_f(m,n)\bigr)+I_f(m,n)\right\}, & \text{if } B_d(m+i,n+j)=1\ \&\ I_f(m-1,n+1)>I_f(m,n)\\[2mm]I_f(m+i,n+j), & \text{otherwise}\end{cases}\quad(15)$$

where d_1 or d_2 indicates a distance between a pixel incorporated into an expanded false contour filtering area and a pixel where a false contour is detected, D_1 or D_2 indicates the length in pixels by which a false contour filtering area is expanded, and r indicates the number of times a pixel has already been processed during the expansion of a false contour filtering area (r is equal to 0 for pixels that are processed for the first time). Equations (12) through (15) respectively correspond to the pairs of false contour directions illustrated in FIG. 10, each pair forming an angle of 180°. The distances d_1 and d_2 can be defined by Equation (16):





d1=|i| or |j|, and d2=|i| or |j|  (16)


where i indicates a horizontal pixel distance, and j indicates a vertical pixel distance.


The lengths D1 and D2 can be defined by Equation (17):





D1=X or Y





D2=X or Y  (17)


where X indicates the length in a horizontal direction by which a false contour filtering area is expanded, and Y indicates the length in a vertical direction by which a false contour filtering area is expanded.
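The update shared by Equations (12) through (15) is a running average: a pixel visited for an additional time blends its previous estimate with a new ramp value interpolated between the center pixel and the pixel at the expansion boundary. A small sketch with illustrative numbers:

def update_expanded_pixel(I_hat, r, boundary, center, d, D):
    # Ramp value: move d/D of the way from the center value toward the
    # value at the expansion boundary (one branch of Equations (12)-(15)).
    new_value = (d / D) * (boundary - center) + center
    # Running average: r = 0 takes the ramp value as-is; later visits
    # weight the existing estimate by r/(r+1) and the new value by 1/(r+1).
    return (r / (r + 1)) * I_hat + (1 / (r + 1)) * new_value

v1 = update_expanded_pixel(0.0, 0, boundary=120.0, center=100.0, d=2, D=4)  # 110.0
v2 = update_expanded_pixel(v1, 1, boundary=96.0, center=100.0, d=1, D=3)    # ~104.3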


Referring to FIG. 11, in the case of (d), I_f(m+1,n−1) > I_f(m,n); thus, the false contour filtering area is expanded in the direction corresponding to k=8, with D_1=4. On the other hand, in the case of (e), D_2=3. When a false contour filtering area is expanded in a total of eight directions, as in the present exemplary embodiment, the absolute values of the horizontal pixel distance i and the vertical pixel distance j are the same for a diagonal false contour direction and an anti-diagonal false contour direction.


Referring to FIG. 9, a weight applicator 922 applies a weight using the weight w obtained by neural network learning, the false contour location information B_f(m,n), the false contour direction information θ(m,n), and the image I_f(m,n), and outputs an image I_f′(m,n) as a result of primary false contour removal, as indicated by Equation (18):











$$I_f'(m,n)=\begin{cases}\displaystyle\sum_{i=1}^{M} c_i^1\,w_{i,1}^2(k)+b_1^2, & \text{if } B_f(m,n)=1\\[3mm]I_f(m,n), & \text{otherwise}\end{cases}\quad(18)$$

where c_i^1 indicates a value obtained from an intermediate calculation process performed by a neural network. The value c_i^1 can be defined by Equation (19):










$$c_i^1=\sum_{j=1}^{L} p_j\,w_{j,i}^1(k)+b_i^1\quad(19)$$
where p_j indicates the j-th of the L inputs to the network, the superscript 1 in w_{j,i}^1(k) indicates the layer, the subscripts j and i in w_{j,i}^1(k) indicate the locations of nodes in two consecutive layers, b_i^1 indicates a bias, the superscript 1 in b_i^1 indicates the layer, and the subscript i in b_i^1 indicates the location of a node.
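A minimal NumPy sketch of one direction-specific network evaluating Equations (19) and (18) for a single false contour pixel; the layer sizes, the random weights, and the purely linear activations are illustrative assumptions (the equations as printed are linear, and any hidden non-linearity the trained networks use would be applied to c1):

import numpy as np

def network_output(p, W1, b1, W2, b2):
    c1 = p @ W1 + b1            # Equation (19): hidden values c_i^1
    return float(c1 @ W2 + b2)  # Equation (18): the filtered pixel value

L_nodes, M_nodes = 9, 5                      # L inputs, M hidden nodes
rng = np.random.default_rng(0)
W1 = rng.normal(size=(L_nodes, M_nodes))     # w_{j,i}^1(k), layer 1
b1 = np.zeros(M_nodes)                       # b_i^1
W2 = rng.normal(size=M_nodes)                # w_{i,1}^2(k), layer 2
b2 = 0.0                                     # b_1^2
p = rng.uniform(0, 255, size=L_nodes)        # L mask pixels around the target pixel

theta = 225.0                                # local false contour direction
k = int(theta // 45) + 1                     # selects the network W(k), 1 <= k <= 8
pixel = network_output(p, W1, b1, W2, b2)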


A false contour removal filter 924 applies an adaptive one-dimensional (1D) directional smoothing filter to the image If′(m,n) provided by the weight applicator 922, and outputs an image Î(m, n) as a result of final false contour removal. Here, the false contour removal filter 924 is not restricted to an adaptive 1D directional smoothing filter, and this will hereinafter be described in detail.


The false contour removal filter 924 uses the false contour direction information θ(m,n), the false contour location information Bf(m,n), the contour information C(m,n), and the filtering area expansion information Bd(m,n) to perform filtering in a direction perpendicular to a false contour direction indicated by the false contour direction information θ(m,n). If the false contour removal filter 924 is a 9-tap smoothing filter, an adaptive 1D directional smoothing filter coefficient h(n) may be defined by Equation (20):










$$h(n)=\frac{1}{16}\times\{1,\,1,\,2,\,2,\,4,\,2,\,2,\,1,\,1\}\quad(20)$$
The false contour removal filter 924 performs filtering using the adaptive 1D directional smoothing filter coefficient h(n), thereby obtaining the image Î(m, n). The present invention is not restricted to a 9-tap smoothing filter. In other words, a 5-tap or 7-tap smoothing filter coefficient may be selectively applied to the present invention.
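A sketch of applying the kernel of Equation (20) at one pixel along the direction perpendicular to the false contour; clamping out-of-image samples to the border is an assumption of the sketch:

import numpy as np

# 9-tap kernel of Equation (20); the taps sum to 16, so the normalized
# filter preserves the local mean brightness.
h = np.array([1, 1, 2, 2, 4, 2, 2, 1, 1], dtype=np.float64) / 16.0

def smooth_across_contour(I, y, x, dy, dx):
    # (dy, dx) is a unit step perpendicular to the false contour direction.
    H, W = I.shape
    half = len(h) // 2
    acc = 0.0
    for t, w in enumerate(h):
        ny = min(max(y + (t - half) * dy, 0), H - 1)
        nx = min(max(x + (t - half) * dx, 0), W - 1)
        acc += w * I[ny, nx]
    return acc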


FIG. 12 is a flowchart illustrating a method of removing false contours using neural networks according to an exemplary embodiment of the present invention. Referring to FIG. 12, in operation 1202, a weight is determined through neural network learning using an original input image, an image containing false contours, false contour location information, and false contour direction information.


In operation 1204, a false contour filtering area is expanded using the false contour location information, the false contour direction information, and contour information.


In operation 1206, a weight is applied using the weight obtained in operation 1202, the false contour location information, the false contour direction information, and the image containing false contours.


In operation 1208, a false contour is removed from a false contour area by performing adaptive 1D smoothing filtering using the false contour location information, the false contour direction information, the contour information, and filtering area expansion information.


The present invention can be realized as computer-readable code embodied on a computer-readable recording medium. The computer-readable recording medium may be any type of recording device in which data is stored in a computer-readable manner. Non-limiting examples of the computer-readable recording medium include a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disc, an optical data storage, and a carrier wave (e.g., data transmission through the Internet). The computer-readable recording medium may be distributed over a plurality of computer systems connected to a network so that computer-readable code is written thereto and executed therefrom in a decentralized manner. Functional programs, code, and code segments needed for realizing the present invention can be easily construed by one of ordinary skill in the art.


According to the present invention, it is possible to remove false contours even when it is unknown what has caused the false contours, by detecting a false contour area candidate and performing false contour removal only on the detected false contour area candidate. In addition, according to the present invention, it is possible to enhance the quality of images by performing filtering while preserving edges in an original input image and precisely performing pixel-based processes through neural network learning.


While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the following claims.

Claims
  • 1. A method of removing false contours comprising: detecting a contour area from an input image; detecting a false contour area from the contour area using a contrast between pixels in the contour area; expanding the false contour area; and removing a false contour from the expanded false contour area.
  • 2. The method of claim 1, wherein the detecting the false contour area comprises: removing flat areas from the input image; detecting the contour area from the input image; separating an edge area and the false contour area from the contour area using the contrast between pixels in the contour area; and generating false contour direction information and false contour location information of the false contour area.
  • 3. The method of claim 2, wherein direction information indicating a direction that maximizes a contrast between pixels in a predetermined area is determined as the false contour direction information.
  • 4. The method of claim 3, wherein the direction that maximizes the contrast between pixels in the predetermined area is classified into one of five directions that comprise a direction corresponding to an angle of 0°, a direction corresponding to an angle of 45°, a direction corresponding to an angle of 90°, a direction corresponding to an angle of 135°, and a non-direction.
  • 5. The method of claim 3, wherein the direction that maximizes the contrast between pixels in the predetermined area is classified into one of eight directions that comprise the direction corresponding to an angle of 0°, the direction corresponding to an angle of 45°, the direction corresponding to an angle of 90°, the direction corresponding to an angle of 135°, a direction corresponding to an angle of 180°, a direction corresponding to an angle of 225°, a direction corresponding to an angle of 270°, and a direction corresponding to an angle of 315°.
  • 6. The method of claim 4, wherein the non-direction corresponds to a situation when a difference between a maximum contrast and a minimum contrast is smaller than a predefined threshold.
  • 7. The method of claim 2, wherein the expanding the false contour area comprises: generating a structural element; and expanding the false contour area by performing a binary morphology dilation operation according to the size and shape of the structural element.
  • 8. The method of claim 1, wherein the removing the false contour comprises: determining a smoothing mask weight according to a distance to a center pixel where the false contour is detected, and determining an edge preservation mask weight according to a contrast with the center pixel; and performing filtering using the smoothing mask weight and the edge preservation mask weight.
  • 9. The method of claim 8, wherein the performing filtering comprises performing filtering using a bilateral filter.
  • 10. The method of claim 1, wherein the removing the false contour comprises: performing neural network learning according to a direction of the false contour area, and generating a weight for pixels in the false contour area; removing the false contour in units of pixels by applying the weight according to the false contour direction information; and filtering pixels from which the false contour is removed and pixels adjacent to the pixels from which the false contour is removed.
  • 11. The method of claim 10, wherein the filtering comprises: expanding a false contour filtering area one pixel at a time in a direction perpendicular to the direction of the false contour area; and stopping the expansion of the false contour filtering area when a false contour or an edge is encountered during the expansion of the false contour filtering area.
  • 12. The method of claim 10, wherein the filtering comprises filtering using an adaptive one-dimensional directional smoothing filter.
  • 13. A method of removing false contours while preserving edges comprising: determining a smoothing mask weight according to a distance from a pixel in a mask to a center pixel where a false contour is detected; determining an edge preservation mask weight according to a contrast with the center pixel; and filtering using the smoothing mask weight and the edge preservation mask weight.
  • 14. The method of claim 13, wherein the filtering comprises performing filtering using a bilateral filter.
  • 15. A method of removing false contours using neural networks comprising: performing neural network learning according to a direction of a false contour area; generating a weight for pixels in the false contour area; removing a false contour in units of pixels by applying the weight according to false contour direction information of the false contour area; and filtering pixels from which the false contour is removed and pixels adjacent to the pixels from which the false contour is removed.
  • 16. The method of claim 15, wherein the filtering comprises: expanding a false contour filtering area one pixel at a time in a direction perpendicular to the direction of the false contour area; and stopping the expansion of the false contour filtering area when a false contour or an edge is encountered during the expansion of the false contour filtering area.
  • 17. The method of claim 15, wherein the filtering comprises filtering using an adaptive one-dimensional directional smoothing filter.
  • 18. An apparatus for removing false contours comprising: a false contour detection unit which detects a contour area from an input image, and detects a false contour area from the contour area using a contrast between pixels in the contour area; a false contour area expansion unit which expands the false contour area; and a false contour removal unit which removes a false contour from the expanded false contour area.
  • 19. The apparatus of claim 18, wherein the false contour detection unit comprises: a contour detector which removes flat areas from the input image and detects the contour area from the input image; and a false contour separator which separates an edge area and the false contour area from the contour area using the contrast between pixels in the contour area, and generates false contour direction information and false contour location information of the false contour area.
  • 20. The apparatus of claim 18, wherein the false contour area expansion unit comprises: a structural element generator which generates a structural element; and a calculator which expands the false contour area by performing a binary morphology dilation operation according to the size and shape of the structural element.
  • 21. The apparatus of claim 18, wherein the false contour removal unit comprises: a weight determiner which determines a smoothing mask weight according to a distance from a pixel in a mask to a center pixel where the false contour is detected, and determines an edge preservation mask weight according to a contrast with the center pixel; and a false contour removal filter which filters using the smoothing mask weight and the edge preservation mask weight.
  • 22. The apparatus of claim 21, wherein the false contour removal filter is a bilateral filter.
  • 23. The apparatus of claim 18, wherein the false contour removal unit comprises: a neural network learning unit which performs neural network learning according to a direction of the false contour area, and generates a weight for pixels in the false contour area; a weight applicator which removes the false contour in units of pixels by applying the weight according to the false contour direction information; and a false contour removal filter which filters pixels from which the false contour is removed and pixels adjacent to the pixels from which the false contour is removed.
  • 24. The apparatus of claim 23, wherein the false contour removal filter comprises a filtering area expansion unit which expands a false contour filtering area one pixel at a time in a direction perpendicular to the direction of the false contour area and stops the expansion of the false contour filtering area when a false contour or an edge is encountered during the expansion of the false contour filtering area.
  • 25. The apparatus of claim 23, wherein the false contour removal filter is an adaptive one-dimensional directional smoothing filter.
  • 26. An apparatus for removing false contours while preserving edges comprising: a weight determiner which determines a smoothing mask weight according to a distance from a pixel in a mask to a center pixel where a false contour is detected, and determines an edge preservation mask weight according to a contrast with the center pixel; and a false contour removal filter which filters using the smoothing mask weight and the edge preservation mask weight.
  • 27. The apparatus of claim 26, wherein the false contour removal filter is a bilateral filter.
  • 28. An apparatus for removing false contours using neural networks comprising: a neural network learning unit which performs neural network learning according to a direction of a false contour area, and generates a weight for pixels in the false contour area; a weight applicator which removes a false contour in units of pixels by applying the weight according to false contour direction information of the false contour area; and a false contour removal filter which filters pixels from which the false contour is removed and pixels adjacent to the pixels from which the false contour is removed.
  • 29. The apparatus of claim 28, wherein the false contour removal filter comprises a filtering area expansion unit which expands a false contour filtering area one pixel at a time in a direction perpendicular to the direction of the false contour area and stops the expansion of the false contour filtering area when a false contour or an edge is encountered during the expansion of the false contour filtering area.
  • 30. The apparatus of claim 28, wherein the false contour removal filter is an adaptive one-dimensional directional smoothing filter.
  • 31. A computer-readable recording medium having recorded thereon a program for executing the method of claim 1.
  • 32. A computer-readable recording medium having recorded thereon a program for executing the method of claim 13.
  • 33. A computer-readable recording medium having recorded thereon a program for executing the method of claim 15.
Priority Claims (1)
Number Date Country Kind
10-2006-0052872 Jun 2006 KR national