Histogram-based segmentation of objects from a video signal via color moments

Information

  • Patent Grant
  • Patent Number
    6,526,169
  • Date Filed
    Monday, March 15, 1999
  • Date Issued
    Tuesday, February 25, 2003
Abstract
A histogram-based segmentation of an image, frame or picture of a video signal into objects via color moments is initiated by defining a relatively large area within the object. The defined area is characterized by its color information in the form of a limited set of color moments representing a color histogram for the area. Based upon the set of color moments, color moments generated for small candidate blocks within the image, an automatically generated weighting vector, distance measures for the blocks from a central block in the object, and a tolerance, the area is grown to encompass the object to the extent of its boundaries. The initial set of color moments is then updated for the entire object. Those candidate blocks within the object serve to segment the object from the image.
Description




BACKGROUND OF THE INVENTION




The present invention relates to processing of video signals, and more particularly to a histogram-based segmentation of an image or video signal into visual objects via color moments.




In the processing of images or video signals it is desirable to be able to take an object, such as a tennis player, from one video signal or image and superimpose it upon another video signal or image. To this end keying systems were developed, either luminance or chrominance based. For example, in character generation luminance keying is typically used, while chrominance keying is used for placing a weatherman in front of a weather map. Chrominance keying is based upon the object to be segmented, i.e., the weatherman, being situated before a uniform color background, such as a blue screen. A key signal is generated that has one value when the color is blue and another value when the color is not blue. The key signal is then used to cut a hole in another video signal into which the segmented object is placed, thus superimposing an object from one video signal onto another.




In naturally occurring scenes there may be many objects against a non-uniform color background, such as tennis players and the ball against a crowd background. It may be desirable to segment an object from this scene in order to superimpose it upon another scene. In this situation conventional luminance and chrominance key generation techniques do not work.




What is desired is a method of segmenting an object from an image in a video signal using the colors of the object.




BRIEF SUMMARY OF THE INVENTION




Accordingly the present invention provides histogram-based segmentation of an image or video signal into objects via color moments. A user defines a relatively large area that lies entirely within an object of interest in one image, frame or picture from the video signal. An algorithm extracts characteristics of the user-defined area. Using these characteristics the algorithm grows the area to encompass the entire object of interest. The characterization of the area is made based on its color properties. Color moments representing a color histogram for the area are used for characterizing the color properties. Weighting factors and a threshold are automatically calculated for use in the characterization. At the conclusion the “characteristic” color moments for the object are updated.




The objects, advantages and other novel features of the present invention are apparent from the following detailed description when read in conjunction with the appended claims and attached drawing.











BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING





FIG. 1 is an illustrative view of an image from a video signal containing an object to be segmented according to the present invention.

FIG. 2 is an illustrative view of the image of FIG. 1 for updating "characteristic" color moments for the object according to the present invention.











DETAILED DESCRIPTION OF THE INVENTION




The basic concept of the present invention is to perform semi-automatic extraction of an object of interest from a given color image, frame or picture of a video signal using color moments that represent a color histogram for the object. Referring now to FIG. 1, the color image 10 is shown with the object of interest 12. A user is asked to define a relatively large area 14, such as a rectangle, that lies entirely within the object of interest 12. A segmentation algorithm begins by characterizing the color information within the user-defined area 14. The use of color histograms for this purpose is a well-known technique. However, such methods are subject to certain limitations: namely, the color space has to be divided into a finite number of "bins", and selecting a good set of such bins is image-dependent and, therefore, less robust.




The use of color moments to represent a color histogram circumvents the problems presented by using color histogram methods by eliminating the need for explicit quantization of the color histogram into a number of bins. In the field of probability/statistics it is known that a histogram is uniquely specified by all its moments. The relationship is:






 




\Phi_{X_1 X_2 X_3}(w_1, w_2, w_3) \;=\; \mathrm{FT}\{\, h_{X_1 X_2 X_3}(x_1, x_2, x_3) \,\} \;=\; \sum_{k,l,m=0}^{\infty} E\{ X_1^k X_2^l X_3^m \}\, \frac{(jw_1)^k}{k!} \cdot \frac{(jw_2)^l}{l!} \cdot \frac{(jw_3)^m}{m!}

where X1, X2, X3 represent the three color components, h_{X1X2X3}(.) is the three-dimensional histogram, Φ_{X1X2X3}(.) is the Fourier transform of the histogram, and E{X1^k X2^l X3^m} represents the moments.
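As a one-dimensional illustration (added here for readability, not part of the patent text), the same identity for a single color component is simply the Taylor expansion of its characteristic function, assuming the histogram h_X is normalized to unit area so that it acts as a probability density:

```latex
\Phi_X(w) \;=\; \mathrm{FT}\{ h_X(x) \}
          \;=\; \int h_X(x)\, e^{jwx}\, dx
          \;=\; E\{ e^{jwX} \}
          \;=\; \sum_{k=0}^{\infty} E\{ X^k \}\, \frac{(jw)^k}{k!}
```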




Each pixel in the object 12 has three color components. For the present illustration the Y, Cb, Cr domain is used. To characterize the histogram of the object, instead of an infinite set of moments as in the above equation, a finite number of color moments is used. For the present example 13 moments are used:






















E{Y}        E{Cb}        E{Cr}
E{Y^2}      E{Cb^2}      E{Cr^2}
E{YCb}      E{CbCr}      E{CrY}
E{Y^3}      E{Cb^3}      E{Cr^3}
E{YCbCr}

From the large user-defined area 14 the above moments for that area may be calculated to provide a 13-point color moment vector ζ* that characterizes the area, where ζ* = [E{Y}, E{Cb}, . . . , E{YCb}/δ_Y δ_Cb, . . . ]^T.
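As a concrete illustration of this step, the following Python sketch (not from the patent; NumPy-based, with an illustrative name such as color_moment_vector) computes the 13 moments for any collection of Y, Cb, Cr pixel values, whether the user-defined area 14 or a single P×Q block. The normalization of the cross moments by the standard deviations δ shown above is omitted for brevity.

```python
import numpy as np

def color_moment_vector(y, cb, cr):
    """Return the 13-point color moment vector for a region.

    y, cb, cr: equal-sized arrays of the Y, Cb, Cr values of every pixel
    in the region.  The ordering follows the 13 moments listed above;
    normalization of the cross moments is left out of this sketch.
    """
    y, cb, cr = (np.asarray(a, dtype=np.float64).ravel() for a in (y, cb, cr))
    return np.array([
        y.mean(),        cb.mean(),        cr.mean(),         # E{Y}, E{Cb}, E{Cr}
        (y**2).mean(),   (cb**2).mean(),   (cr**2).mean(),    # E{Y^2}, E{Cb^2}, E{Cr^2}
        (y*cb).mean(),   (cb*cr).mean(),   (cr*y).mean(),     # E{YCb}, E{CbCr}, E{CrY}
        (y**3).mean(),   (cb**3).mean(),   (cr**3).mean(),    # E{Y^3}, E{Cb^3}, E{Cr^3}
        (y*cb*cr).mean(),                                     # E{YCbCr}
    ])
```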




In order to grow the user-defined area 14 into the complete object boundary 12, candidate blocks and a distance measure are used. The image 10 is divided into blocks of size P×Q, such as P=Q=2 pixels, and the color moment vector for each of these blocks is computed. The k-th block, whose color moment vector is denoted by the vector ζ_k, belongs to the object 12 if:








\sum_{l=0}^{12} w_l \left|\, \zeta_k[l] - \zeta^*[l] \,\right| \;\le\; T
where w is a thirteen-point weighting vector and T is a threshold.
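Written out in the same NumPy style (a sketch using the hypothetical helper above, not the patent's own code), the membership test for one candidate block is a single weighted sum:

```python
def block_belongs_to_object(zeta_k, zeta_star, w, T):
    """Accept the k-th P x Q block when the weighted sum of absolute
    differences between its 13 color moments (zeta_k) and the
    characteristic moments of the user-defined area (zeta_star)
    does not exceed the threshold T."""
    diff = np.abs(np.asarray(zeta_k) - np.asarray(zeta_star))
    return float(np.sum(np.asarray(w) * diff)) <= T
```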




The weighting vector w and threshold T may be automatically calculated as follows. First subdivide the user-defined area 14 into P×Q non-overlapping blocks and find the color moment vectors ζ_j for each block. Then find the auto-covariance matrix |R_c| defined as:






|R_c| \;\triangleq\; E\{ (\zeta - \mu)(\zeta - \mu)^T \}, \qquad \mu \;\triangleq\; E\{ \zeta \}

Then w_l = 1/λ_l, where λ_l is the l-th eigenvalue of |R_c|. This is equivalent to defining the distance measure as




d(\zeta_k, \zeta^*) \;\triangleq\; (\zeta_k - \zeta^*)^T \, |R_c|^{-1} \, (\zeta_k - \zeta^*)

Some of the λ_l may be zero and |R_c| may be singular. Assuming that |R_c| is almost diagonal, so that the l-th eigenvalue λ_l is the variance of the l-th component of the color moment vector, find λ_min as the minimum of the non-zero eigenvalues of |R_c|. Then







\lambda_l \;=\; \begin{cases} \lambda_l, & \lambda_l \neq 0 \\ \lambda_{\min}/10, & \lambda_l = 0 \end{cases}
This gives a set of non-zero eigenvalues that are used in the distance measure.
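A possible NumPy sketch of this weight calculation follows (function and parameter names are illustrative; the small tolerance eps used to decide which eigenvalues count as zero is an assumption, not stated in the patent):

```python
def weights_from_area_blocks(block_moments, eps=1e-12):
    """Derive the 13-point weighting vector w from the color moment
    vectors of the P x Q blocks inside the user-defined area.

    block_moments: array of shape (n_blocks, 13).
    """
    zeta = np.asarray(block_moments, dtype=np.float64)
    rc = np.cov(zeta, rowvar=False, bias=True)       # |Rc| = E{(zeta - mu)(zeta - mu)^T}
    lam = np.linalg.eigvalsh(rc)                     # eigenvalues of |Rc|
    lam_min = lam[lam > eps].min()                   # smallest non-zero eigenvalue
    lam = np.where(lam > eps, lam, lam_min / 10.0)   # zero eigenvalues -> lambda_min / 10
    return 1.0 / lam                                 # w_l = 1 / lambda_l
```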




To compute the threshold T, the distance measure d(ζ_k, ζ*) is calculated for all of the P×Q blocks within the user-defined area 14. Then








d_{\max} \;\triangleq\; \max_k \{ d(\zeta_k, \zeta^*) \}; \qquad d_{\min} \;\triangleq\; \min_k \{ d(\zeta_k, \zeta^*) \}

T \;\triangleq\; d_{\max} + \alpha\, ( d_{\max} - d_{\min} )
where α > 0 is tuned to the object 12, typically about 0.05.
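The threshold can then be derived from the blocks of the user-defined area in a few lines. In this sketch the weighted absolute-difference form used in the membership test above stands in for the distance d, so that T and the test are directly comparable; under the same almost-diagonal assumption the quadratic form with |Rc|^-1 could be substituted. Names and defaults are illustrative.

```python
def threshold_from_area_blocks(block_moments, zeta_star, w, alpha=0.05):
    """Compute T = d_max + alpha * (d_max - d_min) over all P x Q blocks
    of the user-defined area; alpha > 0 (typically about 0.05) widens
    the acceptance margin slightly beyond the worst in-area block."""
    zeta_star = np.asarray(zeta_star, dtype=np.float64)
    w = np.asarray(w, dtype=np.float64)
    d = np.array([np.sum(w * np.abs(np.asarray(zk) - zeta_star))
                  for zk in block_moments])
    return d.max() + alpha * (d.max() - d.min())
```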




With the distance measure, the threshold and the "characteristic" color moment vector ζ*, all the P×Q blocks may be found in the image 10 that are "close" to the user-defined area 14. From this collection of blocks a connected region is grown, centered at the center of the user-defined area, via morphological operations. The result is the segmentation of the object of interest 12 from the image 10 with a coarse boundary due to the size of the P×Q blocks. Correlating to a key signal, all of the values within the close blocks would have one value, and all other blocks would have another value.
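The patent does not spell out which morphological operations are used; the sketch below (SciPy-based, names illustrative) takes one plausible reading: close small gaps between accepted blocks, label connected components, and keep only the component containing the block at the center of the user-defined area. The resulting boolean grid plays the role of the coarse key signal.

```python
from scipy import ndimage

def grow_object_region(accept_mask, seed_rc):
    """accept_mask: 2-D boolean array with one entry per P x Q block,
    True where the block passed the membership test.
    seed_rc: (row, col) of the block at the center of the user-defined area.
    Returns a boolean mask of the connected region taken as the object."""
    closed = ndimage.binary_closing(accept_mask)   # bridge small holes and gaps
    labels, _ = ndimage.label(closed)              # connected components
    return labels == labels[seed_rc]               # keep the seeded component
```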




The segmented object may then be used to update the characteristic color moment vector ζ* to reflect the color content of the entire object. This is done via a weighted averaging of the color moment vectors of all P×Q blocks in the object, as illustrated in FIG. 2. For the (x_j, y_j)-th P×Q block in the object 12, where (x_0, y_0) is the P×Q block located at the center of the user-defined area 14, compute:






\eta(x_j, y_j) \;=\; \exp\!\left[ -10.0 \; \frac{ (x_j - x_0)^2 + (h/w)^2 (y_j - y_0)^2 }{ \mathrm{nRows}^2 + (h/w)^2\, \mathrm{nCols}^2 } \right]
Then the updated characteristic color moment vector is:








\zeta^*_{\mathrm{updated}} \;=\; \frac{ \displaystyle\sum_{R'} \eta(x_j, y_j)\, \zeta(x_j, y_j) }{ \displaystyle\sum_{R'} \eta(x_j, y_j) }
where R′ is the connected region identified as the object of interest.
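A sketch of this update step in the same NumPy style (names illustrative; coordinates and grid sizes are passed in explicitly rather than derived from an image object):

```python
def update_characteristic_moments(block_moments, coords, center,
                                  h_over_w, n_rows, n_cols):
    """Weighted average of the color moment vectors of all P x Q blocks in
    the connected region R', with weights eta that fall off with distance
    from the block (x_0, y_0) at the center of the user-defined area.

    block_moments: (n, 13) moments of the blocks in R'
    coords: (n, 2) block coordinates (x_j, y_j); center: (x_0, y_0)
    h_over_w: image aspect ratio h/w; n_rows, n_cols: size of the block grid
    """
    coords = np.asarray(coords, dtype=np.float64)
    dx = coords[:, 0] - center[0]
    dy = coords[:, 1] - center[1]
    eta = np.exp(-10.0 * (dx**2 + (h_over_w**2) * dy**2)
                 / (n_rows**2 + (h_over_w**2) * n_cols**2))
    zeta = np.asarray(block_moments, dtype=np.float64)
    return (eta[:, None] * zeta).sum(axis=0) / eta.sum()   # updated zeta*
```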




Thus the present invention provides a histogram-based, semi-automatic segmentation of objects from a video signal via color moments without an explicit quantization of the corresponding color histogram into a small number of bins. The algorithm automatically determines the distance measure, weights and threshold, and updates a "characteristic" color moment vector for the object.



Claims
  • 1. A method of semi-automatically segmenting an image, frame or picture of a video signal into objects based upon histograms comprising the steps of: defining a large area within the object, the object having boundaries; characterizing color information within the defined large area by means of a finite number of color moments representing a color histogram; and growing the large area to the boundaries of the object using the color moments and wherein the growing step comprises the steps of: dividing the image into a plurality of small candidate blocks; determining the color moments for each of the blocks; and identifying the candidate blocks as belonging to the object when a weighted sum of the differences between the color moments for candidate blocks and the corresponding color moments for the defined area is less than a threshold.
  • 2. The method as recited in claim 1 further comprising the steps of: generating an auto-covariance matrix based upon the color moments for the candidate blocks, a weighting vector for the identifying step being a function of the values of the auto-covariance matrix; defining a distance measure from the color moments for the candidate blocks and the defined area and from the auto-covariance matrix; and computing the threshold from the distance measures for all the candidate blocks.
  • 3. The method as recited in claim 1 further comprising the step of updating the color moments for the defined area to reflect the color content of the entire object.
US Referenced Citations (5)
Number Name Date Kind
5768412 Mitsuyama et al. Jun 1998 A
5933524 Schuster et al. Aug 1999 A
6148092 Qian Nov 2000 A
6167167 Matsugu et al. Dec 2000 A
6246803 Gauch Jun 2001 B1