The invention relates to a method of converting a first set of initial segments of an image into a second set of updated segments of the image, the method comprising iterative updates of intermediate segments being derived from respective initial segments, a particular update comprising determining whether a particular pixel, being located at a border between a first one of the intermediate segments and a second one of the intermediate segments, should be moved from the first one of the intermediate segments to the second one of the intermediate segments, on the basis of a pixel value of the particular pixel, on the basis of a first parameter of the first one of the intermediate segments and on the basis of a second parameter of the second one of the intermediate segments.
The invention further relates to a conversion unit arranged to perform such a method of converting.
The invention further relates to an image processing apparatus, comprising:
receiving means for receiving a signal representing an image;
a segmentation unit for determining a first set of initial segments of the image;
a conversion unit for converting the first set of initial segments into a second set of updated segments; and
an image processing unit for processing the image on the basis of the second set of updated segments.
Image segmentation is an important first step that often precedes other tasks such as segment based depth estimation or video compression. Generally, image segmentation is the process of partitioning an image into a set of non-overlapping parts, or segments, that together correspond as much as possible to the physical objects that are present in the scene. There are various ways of approaching the task of image segmentation, including histogram-based segmentation, edge-based segmentation, region-based segmentation, and hybrid segmentation.
The method of the kind described in the opening paragraph is known in the art. With this known method a first set of initial segments of an image is converted into a second set of updated segments of the image. The method comprises iterative updates of intermediate segments being derived from respective initial segments. An update comprises determining whether a particular pixel being located at a border between a first intermediate segment and a second intermediate segment should be moved from the first intermediate segment to the second intermediate segment. This is based on the color value of the particular pixel, the mean color value of the first intermediate segment and the mean color value of the second intermediate segment. If it appears that the particular pixel should be moved from the first intermediate segment to the second intermediate segment, new mean color values are computed for the new intermediate segments. Subsequently a next pixel is evaluated and optionally moved. After evaluation of the relevant pixels of the image in one scan over the image, another scan of evaluations over the image is started.
The known method however suffers from the fact that several segmentation refinement iterations of the complete image have to be performed for realizing pixel-precise segmentation. Typically, twenty scans over the image are made to achieve the second set of updated segments of the image. This approach is therefore very expensive in terms of memory access, power consumption and computational effort.
It is an object of the invention to provide a method of the kind described in the opening paragraph which is relatively efficient with regard to memory access.
This object of the invention is achieved in that first a number of iterative updates is performed for pixels of a first two-dimensional block of pixels of the image and after that the number of iterative updates is performed for pixels of a second two-dimensional block of pixels of the image. Typically the dimensions of the blocks of pixels are 8*8 or 16*16 pixels. The evaluations are performed for the relevant pixels in a block in a number of scans. That means that, e.g. row by row, the relevant pixels in the block under consideration are evaluated, and after that the relevant pixels of that block are evaluated again. Note that the parameters of the segments are adapted after each evaluation. After the relevant pixels of a block of pixels have been evaluated in a number of scans, the pixel values of another block of pixels are evaluated in a similar way. With relevant pixels are meant those pixels which are located at a border between two segments. Note that a border moves, i.e. the edge of a segment changes, if a pixel is taken from an intermediate segment and added to its neighboring intermediate segment. Therefore the set of relevant pixels of a block is different for each of the scans.
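The block-wise evaluation order described above can be illustrated with a minimal sketch. All function and variable names below are chosen for illustration only; the sketch is simplified to a single color channel, 4-connected candidate segments, and no regularization term, and is not the claimed implementation:

```python
import numpy as np

def refine_blockwise(image, labels, block=8, scans=4):
    """Sketch of block-wise segment refinement: all scans for one block
    finish before the next block is visited, so each block is read from
    memory only once.  Single channel, no regularization term."""
    h, w = image.shape
    # Per-segment running sums and counts, so segment means can be
    # adapted incrementally after each pixel move.
    n_seg = labels.max() + 1
    sums = np.bincount(labels.ravel(), weights=image.ravel(), minlength=n_seg)
    counts = np.bincount(labels.ravel(), minlength=n_seg).astype(float)

    def mean(s):
        return sums[s] / counts[s]

    for by in range(0, h, block):
        for bx in range(0, w, block):
            for _ in range(scans):              # several scans per block
                for y in range(by, min(by + block, h)):
                    for x in range(bx, min(bx + block, w)):
                        a = labels[y, x]
                        # Only border pixels have a differently-labelled
                        # neighbour, so only they can actually move.
                        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                            ny, nx = y + dy, x + dx
                            if not (0 <= ny < h and 0 <= nx < w):
                                continue
                            b = labels[ny, nx]
                            if b == a or counts[a] <= 1:
                                continue
                            v = image[y, x]
                            # Move the pixel if it fits segment B better.
                            if (v - mean(b)) ** 2 < (v - mean(a)) ** 2:
                                labels[y, x] = b
                                sums[a] -= v; counts[a] -= 1
                                sums[b] += v; counts[b] += 1
                                break
    return labels
```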
An advantage of the method according to the invention is that a sliding window, comprising the pixels of subsequent blocks, is moved over the image only once. That means that the blocks of pixels have to be accessed only once from a memory device. Typically the pixel values of a block under consideration are temporarily stored in a cache. Then the iterations are performed on the basis of the values in the cache.
In an embodiment of the method according to the invention, the first parameter corresponds to a mean color value of the first intermediate segment, the second parameter corresponds to a mean color value of the second intermediate segment and the pixel value of the particular pixel represents the color value of the particular pixel. Color is a relatively good criterion for image segmentation. An advantage of this embodiment according to the invention is that the updated segments correspond relatively well to objects in the scene.
In an embodiment of the method according to the invention, the particular update is based on a regularization term depending on the shape of the first one of the intermediate segments, the regularization term being computed on the basis of a first group of pixels of the first two-dimensional block of pixels. In other words, the regularization term depends on the shape of the boundary between segments. The regularization term penalizes irregular segment boundaries. An advantage of this embodiment according to the invention is that relatively regular segment boundaries are determined. Therefore this embodiment according to the invention is less sensitive to noise in the image.
In an embodiment of the method according to the invention, a first sequence of the number of iterative updates are performed in a row-by-row scanning within the first block of pixels and a second sequence of the number of iterative updates are performed in a column-by-column scanning within the first block of pixels. In other words, the scanning directions are alternated between successive scans. For instance, first a scan in a horizontal direction is performed and then in vertical direction. Alternatively, first a scan in a vertical direction is performed and then in horizontal direction. Optionally, a third scan is in the opposite direction of the first scan, e.g. left-to-right versus right-to-left. Optionally, a fourth scan is in the opposite direction of the second scan, e.g. top-to-bottom versus bottom-to-top. Preferably the values of the regularization terms are different for the various scans, e.g. starting from a low curvature penalty to a high curvature penalty.
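The alternating scan directions can be sketched as follows (an illustrative helper with assumed names, not part of the claimed method): the four scans cover the same block but in row-by-row left-to-right, column-by-column top-to-bottom, and then the two reversed directions.

```python
def scan_orders(h, w):
    """Coordinate sequences for four scans over an h-by-w block with
    alternating directions: row-by-row left-to-right, column-by-column
    top-to-bottom, then the same two scans in the opposite direction."""
    rows_lr = [(y, x) for y in range(h) for x in range(w)]
    cols_tb = [(y, x) for x in range(w) for y in range(h)]
    rows_rl = [(y, x) for y in range(h) for x in reversed(range(w))]
    cols_bt = [(y, x) for x in range(w) for y in reversed(range(h))]
    return [rows_lr, cols_tb, rows_rl, cols_bt]
```

In line with the embodiment above, a different (e.g. increasing) regularization weight could be paired with each of the four returned scan orders.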
In an embodiment of the method according to the invention the first two-dimensional block of pixels is located adjacent to the second two-dimensional block of pixels. An advantage of this embodiment according to the invention is that a relatively simple memory allocation scheme is achieved.
In an embodiment of the method according to the invention the regularization term is computed on basis of the first group of pixels of the first two-dimensional block of pixels and a second group of pixels of the second two-dimensional block of pixels. By also taking into account pixels of a neighboring block of pixels a better regularization term can be computed for pixels at the border of a block.
It is a further object of the invention to provide a conversion unit of the kind described in the opening paragraph which is relatively efficient with regard to memory access.
This object of the invention is achieved in that the conversion unit comprises computation means for performing first a number of iterative updates for pixels of a first two-dimensional block of pixels of the image and for, after that, performing the number of iterative updates for pixels of a second two-dimensional block of pixels of the image.
It is advantageous to apply an embodiment of the conversion unit according to the invention in an image processing apparatus as described in the opening paragraph. The image processing apparatus may comprise additional components, e.g. a display device for displaying the processed images or storage means for storage of the processed images. The image processing unit might support one or more of the following types of image processing:
Video compression, i.e. encoding, e.g. according to the MPEG standard or H26L standard; or
Conversion of traditional monoscopic video (2D) video material into 3D video for viewing on a stereoscopic (3D) television. In this technology, structure from motion methods can be used to derive a depth map from two consecutive images in the video sequence; or
Image analysis for e.g. vision-based control like robotics or security applications.
Modifications of the method, and variations thereof, may correspond to modifications and variations of the conversion unit and of the image processing apparatus described.
These and other aspects of the method, of the conversion unit and of the image processing apparatus according to the invention will become apparent from and will be elucidated with respect to the implementations and embodiments described hereinafter and with reference to the accompanying drawings.
Same reference numerals are used to denote similar parts throughout the figures.
An important step in converting 2D video to 3D video is the identification of image segments or regions with homogeneous color, i.e., image segmentation. Depth discontinuities are assumed to coincide with the detected edges of homogeneous color regions. A single depth value is estimated for each color region. This depth estimation per region has the advantage that there exists by definition a large color contrast along the region boundary. The temporal stability of color edge positions is critical for the final quality of the depth maps. When the edges are not stable over time, an annoying flicker may be perceived by the viewer when the video is shown on a 3D color television. Thus, a time-stable segmentation method is the first step in the conversion process from 2D to 3D video. Image segmentation using a constant color model achieves this desired effect. This method of image segmentation is described in greater detail below. It is based on a first set of initial segments and iterative updates resulting in a second set of updated segments. In other words the segmentation is a conversion of a first set of initial segments into a second set of updated segments.
The constant color model assumes that the time-varying image of an object segment can be described in sufficient detail by the mean region color. An image is represented by a vector-valued function of image coordinates:
I(x,y)=[r(x,y),g(x,y),b(x,y)] (1)
where r(x,y), g(x,y) and b(x,y) are the red, green and blue color channels. The object is to find a region partition, referred to as segmentation L, consisting of a fixed number of segments N. The optimal segmentation Lopt is defined as the segmentation that minimizes the sum of an error term e(x,y) plus a regularization term ƒ(x,y) over all pixels in the image:

Lopt=arg minL Σ(x,y)(e(x,y)+kƒ(x,y)) (2)
where k is a regularization parameter that weights the importance of the regularization term. In the book “Pattern Classification”, by Richard O. Duda, Peter E. Hart, and David G. Stork, pp. 548-549, John Wiley and Sons, Inc., New York, 2001, equations are derived for a simple and efficient update of the error criterion when one sample is moved from one cluster to another cluster. These derivations were applied in deriving the equations of the segmentation method. The regularization term is based on a measure presented in the book “Understanding Synthetic Aperture Radar Images” by C. Oliver and S. Quegan, Artech House, 1998. The regularization term limits the influence that random signal fluctuations, such as sensor noise, have on the edge positions. The error e(x,y) at pixel position (x,y) depends on the color value I(x,y) and on the segment label L(x,y):
e(x,y)=∥I(x,y)−mL(x,y)∥₂² (3)
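As a minimal illustration of the error term of equation (3), the squared Euclidean distance between a pixel's color and its segment's mean color (the function name is chosen for illustration):

```python
import numpy as np

def error_term(pixel, seg_mean):
    """e(x,y): squared Euclidean distance between the pixel's RGB value
    I(x,y) and the mean color m of the segment it is labelled with."""
    d = np.asarray(pixel, dtype=float) - np.asarray(seg_mean, dtype=float)
    return float(np.dot(d, d))
```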
where mL(x,y) is the mean color for the segment with label L(x,y). The subscript at the double vertical bars denotes the Euclidean norm. The regularization term ƒ(x,y) depends on the shape of the boundary between segments:

ƒ(x,y)=Σ(x′,y′)χ(L(x,y),L(x′,y′)) (4)
where (x′,y′) are the coordinates of the 8-connected neighbor pixels of (x,y). The value of χ(A,B) depends on whether segment labels A and B differ:

χ(A,B)=1 if A≠B, and χ(A,B)=0 otherwise (5)
Function ƒ(x,y) has a straightforward interpretation. For a given pixel position (x,y), the function simply returns the number of 8-connected neighbor pixels that have a different segment label.
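This interpretation of ƒ(x,y) can be sketched directly (illustrative Python; the label image is assumed to be a NumPy array indexed as labels[y, x]):

```python
import numpy as np

def regularization_term(labels, x, y):
    """f(x,y): the number of 8-connected neighbours of (x,y) whose
    segment label differs from L(x,y)."""
    h, w = labels.shape
    count = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue  # skip the centre pixel itself
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] != labels[y, x]:
                count += 1
    return count
```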
Given the initial segmentation, a change is made at a segment boundary by assigning a boundary pixel to an adjoining segment. Suppose that a pixel with coordinates (x,y) currently in the segment with label A is tentatively moved to the segment with label B. Then the change in mean color for segment A is:

ΔmA=(mA−I(x,y))/(nA−1) (6)
and the change in mean color for segment B is:

ΔmB=(I(x,y)−mB)/(nB+1) (7)
where nA and nB are the number of pixels inside segments A and B, respectively. The proposed label change causes a corresponding change in the error function given by:

Δe=(nB/(nB+1))∥I(x,y)−mB∥₂²−(nA/(nA−1))∥I(x,y)−mA∥₂² (8)
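Assuming the incremental-update equations derived from Duda, Hart and Stork (pp. 548-549), the mean and error changes for a tentative pixel move can be sketched as follows (function and parameter names are illustrative):

```python
import numpy as np

def updated_means(pixel, mean_a, mean_b, n_a, n_b):
    """New segment means after moving the pixel from segment A (with
    n_a pixels and mean mean_a) to segment B (n_b pixels, mean mean_b)."""
    v = np.asarray(pixel, dtype=float)
    ma = np.asarray(mean_a, dtype=float)
    mb = np.asarray(mean_b, dtype=float)
    new_a = ma - (v - ma) / (n_a - 1.0)   # pixel removed from A
    new_b = mb + (v - mb) / (n_b + 1.0)   # pixel added to B
    return new_a, new_b

def error_change(pixel, mean_a, mean_b, n_a, n_b):
    """Change in the global color error caused by the move: a negative
    value means the move reduces the error."""
    v = np.asarray(pixel, dtype=float)
    da = v - np.asarray(mean_a, dtype=float)
    db = v - np.asarray(mean_b, dtype=float)
    return n_b / (n_b + 1.0) * np.dot(db, db) - n_a / (n_a - 1.0) * np.dot(da, da)
```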
The proposed label change from A to B at pixel (x,y) also changes the global regularization function ƒ. The proposed move affects ƒ not only at (x,y), but also at the 8-connected neighbor pixel positions of (x,y). The change in regularization function is given by the sum:

Δƒ=2Σ(x′,y′)(χ(B,L(x′,y′))−χ(A,L(x′,y′))) (9)

where (x′,y′) are the 8-connected neighbor pixels of (x,y).
The proposed label change improves the fit criterion if:

Δe+kΔƒ<0 (10)
The image processing apparatus 600 comprises:
receiving means 602 for receiving a signal representing video images;
a segmentation unit 604 for determining a first set of initial segments of one of the video images;
a conversion unit 606 for converting the first set of initial segments into a second set of updated segments A′,B′,C′,D′; and
an image processing unit 608 for processing the video image 110b on the basis of the second set of updated segments A′,B′,C′,D′.
The input signal may be a broadcast signal received via an antenna or cable but may also be a signal from a storage device like a VCR (Video Cassette Recorder) or a Digital Versatile Disk (DVD). The input signal is provided at the input connector 610. The image processing apparatus 600 provides the output at the output connector 612.
The conversion unit 606 for converting the first set of initial segments into a second set of updated segments may be implemented using one processor. Normally, this function is performed under control of a software program product. During execution, normally the software program product is loaded into a memory, like a RAM, and executed from there. The program may be loaded from a background memory, like a ROM, hard disk, or magnetic and/or optical storage, or may be loaded via a network like the Internet. Optionally an application specific integrated circuit provides the disclosed functionality.
The segmentation unit 604, the conversion unit 606 and the image processing unit 608 can be combined into one processor.
The output might be a stream of compressed video data. Alternatively the output represents 3D video content. The conversion of the received video images into the 3D video content might be as disclosed by M. Op de Beeck and A. Redert, in “Three dimensional video for the home”, in Proceedings of the International Conference on Augmented Virtual Environments and Three-Dimensional Imaging, Myconos, Greece, 2001, pp 188-191.
The image processing apparatus 600 might e.g. be a TV. The image processing apparatus 600 might comprise a display device. Alternatively the image processing apparatus 600 does not comprise the optional display device but provides the output data to an apparatus that does comprise a display device. Then the image processing apparatus 600 might be e.g. a set top box, a satellite-tuner, a VCR player, a DVD player or recorder. The image processing apparatus 600 might also be a system being applied by a film-studio or broadcaster.
Optionally the image processing apparatus 600 comprises storage means, like a hard-disk or means for storage on removable media, e.g. optical disks.
The conversion unit 706 comprises computation means for performing first a number of iterative updates for pixels of a first two-dimensional block of pixels 208 of the image and for, after that, performing the number of iterative updates for pixels of a second two-dimensional block of pixels 214 of the image. The pixels of the blocks 200-216 are simultaneously cached within the cache 704 when the pixels of the central block 208 are evaluated. After all evaluations have been performed for the central block 208 a new window 502 is defined within the image. This new window comprises the blocks 206-222. The central block 214 of this window will be evaluated now.
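The sliding window of cached blocks can be illustrated schematically. The sketch below assumes, as in the description of blocks 200-216 above, a 3×3 window of blocks cached around the centre block under evaluation, with the window sliding one block at a time; names are illustrative:

```python
def sliding_block_windows(img_h, img_w, block=8):
    """Yield (window, centre) pairs in block coordinates: each window is
    the 3x3 group of blocks cached while its centre block is evaluated,
    mirroring the window that slides over the image one block at a time."""
    bh, bw = img_h // block, img_w // block
    for by in range(1, bh - 1):          # interior centres only, so the
        for bx in range(1, bw - 1):      # full 3x3 window fits the image
            window = [(by + dy, bx + dx)
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            yield window, (by, bx)
```

Between two consecutive centres only one new column (or row) of blocks enters the window, so each block is fetched from memory once and reused from the cache thereafter.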
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word ‘comprising’ does not exclude the presence of elements or steps not listed in a claim. The word “a” or “an” preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means can be embodied by one and the same item of hardware.
Number | Date | Country | Kind
---|---|---|---
03101178.6 | Apr 2003 | EP | regional

Filing Document | Filing Date | Country | Kind | 371c Date
---|---|---|---|---
PCT/IB04/50525 | 4/27/2004 | WO |  | 10/25/2005