The present invention relates, in general, to the transmission of digital pictures and, in particular, to enhancing the visibility of objects of interest in digital pictures, especially digital pictures that are displayed on devices that rely on low-resolution, low bit-rate video coding.
There is an increasing demand for delivering video content to handheld devices, such as cell phones and PDAs. Because of small screen sizes, limited bandwidth, and limited decoder-end processing power, the videos are encoded at low bit rates and at low resolutions. One of the main problems of low-resolution, low bit-rate video encoding is the degradation or loss of objects crucial to the perceived video quality. For example, it is annoying to watch a video clip of a soccer match or a tennis match when the ball is not clearly visible.
It is, therefore, desirable to highlight objects of interest to improve the subjective visual quality of low-resolution, low bit-rate video. In various implementations of the present invention, the visibility of an object of interest in a digital image is enhanced, given the approximate location and size of the object in the image, or the visibility of the object is enhanced after refinement of the approximate location and size of the object. Object enhancement provides at least two benefits. First, object enhancement makes the object easier to see and follow, thereby improving the user experience. Second, object enhancement helps the object sustain less degradation during the encoding (i.e., compression) stage. One main application of the present invention is video delivery to handheld devices, such as cell phones and PDAs, but the features, concepts, and implementations of the present invention also may be useful for a variety of other applications, contexts, and environments, including, for example, video over internet protocol (low bit-rate, standard-definition content).
The present invention provides for highlighting objects of interest in video to improve the subjective visual quality of low-resolution, low bit-rate video. The inventive system and method are able to handle objects of different characteristics and to operate in fully-automatic, semi-automatic (i.e., manually assisted), and fully-manual modes. Enhancement of objects can be performed at a pre-processing stage (i.e., before or in the video encoding stage) or at a post-processing stage (i.e., after the video decoding stage).
In accordance with the present invention, the visibility of an object in a digital picture is enhanced by providing an input video containing an object, storing information representative of the nature and characteristics of the object, and developing, in response to the input video and the stored information, object localization information that identifies and locates the object. In response to the object localization information, an enhanced video of that portion of the input video that contains the object and the region in which the object is located is developed from the input video, and the enhanced video is then encoded.
Referring to
The
The
In general, object localization module 14 implements one or more methods for identifying and locating an object of interest.
Ideally, object localization module 14 operates in a fully automated mode. In practice, however, some manual assistance might be required to correct errors made by the system, or, at the very least, to define important objects for the system to localize. Enhancing non-object areas can cause the viewer to be distracted and miss the real action. To avoid or minimize this problem, a user can draw, as described above, an ellipse around the object and the system then can track the object from the specified location. If an object is successfully located in a frame, object localization module 14 outputs the corresponding ellipse parameters (i.e., center point, major axis, and minor axis). Ideally, the contour of this bounding ellipse would coincide with that of the object.
When the parameters are only approximate, however, and the resulting ellipse does not tightly contain the object, applying object enhancement might cause two problems. First, the object might not be wholly enhanced because the ellipse does not include the entire object. Second, non-object areas might be enhanced. Because both of these results can be undesirable, it is useful, under such circumstances, to refine the object region before enhancement. Refinement of object localization information is considered in greater detail below.
The
When enhancing the object, the visibility of the object is improved by applying image processing operations in the region in which the object of interest is located. These operations can be applied along the object boundary (e.g., edge sharpening), inside the object (e.g., texture enhancement), and possibly even outside the object (e.g., contrast increase, blurring outside the object area). For example, one way to draw more attention to an object is to sharpen the edges inside the object and along the object contour. This makes the details in the object more visible and also makes the object stand out from the background. Furthermore, sharper edges tend to survive encoding better. Another possibility is to enlarge the object, for instance by iteratively applying smoothing, sharpening, and object refinement operations, not necessarily in that order.
Inclusion of object enhancement in the
As indicated above, when the object localization information only approximates the nature of the object and the location of the object in each frame, refinement of the object localization information might be required prior to enhancement to avoid enhancing features outside the boundary of the region in which the object is located.
The development of the object localization information by object localization module 14 and the delivery of the object localization information to object enhancement module 16 can be fully-automatic as described above. As frames of the input video are received by object localization module 14, the object localization information is updated by the object localization module and the updated object localization information is delivered to object enhancement module 16.
The development of the object localization information by object localization module 14 and the delivery of the object localization information to object enhancement module 16 also can be semi-automatic. Instead of the object localization information being delivered directly from object localization module 14 to object enhancement module 16, a user, with the object localization information available, can manually add to the digital picture of the input video markings, such as boundary lines, that define the region of predetermined size in which the object is located.
The development of the object localization information and delivery of the object localization information to object enhancement module 16 also can be fully-manual. In such operation, a user views the digital picture of the input video and manually adds to the digital picture markings, such as boundary lines, that define the region of predetermined size in which the object is located. As a practical matter, fully-manual operation is not recommended for coverage of live events.
The refinement of object localization information, when necessary or desired, involves object boundary estimation, wherein the exact boundary of the object is estimated. The estimation of exact boundaries helps in enhancing the object visibility without the side effect of unnatural object appearance and motion and is based on several criteria. Three approaches for object boundary estimation are disclosed.
The first is an ellipse-based approach that determines or identifies the ellipse that most tightly bounds the object by searching over a range of ellipse parameters. The second approach for object boundary estimation is a level-set based search, wherein a level-set representation of the object neighborhood is obtained and then a search is conducted for the level-set contour that most likely represents the object boundary. A third approach for object boundary estimation involves curve evolution methods, such as active contours (snakes), that can be used to shrink or expand a curve under certain constraints so that it converges to the object boundary. Only the first and second approaches for object boundary estimation are considered in greater detail below.
In the ellipse-based approach, object boundary estimation is equivalent to determining the parameters of the ellipse that most tightly bounds the object. This approach searches over a range of ellipse parameters around the initial values (i.e., the output of object localization module 14) and determines the tightness with which each ellipse bounds the object. The output of the algorithm is the set of parameters of the ellipse that most tightly bounds the object.
The tightness measure of an ellipse is defined to be the average gradient of image intensity along the edge of the ellipse. The rationale behind this measure is that the tightest bounding ellipse should follow the object contour closely, and the gradient of image intensity is typically high along the object contour (i.e., the edge between object and background). The flowchart for the object boundary estimation algorithm is shown in the accompanying figure.
The flow chart of
The ellipse-based approach may be applied to environments in which the boundary between the object and the background has a uniformly high gradient, but it may also be applied to environments in which the boundary does not. For example, this approach remains useful even if the object and/or the background has variations in intensity along the object/background boundary.
The ellipse-based approach produces, in a typical implementation, the description of a best-fit ellipse. The description typically includes the center point and the major and minor axes.
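The search described above can be sketched in a few lines of Python. The sketch below is illustrative only: it assumes an axis-aligned ellipse, a brute-force search over a small parameter window, and grayscale input supplied as a NumPy array; the function names are placeholders, not names taken from the original text.

```python
import numpy as np

def gradient_magnitude(image):
    # Central-difference gradient magnitude of a grayscale image.
    gy, gx = np.gradient(image.astype(float))
    return np.hypot(gx, gy)

def ellipse_tightness(grad_mag, cx, cy, a, b, n_samples=180):
    # Tightness measure: average gradient magnitude sampled along the
    # (axis-aligned) ellipse boundary.
    t = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    xs = np.clip(np.round(cx + a * np.cos(t)).astype(int), 0, grad_mag.shape[1] - 1)
    ys = np.clip(np.round(cy + b * np.sin(t)).astype(int), 0, grad_mag.shape[0] - 1)
    return grad_mag[ys, xs].mean()

def refine_ellipse(image, cx0, cy0, a0, b0, delta=4):
    # Brute-force search over a small window of ellipse parameters around the
    # initial estimate; return the parameters of the tightest bounding ellipse.
    grad_mag = gradient_magnitude(image)
    best, best_score = (cx0, cy0, a0, b0), -np.inf
    for cx in range(cx0 - delta, cx0 + delta + 1):
        for cy in range(cy0 - delta, cy0 + delta + 1):
            for a in range(max(2, a0 - delta), a0 + delta + 1):
                for b in range(max(2, b0 - delta), b0 + delta + 1):
                    score = ellipse_tightness(grad_mag, cx, cy, a, b)
                    if score > best_score:
                        best_score, best = score, (cx, cy, a, b)
    return best
```

Here the tightness score of each candidate ellipse is the average gradient magnitude sampled along its boundary, in line with the measure defined above.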
An ellipse-based representation can be inadequate for describing objects with arbitrary shapes. Even elliptical objects may appear to be of irregular shape when motion-blurred or partially occluded. The level-set representation facilitates the estimation of boundaries of arbitrarily shaped objects.
A level-set representation is analogous in many ways to a topographical map. The topographical map typically includes closed contours for various values of elevation.
In practice, the image I can be a subimage containing the object whose boundary is to be estimated. A level-set representation LI(M) is extracted, where M={i1, i2, . . . , iN} is a set of intensity levels. The set M can be constructed based on the probable intensities of the object pixels, or could simply span the entire intensity range with a fixed step (e.g., M={0.5, 1.5, . . . , 254.5, 255.5}). Then, all the level-set curves (i.e., closed contours) Cj contained in LI(M) are considered. Object boundary estimation is cast as the problem of determining the level-set curve, C*, that best satisfies a number of criteria relevant to the object. These criteria may involve, among others, variables such as the mean intensity and the standard deviation of intensities of the region contained by the curve, the area of that region, the location of its center, and the average intensity gradient along the curve.
The criteria may place constraints on these variables based on prior knowledge about the object. In the following, there is described a specific implementation of object boundary estimation using level-sets.
Let mref, sref, aref, and xref=(xref, yref) be the reference values for the mean intensity, standard deviation of intensities, area, and center, respectively, of the object. These can be initialized based on prior knowledge about the object (e.g., from the object parameters produced by object localization module 14, such as a bounding ellipse). The set of levels, M, is then constructed as,
M = {imin, imin+Δl, imin+2Δl, . . . , imax},
where imin=└mref−sref┘−0.5, imax=└mref+sref┘+0.5, and Δl=└(imax−imin)/N┘, where N is a preset value (e.g., 10). Note that └.┘ denotes an integer flooring operation.
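As a rough illustration (not part of the original text), the construction of M and the extraction of the closed level-set contours Cj might look as follows, here using scikit-image's marching-squares contour finder as the level-set extractor; the guard keeping the step at least 1 is an added assumption.

```python
import numpy as np
from skimage import measure

def build_levels(m_ref, s_ref, n_levels=10):
    # M = {imin, imin+Δl, ..., imax}, with imin = floor(mref - sref) - 0.5,
    # imax = floor(mref + sref) + 0.5 and Δl = floor((imax - imin) / N).
    i_min = np.floor(m_ref - s_ref) - 0.5
    i_max = np.floor(m_ref + s_ref) + 0.5
    step = max(1.0, np.floor((i_max - i_min) / n_levels))
    return np.arange(i_min, i_max + step, step)

def level_set_curves(subimage, levels):
    # All closed level-set contours Cj of the subimage at the given levels.
    curves = []
    for level in levels:
        for contour in measure.find_contours(subimage.astype(float), level):
            if np.allclose(contour[0], contour[-1]):  # keep only closed contours
                curves.append(contour)                # (row, col) vertex array
    return curves
```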
For a particular level-set curve Cj, let mj, sj, aj, and xj=(xj, yj) be the measured values of the mean intensity, standard deviation of intensities, area, and center, respectively, of the image region contained by Cj. Also computed is the average intensity gradient, Gavg(Cj), along Cj; in other words, Gavg(Cj) is the average of the gradient magnitudes at each pixel on Cj. For each Cj, a score is now computed as follows:
S(Cj) = Gavg(Cj) · Sa(aref, aj) · Sx(xref, xj)
where Sa and Sx are similarity functions whose output values lie in the range [0, 1], with a higher value indicating a better match between the reference and measured values. For example, Sa = exp(−|aref − aj|) and Sx = exp(−∥xref − xj∥²). The object boundary C* is then estimated as the curve that maximizes this score, i.e., C* = arg max S(Cj) over all Cj.
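A minimal sketch of the scoring and selection step is given below, assuming the contours come from the previous sketch and using the example similarity functions given above (in practice the area term would typically be normalized, since |aref − aj| measured in pixels makes exp(−|aref − aj|) decay very quickly).

```python
import numpy as np

def curve_area_and_center(curve):
    # Shoelace area and vertex centroid of a closed (row, col) contour.
    y, x = curve[:, 0], curve[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    return area, np.array([x.mean(), y.mean()])

def curve_score(curve, grad_mag, a_ref, x_ref):
    # S(Cj) = Gavg(Cj) * Sa(aref, aj) * Sx(xref, xj)
    rows = np.clip(np.round(curve[:, 0]).astype(int), 0, grad_mag.shape[0] - 1)
    cols = np.clip(np.round(curve[:, 1]).astype(int), 0, grad_mag.shape[1] - 1)
    g_avg = grad_mag[rows, cols].mean()           # average gradient along the curve
    a_j, x_j = curve_area_and_center(curve)
    s_a = np.exp(-abs(a_ref - a_j))               # example similarity in area
    s_x = np.exp(-np.sum((np.asarray(x_ref) - x_j) ** 2))  # example similarity in center
    return g_avg * s_a * s_x

def estimate_boundary(curves, grad_mag, a_ref, x_ref):
    # C* is the level-set curve that maximizes the score.
    return max(curves, key=lambda c: curve_score(c, grad_mag, a_ref, x_ref))
```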
After estimating the object boundary, the reference values mref, sref, aref, and xref can be updated with a learning factor α ∈ [0, 1] (e.g., mrefnew = αmj + (1 − α)mref). In the case of a video sequence, the factor α could be a function of time (e.g., frame index) t, starting at a high value, then decreasing with each frame, and finally saturating to a fixed low value, αmin.
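For illustration, a simple decaying learning factor and the corresponding reference update might be implemented as follows; the constants are arbitrary choices, not values taken from the text.

```python
def learning_factor(t, alpha_start=0.9, alpha_min=0.1, decay=0.05):
    # Time-varying learning factor: starts high, decreases with each frame
    # index t, and saturates at alpha_min. Constants are illustrative only.
    return max(alpha_min, alpha_start - decay * t)

def update_reference(ref, measured, alpha):
    # e.g., mref_new = alpha * mj + (1 - alpha) * mref
    return alpha * measured + (1.0 - alpha) * ref
```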
In the enhancement of the object, the visibility of the object is improved by applying image processing operations in the neighborhood of the object. These operations may be applied along the object boundary (e.g., edge sharpening), inside the object (e.g., texture enhancement), and possibly even outside the object (e.g., contrast increase). In implementations described herein, a number of methods for object enhancement are proposed. A first is to sharpen the edges inside the object and along its contour. A second is to enlarge the object by iteratively applying smoothing, sharpening and boundary estimation operations, not necessarily in that order. Other possible methods include the use of morphological filters and object replacement.
One way to draw more attention to an object is to sharpen the edges inside the object and along the contour of the object. This makes the details in the object more visible and also makes the object stand out from the background. Furthermore, sharper edges tend to survive compression better. The algorithm for object enhancement by sharpening operates on an object one frame at a time and takes as its input the intensity image I(x, y) and the object parameters (i.e., location, size, etc.) provided by object localization module 14. The algorithm comprises three steps.
The sharpening filter Fα is defined as the difference of the Kronecker delta function and the discrete Laplacian operator ∇²α:

Fα(x, y) = δ(x, y) − ∇²α(x, y)
The parameter α ∈ [0, 1] controls the shape of the Laplacian operator. In practice, a 3×3 filter kernel is constructed with the center of the kernel being the origin (0, 0). An example of such a kernel is shown below:
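The specific kernel values are not reproduced here. Assuming the common 3×3 α-parameterized form of the discrete Laplacian, the sharpening kernel Fα can be built and applied as sketched below; for α = 0 this yields the familiar kernel [[0, −1, 0], [−1, 5, −1], [0, −1, 0]].

```python
import numpy as np
from scipy import ndimage

def sharpening_kernel(alpha=0.0):
    # Assumed alpha-parameterized 3x3 discrete Laplacian (a common choice;
    # the exact kernel used in the original text is not reproduced).
    lap = (1.0 / (alpha + 1.0)) * np.array([[alpha, 1.0 - alpha, alpha],
                                            [1.0 - alpha, -4.0, 1.0 - alpha],
                                            [alpha, 1.0 - alpha, alpha]])
    delta = np.zeros((3, 3))
    delta[1, 1] = 1.0                 # Kronecker delta at the origin (0, 0)
    return delta - lap                # F_alpha = delta - Laplacian

def sharpen_region(image, mask, alpha=0.0):
    # Convolve with F_alpha and replace only the pixels inside the object mask.
    sharpened = ndimage.convolve(image.astype(float), sharpening_kernel(alpha), mode='nearest')
    out = image.astype(float)
    out[mask] = sharpened[mask]
    return np.clip(out, 0.0, 255.0)
```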
Object enhancement by enlargement attempts to extend the contour of an object by iteratively applying smoothing, sharpening, and boundary estimation operations, not necessarily in that order. The flowchart for a specific embodiment of the object enlargement algorithm is shown in the accompanying figure.
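One possible, purely illustrative realization of the enlargement loop is sketched below. It reuses the sharpen_region sketch above, applies the Gaussian smoothing filter described next, and uses a simple binary dilation as a stand-in for the boundary re-estimation step (in practice the ellipse or level-set search would be used).

```python
import numpy as np
from scipy import ndimage

def enlarge_object(image, mask, iterations=3, sigma=1.0, alpha=0.0):
    # Iterate smoothing, sharpening, and object-region re-estimation.
    for _ in range(iterations):
        image = ndimage.gaussian_filter(np.asarray(image, dtype=float), sigma=sigma)  # smoothing G_sigma
        image = sharpen_region(image, mask, alpha)                                     # sharpening F_alpha
        mask = ndimage.binary_dilation(mask)                                           # grow the object region
    return image, mask
```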
The smoothing filter Gσ is a two-dimensional Gaussian function
The parameter σ>0 controls the shape of the Gaussian function, greater values resulting in more smoothing. In practice, a 3×3 filter kernel is constructed with the center of the kernel being the origin (0, 0). An example of such a kernel is shown below:
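The specific kernel values are again not reproduced here; a kernel consistent with the description can be sampled from the 2-D Gaussian and normalized, as in the sketch below. For σ ≈ 0.85 this gives approximately the widely used kernel (1/16)·[[1, 2, 1], [2, 4, 2], [1, 2, 1]].

```python
import numpy as np

def gaussian_kernel_3x3(sigma):
    # 3x3 kernel sampled from the 2-D Gaussian, centered at the origin (0, 0)
    # and normalized so that its entries sum to 1.
    xs, ys = np.meshgrid([-1, 0, 1], [-1, 0, 1])
    kernel = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma ** 2))
    return kernel / kernel.sum()
```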
The
To optimize enhancement of the input video, object-aware encoder 18 receives the object localization information from object localization module 14 and thereby better preserves the enhancement of the region in which the object is located and, consequently, the object itself. Whether or not the input has been enhanced, the region in which the object is located is better preserved than it would be without object-aware encoding, and the enhancement further minimizes object degradation during compression. This is accomplished by suitably managing encoding decisions and the allocation of resources, such as bits.
Object-aware encoder 18 can be arranged to make “object-friendly” macroblock (MB) mode decisions, namely those that are less likely to degrade the object. Such an arrangement, for example, can include an object-friendly partitioning of the MB for prediction purposes, such as illustrated by the accompanying figure.
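As one illustration of biasing resource allocation toward the object region (this particular policy is an assumption for illustration, not the patent's scheme), macroblocks that overlap the object region could be assigned a negative quantization-parameter offset so that they receive finer quantization and more bits. The sketch expects a NumPy boolean mask marking the object region.

```python
def macroblock_qp_offsets(object_mask, mb_size=16, qp_delta=-4):
    # Give every 16x16 macroblock that overlaps the object region a negative
    # QP offset (finer quantization, more bits); leave other MBs unchanged.
    # Illustrative bit-allocation policy only.
    rows, cols = object_mask.shape
    offsets = {}
    for mb_y in range(0, rows, mb_size):
        for mb_x in range(0, cols, mb_size):
            block = object_mask[mb_y:mb_y + mb_size, mb_x:mb_x + mb_size]
            offsets[(mb_y // mb_size, mb_x // mb_size)] = qp_delta if block.any() else 0
    return offsets
```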
Referring to
As indicated in
Accordingly, the
Ignoring temporarily the object-aware post-processing module 24, shown in dotted lines in
The modes of operation of the
Instead of enhancing the visibility of the object before encoding as described above, the input video can be conducted directly to object-aware encoder module 18, as represented by dotted line 19, and encoded without enhancement of the object's visibility; the enhancement is then effected by an object-aware post-processing module 24 in receiver 20. This is the post-processing mode of operation.
As indicated above, one advantage of a transmitter-end object highlighting system (i.e., the pre-processing mode of operation) is that it avoids the need to increase the complexity of the receiver end, which is typically a low-power device. In addition, the pre-processing mode of operation allows the use of standard video decoders, which facilitates deployment of the system.
The implementations that are described may be implemented in, for example, a method or process, an apparatus, or a software program. Even if only discussed in the context of a single form of implementation (e.g., discussed only as a method), the implementation or features discussed may also be implemented in other forms (e.g., an apparatus or a program). An apparatus may be implemented in, for example, appropriate hardware, software, and firmware. The methods may be implemented in an apparatus such as, for example, a computer or other processing device. Additionally, the methods may be implemented by instructions being performed by a processing device or other apparatus, and such instructions may be stored on a computer-readable medium such as, for example, a CD or other computer-readable storage device, or an integrated circuit.
As should be evident to one skilled in the art, implementations may also produce a signal formatted to carry information that may be, for example, stored or transmitted. The information may include, for example, instructions for performing a method, or data produced by one of the described implementations. For example, a signal may be formatted to carry as data various types of object information (e.g., location, shape), and/or to carry as data encoded image data.
Although the invention is illustrated and described herein with reference to specific embodiments, the invention is not intended to be limited to the details shown. Rather, various modifications may be made in the details within the scope and range of equivalents of the claims and without departing from the invention.
This application claims the benefit of U.S. Provisional Patent Application Ser. No. 61/123,844 (Atty Docket PU080054), entitled “PROCESSING IMAGES HAVING OBJECTS” and filed Apr. 11, 2008, which is incorporated by reference herein in its entirety.