PICTURE ATTRIBUTE ALLOCATION

Information

  • Publication Number
    20100045693
  • Date Filed
    July 20, 2007
  • Date Published
    February 25, 2010
Abstract
The invention concerns image processing and, in particular, the processing of picture attribute fields for an image. A method of obtaining a new picture attribute field of an image is disclosed in which a picture attribute value at one position is allocated to a new position in the image in dependence upon the value of a parameter, such as luminance data, at the original position and at the new position and/or in dependence on the distance between the original position and the new position. The invention may be used to process picture attribute fields comprising: motion vectors; motion vector confidence; segment labels; depth labels; texture labels.
Description

The invention relates to image processing and, in particular, concerns the processing of picture attribute fields for an image. In one embodiment the picture attribute field is a motion vector field for an image in a sequence of images, where the vectors describe the changes in the positions of features portrayed in the images between successive images in the sequence.


The use of motion vectors in the processing of sequences of images such as film frames or video fields is widespread. Motion vectors are used in standards conversion, e.g. temporal sample rate conversion and de-interlacing, and other processes such as noise reduction. Almost invariably the images are spatially sampled on a pixel raster and each pixel is represented by one or more numerical values representative of its luminance and/or chromaticity.


Known methods of deriving motion vectors for a sequence of images include Block Matching and Phase Correlation. In Block Matching one or more contiguous pixels from a position within a first image in the sequence are compared with pixels at various positions in a second image from the sequence, and the position of minimum pixel value difference, i.e. closest match, is established. The second image may occur before or after the first image in the sequence of images. A motion vector is created to describe the displacement between the pixel or group of pixels in the first image and the matching pixel or group of pixels in the second image. This vector is then associated with the compared pixel or pixels.
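By way of illustration only, the Block Matching principle described above might be sketched in Python as follows (assuming NumPy, an exhaustive search window and a sum-of-absolute-differences match criterion; the function name and its parameters are illustrative and not taken from the patent text):

import numpy as np

def block_match(first, second, y, x, block=8, search=7):
    """Return the (dy, dx) displacement of the block at (y, x) in `first`
    that best matches a block in `second`, using the minimum sum of
    absolute pixel differences over an exhaustive search window."""
    ref = first[y:y + block, x:x + block].astype(np.int32)
    best_cost, best_vec = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > second.shape[0] or xx + block > second.shape[1]:
                continue  # candidate block falls outside the second image
            cand = second[yy:yy + block, xx:xx + block].astype(np.int32)
            cost = np.abs(ref - cand).sum()
            if best_cost is None or cost < best_cost:
                best_cost, best_vec = cost, (dy, dx)
    return best_vec  # motion vector associated with the block at (y, x)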


In Phase Correlation the images are divided into blocks of pixels and a two-dimensional Fourier transform is applied to each block. The phase differences between respective spatial frequency components of co-located blocks in successive images of the sequence are evaluated to determine the changes in the positions of image features from one image to the next. These phase differences can be used to calculate motion vectors. Phase Correlation has the advantage over Block Matching that more than one direction of motion can be determined from a single block of pixels; for example there may be two, differently moving, objects portrayed within the block.
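A correspondingly minimal Phase Correlation sketch (again illustrative only; assuming NumPy and a normalised cross-power spectrum) is given below. Each strong peak in the returned surface corresponds to one candidate displacement, which is why a single block can yield more than one direction of motion:

import numpy as np

def phase_correlate(block_a, block_b):
    """Return the phase-correlation surface for two co-located blocks from
    successive images; each strong peak indicates a candidate displacement,
    measured as an offset from the centre of the surface."""
    A = np.fft.fft2(block_a)
    B = np.fft.fft2(block_b)
    cross = A * np.conj(B)
    cross /= np.maximum(np.abs(cross), 1e-9)   # keep phase, discard magnitude
    surface = np.real(np.fft.ifft2(cross))
    return np.fft.fftshift(surface)            # shift so zero displacement is at the centre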


Regardless of the way the motion vectors are determined, each vector will always be associated with a particular pixel, group of pixels or block. This means that the set of vectors derived from an image will be spatially sampled; and, because it is unusual to derive a vector from every pixel, the spatial resolution of the vector field is usually lower than the spatial resolution of the image to which it refers.


In many image processing tasks it is helpful to have access to a motion vector field where the spatial sampling of the vectors is different from the inherent sampling pattern of the motion estimation process used to derive the vectors. For example there may be a need for at least one vector for every pixel (where the vectors are derived from blocks of pixels); or, appropriate vectors may be needed for resampling, decimating or interpolating the image to a sampling raster different from that on which it was received.


More generally, it may be helpful to have access to a field of any picture attribute having a new spatial sampling of the picture attribute different from a previous spatial sampling of the picture attribute.


The invention consists in one aspect of a method and apparatus for obtaining a new picture attribute field of an image from an existing picture attribute field of the image and a pixel colorimetric parameter field of the image, comprising the step of, for one or more new positions in the image, allocating a picture attribute value at an originally associated position to the new position in the image in dependence upon the value of a pixel colorimetric parameter at the originally associated position and the value of said pixel colorimetric parameter at the new position.


The step of allocating the picture attribute value may depend upon the distance between the new position and the originally associated position.


In one embodiment, the allocation depends on a weighted sum of a distance parameter and a pixel colorimetric value difference. The picture attribute value at the originally associated location corresponding to the minimum value of the said weighted sum may be allocated to the said new position.


In a second aspect of the invention there is provided a method of spatially interpolating a picture attribute field of an image whereby one or more picture attribute values are allocated to new positions in the said image in dependence upon the distance between the new position and the originally associated position.


The picture attribute value at an originally associated position nearest to the new position may be allocated to the new position. The allocation may depend upon the square of the said distance.


The value of the pixel colorimetric parameter at the new position and/or at the originally associated position may be determined by spatial interpolation of the pixel colorimetric parameter field of the image.


In certain embodiments the spatial sampling frequency of the picture attribute field is increased.


In other embodiments the spatial sampling frequency of the picture attribute field is decreased.


The picture attribute may be one of: motion vectors; motion vector confidence; segment labels; depth labels; texture labels. In general the picture attribute may be any piecewise continuous attribute, i.e. a quantity that would be expected to have discontinuities in its values corresponding to different image structures or properties of portrayed objects. The picture attribute may be regarded as non-viewable in the sense that the picture attribute values are distinguished from the pixel values which form the viewable image.


The invention also relates to a computer program product comprising code adapted to implement a method in accordance with the invention.





An example of the invention will now be described with reference to the drawings in which:



FIG. 1 shows the use of motion vectors at a number of pixel positions which are different from the positions for which vectors are available.



FIG. 2 shows a block diagram of a motion vector interpolator in accordance with an embodiment of the invention.





The invention will now be described with reference to a motion vector field. However, the skilled person will recognise that the invention may be applied equally to other picture attribute fields associated with an image. Picture attributes that may give rise to a picture attribute field for an image are exemplified by, but not limited to: texture labels; segment labels; depth estimates; motion vectors; and motion vector confidence measures. The creation of such picture attribute fields will be familiar to a skilled person and will not be explained in more detail herein. As with the motion vector field discussed herein, in many image processing tasks it is helpful to have access to fields of these picture attributes at a spatial sampling pattern different from an original spatial sampling pattern.


Although the following description relates to motion vectors within an image sequence, the invention may also be applied to picture attributes in a single image.


Where a sequence of images portrays uniform motion, for example a camera pan across a stationary background, the motion vector field derived between a pair of images in the sequence is substantially uniform. That is to say that the motion vectors derived at different, but closely spaced, positions within the image are substantially similar. Therefore if a motion vector is required at a particular position in the image it is possible to use a vector from a nearby point without significant error.



FIG. 1 shows an example of the application of this principle. In the Figure the locations where vectors are required are marked by crosses and the locations at which vectors are available are marked by circles. The arrows show the closest available vector location for each location at which a vector is required. For example the point (1), for which a vector is required, is closest to the point (2), at which a vector is available. The available vector from point (2) is thus used at the point (1) and this is indicated in the Figure by the arrow (3). The points (4) and (5) are also closer to the point (2) than any other point at which a vector is available, and so the vector from point (2) is also used at these points.
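A minimal sketch of this nearest-available-vector rule, assuming NumPy and hypothetical argument names (required_pos, available_pos, available_vecs) that do not appear in the patent text, is given below:

import numpy as np

def nearest_vector(required_pos, available_pos, available_vecs):
    """required_pos: (P, 2) positions needing a vector; available_pos: (Q, 2)
    positions at which vectors exist; available_vecs: (Q, 2) vectors.
    Each required position is given the vector of the nearest available position."""
    out = np.empty((len(required_pos), 2))
    for idx, (k, l) in enumerate(required_pos):
        d2 = (available_pos[:, 0] - k) ** 2 + (available_pos[:, 1] - l) ** 2  # squared distances
        out[idx] = available_vecs[np.argmin(d2)]                              # copy nearest vector
    return out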


This technique can be used for either increasing or decreasing the spatial sampling frequency of the vector field. (In the example of FIG. 1 the field is up-sampled vertically and down-sampled horizontally.)


Where different parts of the image are moving at different speeds, for example where a number of independently moving objects are portrayed, the vector field will be non-uniform and the method of FIG. 1 will be prone to errors when vectors relating to one moving object are used in image areas corresponding to other, nearby objects.


This difficulty can be overcome by a system in which the luminance (or another colorimetric parameter) of the pixels is used in the choice of available vectors. Such a system is shown in FIG. 2.


In the following description, both distance and a pixel colorimetric parameter are used in the choice of vector to allocate to the new position. However, it is not necessary in all embodiments to use distance, and the choice of vector to allocate to the new position may be made solely on the basis of pixel colorimetric parameters, as will be explained below.


In FIG. 2 a set of pixel colorimetric parameter values LijF (20) from a sequence of images is used to produce a set of output motion vectors [Vx, Vy]klF (21). The parameter values LijF would typically be the luminance values of pixels of image F of the sequence at positions in the scanning raster defined by respective horizontal and vertical coordinates i and j. The set of output vectors [Vx, Vy]klF is associated with image positions defined by respective horizontal and vertical coordinates k and l in the scanning raster for which vectors are required.


A motion estimator (22) compares successive images of the sequence and generates sets of motion vectors [Vx, Vy]mnF (25) for some or all of the images by any of the known motion estimation techniques. These vectors correspond to positions in the image defined by respective horizontal and vertical coordinates m and n. Typically the set of coordinate positions m, n would be the centres of blocks of pixels used by the motion estimator (22) to compute the vectors. The coordinate positions m, n need not necessarily correspond with any of the pixels (20) and there may be more than one vector associated with each coordinate position.


An interpolator (23) computes colorimetric parameter values LklF (26) from the input parameter values (20), where the respective horizontal and vertical coordinates k and l represent locations at which the output motion vectors (21) are required. The interpolator (23) can use any of the known techniques of spatial interpolation or decimation to obtain values of the colorimetric parameter L at locations other than those of the input values LijF (20). Such methods include linear and non-linear filters which may or may not be variables-separable.
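For illustration, one common choice is bilinear interpolation; the following sketch (assuming NumPy; the function name is not from the patent, and any other known interpolation method could equally be used) returns the colorimetric parameter L at a fractional position:

import numpy as np

def interp_bilinear(L, y, x):
    """Return the value of L at the fractional position (y, x) by bilinear
    interpolation of the four surrounding pixels (the position is assumed
    to lie inside the image)."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1 = min(y0 + 1, L.shape[0] - 1)
    x1 = min(x0 + 1, L.shape[1] - 1)
    fy, fx = y - y0, x - x0
    top = (1 - fx) * L[y0, x0] + fx * L[y0, x1]
    bottom = (1 - fx) * L[y1, x0] + fx * L[y1, x1]
    return (1 - fy) * top + fy * bottom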


A second interpolator (24) computes colorimetric parameter values LmnF (27) from the input parameter values (20), where the respective horizontal and vertical coordinates m and n represent locations which correspond to the positions of the motion vectors (25) from the motion estimator (22). The interpolator (24) can operate in the same way as the interpolator (23) or use some other known spatial interpolation or decimation technique.


The skilled person will appreciate that it is not always necessary to interpolate both the colorimetric parameter values LklF (26) and the colorimetric parameter values LmnF (27), as one or both of these pixel colorimetric parameter values may correspond to the pixel colorimetric parameter values LijF (20). In addition, the interpolators (23, 24) are shown separately, but any functionally equivalent arrangement may be used.


The output motion vectors [Vx, Vy]klF (21) are determined by a motion vector allocation block (29) which allocates the motion vectors [Vx, Vy]mnF (25) to the required output coordinate positions k, l in dependence upon Euclidean inter-pixel distances and colorimetric parameter value differences. (In FIG. 1 the length of the arrow (3) represents the Euclidean distance between the required sample position (1) and the available vector location (2).)


Vectors are allocated by finding the vector having the lowest value of an ‘allocation parameter’, the allocation parameter being based on the distance between the location for which the vector was determined and the required output vector position, and on the difference between the value of the colorimetric parameter at these two locations.


The allocation parameter is defined as follows:






D[(m,n),(k,l)] = w1·r(m,n),(k,l)² + w2·|Lmn − Lkl|

    • Where: r(m,n),(k,l) is the Euclidean distance between the available vector position (m,n) and the required output vector position (k,l); Lmn and Lkl are the values of the colorimetric parameter at those two positions; and w1 and w2 are weighting factors.
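As a minimal sketch (not part of the patent text), the allocation parameter can be expressed as a small Python function; the function and argument names are assumptions, and w1 and w2 follow the equation above:

def allocation_parameter(m, n, k, l, L_mn, L_kl, w1, w2):
    """D[(m,n),(k,l)] = w1 * squared Euclidean distance + w2 * |Lmn - Lkl|."""
    r_squared = (m - k) ** 2 + (n - l) ** 2          # squared Euclidean distance
    return w1 * r_squared + w2 * abs(L_mn - L_kl)    # weighted sum of the two terms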


In one embodiment w2 is twenty times greater than w1, in the case where Euclidean distances are measured on the grid of input pixels and colorimetric parameter values are expressed as 8-bit integers.
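As a purely illustrative calculation (not taken from the patent text), with w1 = 1 and w2 = 20 a candidate vector three pixels away whose luminance differs by 2 gives D = 1×9 + 20×2 = 49, whereas a candidate only one pixel away whose luminance differs by 30 gives D = 1×1 + 20×30 = 601; the more distant vector from the similar-luminance region is therefore preferred.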


In one embodiment w1 is zero and the allocation parameter depends only on the difference between the colorimetric parameter values at the two locations.


In FIG. 2 all the values of r(m,n),(k,l)² are pre-calculated (from knowledge of the required output vector positions and knowledge of the methods used by the motion estimator (22)) and input, at terminal (28), to the vector allocation block (29). The block (29) also receives the colorimetric values LklF (26) and LmnF (27) and can therefore determine the allocation parameter D[(m,n),(k,l)] for each required output vector position (k,l) according to the above equation. In practice not all the allocation parameters need be calculated and it may be helpful to limit the calculation to a region encompassing the relevant input and output pixel positions.


For each required output vector position (k,l), the vector location (m,n) having the lowest allocation parameter D[(m,n),(k,l)] is found. The corresponding vector [Vx, Vy]mnF calculated by the motion estimator (22) for the location (m,n) is allocated to the output vector location (k,l) by the motion vector allocation block (29) and output at terminal (21). If the motion estimator (22) generates more than one vector for a particular location (m,n) then all of the vectors from that location are allocated to the chosen output vector location (k,l). In the event that two or more vector locations yield the same lowest allocation parameter value, any known technique could be used to calculate a single output vector. For example, a choice could be made at random, an average of the motion vector values could be taken, or the vector from the location having either the lowest Euclidean distance or the lowest colorimetric parameter value difference could be chosen.
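A minimal sketch of this allocation step follows; it is illustrative only, assumes NumPy, assumes a single vector per available location, and uses hypothetical names (allocate_vectors, out_pos, vec_pos and so on) that do not appear in the patent text:

import numpy as np

def allocate_vectors(out_pos, out_L, vec_pos, vec_L, vectors, w1, w2):
    """For each required output position (k, l), copy the vector from the
    available location (m, n) that minimises D = w1*r^2 + w2*|Lmn - Lkl|.
    out_pos: (P, 2) required positions; out_L: (P,) interpolated luminances;
    vec_pos: (Q, 2) available vector positions; vec_L: (Q,) luminances there;
    vectors: (Q, 2) motion vectors. Returns a (P, 2) array of allocated vectors."""
    allocated = np.empty((len(out_pos), 2))
    for p, (k, l) in enumerate(out_pos):
        r2 = (vec_pos[:, 0] - k) ** 2 + (vec_pos[:, 1] - l) ** 2   # squared Euclidean distances
        D = w1 * r2 + w2 * np.abs(vec_L - out_L[p])                # allocation parameter per candidate
        allocated[p] = vectors[np.argmin(D)]                       # ties fall to the first minimum here
    return allocated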


The invention has been described by way of example and other embodiments are possible. For example, colorimetric parameters other than luminance, such as hue, saturation or colour component values (e.g. Red, Green or Blue), could be used instead of luminance.


Typically, the invention may be carried out on picture attribute fields for a video data stream. However, the invention may also be applied to the processing of picture attribute fields for files of video data and in some embodiments to the processing of a picture attribute field of a single image.


As will be apparent to a skilled person, the described invention may be implemented in hardware or software, or a combination of the two, as seems appropriate.


Although illustrated and described above with reference to certain embodiments, the present invention is nevertheless not intended to be limited to the details shown. Rather, various modifications may be made without departing from the invention defined by the appended claims.

Claims
  • 1. A method of obtaining a new picture attribute field of an image from an existing picture attribute field of the image and a pixel colorimetric parameter field of the image, comprising the step of, for one or more new positions in the image, allocating a picture attribute value at an originally associated position to the new position in the image in dependence upon the value of a pixel colorimetric parameter at the originally associated position and the value of said pixel colorimetric parameter at the new position.
  • 2. The method according to claim 1, wherein the step of allocating the picture attribute value also depends upon the distance between the new position and the originally associated position.
  • 3. A method according to claim 2 in which the allocation depends on a weighted sum of a distance parameter and a pixel colorimetric value difference.
  • 4. A method according to claim 3 in which the picture attribute value at the originally associated location corresponding to the minimum value of the said weighted sum is allocated to the said new position.
  • 5. A method of spatially interpolating a picture attribute field of an image whereby one or more picture attribute values are allocated to new positions in the said image in dependence upon the distance between the new position and the originally associated position.
  • 6. A method according to claim 5 in which the picture attribute value at an originally associated position nearest to the new position is allocated to the new position.
  • 7. A method according to claim 5 in which the allocation depends upon the square of the said distance.
  • 8. The method according to claim 1 further comprising the step of determining the value of the pixel colorimetric parameter at the new position by spatial interpolation of the pixel colorimetric parameter field of the image.
  • 9. The method according to claim 1 further comprising the step of determining the value of the pixel colorimetric parameter at the originally associated position by spatial interpolation of the pixel colorimetric parameter field of the image.
  • 10. A method according to claim 1 in which the spatial sampling frequency of the picture attribute field is increased.
  • 11. A method according to claim 1 in which the spatial sampling frequency of the picture attribute field is decreased.
  • 12. A method according to claim 1 where the picture attribute is one of: motion vectors; motion vector confidence; segment labels; depth labels; texture labels.
  • 13. For an image which comprises pixel values Lij at pixel locations [i,j] and which has motion vector values Vmn at a first set of locations [m,n] in the image, a method of providing motion vector values Vkl at a second set of locations [k, l], the method comprising the steps of obtaining pixel values Lmn at locations [m,n]; obtaining pixel values Lkl at locations [k,l]; selecting for a particular location [k,l] a corresponding location [m,n] through comparison of pixel values Lmn with pixel values Lkl; and allocating to the location [k,l] the motion vector value Vmn from said corresponding location [m,n] to provide the motion vector value Vkl.
  • 14. A method according to claim 13, wherein the step of selecting for a particular location [k,l] a corresponding location [m,n], comprises the step of minimising an allocation parameter Dmn,kl which is a function of a difference between a pixel value Lmn and a pixel value Lkl and a function of a distance measure between the location [k,l] and the location [m, n].
  • 15. A method according to claim 14, wherein the allocation parameter Dmn,kl is a weighted sum of a difference between a pixel value Lmn and a pixel value Lkl and a distance measure between the location [k,l] and the location [m,n].
  • 16. A method according to claim 13, wherein the pixel values Lmn are obtained by interpolation from the pixel values Lij.
  • 17. A method according to claim 13, wherein the pixel values Lkl are obtained by interpolation from the pixel values Lij.
  • 18. A computer-readable medium having stored thereon computer-executable instructions to obtain a new picture attribute field of an image from an existing picture attribute field of the image and a pixel colorimetric parameter field of the image; and, for one or more new positions in the image, allocate a picture attribute value at an originally associated position to at least one of the one or more new positions in the image in dependence upon the value of a pixel colorimetric parameter at the originally associated position and the value of said pixel colorimetric parameter at the at least one new position.
  • 19. (canceled)
Priority Claims (1)
  • Number: 0614567.6
  • Date: Jul 2006
  • Country: GB
  • Kind: national
PCT Information
  • Filing Document: PCT/GB07/02809
  • Filing Date: 7/20/2007
  • Country: WO
  • Kind: 00
  • 371c Date: 8/26/2009