Claims
- 1. Apparatus for determining motion in video frames, the apparatus comprising:
a motion estimator for tracking a feature between a first one of said video frames and a second one of said video frames, therefrom to determine a motion vector of said feature, and a neighboring feature motion assignor, associated with said motion estimator, for applying said motion vector to other features neighboring said feature and appearing to move with said feature.
- 2. The apparatus of claim 1, wherein said tracking a feature comprises matching blocks of pixels of said first and said second frames.
- 3. The apparatus of claim 2, wherein said motion estimator is operable to select initially predetermined small groups of pixels in a first frame and to trace said groups of pixels in said second frame to determine motion therebetween, and wherein said neighboring feature motion assignor is operable, for each group of pixels, to identify neighboring groups of pixels that move therewith.
- 4. The apparatus of claim 3, wherein said neighboring feature assignor is operable to use cellular-automata-based techniques to identify said neighboring groups of pixels and to assign motion vectors to those groups of pixels.
- 5. The apparatus of claim 3, further operable to mark all groups of pixels assigned a motion as paved, and to repeat said motion estimation for unmarked groups of pixels by selecting further groups of pixels to trace and to find neighbors therefor, said repetition continuing up to a predetermined limit.
- 6. Apparatus according to claim 1, further comprising a feature significance estimator, associated with said neighboring feature motion assignor, for estimating a significance level of said feature, thereby to control said neighboring feature motion assignor to apply said motion vector to said neighboring features only if said significance exceeds a predetermined threshold level.
- 7. The apparatus of claim 6, further operable to mark all groups of pixels in a frame assigned a motion as paved, said marking being repeated up to a predetermined limit according to a threshold level of matching, and to repeat said motion estimation for unpaved groups of pixels by selecting further groups of pixels to trace and find unmarked neighbors therefor, said predetermined threshold level being kept or reduced for each repetition.
- 8. Apparatus according to claim 6, said feature significance estimator comprising a match ratio determiner for determining a ratio between a best match of said feature in said succeeding frames and an average match level of said feature over a search window, thereby to exclude features indistinct from a background or neighborhood.
- 9. Apparatus according to claim 6, wherein said feature significance estimator comprises a numerical approximator for approximating a Hessian matrix of a misfit function at a location of said matching, thereby to determine the presence of a maximal distinctiveness.
- 10. Apparatus according to claim 6, wherein said feature significance estimator is connected prior to said feature identifier and comprises an edge detector for carrying out an edge detection transformation, said feature identifier being controllable by said feature significance estimator to restrict feature identification to features having relatively higher edge detection energy.
- 11. Apparatus according to claim 1, further comprising a downsampler connected before said feature identifier for producing a reduction in video frame resolution by merging of pixels within said frames.
- 12. Apparatus according to claim 1, further comprising a downsampler connected before said feature identifier for isolating a luminance signal and producing a luminance only video frame.
- 13. Apparatus according to claim 12, wherein said downsampler is further operable to reduce resolution in said luminance signal.
- 14. Apparatus according to claim 1, wherein said succeeding frames are successive frames.
- 15. Apparatus according to claim 14, wherein said frames are a sequence of an I frame, a B frame and a P frame, wherein motion estimation is carried out between said I frame and said P frame and wherein the apparatus further comprises an interpolator for providing an interpolation of said motion estimation to use as a motion estimation for said B frame.
- 16. Apparatus according to claim 14, wherein said frames are a sequence comprising at least an I frame, a first P frame and a second P frame, wherein motion estimation is carried out between said I frame and said first P frame and wherein the apparatus further comprises an extrapolator for providing an extrapolation of said motion estimation to use as a motion estimation for said second P frame.
- 17. Apparatus according to claim 1, wherein said frames are divided into blocks and wherein said feature identifier is operable to make a systematic selection of blocks within said first frame to identify features therein.
- 18. Apparatus according to claim 1, wherein said frames are divided into blocks and wherein said feature identifier is operable to make a random selection of blocks within said first frame to identify features therein.
- 19. Apparatus according to claim 1, said motion estimator comprising a searcher for searching for said feature in said succeeding frame in a search window around the location of said feature in said first frame.
- 20. Apparatus according to claim 19, further comprising a search window size presetter for presetting a size of said search window.
- 21. Apparatus according to claim 19, wherein said frames are divided into blocks and said searcher comprises a comparator for carrying out a comparison between a block containing said feature and blocks in said search window, thereby to identify said feature in said succeeding frame and to determine a motion vector of said feature between said first frame and said succeeding frame, for association with each of said blocks.
- 22. Apparatus according to claim 21, wherein said comparison is a semblance distance comparison.
- 23. Apparatus according to claim 22, further comprising a DC corrector for subtracting average luminance values from each block prior to said comparison.
- 24. Apparatus according to claim 21, wherein said comparison comprises non-linear optimization.
- 25. Apparatus according to claim 24, wherein said non-linear optimization comprises the Nelder-Mead simplex technique.
- 26. Apparatus according to claim 21, wherein said comparison comprises use of at least one of L1 and L2 norms.
- 27. Apparatus according to claim 21, further comprising a feature significance estimator for determining whether said feature is a significant feature.
- 28. Apparatus according to claim 27, wherein said feature significance estimator comprises a match ratio determiner for determining a ratio between a closest match of said feature in said succeeding frames and an average match level of said feature over a search window, thereby to exclude features indistinct from a background or neighborhood.
- 29. Apparatus according to claim 28, wherein said feature significance estimator further comprises a thresholder for comparing said ratio against a predetermined threshold to determine whether said feature is a significant feature.
- 30. Apparatus according to claim 27, wherein said feature significance estimator comprises a numerical approximator for approximating a Hessian matrix of a misfit function at a location of said matching, thereby to locate a maximum distinctiveness.
- 31. Apparatus according to claim 27, wherein said feature significance estimator is connected prior to said feature identifier, the apparatus further comprising an edge detector for carrying out an edge detection transformation, said feature identifier being controllable by said feature significance estimator to restrict feature identification to regions of detection of relatively higher edge detection energy.
- 32. Apparatus according to claim 27, wherein said neighboring feature motion assignor is operable to apply said motion vector to each higher resolution block of said frame corresponding to a low resolution block for which said motion vector has been determined.
- 33. Apparatus according to claim 27, wherein said neighboring feature motion assignor is operable to apply said motion vector to each full resolution block of said frame corresponding to a low resolution block for which said motion vector has been determined.
- 34. Apparatus according to claim 32, comprising a motion vector refiner operable to carry out feature matching on high resolution versions of said succeeding frames to refine said motion vector at each of said higher resolution blocks.
- 35. Apparatus according to claim 33, comprising a motion vector refiner operable to carry out feature matching on high resolution versions of said succeeding frames to refine said motion vector at each of said full resolution blocks.
- 36. Apparatus according to claim 34, wherein said motion vector refiner is further operable to carry out additional feature matching operations on adjacent blocks of feature matched higher resolution blocks, thereby further to refine said corresponding motion vectors.
- 37. Apparatus according to claim 35, wherein said motion vector refiner is further operable to carry out additional feature matching operations on adjacent blocks of feature matched full resolution blocks, thereby further to refine said corresponding motion vectors.
- 38. Apparatus according to claim 36, wherein said motion vector refiner is further operable to identify higher resolution blocks having a different motion vector assigned thereto from a previous feature matching operation originating from a different matched block, and to assign to any such higher resolution block an average of said previously assigned motion vector and a currently assigned motion vector.
- 39. Apparatus according to claim 37, wherein said motion vector refiner is further operable to identify full resolution blocks having a different motion vector assigned thereto from a previous feature matching operation originating from a different matched block, and to assign to any such full resolution block an average of said previously assigned motion vector and a currently assigned motion vector.
- 40. Apparatus according to claim 36, wherein said motion vector refiner is further operable to identify higher resolution blocks having a different motion vector assigned thereto from a previous feature matching operation originating from a different matched block, and to assign to any such higher resolution block a rule decided derivation of said previously assigned motion vector and a currently assigned motion vector.
- 41. Apparatus according to claim 37, wherein said motion vector refiner is further operable to identify full resolution blocks having a different motion vector assigned thereto from a previous feature matching operation originating from a different matched block, and to assign to any such full resolution block a rule decided derivation of said previously assigned motion vector and a currently assigned motion vector.
- 42. Apparatus according to claim 36, further comprising a block quantization level assigner for assigning to each high resolution block a quantization level in accordance with a respective motion vector of said block.
- 43. Apparatus according to claim 1, wherein said frames are arrangeable in blocks, the apparatus further comprising a subtractor connected in advance of said feature identifier, the subtractor comprising:
a pixel subtractor for pixelwise subtraction of luminance levels of corresponding pixels in said succeeding frames to give a pixel difference level for each pixel, and a block subtractor for removing from motion estimation consideration any block having an overall pixel difference level below a predetermined threshold.
- 44. The apparatus of claim 1, wherein said feature identifier is operable to search for features by examining said frame in blocks.
- 45. The apparatus of claim 44, wherein said blocks are of a size in pixels according to at least one of the MPEG and JVT standard.
- 46. The apparatus of claim 45, wherein said blocks are any one of a group of sizes comprising 8×8, 16×8, 8×16 and 16×16.
- 47. The apparatus of claim 44, wherein said blocks are of a size in pixels lower than 8×8.
- 48. The apparatus of claim 47, wherein said blocks are of size no larger than 7×6 pixels.
- 49. The apparatus of claim 47, wherein said blocks are of size no larger than 6×6 pixels.
- 50. The apparatus of claim 1, wherein said motion estimator and said neighboring feature motion assignor are operable with a resolution level changer to search and assign on successively increasing resolutions of each frame.
- 51. The apparatus of claim 50, wherein said successively increasing resolutions are respectively substantially at least some of 1/64, 1/32, 1/16, an eighth, a quarter, a half and full resolution.
- 52. Apparatus for video motion estimation comprising:
a non-exhaustive search unit for carrying out a non-exhaustive search between low resolution versions of a first video frame and a second video frame respectively, said non-exhaustive search being to find at least one feature persisting over said frames, and to determine a relative motion of said feature between said frames.
- 53. The apparatus of claim 52, wherein said non-exhaustive search unit is further operable to repeat said searches at successively increasing resolution versions of said video frames.
- 54. The apparatus of claim 52, further comprising a neighbor feature identifier for identifying a neighbor feature of said persisting feature that appears to move with said persisting feature, and for applying said relative motion of said persisting feature to said neighbor feature.
- 55. The apparatus of claim 52, further comprising a feature motion quality estimator for comparing matches between said persisting feature in respective frames with an average of matches between said persisting feature in said first frame and points in a window in said second frame, thereby to provide a quantity expressing a goodness of said match to support a decision as to whether to use said feature and corresponding relative motion in said motion estimation or to reject said feature.
- 56. A video frame subtractor for preprocessing video frames arranged in blocks of pixels for motion estimation, the subtractor comprising:
a pixel subtractor for pixelwise subtraction of luminance levels of corresponding pixels in succeeding frames of a video sequence to give a pixel difference level for each pixel, and a block subtractor for removing from motion estimation consideration any block having an overall pixel difference level below a predetermined threshold.
- 57. A video frame subtractor according to claim 56, wherein said overall pixel difference level is a highest pixel difference value over said block.
- 58. A video frame subtractor according to claim 56, wherein said overall pixel difference level is a summation of pixel difference levels over said block.
- 59. A video frame subtractor according to claim 57, wherein said predetermined threshold is substantially zero.
- 60. A video frame subtractor according to claim 58, wherein said predetermined threshold is substantially zero.
- 61. A video frame subtractor according to claim 56, wherein said predetermined threshold of said blocks is substantially a quantization level for motion estimation.
- 62. A post-motion estimation video quantizer for providing quantization levels to video frames arranged in blocks, each block being associated with motion data, the quantizer comprising a quantization coefficient assigner for selecting, for each block, a quantization coefficient for setting a detail level within said block, said selection being dependent on said associated motion data.
- 63. Method for determining motion in video frames arranged into blocks, the method comprising:
matching a feature in succeeding frames of a video sequence, determining relative motion between said feature in a first one of said video frames and in a second one of said video frames, and applying said determined relative motion to blocks neighboring said block containing said feature that appear to move with said feature.
- 64. The method of claim 63, further comprising determining whether said feature is a significant feature.
- 65. The method of claim 64, wherein said determining whether said feature is a significant feature comprises determining a ratio between a closest match of said feature in said succeeding frames and an average match level of said feature over a search window.
- 66. The method of claim 65, further comprising comparing said ratio against a predetermined threshold, thereby to determine whether said feature is a significant feature.
- 67. The method of claim 64, comprising approximating a Hessian matrix of a misfit function at a location of said matching, thereby to produce a level of distinctiveness.
- 68. The method of claim 64, comprising carrying out an edge detection transformation, and restricting feature identification to blocks having higher edge detection energy.
- 69. The method of claim 63, further comprising producing a reduction in video frame resolution by merging blocks in said frames.
- 70. The method of claim 63, further comprising isolating a luminance signal, thereby to produce a luminance only video frame.
- 71. The method of claim 70, further comprising reducing resolution in said luminance signal.
- 72. The method of claim 63, wherein said succeeding frames are successive frames.
- 73. The method of claim 63, further comprising making a systematic selection of blocks within said first frame to identify features therein.
- 74. The method of claim 63, further comprising making a random selection of blocks within said first frame to identify features therein.
- 75. The method of claim 63, further comprising searching for said feature in blocks in said succeeding frame in a search window around the location of said feature in said first frame.
- 76. The method of claim 75, further comprising presetting a size of said search window.
- 77. The method of claim 75, further comprising carrying out a comparison between said block containing said feature and said blocks in said search window, thereby to identify said feature in said succeeding frame and determine a motion vector for said feature, to be associated with said block.
- 78. The method of claim 77, wherein said comparison is a semblance distance comparison.
- 79. The method of claim 78, further comprising subtracting average luminance values from each block prior to said comparison.
- 80. The method of claim 77, wherein said comparison comprises non-linear optimization.
- 81. The method of claim 80, wherein said non-linear optimization comprises the Nelder-Mead simplex technique.
- 82. The method of claim 77, wherein said comparison comprises use of at least one of a group comprising L1 and L2 norms.
- 83. The method of claim 77, further comprising determining whether said feature is a significant feature.
- 84. The method of claim 83, wherein said feature significance determination comprises determining a ratio between a closest match of said feature in said succeeding frames and an average match level of said feature over a search window.
- 85. The method of claim 84, further comprising comparing said ratio against a predetermined threshold to determine whether said feature is a significant feature.
- 86. The method of claim 83, further comprising approximating a Hessian matrix of a misfit function at a location of said matching, thereby to produce a level of distinctiveness.
- 87. The method of claim 83, comprising carrying out an edge detection transformation, and restricting feature identification to regions of higher edge detection energy.
- 88. The method of claim 83, further comprising applying said motion vector to each high resolution block of said frame corresponding to a low resolution block for which said motion vector has been determined.
- 89. The method of claim 88, comprising carrying out feature matching on high resolution versions of said succeeding frames to refine said motion vector at each of said high resolution blocks.
- 90. The method of claim 89, further comprising carrying out additional feature matching operations on adjacent blocks of feature matched high resolution blocks, thereby further to refine said corresponding motion vectors.
- 91. The method of claim 90, further comprising identifying high resolution blocks having a different motion vector assigned thereto from a previous feature matching operation originating from a different matched block, and assigning to any such high resolution block an average of said previously assigned motion vector and a currently assigned motion vector.
- 92. The method of claim 90, further comprising identifying high resolution blocks having a different motion vector assigned thereto from a previous feature matching operation originating from a different matched block, and assigning to any such high resolution block a rule decided derivation of said previously assigned motion vector and a currently assigned motion vector.
- 93. The method of claim 90, further comprising assigning to each high resolution block a quantization level in accordance with a respective motion vector of said block.
- 94. The method of claim 63, further comprising
pixelwise subtraction of luminance levels of corresponding pixels in said succeeding frames to give a pixel difference level for each pixel, and removing from motion estimation consideration any block having an overall pixel difference level below a predetermined threshold.
- 95. A video frame subtraction method for preprocessing video frames arranged in blocks of pixels for motion estimation, the method comprising:
pixelwise subtraction of luminance levels of corresponding pixels in succeeding frames of a video sequence to give a pixel difference level for each pixel, and removing from motion estimation consideration any block having an overall pixel difference level below a predetermined threshold.
- 96. The method of claim 95, wherein said overall pixel difference level is a highest pixel difference value over said block.
- 97. The method of claim 95, wherein said overall pixel difference level is a summation of pixel difference levels over said block.
- 98. The method of claim 96, wherein said predetermined threshold is substantially zero.
- 99. The method of claim 97, wherein said predetermined threshold is substantially zero.
- 100. The method of claim 95, wherein said predetermined threshold of said blocks is substantially a quantization level for motion estimation.
- 101. A post-motion estimation video quantization method for providing quantization levels to video frames arranged in blocks, each block being associated with motion data, the method comprising selecting, for each block, a quantization coefficient for setting a detail level within said block, said selection being dependent on said associated motion data.
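The core mechanism of claims 1 to 5 (tracking a feature between two frames and then passing its motion vector on to neighboring blocks that appear to move with it) can be sketched as follows. This is an illustrative reconstruction, not the patented implementation; the function names and the sum-of-absolute-differences matching cost are assumptions.

```python
import numpy as np

def block_sad(a, b):
    """Sum of absolute differences between two equal-sized pixel blocks."""
    return float(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def track_feature(frame1, frame2, y, x, size=8, radius=4):
    """Track the size x size block at (y, x) of frame1 into frame2 by an
    exhaustive SAD search over a small window, returning the motion
    vector and its matching cost (the motion estimator of claims 1-2)."""
    ref = frame1[y:y + size, x:x + size]
    best_cost, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + size > frame2.shape[0] or xx + size > frame2.shape[1]:
                continue
            cost = block_sad(ref, frame2[yy:yy + size, xx:xx + size])
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv, best_cost

def propagate_to_neighbors(frame1, frame2, y, x, mv, size=8, tol=0.0):
    """Assign mv to the eight neighboring blocks that appear to move with
    the tracked feature, i.e. whose match under mv is within tol (the
    neighboring feature motion assignor of claim 1)."""
    dy, dx = mv
    h, w = frame1.shape
    assigned = {}
    for ny in (y - size, y, y + size):
        for nx in (x - size, x, x + size):
            if (ny, nx) == (y, x) or ny < 0 or nx < 0:
                continue
            if ny + size > h or nx + size > w:
                continue
            if ny + dy < 0 or nx + dx < 0 or ny + dy + size > h or nx + dx + size > w:
                continue
            cost = block_sad(frame1[ny:ny + size, nx:nx + size],
                             frame2[ny + dy:ny + dy + size, nx + dx:nx + dx + size])
            if cost <= tol:
                assigned[(ny, nx)] = mv
    return assigned
```

Because only one block per feature is searched exhaustively and its neighbors inherit the result, far fewer comparisons are needed than in a full per-block search, which is the motivation for the neighboring feature motion assignor.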
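Claims 8, 28 and 55 describe a match-ratio significance test: compare the best match of a feature against the average match level over the whole search window, and reject features that are indistinct from their background or neighborhood. A hypothetical sketch follows; the SAD cost and the "low ratio means distinctive" scaling are assumptions, since the claims leave the exact scaling unspecified.

```python
import numpy as np

def match_significance(frame1, frame2, y, x, size=8, radius=4):
    """Ratio between the best (lowest) block-match cost in the search
    window and the average cost over the window.  A ratio near 1 means
    the feature matches everywhere about equally well, i.e. it is
    indistinct from its background and should be rejected."""
    ref = frame1[y:y + size, x:x + size].astype(np.int32)
    costs = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + size > frame2.shape[0] or xx + size > frame2.shape[1]:
                continue
            cand = frame2[yy:yy + size, xx:xx + size].astype(np.int32)
            costs.append(float(np.abs(ref - cand).sum()))
    avg = sum(costs) / len(costs)
    # A flat region matches everywhere with zero cost: treat as indistinct.
    return min(costs) / avg if avg > 0 else 1.0
```

A textured block matched against itself scores near 0 (highly distinctive), while a flat block scores 1.0, matching the claims' intent of excluding features indistinct from the background.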
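The frame subtractor of claims 56 to 61 can be sketched as a simple preprocessor: subtract luminance pixelwise, then remove from motion estimation any block whose overall difference does not exceed a threshold. This sketch uses the claim 57 variant (highest pixel difference over the block); function and parameter names are illustrative.

```python
import numpy as np

def active_blocks(frame1, frame2, size=8, threshold=0):
    """Pixelwise luminance subtraction, then keep only blocks whose
    maximum pixel difference exceeds `threshold` (claims 56-57); with
    threshold substantially zero (claim 59), only changed blocks remain."""
    diff = np.abs(frame1.astype(np.int32) - frame2.astype(np.int32))
    kept = []
    for y in range(0, diff.shape[0] - size + 1, size):
        for x in range(0, diff.shape[1] - size + 1, size):
            if diff[y:y + size, x:x + size].max() > threshold:
                kept.append((y, x))
    return kept
```

Claim 58's variant would replace `.max()` with `.sum()` over the block.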
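Claims 11, 32 to 33 and 50 to 51 describe a coarse-to-fine scheme: reduce resolution by merging pixels, estimate motion at low resolution, then apply each low-resolution vector, suitably scaled, to every higher-resolution block it covers before refinement. A minimal sketch, assuming block-index coordinates and a mean-merge downsampler:

```python
import numpy as np

def downsample(frame, factor=2):
    """Claim 11 style resolution reduction: merge factor x factor pixel
    groups by averaging."""
    h, w = frame.shape
    h, w = h - h % factor, w - w % factor
    return frame[:h, :w].astype(np.float64).reshape(
        h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def lift_motion_vectors(low_res_mvs, factor=2):
    """Claims 32-33 sketch: apply each low-resolution block's motion
    vector, scaled by the resolution factor, to all the higher-resolution
    blocks corresponding to it (keys are block indices)."""
    lifted = {}
    for (by, bx), (dy, dx) in low_res_mvs.items():
        for sy in range(factor):
            for sx in range(factor):
                lifted[(by * factor + sy, bx * factor + sx)] = (dy * factor, dx * factor)
    return lifted
```

The motion vector refiner of claims 34 to 41 would then re-run feature matching in a small window around each lifted vector at the higher resolution.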
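The post-motion-estimation quantizer of claims 62 and 101 selects a per-block quantization coefficient from the block's associated motion data. A hypothetical mapping might look like the following; the speed cutoff and coefficient values are invented for illustration, the claims only require that the selection depend on the motion data.

```python
def assign_quantization(block_mvs, q_fine=4, q_coarse=12, speed_cutoff=2):
    """Choose a coarser quantization coefficient (less retained detail)
    for fast-moving blocks and a finer one for static blocks, on the
    premise that detail in fast motion is less visible (claim 62)."""
    return {blk: (q_coarse if abs(dy) + abs(dx) > speed_cutoff else q_fine)
            for blk, (dy, dx) in block_mvs.items()}
```

Claim 42 applies the same idea per high-resolution block after refinement.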
RELATIONSHIP TO EXISTING APPLICATIONS
[0001] The present application claims priority from U.S. Provisional Application No. 60/301,804 filed Jul. 2, 2001.
Provisional Applications (1)
| Number | Date | Country |
| --- | --- | --- |
| 60/301,804 | Jul. 2, 2001 | US |