Technical Field
The present disclosure relates to digital video processing, and more particularly, to filtering of motion vectors.
Description of the Related Art
Digital video compression is used to reduce the quantity of data used to represent digital video images. Digital video compression and decompression schemes often result in inaccurate object motion appearing within the video due to the particular compression scheme used to achieve a large compression ratio, moving objects being occluded in a video frame by other objects, a very low bit-rate requirement, and/or skipped or missing video frames. This is because, in digital video compression, motion vector (MV) data based on the estimated motion or stillness of objects or picture fragments between frames of the digital video replace the actual originally captured data.
For example, in two adjacent digital video frames, finding the best correlation of a predefined picture fragment from one of the frames onto the other frame yields an initial estimate of the motion vector for the fragment. If the fragment is defined to be of a basic shape, for instance a square, the problem of finding an accurate MV arises when the shape comprises picture elements (pixels) from different objects moving in different directions and/or at different speeds. If a MV is assigned to a center pixel of the fragment and there are two objects in the fragment, there is a chance that the center pixel, which belongs to one of the objects, will be assigned a MV corresponding to the other object.
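As an illustration of the block-matching step just described, the following is a minimal Python sketch of a full-search motion estimator, assuming grayscale frames held in NumPy arrays; the function name, block size, and search range are illustrative and not part of the disclosure.

```python
import numpy as np

def estimate_block_mv(prev_frame, curr_frame, x, y, block=8, search=4):
    """Full-search block matching: find the displacement of the block
    anchored at (x, y) in curr_frame that best matches prev_frame."""
    target = curr_frame[y:y + block, x:x + block].astype(np.int32)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if (yy < 0 or xx < 0 or yy + block > prev_frame.shape[0]
                    or xx + block > prev_frame.shape[1]):
                continue  # candidate block would fall outside the frame
            cand = prev_frame[yy:yy + block, xx:xx + block].astype(np.int32)
            sad = int(np.abs(target - cand).sum())  # sum of absolute differences
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    return best_mv  # the MV assigned to the fragment (e.g., its center pixel)
```

When the fragment straddles two objects, the single MV returned by such an estimator is exactly the ambiguous assignment that the directional filtering described below is designed to correct.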
To determine the appropriate motion vector to assign to a pixel in a digital video frame, a comparison of the motion vectors of particular surrounding pixels is used. This is especially useful for determining the motion vectors of pixels that compose or are near the boundaries of objects moving in different directions or at different speeds between digital video frames. A given digital video frame is analyzed to determine the direction of object boundaries (i.e., the general direction in which they are sloped) by detecting the direction of color or brightness transitions in the frame. The particular surrounding pixels are selected and grouped according to the detected object boundary direction at each pixel. Filtering is performed by comparing the motion vectors of the surrounding pixel groups, which indicates the group to which the current pixel being processed should be assigned, based in part on how closely the motion vectors of the surrounding groups match those of the group to which that pixel belongs. A filtered motion vector output is then provided for the pixel based on the group to which it was assigned.
The following is a description of the parts and structure of the system 100, followed by a description of the operation of the system 100.
The system 100 includes a directional filter bank 102, a direction selector 104, a filter bank for vector filtering 106, and a motion vector analyzer 108.
An image, such as a digital video frame 112, is input to the directional filter bank 102. The output of the directional filter bank 102 is coupled to the input of the direction selector 104. The output of the direction selector 104 is coupled to the input of the filter bank for vector filtering 106. The inputs to the motion vector analyzer 108 include an output from the filter bank for vector filtering 106 and motion vectors 114 assigned to objects within the image 112. The output of the motion vector analyzer 108 is a filtered motion vector 116.
Following is a description of an example operation of the system 100.
The directional filter bank 102 detects the direction of image gradients of the input image 112 to determine the direction of an object boundary; the direction of the object boundary is orthogonal to the direction of the gradient. For example, by spatially filtering the input image 112, the directional filter bank 102 associates object boundaries within the image 112 with one of four directions: 0 degrees, 45 degrees, 90 degrees, and 135 degrees. Areas of the image 112 may also be considered omni-directional if a particular direction is not detected to a certain degree of specificity.
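A minimal sketch of this gradient-detection and quantization step, assuming a Sobel operator (the disclosure does not specify the gradient filter) and an illustrative magnitude threshold for the omni-directional case:

```python
import numpy as np
from scipy import ndimage  # Sobel here is an assumption, not the disclosed filter

OMNI = -1  # illustrative index for areas with no dominant direction

def boundary_direction_index(frame, mag_thresh=32.0):
    """Quantize each pixel's object-boundary direction to 0/45/90/135
    degrees, or mark it omni-directional when the gradient is too weak."""
    gx = ndimage.sobel(frame.astype(np.float32), axis=1)
    gy = ndimage.sobel(frame.astype(np.float32), axis=0)
    mag = np.hypot(gx, gy)
    # The boundary runs orthogonal to the gradient, hence the +90 degrees.
    angle = (np.degrees(np.arctan2(gy, gx)) + 90.0) % 180.0
    idx = np.rint(angle / 45.0).astype(np.int32) % 4  # 0,1,2,3 -> 0/45/90/135
    idx[mag < mag_thresh] = OMNI  # no discernible object boundary here
    return idx
```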
The direction selector 104 is coupled or otherwise communicatively connected to the directional filter bank 102. The direction selector 104 selects the object boundaries associated with a particular direction for processing and assigns a corresponding directional index to those boundaries; it may also assign an omni-directional index to areas in which a particular direction is not detected to a certain degree of specificity. This directional index is input to the filter bank for vector filtering 106, which is coupled or otherwise communicatively connected to the output of the direction selector 104. The motion vector filter structure corresponding to the assigned index is then selected from the filter bank for vector filtering 106 and loaded into the motion vector analyzer 108, along with information regarding a current block of pixels associated with the selected object boundaries. Motion vectors 114, including those being processed that correspond to the selected object boundaries, are also input into the motion vector analyzer 108.
The motion vector analyzer 108, which is coupled or otherwise communicatively connected to the filter bank for vector filtering 106, separately analyzes and filters motion vectors according to the detected direction and the associated filter to associate a current pixel being processed with a particular object or area, and thus to determine an appropriate motion vector for that pixel. This process is described in more detail below.
The system 100 repeats the above process for each object boundary associated with a particular direction (i.e., for each direction: 0 degrees, 45 degrees, 90 degrees and 135 degrees), for each omni-directional area, and for each pixel position to be processed within those areas. It should be noted that more or fewer directional categories, indexes and associated filters may be used in the gradient detection, directional filtering and motion vector analysis. For example, object boundaries within the image 112 may instead be associated with one of eight directions for finer granularity in the detection process.
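The eight-direction variant mentioned above differs only in the bin width used during quantization; a sketch, under the same assumptions as the four-direction example:

```python
import numpy as np
from scipy import ndimage

OMNI = -1

def boundary_direction_index8(frame, mag_thresh=32.0):
    """Eight-direction variant: 22.5-degree bins give indices 0..7 for
    boundary directions 0, 22.5, ..., 157.5 degrees (illustrative)."""
    gx = ndimage.sobel(frame.astype(np.float32), axis=1)
    gy = ndimage.sobel(frame.astype(np.float32), axis=0)
    angle = (np.degrees(np.arctan2(gy, gx)) + 90.0) % 180.0
    idx = np.rint(angle / 22.5).astype(np.int32) % 8
    idx[np.hypot(gx, gy) < mag_thresh] = OMNI  # no dominant direction
    return idx
```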
At 202, the process detects the direction of brightness changes, color transitions, or both (i.e., gradient detection) in the frames to determine the direction of an object's boundary. The process then proceeds to step 204, as indicated by arrow 208.
At 204, a directional index is assigned to the object boundary or area of the image. The process then proceeds to step 206, as indicated by arrow 210.
At 206, the process separately analyzes and filters motion vectors according to the detected direction. The analyzing and filtering of motion vectors is performed for each object boundary detected, each omni-directional area, and each pixel within those areas. Alternatively, the analyzing and filtering of motion vectors may be performed for selected areas or objects, and each pixel within those areas.
At 302, the process determines whether an omni-directional motion vector filter kernel is being used. This may occur, for example, when the current pixel being processed is in an area in which no discernible object boundary was detected, as described above.
At 304, if the omni-directional motion vector filter kernel is not being used, the process compares a middle group of averaged motion vectors (V0) in a block of pixels to two side groups (V1 and V2) according to the detected object boundary direction, using the motion vector filter corresponding to the directional index of that direction.
As mentioned above, there are four adaptive directional filters and one omni-directional filter. Thus, there are five filter structures used, one corresponding to each filter.
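The disclosure does not give the exact kernel layouts, but the following hypothetical 3x3 groupings illustrate the idea of one filter structure per directional index, with the middle group V0 running along the detected boundary and the side groups V1 and V2 on either side of it:

```python
# Hypothetical (dy, dx) offsets from the pixel being processed; a real
# implementation may use larger blocks.  Keys are the directional indexes
# expressed in degrees; the omni-directional kernel (all neighbors, no
# side groups) is handled by a separate path, per step 302.
GROUPS = {
    0:   {"V0": [(0, -1), (0, 0), (0, 1)],     # boundary runs horizontally
          "V1": [(-1, -1), (-1, 0), (-1, 1)],  # row above
          "V2": [(1, -1), (1, 0), (1, 1)]},    # row below
    45:  {"V0": [(1, -1), (0, 0), (-1, 1)],    # anti-diagonal boundary
          "V1": [(-1, -1), (-1, 0), (0, -1)],  # upper-left side
          "V2": [(1, 1), (1, 0), (0, 1)]},     # lower-right side
    90:  {"V0": [(-1, 0), (0, 0), (1, 0)],     # boundary runs vertically
          "V1": [(-1, -1), (0, -1), (1, -1)],  # column to the left
          "V2": [(-1, 1), (0, 1), (1, 1)]},    # column to the right
    135: {"V0": [(-1, -1), (0, 0), (1, 1)],    # main-diagonal boundary
          "V1": [(-1, 0), (-1, 1), (0, 1)],    # upper-right side
          "V2": [(0, -1), (1, -1), (1, 0)]},   # lower-left side
}
```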
Once the middle group of averaged motion vectors (V0) in a block of pixels is compared to the two side groups (V1 and V2) according to the detected object boundary direction and the corresponding motion vector filter, at 308, the process determines whether V0 is acceptably close to either V1 or V2. The threshold values used in determining whether V0 is acceptably close to either V1 or V2 may be selected according to any number of criteria including, but not limited to, desired accuracy, processing ability, system performance and operating conditions.
At 310, if V0 is not acceptably close to either V1 or V2, then the center pixel is treated as an occlusion and is not associated with the motion vector of any object. This is based on the principle that the closer V0 is to V1 or V2, the more likely it is that V0 is associated with the object associated with V1 or V2; if V0 is not acceptably close to either V1 or V2, then the center pixel is likely an occlusion and not a part of any object associated with V1 or V2.
At 312, if it had been determined at 308 that V0 is acceptably close to either V1 or V2, then it is determined whether V0 is acceptably close to both V1 and V2.
At 316, if it had been determined at 312 that V0 is not acceptably close to both V1 and V2, then the center pixel is assigned to the object associated with the vector (V1 or V2) to which V0 is closest. This is also based on the principle, stated above, that the closer V0 is to V1 or V2, the more likely it is that V0 is associated with the object associated with whichever of V1 or V2 it is closest to.
In the opposite case, in which V0 is closest to the other side group's vector, the center pixel is assigned to the object associated with that vector in the same manner.
At 314, if it had been determined at 312 that V0 is acceptably close to both V1 and V2, then the center pixel is treated as part of an object that includes the entire block of pixels. In other words, the center pixel is assigned to an object associated with both the vectors V1 and V2.
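The decision logic of steps 308-316 can be summarized in a short sketch, assuming motion vectors held in an H x W x 2 array, Euclidean distance as the closeness measure, and an illustrative threshold; the averaging of V1 and V2 in the both-close case is likewise only one plausible choice for a filtered output:

```python
import numpy as np

def filter_mv(mvs, y, x, groups, thresh=1.5):
    """Sketch of steps 308-316 for one pixel: compare the averaged middle
    group V0 against the averaged side groups V1 and V2 (threshold is
    illustrative; see the criteria discussed above)."""
    def avg(offsets):
        return np.mean([mvs[y + dy, x + dx] for dy, dx in offsets], axis=0)

    v0, v1, v2 = avg(groups["V0"]), avg(groups["V1"]), avg(groups["V2"])
    d1, d2 = np.linalg.norm(v0 - v1), np.linalg.norm(v0 - v2)

    if d1 > thresh and d2 > thresh:
        return None                    # 310: occlusion; no object's MV fits
    if d1 <= thresh and d2 <= thresh:
        return (v1 + v2) / 2.0         # 314: one object spans the whole block
    return v1 if d1 < d2 else v2       # 316: assign to the closer side group
```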
The process above is repeated for each object boundary associated with a particular direction (i.e., for each direction: 0 degrees, 45 degrees, 90 degrees and 135 degrees), for each omni-directional area, and for each pixel position to be processed within those areas until the entire image is processed and the motion vectors are assigned accordingly for each pixel.
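Tying the earlier sketches together, a hypothetical driver loop over every pixel might look as follows (boundary_direction_index, OMNI, GROUPS, and filter_mv are the illustrative definitions above, not the disclosed implementation):

```python
def filter_frame_mvs(frame, mvs):
    """Detect boundary directions, then filter the motion vector at each
    pixel using the group layout matching its directional index."""
    idx = boundary_direction_index(frame)
    out = mvs.copy()
    h, w = idx.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if idx[y, x] == OMNI:
                continue  # omni-directional areas use a separate kernel (302)
            v = filter_mv(mvs, y, x, GROUPS[int(idx[y, x]) * 45])
            if v is not None:          # None marks an occlusion (310)
                out[y, x] = v
    return out
```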
The computing environment 700 will at times be referred to in the singular herein, but this is not intended to limit the embodiments to a single device, since in typical embodiments there may be more than one computer system or device involved. Unless described otherwise, the construction and operation of the various blocks of the computing environment 700 are of conventional design and need not be described in further detail.
The computing environment 700 may include one or more processing units 712a, 712b (collectively 712), a system memory 714 and a system bus 716 that couples various system components including the system memory 714 to the processing units 712. The processing units 712 may be any logic processing unit, such as one or more central processing units (CPUs) 712a, digital signal processors (DSPs) 712b, digital video or audio processing units such as coder-decoders (codecs) or compression-decompression units, application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), etc. The system bus 716 can employ any known bus structures or architectures, including a memory bus with memory controller, a peripheral bus, and a local bus. The system memory 714 includes read-only memory (“ROM”) 718 and random access memory (“RAM”) 720. A basic input/output system (“BIOS”) 722, which can form part of the ROM 718, contains basic routines that help transfer information between elements within the computing environment 700, such as during start-up.
The computing environment 700 may include a hard disk drive 724 for reading from and writing to a hard disk 726, an optical disk drive 728 for reading from and writing to removable optical disks 732, and/or a magnetic disk drive 730 for reading from and writing to magnetic disks 734. The optical disk 732 can be a CD-ROM, while the magnetic disk 734 can be a magnetic floppy disk or diskette. The hard disk drive 724, optical disk drive 728 and magnetic disk drive 730 may communicate with the processing unit 712 via the system bus 716. The hard disk drive 724, optical disk drive 728 and magnetic disk drive 730 may include interfaces or controllers (not shown) coupled between such drives and the system bus 716, as is known by those skilled in the relevant art. The drives 724, 728 and 730, and their associated computer-readable storage media 726, 732, 734, may provide nonvolatile and non-transitory storage of computer readable instructions, data structures, program modules and other data for the computing environment 700. Although the depicted computing environment 700 is illustrated employing a hard disk 726, optical disk 732 and magnetic disk 734, those skilled in the relevant art will appreciate that other types of computer-readable storage media that can store data accessible by a computer may be employed, such as magnetic cassettes, flash memory, digital video disks ("DVD"), Bernoulli cartridges, RAMs, ROMs, smart cards, etc. For example, computer-readable storage media may include, but are not limited to, random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory, compact disc ROM (CD-ROM), digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, solid state memory or any other medium which can be used to store the desired information and which may be accessed by processing unit 712a.
Program modules can be stored in the system memory 714, such as an operating system 736, one or more application programs 738, other programs or modules 740 and program data 742. Application programs 738 may include instructions that cause the processor(s) 712 to perform directional motion vector filtering and receive, store and play digital video generated by directional motion vector filtering or on which directional motion vector filtering will be performed. Other program modules 740 may include instructions for handling security such as password or other access protection and communications encryption. The system memory 714 may also include communications programs, for example, a Web client or browser 744 for permitting the computing environment 700 to access and exchange data including digital video with sources such as Web sites of the Internet, corporate intranets, extranets, or other networks and devices as described herein, as well as other server applications on server computing systems. The browser 744 in the depicted embodiment is markup language based, such as Hypertext Markup Language (HTML), Extensible Markup Language (XML) or Wireless Markup Language (WML), and operates with markup languages that use syntactically delimited characters added to the data of a document to represent the structure of the document. A number of Web clients or browsers are commercially available such as those from Mozilla, Google, and Microsoft of Redmond, Wash.
An operator can enter commands and information into the computing environment 700 through input devices such as a touch screen or keyboard 746 and/or a pointing device such as a mouse 748, and/or via a graphical user interface in order to receive, process, store and send digital video on which directional motion vector filtering has been or will be performed as described herein. Other input devices can include a microphone, joystick, game pad, tablet, scanner, etc. These and other input devices are connected to one or more of the processing units 712 through an interface 750 such as a serial port interface that couples to the system bus 716, although other interfaces such as a parallel port, a game port or a wireless interface or a universal serial bus (“USB”) can be used. A monitor 752 or other display device is coupled to the system bus 716 via a video interface 754, such as a video adapter which may be configured to perform directional motion vector filtering of the video. The computing environment 700 can include other output devices, such as speakers, printers, etc.
The computing environment 700 can operate in a networked environment using logical connections to one or more remote computers and/or devices. For example, the computing environment 700 can operate in a networked environment using logical connections to one or more other computing systems, mobile devices and other service providers or information servers that provide the digital video in streaming format or other electronic delivery methods. Communications may be via a wired and/or wireless network architecture, for instance wired and wireless enterprise-wide computer networks, intranets, extranets, telecommunications networks, cellular networks, paging networks, and other mobile networks.
The above description of illustrated embodiments, including what is described in the Abstract, is not intended to be exhaustive or to limit the embodiments to the precise forms disclosed. Although specific embodiments and examples are described herein for illustrative purposes, various equivalent modifications can be made without departing from the spirit and scope of the disclosure, as will be recognized by those skilled in the relevant art. The teachings provided herein of the various embodiments can be applied to other contexts, not necessarily the exemplary context of motion compensation and video compression. It will be understood by those skilled in the art that, although the embodiments described above and shown in the figures are generally directed to the context of motion compensation and video compression, applications related to reconstructing current, previous or other video frames for which a set of applicable motion vectors is available, for example, may also benefit from the concepts described herein.
These and other changes can be made to the embodiments in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.
Relationship | Number | Date | Country
---|---|---|---
Parent | 12979107 | Dec 2010 | US
Child | 15389150 | | US