1. Field of the Invention
The present invention relates to a signal processing device and a signal processing method.
2. Description of the Related Art
There has been known a motion vector estimation technique as represented by a block matching method for estimating as a motion vector a motion of a person or an object that appears in each frame of a video signal. The estimated motion vector is used to, in interlace-to-progressive conversion or in frame rate conversion, for example, compensate for the motion and interpolate frames (or fields). The motion vector estimation technique is also a technique that is indispensable for the inter-frame prediction for increasing the compression efficiency in moving image compression coding. However, the motion vector estimation technique is typically susceptible to the influence of repetitive patterns or noise contained in a video signal. For example, when a single frame of a video signal contains a plurality of similar patterns, it would be difficult to accurately determine to which of the plurality of similar patterns a given pattern in the previous frame has moved.
Referring to
As a result, when a video signal contains many high-frequency repetitive patterns or a large amount of noise, the motion vectors that should be derived for individual pixels can point in widely differing directions, because a number of similar patterns exist within the same frame. Errors such as variations in the vectors can then corrupt the image. That is, because errors in the motion vectors can occur frequently, there is a problem that a user may perceive the image as corrupted after frames are interpolated, for example.
As a method for reducing such errors in the motion vectors, JP 2009-266170A proposes comparing a motion vector, once it has been calculated, with the neighboring vectors and correcting the vector in such a manner as to suppress spatial or temporal variations in the vectors. In addition, in the field of MPEG (Moving Picture Experts Group) compression, there is known a method of adaptively applying a low-pass filter to an input video signal in accordance with the content of the input video signal, thereby suppressing noise components such as mosquito noise (for example, see JP 2001-231038A).
However, the method proposed in JP 2009-266170A requires a number of vectors that have been calculated in the past to be stored for later comparison, and thus requires resources such as a large frame memory. It has therefore been impossible with this technique to meet demands for reducing the size and cost of devices, for example. Further, while noise components can be suppressed by filtering an input video signal as in the method disclosed in JP 2001-231038A, such a technique cannot simply be applied to the estimation of a motion vector. For example, if a low-pass filter is applied to a video signal, the image quality (e.g., sharpness) of the output video could degrade depending on the strength of the filter. For the estimation of a motion vector, however, it is only necessary to remove components that can cause errors from the information that serves as the basis for the estimation, and influencing the image quality of the output video should be avoided. Components that can cause errors are, for example, high-frequency components of a video signal that contains many high-frequency repetitive patterns or noise. In such a case, a more favorable estimation result can be expected by estimating the motion vector after extracting or relatively emphasizing the low-frequency components.
In light of the foregoing, it is desirable to provide a novel and improved signal processing device and signal processing method that can provide a video signal for estimating a motion, which appears in each frame of an input video signal, with higher accuracy without influencing the image quality of an output video.
According to an embodiment of the present invention, there is provided a signal processing device including a measured value acquisition unit configured to acquire a measured value for a feature quantity, the feature quantity having an influence on an estimation of a motion that appears in each frame of an input video signal, a determination unit configured to, on the basis of the measured value acquired by the measured value acquisition unit, determine a characteristic of a filter to be applied to the input video signal, and a filtering unit configured to generate a video signal for use in the estimation of a motion by applying to the input video signal a filter with the characteristic determined by the determination unit.
According to the aforementioned configuration, the characteristic of a filter to be applied to an input video signal is determined on the basis of a measured value for a feature quantity, which has an influence on an estimation of a motion that appears in each frame of the input video signal, and a filter with the thus determined characteristic is applied to the input video signal. Then, a video signal generated as a result of the filtering process is used for the estimation of a motion.
The feature quantity having an influence on the estimation of a motion may include a feature quantity depending on an amplitude of a high-frequency component in a horizontal direction or a vertical direction of each frame of the input video signal.
The feature quantity depending on the amplitude of the high-frequency component may include a first feature quantity representing a histogram per band of the horizontal direction or the vertical direction of each frame of the input video signal.
The feature quantity depending on the amplitude of the high-frequency component may include a second feature quantity representing a sum of differences between pixel values of adjacent pixels that are contained in each frame of the input video signal.
The determination unit may change an attenuation level for a high-frequency band as the characteristic of the filter in accordance with the amplitude of the high-frequency component in each frame of the input video signal, the amplitude being indicated by the measured value acquired by the measured value acquisition unit.
The determination unit may change a blocked band as the characteristic of the filter in accordance with a frequency of a band that indicates the maximum frequence in the histogram per band.
The feature quantity having an influence on the estimation of a motion may include a third feature quantity depending on an intensity of a noise component contained in each frame of the input video signal.
The characteristic of the filter may be represented by a filter coefficient to be multiplied by each signal value of the input video signal, and a shift amount for each signal value. The determination unit may change the shift amount in accordance with the intensity of the noise component in each frame of the input video signal, the intensity being indicated by the measured value acquired by the measured value acquisition unit.
The signal processing device may further include a measuring unit configured to measure the feature quantity for each frame of the input video signal.
The signal processing device may further include a motion estimation unit configured to estimate a motion that appears in each frame on the basis of a signal correlation between a first frame and a second frame of the video signal generated by the filtering unit.
The signal processing device may further include an interpolation processing unit configured to interpolate another frame between the first frame and the second frame of the input video signal in accordance with a motion estimated by the motion estimation unit.
According to another embodiment of the present invention, there is provided a signal processing method for processing an input video signal with a signal processing device, the method including the steps of acquiring a measured value for a feature quantity, the feature quantity having an influence on an estimation of a motion that appears in each frame of the input video signal, determining a characteristic of a filter to be applied to the input video signal on the basis of the acquired measured value, and generating a video signal for use in the estimation of a motion by applying to the input video signal a filter with the determined characteristic.
As described above, according to the signal processing device and the signal processing method in accordance with the present invention, it is possible to provide a video signal for estimating a motion, which appears in each frame of an input video signal, with higher accuracy without influencing the image quality of an output video.
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the appended drawings. Note that, in this specification and the appended drawings, structural elements that have substantially the same function and structure are denoted with the same reference numerals, and repeated explanation of these structural elements is omitted.
The “DETAILED DESCRIPTION OF THE EMBODIMENTS” will be given in the following order.
1. Overall Configuration of a Signal Processing Device in Accordance with One Embodiment
2. Description of Each Part
3. Description of the Advantageous Effects
4. Variation
<1. Overall Configuration of a Signal Processing Device in Accordance with One Embodiment>
In this embodiment, the signal processing device 100 acquires an externally input video signal Vin, and processes the input video signal Vin, and then outputs an output video signal Vout with a frame(s) interpolated thereto. A motion vector, which is used for the interpolation of the frame(s) in the signal processing, is a vector that is estimated using a motion estimation video signal Vex. One advantage of the present invention is that the motion estimation video signal Vex is provided independently of the input video signal Vin to which a frame(s) is/are interpolated. The following section will provide a more specific description of the configuration of each part of the signal processing device 100 that generates the aforementioned motion estimation video signal Vex, estimates a motion, and interpolates a frame(s).
The measuring unit 110 measures feature quantities that have an influence on an estimation of a motion that appears in each frame of the input video signal Vin. The feature quantities measured by the measuring unit 110 in this embodiment include a feature quantity depending on the amplitude of high-frequency components in the horizontal direction and the vertical direction of each frame of the input video signal Vin, and a feature quantity depending on the intensity of noise components contained in each frame of the input video signal Vin. Further, the feature quantity depending on the amplitude of high-frequency components can include a histogram per band for the horizontal direction and the vertical direction of each frame of the input video signal Vin, and a sum of the differences between the pixel values of adjacent pixels that are contained in each frame of the input video signal Vin (hereinafter referred to as an “adjacent difference sum”).
Note that the measuring unit 110 in other embodiments need not measure or output all of the aforementioned three types of measured values M1, M2, and M3; one or more of them may be omitted. Further, the measuring unit 110 may be configured to measure feature quantities for only one of the horizontal direction and the vertical direction of each frame of the input video signal Vin.
The band measuring unit 112 measures the intensities of repetitive components of the individual bands in the horizontal direction and the vertical direction of each frame of the input video signal Vin, and generates a histogram per band for the horizontal direction and a histogram per band for the vertical direction. The intensities of repetitive components of the individual bands can be measured by using horizontal filters and vertical filters that are band-pass filters adapted to the individual bands.
The first horizontal band-pass filter Fh1 separates the first band components in the horizontal direction of the input video signal Vin. The second horizontal band-pass filter Fh2 separates the second band components in the horizontal direction of the input video signal Vin. Likewise, the M-th horizontal band-pass filter FhM separates the M-th band components in the horizontal direction of the input video signal Vin. That is, in this embodiment, repetitive components in the horizontal direction that are contained in a single frame are separated into M band components to be measured.
Meanwhile, the first vertical band-pass filter Fv1 separates the first band components in the vertical direction of the input video signal Vin. The second vertical band-pass filter Fv2 separates the second band components in the vertical direction of the input video signal Vin. Likewise, the N-th vertical band-pass filter FvN separates the N-th band components in the vertical direction of the input video signal Vin. That is, in this embodiment, repetitive components in the vertical direction that are contained in a single frame are separated into N band components to be measured.
The histogram generation unit 113 integrates the amplitudes of the respective band components input from the horizontal filters Fh1 to FhM and the vertical filters Fv1 to FvN over a single frame to thereby generate a histogram per band M1. The histogram per band M1 includes the frequence of each of the M bands in the horizontal direction (an integrated value of the filter output) and the frequence of each of the N bands in the vertical direction.
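The following is a minimal software sketch of what the band measuring unit 112 and the histogram generation unit 113 described above might do. The specific band-pass kernels, their number, and the use of a sum of absolute amplitudes as the integrated value are assumptions made only for illustration.

```python
import numpy as np

def histogram_per_band(frame, h_kernels, v_kernels):
    """Sketch of the band measuring unit 112 and histogram generation unit 113.

    frame     -- 2-D array holding one frame of the input video signal Vin
    h_kernels -- M one-dimensional band-pass kernels (Fh1..FhM, horizontal)
    v_kernels -- N one-dimensional band-pass kernels (Fv1..FvN, vertical)
    Returns the histogram per band M1 as two lists of integrated amplitudes,
    one entry per horizontal band and one entry per vertical band.
    """
    h_hist = []
    for kernel in h_kernels:
        # Separate one horizontal band component and integrate its amplitude
        # over the whole frame (the "frequence" of that band).
        band = np.apply_along_axis(lambda row: np.convolve(row, kernel, mode="same"), 1, frame)
        h_hist.append(float(np.abs(band).sum()))
    v_hist = []
    for kernel in v_kernels:
        # Same for one vertical band component, filtering along the columns.
        band = np.apply_along_axis(lambda col: np.convolve(col, kernel, mode="same"), 0, frame)
        v_hist.append(float(np.abs(band).sum()))
    return h_hist, v_hist
```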
The adjacent difference measuring unit 114 measures the adjacent difference sum contained in each frame of the input video signal Vin for each of the horizontal direction and the vertical direction.
The delay unit 115a delays the timing of processing each pixel of the input video signal Vin by one pixel (1 Pixel), and outputs the delayed pixel value to the subtractor 115b. The subtractor 115b calculates the difference between the pixel value of each pixel of the input video signal Vin that has been input to the adjacent difference measuring unit 114 and the delayed pixel value input from the delay unit 115a. The absolute value computing unit 115c calculates the absolute value of the difference calculated by the subtractor 115b. Then, the integrator 115d integrates the absolute values of the differences calculated by the absolute value computing unit 115c over a single frame. Accordingly, the adjacent difference sum of the horizontal direction contained in each frame of the input video signal Vin is calculated.
Meanwhile, the delay unit 116a delays the timing of processing each pixel of the input video signal Vin by one line (1 Line), and outputs the delayed pixel value to the subtractor 116b. The subtractor 116b calculates the difference between the pixel value of each pixel of the input video signal Vin that has been input to the adjacent difference measuring unit 114 and the delayed pixel value input from the delay unit 116a. The absolute value computing unit 116c calculates the absolute value of the difference calculated by the subtractor 116b. Then, the integrator 116d integrates the absolute values of the differences calculated by the absolute value computing unit 116c over a single frame. Accordingly, the adjacent difference sum of the vertical direction contained in each frame of the input video signal Vin is calculated.
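Because the structure described above is simply delay, subtract, absolute value, and integrate, the adjacent difference sum M2 can be sketched very compactly; the function name and the use of a NumPy array for one frame are assumptions.

```python
import numpy as np

def adjacent_difference_sums(frame):
    """Sketch of the adjacent difference measuring unit 114.

    Returns the adjacent difference sum M2 of the horizontal direction
    (one-pixel delay) and of the vertical direction (one-line delay)
    for one frame of the input video signal Vin.
    """
    frame = frame.astype(np.int64)
    # Horizontal: absolute difference between each pixel and the pixel one
    # sample earlier on the same line, integrated over the frame.
    horizontal_sum = int(np.abs(frame[:, 1:] - frame[:, :-1]).sum())
    # Vertical: absolute difference between each pixel and the pixel one
    # line earlier in the same column, integrated over the frame.
    vertical_sum = int(np.abs(frame[1:, :] - frame[:-1, :]).sum())
    return horizontal_sum, vertical_sum
```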
The noise measuring unit 118 measures a noise level that represents the intensity of noise components contained in each frame of the input video signal Vin.
The frame memory 119a temporarily stores each frame of the input video signal Vin. The noise level detection unit 119b compares each frame of the input video signal Vin with the previous frame stored in the frame memory 119a, and detects a noise level for each frame on the basis of the comparison result. Detection of a noise level with the noise level detection unit 119b is performed with a known method disclosed in, for example, JP 2009-3599A. The value of a noise level can be obtained by, for example, representing a standard deviation, a variance, or the like using a predetermined number of bits (e.g., 10 bits).
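The sketch below illustrates this kind of frame-to-frame noise measurement. It is not the method of JP 2009-3599A; it merely shows the idea of comparing the current frame with the previous frame and expressing a standard deviation of the difference with a predetermined number of bits.

```python
import numpy as np

def detect_noise_level(current_frame, previous_frame, bits=10):
    """Illustrative stand-in for the noise level detection unit 119b."""
    diff = current_frame.astype(np.float64) - previous_frame.astype(np.float64)
    sigma = diff.std()  # standard deviation of the inter-frame difference
    # Clamp to the range representable with the predetermined number of bits.
    return int(min(max(sigma, 0.0), (1 << bits) - 1))
```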
The measuring unit 110 outputs to the measured value acquisition unit 130 the measured values as the measurement results obtained by the aforementioned band measuring unit 112, adjacent difference measuring unit 114, and noise measuring unit 118, that is, the histogram per band M1, the adjacent difference sum M2, and the noise level M3.
The measured value acquisition unit 130 acquires from the measuring unit 110 the measured values for feature quantities that have an influence on an estimation of a motion that appears in each frame of the input video signal Vin. In this embodiment, the measured values acquired by the measured value acquisition unit 130 are the aforementioned histogram per band M1, adjacent difference sum M2, and noise level M3. Then, the measured value acquisition unit 130 outputs the acquired measured values to the determination unit 140.
The determination unit 140 determines the characteristics of a filter to be applied to the input video signal Vin on the basis of the measured values acquired by the measured value acquisition unit 130. A filter to be applied to the input video signal is a filter in the filtering unit 150 (described below). In this embodiment, the characteristics of a filter to be applied to the input video signal Vin are represented by a filter coefficient to be multiplied by each signal value of the input video signal Vin and a shift amount (also referred to as a “scaling parameter”) for each signal value. Thus, the determination unit 140 determines, on the basis of the measured values acquired by the measured value acquisition unit 130, a filter coefficient of a filter to be applied to the input video signal Vin and the shift amount as described below.
The first determination unit 142 changes the filter strength of the horizontal direction and the filter strength of the vertical direction to be applied to the input video signal Vin in accordance with the histogram per band M1 input from the measured value acquisition unit 130. As used in this specification, “filter strength” refers to a concept that encompasses the attenuation level for an input signal and the width of the blocked bands. In the example shown in
More specifically, the first determination unit 142 selects the band that indicates the maximum frequence in the histogram per band for each of the horizontal direction and the vertical direction. Next, the first determination unit 142 compares the frequence of the selected band with a threshold. Herein, if the frequence of the selected band is higher than a predetermined threshold, it is determined that a repetitive pattern with that band is noticeable in the input frame. In this case, the higher the frequence of the selected band, the higher the filter strength selected by the first determination unit 142. Meanwhile, if the frequence of the selected band is not higher than the predetermined threshold, it is determined that no band has a particularly noticeable repetitive pattern in the input frame. In that case, the first determination unit 142 selects the lowest filter strength.
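Assuming, for illustration only, that the strength selection table 143 can be modeled as a list of frequence lower bounds paired with strength levels, the selection logic described above might be sketched as follows.

```python
def select_strength_from_histogram(band_histogram, threshold, strength_table):
    """Sketch of the filter strength selection in the first determination unit 142.

    band_histogram -- frequence per band for one direction (from the histogram M1)
    threshold      -- predetermined threshold compared with the maximum frequence
    strength_table -- stand-in for the strength selection table 143: a list of
                      (frequence_lower_bound, strength_level) pairs in ascending order
    Returns a strength level such as 0 (Lv0) up to 4 (Lv4).
    """
    max_frequence = max(band_histogram)   # frequence of the selected band
    if max_frequence <= threshold:
        return 0                           # no noticeable repetitive pattern: lowest strength Lv0
    strength = 0
    for lower_bound, level in strength_table:
        if max_frequence > lower_bound:
            strength = level               # the higher the frequence, the higher the strength
    return strength
```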
Referring to
Meanwhile, in the example of
The first determination unit 142 performs the aforementioned filter strength determination process for each of the horizontal direction and the vertical direction. Then, the first determination unit 142 outputs to the characteristics determination unit 146 a filter strength S1htmp of the horizontal direction and a filter strength S1vtmp of the vertical direction as the determination results. Note that the subscript “tmp” in the filter strengths S1htmp and S1vtmp means that the filter strengths determined by the first determination unit 142 in this embodiment are temporary values. However, the present invention is not limited to this embodiment, and the filter strengths determined by the first determination unit 142 may be handled as the final values.
Referring to
Next, the first determination unit 142 selects a band that indicates the maximum frequence from the histogram per band of the vertical direction (step S112). Next, the first determination unit 142 determines if the frequence of the selected band is higher than a predetermined threshold (S114). Herein, if the frequence of the selected band is determined to be higher than the predetermined threshold, the first determination unit 142 refers to the strength selection table 143, and sets the filter strength S1vtmp of the vertical direction in accordance with the frequence of the selected band (step S116). Meanwhile, if the frequence of the selected band is not determined to be higher than the predetermined threshold in step S114, the first determination unit 142 sets the filter strength S1vtmp of the vertical direction to the lowest level Lv0 (step S118).
Note that the threshold compared with the frequence of the histogram per band of the horizontal direction in step S104 can be either the same value as or a different value from the threshold compared with the frequence of the histogram per band of the vertical direction in step S114.
The second determination unit 144 changes the filter strength of the horizontal direction and the filter strength of the vertical direction to be applied to the input video signal Vin in accordance with the adjacent difference sum M2 input from the measured value acquisition unit 130. More specifically, the second determination unit 144 compares the adjacent difference sum M2 of each of the horizontal direction and the vertical direction with a predetermined threshold. If the value of the adjacent difference sum M2 is higher than the threshold, the second determination unit 144 selects the highest filter strength, while if the value of the adjacent difference sum M2 is not higher than the threshold, the second determination unit 144 selects the lowest filter strength. The second determination unit 144 performs such a filter strength determination process for each of the horizontal direction and the vertical direction. Then, the second determination unit 144 outputs to the characteristics determination unit 146 a filter strength S2htmp of the horizontal direction and the filter strength S2vtmp of the vertical direction as the determination results.
Referring to
Next, the second determination unit 144 determines if the adjacent difference sum of the vertical direction is higher than a predetermined threshold (step S162). Herein, if the adjacent difference sum is determined to be higher than the predetermined threshold, the second determination unit 144 sets the filter strength S2vtmp of the vertical direction to the highest level Lv4 (step S164). Meanwhile, if the adjacent difference sum is not determined to be higher than the predetermined threshold in step S162, the second determination unit 144 sets the filter strength S2vtmp of the vertical direction to the lowest level Lv0 (step S166).
Note that the threshold compared with the adjacent difference sum of the horizontal direction in step S152 can be either the same value as or a different value from the threshold compared with the adjacent difference sum of the vertical direction in step S162.
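For one direction, this two-level selection can be sketched in a single expression; the threshold and the level values for Lv0 and Lv4 are passed in as parameters.

```python
def select_strength_from_adjacent_difference(adjacent_difference_sum, threshold,
                                             lowest=0, highest=4):
    """Sketch of the second determination unit 144 for one direction.

    Returns the highest strength (Lv4) if the adjacent difference sum M2 exceeds
    the predetermined threshold, and the lowest strength (Lv0) otherwise.
    """
    return highest if adjacent_difference_sum > threshold else lowest
```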
The characteristics determination unit 146 determines a filter coefficient of a filter in the horizontal direction to be applied to the input video signal Vin on the basis of the filter strength S1htmp of the horizontal direction input from the first determination unit 142 and the filter strength S2htmp of the horizontal direction input from the second determination unit 144. The characteristics determination unit 146 also determines a filter coefficient of a filter in the vertical direction to be applied to the input video signal Vin on the basis of the filter strength S1vtmp of the vertical direction input from the first determination unit 142 and the filter strength S2vtmp of the vertical direction input from the second determination unit 144. Further, the characteristics determination unit 146 determines a shift amount of a filter to be applied to the input video signal Vin on the basis of the noise level M3 acquired from the measured value acquisition unit 130.
The strength determination unit 147a calculates a single filter strength Sh from the filter strength S1htmp of the horizontal direction input from the first determination unit 142 and the filter strength S2htmp of the horizontal direction input from the second determination unit 144. The filter strength Sh can be a mean value of the filter strengths S1htmp and S2htmp. Alternatively, the filter strength Sh can be calculated by, for example, multiplying each of the filter strengths S1htmp and S2htmp by a predetermined weighting factor and averaging the weighted filter strengths S1htmp and S2htmp. Note that if the calculated mean value has fractions below the decimal point, such fractions can be rounded off, for example. Likewise, the strength determination unit 147a calculates a single filter strength Sv from the filter strength S1vtmp of the vertical direction input from the first determination unit 142 and the filter strength S2vtmp of the vertical direction input from the second determination unit 144. Then, the strength determination unit 147a outputs the thus calculated filter strengths Sh and Sv to the strength step-control unit 147b.
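Assuming, purely for illustration, that the two weighting factors are 0.5 each and that fractions are rounded, the combination performed by the strength determination unit 147a for one direction could be sketched as follows.

```python
def combine_strengths(s1_tmp, s2_tmp, w1=0.5, w2=0.5):
    """Sketch of the strength determination unit 147a for one direction.

    s1_tmp, s2_tmp -- temporary strengths from the first and second determination units
    w1, w2         -- assumed weighting factors (a plain mean when both are 0.5)
    Returns a single filter strength (Sh or Sv) with fractions rounded off.
    """
    return int(round(w1 * s1_tmp + w2 * s2_tmp))
```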
The strength step-control unit 147b controls the output value of the strength such that the filter strength changes in a stepwise manner to prevent a vector error that may otherwise occur due to an abrupt change in the filter strength. For example, the strength step-control unit 147b, if the output value of the strength of the previous frame is Lv0 and the latest strength input from the strength determination unit 147a is Lv4, controls the output value of the strength on a frame-by-frame basis such that the strengths output to the parameter output unit 147d are Lv0→Lv1→Lv2→Lv3→Lv4.
Referring to
In step S212, the strength step-control unit 147b substitutes a value, which is obtained by adding a predetermined variation to the output value of the previous strength, into the filter strength (step S212). For example, if the output value of the previous strength is Lv0 and the variation is defined as level 1, the new filter strength is Lv1. Next, the strength step-control unit 147b determines if the new filter strength is above the upper limit value of the filter strength (step S214). Herein, if the new filter strength is determined to be above the upper limit value of the filter strength, the strength step-control unit 147b outputs the upper limit value (e.g., Lv4) of the filter strength to the parameter output unit 147d (step S216). Meanwhile, if the new filter strength is not determined to be above the upper limit value of the filter strength, the strength step-control unit 147b outputs the new filter strength to the parameter output unit 147d (step S218).
In step S222, the strength step-control unit 147b substitutes a value, which is obtained by subtracting a predetermined variation from the output value of the previous strength, into the filter strength (step S222). For example, if the output value of the previous strength is Lv4 and the variation is defined as level 1, the new filter strength is Lv3. Next, the strength step-control unit 147b determines if the new filter strength is below the lower limit value of the filter strength (step S224). Herein, if the new filter strength is determined to be below the lower limit value of the filter strength, the strength step-control unit 147b outputs the lower limit value (e.g., Lv0) of the filter strength to the parameter output unit 147d (step S226). Meanwhile, if the new filter strength is not determined to be below the lower limit of the filter strength, the strength step-control unit 147b outputs the new filter strength to the parameter output unit 147d (step S228).
The aforementioned step-control process of the strength step-control unit 147b is performed in parallel for each of the filter strength Sh of the horizontal direction and the filter strength Sv of the vertical direction.
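The step control described above can be summarized in a small sketch; the variation of one level per frame and the limits Lv0 and Lv4 follow the example given above, and the same function would be called once per frame for each of Sh and Sv.

```python
def step_control(previous_output, target_strength, variation=1,
                 lower_limit=0, upper_limit=4):
    """Sketch of the strength step-control unit 147b for one frame.

    Moves the output strength toward the latest strength from the strength
    determination unit 147a by at most `variation` levels per frame, clamped
    to the range Lv0..Lv4, so the filter strength changes in a stepwise manner.
    """
    if target_strength > previous_output:
        return min(previous_output + variation, upper_limit)   # steps S212 to S218
    if target_strength < previous_output:
        return max(previous_output - variation, lower_limit)   # steps S222 to S228
    return previous_output
```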
The parameter output unit 147d acquires from the filter coefficient table 148 a set of filter coefficients that are associated with the filter strengths Sh and Sv input from the strength step-control unit 147b. Then, the parameter output unit 147d outputs the acquired set of filter coefficients to the filtering unit 150.
First, when the filter strength is Lv0 (the upper left graph), the filter gain is unity over the entire range from zero to the highest frequency (fs/2, that is, half the sampling rate fs). That is, in this case, the filter passes all signals as they are. When the filter strength is Lv1 to Lv4, the filter exhibits the characteristics of a low-pass filter: the higher the filter strength, the higher the attenuation level for high-frequency bands, and the lower the lowest frequency of the blocked bands. For example, when the filter strength is Lv1 (the upper middle graph), only signals in bands close to the highest frequency (fs/2) are blocked, whereas signals in bands around fs/4 are hardly attenuated. In contrast, when the filter strength is Lv4 (the lower right graph), signals over a wider range of bands, down to a band below fs/4, are blocked.
Note that the filter characteristics shown in
The parameter output unit 147d acquires a set of filter coefficients that exhibit the aforementioned filter characteristics for each of the horizontal direction and the vertical direction, in accordance with the filter strengths input from the strength step-control unit 147b, and outputs the acquired set of filter coefficients to the filtering unit 150.
Note that the filter coefficient table 148 further stores preset values of the shift amount while correlating them with the set of filter coefficients. The preset values of the shift amount are used for the parameter output unit 147d to determine the shift amount as described below.
In this embodiment, a “shift amount” refers to the number of bits shifted by the shift operation executed by the filtering unit 150 to prevent the maximum filter output value from exceeding the output dynamic range. Since a shift operation removes the lower-order bits of a signal value, a large shift amount removes more of the noise contained in a frame but can also reduce the sharpness of the frame.
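As a purely hypothetical illustration of how such a table could be populated (the actual contents of the filter coefficient table 148 are not reproduced here), integer binomial kernels behave exactly as described above: the longer the kernel, the stronger the low-pass characteristic, and the preset value of the shift amount can be chosen so that the sum of the coefficients equals two raised to the shift amount, which keeps the filter output within the dynamic range.

```python
# Hypothetical filter coefficient table: (set of integer filter coefficients,
# preset shift amount) per filter strength level. The coefficient sum of each
# entry is 2 ** shift, so the right shift normalizes the filter output.
HYPOTHETICAL_FILTER_COEFF_TABLE = {
    0: ([1], 0),                                   # Lv0: all-pass, no attenuation
    1: ([1, 2, 1], 2),                             # Lv1: mild low-pass
    2: ([1, 4, 6, 4, 1], 4),                       # Lv2
    3: ([1, 6, 15, 20, 15, 6, 1], 6),              # Lv3
    4: ([1, 8, 28, 56, 70, 56, 28, 8, 1], 8),      # Lv4: strongest low-pass
}
```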
The noise level step-control unit 147c controls the output value of a noise level such that the noise level changes in a stepwise manner to ease an abrupt change in the shift amount that is determined on the basis of the noise level. For example, the noise level step-control unit 147c modifies (adds or subtracts) the value of the noise level M3 output from the noise measuring unit 118 such that the value of the noise level M3 changes on a frame-by-frame basis by a constant amount. The noise level step-control unit 147c can be implemented by a logical process similar to the strength step-control process shown in
The parameter output unit 147d refers to the filter coefficient table 148, and acquires an offset of the shift amount that is associated with the noise level input from the noise level step-control unit 147c. Then, the parameter output unit 147d outputs a value, which is obtained by adding the offset of the shift amount to a preset value of the shift amount acquired from the filter coefficient table 148, to the filtering unit 150 as a shift amount to be finally used.
In the example of
Provided that the preset value of the shift amount defined in advance with the set of filter coefficients is Sfin, the offset of the shift amount acquired according to a noise level is Sfoffset, and the shift amount output from the parameter output unit 147d is Sfout, Sfout can be given by the following formula.
[Formula 1]
Sfout=Sfin+Sfoffset  (1)
The filtering unit 150 applies a filter with characteristics, which have been determined by the determination unit 140, to the input video signal Vin, thereby generating a motion estimation video signal Vex.
The horizontal direction filter 152 filters each frame of the input signal Vin using the set of filter coefficients for the horizontal direction, thereby blocking or attenuating high-frequency components in the horizontal direction contained in each frame. The filtering operation performed by the horizontal direction filter 152 is represented by the following formula.
Herein, Vin[x,y] indicates a pixel value at the coordinates (x,y) of a single frame of the input video signal. M indicates a value that determines the number of filter taps of the horizontal direction filter 152. Coeffh[0] to Coeffh[2M] indicate a set of filter coefficients for the horizontal direction. Vhout[x,y] indicates a pixel value at the coordinates (x,y) of a single frame of the output signal of the horizontal direction filter 152.
The vertical direction filter 154 filters each frame of the output signal Vhout from the horizontal direction filter 152 using the set of filter coefficients for the vertical direction, thereby blocking or attenuating high-frequency components in the vertical direction contained in each frame. The filtering operation performed by the vertical direction filter 154 is represented by the following formula.
Herein, N is a value that determines the number of filter taps of the vertical direction filter 154. Coeffv[0] to Coeffv[2N] indicate a set of filter coefficients for the vertical direction. Vvout[x,y] indicates a pixel value at the coordinates (x,y) of a single frame of the output signal of the vertical direction filter 154.
The scaling unit 156 shifts the output signal of the vertical direction filter 154 such that the output signal from the filtering unit 150 does not exceed the dynamic range. The shift operation performed by the scaling unit 156 is represented by the following formula.
[Formula 4]
Vex[x,y]=Vvout[x,y]>>Sfout  (4)
Vex[x,y] indicates a pixel value at the coordinates (x,y) of a single frame of the motion estimation video signal Vex output from the filtering unit 150 as a result of the filtering process.
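Putting the horizontal direction filter 152, the vertical direction filter 154, and the scaling unit 156 together, a minimal sketch of the filtering unit 150 might look like the following. The FIR convolutions are assumed from the description of the coefficients Coeffh[0] to Coeffh[2M] and Coeffv[0] to Coeffv[2N]; the coefficients are further assumed to be integers so that the right shift by Sfout performs the normalization as in Formula 4.

```python
import numpy as np

def filtering_unit(frame, coeff_h, coeff_v, shift_amount):
    """Sketch of the filtering unit 150 producing one frame of Vex.

    frame        -- one frame of the input video signal Vin
    coeff_h      -- 2M+1 integer filter coefficients Coeffh[0..2M]
    coeff_v      -- 2N+1 integer filter coefficients Coeffv[0..2N]
    shift_amount -- Sfout, the number of bits shifted by the scaling unit 156
    """
    frame = frame.astype(np.int64)
    # Horizontal direction filter 152: filter every line with Coeffh (Vhout).
    vhout = np.apply_along_axis(
        lambda row: np.convolve(row, coeff_h, mode="same"), 1, frame)
    # Vertical direction filter 154: filter every column of Vhout with Coeffv (Vvout).
    vvout = np.apply_along_axis(
        lambda col: np.convolve(col, coeff_v, mode="same"), 0, vhout)
    # Scaling unit 156: shift so the output stays within the dynamic range (Formula 4).
    return vvout.astype(np.int64) >> shift_amount
```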
The frame memory 160 temporarily stores each frame of the motion estimation video signal Vex output from the filtering unit 150. Each frame of the motion estimation video signal Vex stored in the frame memory 160 is used for the motion estimation unit 170 to estimate a motion vector. In addition, the frame memory 160 temporarily stores each frame of the input video signal Vin input to the signal processing device 100. Further, the frame memory 160 also temporarily stores a motion vector for each frame estimated by the motion estimation unit 170. Each frame of the input video signal Vin and the motion vector for each frame that are stored in the frame memory 160 are used for the interpolation processing unit 180 to interpolate a new frame(s).
The motion estimation unit 170 estimates a motion vector representing a motion that appears in each frame on the basis of the signal correlation between a first frame and a second frame of the motion estimation video signal Vex generated by the filtering unit 150. The first frame and the second frame correspond to, for example, the current (latest) frame and the previous frame. Estimation of a motion vector by the motion estimation unit 170 can be performed with a known method such as a block matching method. Then, the motion estimation unit 170 outputs the estimated motion vector to the interpolation processing unit 180.
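As an illustration of one such known method, a simple full-search block matching between the previous frame and the current frame of Vex might be sketched as follows; the block size, the search range, and the use of the sum of absolute differences as the matching cost are assumptions.

```python
import numpy as np

def block_matching(prev_frame, curr_frame, block=16, search=8):
    """Illustrative full-search block matching for the motion estimation unit 170.

    Returns one motion vector (dy, dx) per block, keyed by the block's
    top-left coordinates in the current frame.
    """
    height, width = curr_frame.shape
    vectors = {}
    for y in range(0, height - block + 1, block):
        for x in range(0, width - block + 1, block):
            target = curr_frame[y:y + block, x:x + block].astype(np.int64)
            best_vector, best_sad = (0, 0), None
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > height or xx + block > width:
                        continue
                    candidate = prev_frame[yy:yy + block, xx:xx + block].astype(np.int64)
                    sad = int(np.abs(target - candidate).sum())  # sum of absolute differences
                    if best_sad is None or sad < best_sad:
                        best_sad, best_vector = sad, (dy, dx)
            vectors[(y, x)] = best_vector
    return vectors
```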
The interpolation processing unit 180 interpolates a new frame(s) between the first frame and the second frame of the input video signal Vin in accordance with a motion estimated by the motion estimation unit 170, namely, the motion vector input from the motion estimation unit 170. Interpolation of a frame(s) by the interpolation processing unit 180 can also be performed with a known method. Then, the interpolation processing unit 180 outputs an output video signal Vout with the interpolated frame(s). The output video signal Vout can be used either directly as a frame-rate-converted video signal or for applications such as interlace-to-progressive conversion.
The signal processing device 100 in accordance with one embodiment of the present invention has been described in detail with reference to
In addition, according to this embodiment, feature quantities that have an influence on an estimation of a motion include a feature quantity depending on the amplitude of high-frequency components in the horizontal direction or the vertical direction of each frame of an input video signal. That is, using the amplitude of the high-frequency components in the horizontal direction or the vertical direction (or both) as the basis for the determination of the filter characteristics makes it possible to identify the intensity of a repetitive pattern that appears in the input frame and to select filter characteristics that will allow such a repetitive pattern to be removed or eased. The feature quantity depending on the amplitude of high-frequency components is, for example, a histogram per band of the horizontal direction or the vertical direction of each frame of an input video signal. Using the histogram per band allows sorting of the amplitudes of the high-frequency components into a plurality of levels according to the number of bands. Thus, the filter characteristics can be controlled more flexibly. Another example of the feature quantity depending on the amplitude of high-frequency components is a sum of the differences between the pixel values of adjacent pixels that are contained in each frame of an input video signal. Determining the sum of the differences between the pixel values of the adjacent pixels does not require a complex calculation process. Thus, such a sum can be determined with a low calculation cost and a relatively small circuit size.
Further, according to this embodiment, the feature quantities that have an influence on an estimation of a motion include a noise level that represents the intensity of noise components contained in each frame of an input video signal. For example, if a shift amount as one of the filter characteristics is determined in accordance with the noise level, it is possible to, when the noise level is low, maintain the sharpness of the frame, and, when the noise level is high, remove the noise. Accordingly, robustness of the motion vector estimation can be further improved.
The aforementioned embodiment has illustrated an example in which the signal processing device 100 includes the measuring unit 110, the motion estimation unit 170, and the interpolation processing unit 180. However, the present invention is not limited thereto. For example, a device can be provided that includes only the aforementioned measured value acquisition unit 130, determination unit 140, and filtering unit 150; or only the measured value acquisition unit 130 and the determination unit 140. For example, a signal processing device 200 in accordance with one variation shown in
The signal processing device 100 or 200 need not use all of the aforementioned three types of measured values M1, M2, and M3 for the determination of the filter characteristics; one or more of them may be omitted. For example, if the adjacent difference sum M2 is not used, the characteristics determination unit 146 of the determination unit 140 can determine the filter characteristics on the basis of only the filter strengths S1htmp and S1vtmp input from the first determination unit 142. Likewise, if the histogram per band M1 is not used, the characteristics determination unit 146 of the determination unit 140 can determine the filter characteristics on the basis of only the filter strengths S2htmp and S2vtmp input from the second determination unit 144. Further, the signal processing device 100 or 200 need not determine the filter characteristics or perform the filtering process for one of the horizontal direction and the vertical direction.
Note that some or all of a series of the processes performed by the signal processing devices 100 and 200 described in this specification can be implemented with software. A program that constitutes such software for implementing some or all of the series of processes is stored in advance in a storage medium that is provided in or outside of the device. Each program is, when executed, read into RAM and executed by a processor such as a CPU.
Although the preferred embodiments of the present invention have been described in detail with reference to the appended drawings, the present invention is not limited thereto. It is obvious to those skilled in the art that various modifications or variations are possible insofar as they are within the technical scope of the appended claims or the equivalents thereof. It should be understood that such modifications or variations are also within the technical scope of the present invention.
The present application contains subject matter related to that disclosed in Japanese Priority Patent Application JP 2010-112273 filed in the Japan Patent Office on May 14, 2010, the entire content of which is hereby incorporated by reference.