Before describing embodiments of the present invention, the correspondence between the features of the present invention and embodiments of the present invention disclosed in this specification or the accompanying drawings is discussed below. This description is intended to assure that embodiments supporting the present invention are described in this specification or the accompanying drawings. Thus, even if an embodiment in this specification or the accompanying drawings is not described as relating to a certain feature of the present invention, that does not necessarily mean that the embodiment does not relate to that feature of the present invention. Conversely, even if an embodiment is described herein as relating to a certain feature of the present invention, that does not necessarily mean that the embodiment does not relate to other features of the present invention.
An image processing apparatus (for example, an image processing apparatus 1 shown in
The correlation computation unit can include: a Fourier transform unit (for example, Fourier transform units 31A and 31B shown in
The detection unit can include: a counter unit (for example, the counter section 17 shown in
The detection unit can further include a normalization unit (for example, a normalization section 16 shown in
The image processing apparatus according to an embodiment of the present invention can further include an extraction unit (for example, region extraction sections 12A and 12B shown in
The image processing apparatus according to an embodiment of the present invention can further include a reducing unit (for example, image reducing sections 13A and 13B shown in
The image processing apparatus according to an embodiment of the present invention can further include a non-image detection unit (for example, non-image detection sections 14A and 14B shown in
The non-image detection unit can include: a Fourier transform unit (for example, the Fourier transform units 31A and 31B shown in
The image processing apparatus according to an embodiment of the present invention can further include a difference computation unit (for example, a difference computation unit 91 shown in
The image processing apparatus according to an embodiment of the present invention can further include a dividing unit (for example, a dividing section 111 shown in
The image processing apparatus according to an embodiment of the present invention can further include a representative image detection unit (for example, a representative image detection section 201 shown in
The image processing apparatus according to an embodiment of the present invention can further include: a Fourier transform unit (for example, the Fourier transform units 31A and 31B shown in
An image processing method or a program according to an embodiment of the present invention includes the steps of: computing a phase correlation between image signals forming a plurality of images (for example step S7 shown in
An image processing apparatus (for example, the image processing apparatus 1 shown in
The image processing apparatus according to another embodiment of the present invention can further include an extraction unit (for example, the region extraction sections 12A and 12B shown in
The image processing apparatus according to another embodiment of the present invention can further include a reducing unit (for example, the image reducing sections 13A and 13B shown in
An image processing method or a program according to another embodiment of the present invention includes the steps of: computing an average for each of image signals forming a plurality of images (for example, step S304 shown in
Embodiments of the present invention will be described with reference to the accompanying drawings.
An image processing apparatus 1 is provided with image input sections 11A and 11B, region extraction sections 12A and 12B, image reducing sections 13A and 13B, non-image detection sections 14A and 14B, a computation section 15, a normalization section 16, a counter section 17, a determination section 18, and a storage section 19.
The image input section 11A is configured with, for example, a tuner, and receives a television broadcast signal and outputs the received signal to the region extraction section 12A. The region extraction section 12A extracts an image signal corresponding to a predetermined region of a single image represented by the received image signal. The image reducing section 13A reduces the size of the predetermined region represented by the image signal extracted by the region extraction section 12A by reducing the number of pixels included in the predetermined region. The image signal corresponding to the size-reduced region reduced by the image reducing section 13A is supplied to the non-image detection section 14A.
The image input section 11B, the region extraction section 12B, and the image reducing section 13B perform the same processing as the image input section 11A, the region extraction section 12A, and the image reducing section 13A, respectively, upon different images. The image input section 11B may be removed, and the output of the image input section 11A may be supplied to the region extraction section 12B.
The non-image detection sections 14A and 14B detect an image that can hardly be defined as an image (hereinafter referred to as a non-image) such as a white overexposed image obtained after a flash has been fired. The non-image detection section 14A is provided with the Fourier transform unit 31A, the alternating component detection unit 32A, and the determination unit 33A. The non-image detection section 14B is similarly provided with the Fourier transform unit 31B, the alternating component detection unit 32B, and the determination unit 33B.
The Fourier transform unit 31A performs a fast Fourier transform upon the image signal transmitted from the image reducing section 13A, and outputs the processed image signal to the alternating component detection unit 32A. The alternating component detection unit 32A detects an alternating component from the image signal transmitted from the Fourier transform unit 31A. The determination unit 33A compares the value of the alternating component detected by the alternating component detection unit 32A with a predetermined threshold value that has been set in advance, determines whether an image represented by the received image signal is a non-image on the basis of the comparison result, and then controls the operation of the cross power spectrum detection unit 51 on the basis of the determination result.
The Fourier transform unit 31B, the alternating component detection unit 32B, and the determination unit 33B, which are included in the non-image detection section 14B, perform the same processing as the Fourier transform unit 31A, the alternating component detection unit 32A, and the determination unit 33A, respectively, which are included in the non-image detection section 14A, upon the output of the image reducing section 13B. Subsequently, the operation of the cross power spectrum detection unit 51 is controlled on the basis of the determination result of the determination unit 33B.
The computation section 15 performs computation compliant with SPOMF (Symmetric Phase-Only Matched Filtering). SPOMF is described in "Symmetric Phase-Only Matched Filtering of Fourier-Mellin Transforms for Image Registration and Recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 16, No. 12, December 1994.
The computation section 15 is provided with the cross power spectrum detection unit 51 and the inverse Fourier transform unit 52. In reality, however, the Fourier transform units 31A and 31B included in the non-image detection sections 14A and 14B also form part of the computation section 15. That is, the Fourier transform units 31A and 31B in the non-image detection sections 14A and 14B serve as the Fourier transform unit of the computation section 15. Alternatively, a dedicated Fourier transform unit may be disposed in the computation section 15.
The cross power spectrum detection unit 51 computes a cross power spectrum using the outputs of the Fourier transform units 31A and 31B. The operation of the cross power spectrum detection unit 51 is controlled on the basis of the outputs of the determination units 33A and 33B. That is, if the determination unit 33A or 33B determines that an image being processed is a non-image, the operation of the cross power spectrum detection unit 51 is interrupted. The inverse Fourier transform unit 52 performs a fast inverse Fourier transform upon the output of the cross power spectrum detection unit 51.
The normalization section 16 normalizes the output of the inverse Fourier transform unit 52. The counter section 17 detects the number of peaks of the output of the normalization section 16, and outputs the detection result to the determination section 18. The determination section 18 compares the detected number of peaks with a predetermined reference value that has been set in advance, and outputs the comparison result to the storage section 19 so as to cause the storage section 19 to store the comparison result. The image signals output from the image input sections 11A and 11B are also stored in the storage section 19.
Scene change detection performed by the image processing apparatus 1 shown in
In step S1, the image input sections 11A and 11B receive images forming different frames. The region extraction sections 12A and 12B extract image signals corresponding to predetermined regions of the images received by the image input sections 11A and 11B, respectively. More specifically, as shown in
Pixel values of pixels outside the extracted inner region are not set to zero; instead, they are set so that they change smoothly from the boundary of the inner region toward the outside, like a cross-fade. Consequently, the effect of the boundary on the spectrum can be reduced.
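The cross-fade treatment of the boundary can be sketched as follows. This is a minimal NumPy illustration only, not the exact taper used by the region extraction sections 12A and 12B; the raised-cosine ramp, the function name, and the `margin` parameter are assumptions:

```python
import numpy as np

def tapered_region(image, margin):
    """Cross-fade taper: pixels well inside the region keep their values,
    while pixels within `margin` of the border fade smoothly toward zero,
    which reduces spectral leakage at the boundary in the later FFT."""
    h, w = image.shape

    def ramp(n, m):
        r = np.ones(n)
        t = 0.5 - 0.5 * np.cos(np.pi * np.arange(m) / m)  # rises from 0
        r[:m] = t
        r[n - m:] = t[::-1]
        return r

    window = np.outer(ramp(h, margin), ramp(w, margin))
    return image * window
```

A hard cut-off at the region boundary would behave like multiplication by a rectangular window and spread energy across the spectrum; the smooth taper suppresses that artifact.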
Next, in step S3, the image reducing sections 13A and 13B reduce the sizes of the regions represented by the image signals transmitted from the region extraction sections 12A and 12B, respectively. More specifically, as shown in
Thus, by significantly reducing the number of pixels, the amount of computation to be performed can be reduced. Since the pixel values within each block are averaged, the correlation between frames is examined using coarse, grainy images. Generally, when an image rotates between frames, the level of the correlation between them is lowered, so one of the frames is sometimes falsely detected as a scene change. However, when the correlation between frames is examined using grainy images, the level of the correlation is not lowered even if the image rotates between them. Accordingly, the false detection of a scene change can be prevented.
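The reduction by block averaging performed by the image reducing sections 13A and 13B can be sketched as follows; this is an illustrative NumPy sketch, and the function name and block size are assumptions:

```python
import numpy as np

def reduce_by_averaging(image, block):
    """Shrink an image by averaging each block-by-block region of pixels
    into a single output pixel (average pooling)."""
    h, w = image.shape
    h2, w2 = h // block, w // block
    # Crop to a multiple of the block size, then average within each block.
    cropped = image[:h2 * block, :w2 * block]
    return cropped.reshape(h2, block, w2, block).mean(axis=(1, 3))
```

Each output pixel carries the average of its source block, which is what makes the reduced image "grainy" yet robust to small rotations.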
Next, in step S4, the Fourier transform unit 31A performs a two-dimensional fast Fourier transform upon the image signal transmitted from the image reducing section 13A. More specifically, the computation represented by the following equation (1) is performed. Similarly, the Fourier transform unit 31B performs a two-dimensional fast Fourier transform using the following equation (2).
In step S5, the alternating component detection unit 32A detects an alternating component from the output of the Fourier transform unit 31A. Similarly, the alternating component detection unit 32B detects an alternating component from the output of the Fourier transform unit 31B. In step S6, the determination units 33A and 33B compare the detection results of the alternating component detection units 32A and 32B with a predetermined threshold value that has been set in advance to determine whether the values of the detected alternating components are equal to or larger than the threshold value.
If one of the images forming different frames extracted by the region extraction sections 12A and 12B is a white overexposed image, that is, a non-image, and the other is a normal image, it is often determined that there is no correlation between these images (that is, that the white overexposed image is a scene change). However, in such a case, the white overexposed image is not a scene change in reality; it is simply displayed as a bright image due to light emitted by a flash. Accordingly, it is not desirable that such a frame be detected as a scene change. In the case of a white overexposed image due to light emitted by a flash, the value of the alternating component represented by a coefficient used in a fast Fourier transform is small. Accordingly, if the values of the alternating components are smaller than the threshold value that has been set in advance, the determination units 33A and 33B determine that the frames being processed are not scene changes in step S14. Thus, a white overexposed image can be prevented from being falsely detected as a scene change.
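The alternating-component test of the non-image detection sections can be sketched as follows. Summing the non-DC amplitude of the FFT is one plausible reading of "the value of the alternating component"; the function name and threshold are assumptions:

```python
import numpy as np

def is_non_image(image, threshold):
    """A nearly uniform frame (e.g. white overexposure after a flash) has
    almost no alternating (non-DC) component in its Fourier transform."""
    spectrum = np.fft.fft2(image)
    spectrum[0, 0] = 0.0                      # drop the DC term
    ac_energy = np.abs(spectrum).sum() / image.size
    return ac_energy < threshold
```

A flat white frame yields almost zero alternating energy and is flagged as a non-image, so the downstream cross-power-spectrum computation can be skipped for it.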
Next, in step S15, the determination units 33A and 33B determine whether all frames have already been subjected to detection processing. If all frames have not yet been subjected to detection processing, the process returns to step S1 and then the process from step S1 to the subsequent steps is repeatedly performed.
If it is determined in step S6 that the values of the alternating components are equal to or larger than the threshold value, the cross power spectrum detection unit 51 detects a cross power spectrum in step S7. More specifically, the cross power spectrum detection unit 51 computes a cross power spectrum using one of the following equations (3) and (4).
In the above-described equations, fx and fy denote spatial frequencies, and the symbol * in G*(fx, fy) denotes the complex conjugate of G(fx, fy).
In step S8, the inverse Fourier transform unit 52 performs a two-dimensional fast inverse Fourier transform upon the cross power spectrum output from the cross power spectrum detection unit 51. More specifically, the inverse Fourier transform unit 52 computes the value s(x, y) represented in the following equation (5).
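Steps S7 and S8 (computing the cross power spectrum and applying an inverse FFT) can be sketched as follows. Normalizing the cross power spectrum to unit magnitude so that only phase remains is the defining step of SPOMF; the small epsilon added to avoid division by zero is an assumption:

```python
import numpy as np

def phase_correlation(f, g):
    """SPOMF-style correlation: normalize the cross power spectrum so that
    only phase information remains, then return its inverse FFT, i.e. the
    correlation surface s(x, y)."""
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    cross = F * np.conj(G)                    # cross power spectrum
    cross /= np.abs(cross) + 1e-12            # keep phase only
    return np.real(np.fft.ifft2(cross))
```

When one frame is simply a translation of the other, the resulting surface has a single sharp peak at the displacement; when the frames are unrelated (a scene change), the surface is flat and noisy.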
In step S9, the normalization section 16 normalizes the output s(x, y) of the inverse Fourier transform unit 52 so that the maximum value thereof can be one. More specifically, the following equation (6) is computed. The value represented in the denominator on the right-hand side in equation (6) denotes the maximum value of the absolute value of the value s(x, y).
In step S10, the counter section 17 counts the number of amplitudes having a value equal to or larger than a threshold value. In step S11, the determination section 18 determines whether the value counted in step S10 is equal to or larger than a threshold value that has been set in advance. If the counted value is equal to or larger than the threshold value, the determination section 18 determines that one of the images being processed is a scene change in step S12. On the other hand, if it is determined that the counted value is not equal to or larger than the threshold value, the determination section 18 determines that one of the images being processed is not a scene change in step S14.
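Steps S9 through S12 (normalization, amplitude counting, and the scene-change decision) can be sketched as follows; the function names and threshold values are illustrative assumptions:

```python
import numpy as np

def count_peaks(surface, amplitude_threshold):
    """Normalize the correlation surface so its maximum absolute value is 1,
    then count samples whose amplitude reaches the threshold. A sharp
    surface yields one such sample; a flat, low-correlation surface yields
    many."""
    normalized = surface / np.max(np.abs(surface))
    return int(np.count_nonzero(np.abs(normalized) >= amplitude_threshold))

def is_scene_change(surface, amplitude_threshold, count_threshold):
    """Many above-threshold amplitudes mean low correlation between the
    frames, i.e. a likely scene change."""
    return count_peaks(surface, amplitude_threshold) >= count_threshold
```

Note the inversion of intuition here: a *high* count indicates a *flat* surface and thus low correlation, which is why the count being equal to or larger than the reference value signals a scene change.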
That is, if the correlation level is low, the output of the inverse Fourier transform unit 52 which has been normalized by the normalization section 16 is represented as shown in
If it is determined in step S12 that one of the images being processed is a scene change, the storage section 19 stores the determination result in step S13. That is, the fact that one of the frames being processed (here, the frame whose image signal has been received by the image input section 11A) is a scene change is stored along with the image signal received by the image input section 11A in the storage section 19.
After the processing operations of steps S13 and S14 have been performed, the determination section 18 determines whether all frames have already been subjected to detection processing in step S15. If all frames have not yet been subjected to detection processing, the process returns to step S1 and the process from step S1 to the subsequent steps is repeatedly performed. If it is determined that all frames have already been subjected to detection processing, the scene change detection ends.
In the image processing apparatus shown in
That is, the image processing apparatus 1 shown in
That is, in the scene change detection performed by the image processing apparatus shown in
The configuration of the image processing apparatus shown in
Next, scene change detection performed by the image processing apparatus 1 shown in
In step S51, the image input sections 11A and 11B receive images forming different frames. In step S52, the region extraction sections 12A and 12B extract image signals corresponding to predetermined regions of the images represented by image signals transmitted from the image input sections 11A and 11B, respectively. In step S53, the image reducing sections 13A and 13B reduce the sizes of the regions represented by the image signals extracted by the region extraction sections 12A and 12B, respectively. This process from step S51 to S53 is the same as the process from step S1 to S3 shown in
Next, in step S54, the difference computation unit 91 computes the difference between the outputs of the image reducing sections 13A and 13B. In step S55, the determination unit 92 compares the difference computed in step S54 with a predetermined threshold value that has been set in advance so as to determine whether the difference value is equal to or larger than the threshold value. If the difference value is not equal to or larger than the threshold value, the process returns to step S51 and the process from step S51 to the subsequent steps is repeatedly performed. On the other hand, if the difference value is equal to or larger than the threshold value, the process proceeds to step S56. The process from step S56 to step S67 is the same as the process from step S4 shown in
That is, in this image processing apparatus according to another embodiment of the present invention, if it is determined in step S55 that the difference value is not equal to or larger than the threshold value, the process from step S56 to the subsequent steps is interrupted. Only if the difference value is equal to or larger than the threshold value, is the process from step S56 to the subsequent steps performed. If one of the images being processed is a scene change, the difference between the images forming two different frames often becomes equal to or larger than the threshold value. On the other hand, if one of the images being processed is not a scene change, the difference becomes comparatively small. Accordingly, by comparing a difference between images forming two different frames, whether one of the images being processed is a scene change can be easily detected. If it is determined by the simplified detection that one of the images being processed is not a scene change, the subsequent detailed scene change detection is interrupted. Accordingly, performance of unnecessary processing can be avoided.
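The pre-filter of steps S54 and S55 can be sketched as follows. Using the mean absolute difference between the reduced frames is an assumption, as the specification does not state the exact difference measure, and the function name is illustrative:

```python
import numpy as np

def coarse_change_candidate(prev, curr, threshold):
    """Cheap pre-filter: only when the mean absolute difference between the
    reduced frames reaches the threshold is the full SPOMF check worth
    running; otherwise the frames are almost certainly not a scene change."""
    diff = np.abs(curr.astype(float) - prev.astype(float)).mean()
    return float(diff) >= threshold
```

Because the difference is computed on the already-reduced images, this gate costs far less than the Fourier-domain processing it can skip.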
In the image processing apparatus shown in
That is, the image processing apparatus shown in
That is, in the image processing apparatus shown in
In the above-described embodiments, images forming frames are processed. However, a frame may be divided into a plurality of regions, and the divided regions may be processed.
More specifically, as shown in
That is, the configuration of the part of the image processing apparatus shown in
Next, scene change detection performed by the image processing apparatus shown in
In step S81, the image input section 11 receives an image. In step S82, the region extraction section 12 extracts an image signal corresponding to a predetermined region of the image represented by an image signal transmitted from the image input section 11. In step S83, the image reducing section 13 reduces the size of the region represented by the image signal extracted by the region extraction section 12. This process from step S81 to step S83 is the same as the process from step S1 to step S3 in
Next, in step S84, as shown in
In the subsequent process, the same processing as the above-described processing is performed upon each of the divided images. That is, in step S85, the Fourier transform unit 31A in the non-image detection section 14A performs a two-dimensional fast Fourier transform upon the image signal corresponding to the image of 32×32 pixels which has been transmitted from the dividing section 111. In step S86, the alternating component detection unit 32A detects an alternating component from the image signal transmitted from the Fourier transform unit 31A. In step S87, the determination unit 33A determines whether the value of the alternating component transmitted from the alternating component detection unit 32A is equal to or larger than a threshold value. If the value of the alternating component is not equal to or larger than the threshold value, the determination unit 33A interrupts the operation of a cross power spectrum detection unit 51A. In this case, the process proceeds to step S105 in which the determination unit 33A determines that the image being processed is not a scene change. Subsequently, in step S106, the determination unit 33A determines whether all frames have already been subjected to detection processing. If all frames have not yet been subjected to detection processing, the process returns to step S81 and the process from step S81 to the subsequent steps is repeatedly performed.
If it is determined in step S87 that the value of the alternating component is equal to or larger than the threshold value, a delay unit 121A delays the signal transmitted from the Fourier transform unit 31A by a time corresponding to the predetermined number of frames in step S88. The delayed signal is supplied to the cross power spectrum detection unit 51A. In step S89, the cross power spectrum detection unit 51A detects a cross power spectrum using signals forming different frames, one of which has been transmitted directly from the Fourier transform unit 31A and the other one of which has been transmitted from the Fourier transform unit 31A via the delay unit 121A. In step S90, an inverse Fourier transform unit 52A performs a two-dimensional fast inverse Fourier transform upon the output of the cross power spectrum detection unit 51A.
In step S91, the normalization section 16A normalizes the output of the inverse Fourier transform unit 52A. In step S92, the counter section 17A counts the number of amplitudes having a value equal to or larger than a threshold value. In step S93, the determination section 18A determines whether the value counted in step S92 is equal to or larger than a threshold value that has been set in advance. If the counted value is not equal to or larger than the threshold value, the process proceeds to step S105 in which the determination section 18A determines that the image being processed is not a scene change. Subsequently, in step S106, the determination section 18A determines whether all frames have already been subjected to detection processing. If all frames have not yet been subjected to detection processing, the process returns to step S81 and the process from step S81 to the subsequent steps is repeatedly performed.
If it is determined in step S93 that the counted value is equal to or larger than the threshold value, the same processing operations as those of step S85 to step S93 are performed upon the image signal corresponding to the other one of the divided images of 32×32 pixels in step S94 to step S102 by the Fourier transform unit 31B, the alternating component detection unit 32B, the determination unit 33B, a delay unit 121B, a cross power spectrum detection unit 51B, an inverse Fourier transform unit 52B, the normalization section 16B, the counter section 17B, and the determination section 18B.
In reality, the process from step S94 to step S102 is performed in parallel with the process from step S85 to step S93.
If it is determined in step S102 that the counted value is equal to or larger than the threshold value, that is, if it is determined that the numbers of amplitudes having a value equal to or larger than the threshold value in the regions of 32×32 pixels on the left and right sides in
After the processing of step S104 or step S105 has been performed, the determination section 18B determines whether all frames have already been subjected to detection processing in step S106. If all frames have not yet been subjected to detection processing, the process returns to step S81 and the process from step S81 to the subsequent steps is repeatedly performed. If it is determined that all frames have already been subjected to detection processing, the scene change detection ends.
Next, a method of detecting a representative image of a scene will be described.
The vector detection unit 211 detects a motion vector from the output of the inverse Fourier transform unit 52. The determination unit 212 included in the representative image detection section 201 detects a frame number corresponding to the minimum motion vector among motion vectors that have been detected by the vector detection unit 211.
Next, scene change detection and representative image detection performed by the image processing apparatus shown in
The process from step S121 to step S145 is the same as the process from step S31 shown in
In step S149, the determination unit 212 determines whether all frames have already been subjected to detection processing. If all frames have not yet been subjected to detection processing, the process returns to step S121 and the process from step S121 to the subsequent steps is repeatedly performed. If it is determined that all frames have already been subjected to detection processing, the scene change detection and the representative image detection end.
Thus, the most motionless frame (a frame in which the coordinates corresponding to the maximum amplitude are closest to the origin (0, 0) as shown in
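Reading a motion vector off the correlation surface, and selecting the most motionless frame, can be sketched as follows. Wrapping peak coordinates past the midpoint to negative offsets is standard for FFT-based correlation; the function names are assumptions:

```python
import numpy as np

def motion_vector(surface):
    """The displacement between the two frames is the location of the
    correlation peak, with indices past the midpoint wrapped to negative
    offsets (FFT periodicity)."""
    h, w = surface.shape
    y, x = np.unravel_index(np.argmax(np.abs(surface)), surface.shape)
    if y > h // 2:
        y -= h
    if x > w // 2:
        x -= w
    return y, x

def most_motionless(surfaces):
    """Index of the frame whose motion vector is closest to the origin,
    i.e. the candidate representative image of the scene."""
    return min(range(len(surfaces)),
               key=lambda i: np.hypot(*motion_vector(surfaces[i])))
```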
Next, scene change detection and representative image detection performed by the image processing apparatus shown in
In step S171, the image input section 11 receives an image. In step S172, the region extraction section 12 extracts an image signal corresponding to a predetermined region of the image represented by an image signal transmitted from the image input section 11. In step S173, the image reducing section 13 reduces the size of the region represented by the image signal extracted by the region extraction section 12. In step S174, the delay unit 101 delays the signal transmitted from the image reducing section 13 and outputs the delayed signal to the difference computation unit 91. In step S175, the difference computation unit 91 computes the difference between signals of images forming different frames, one of which has been transmitted directly from the image reducing section 13 and the other one of which has been transmitted from the image reducing section 13 via the delay unit 101. In step S176, the determination unit 92 determines whether the difference value computed in step S175 is equal to or larger than a threshold value that has been set in advance. If the difference value is not equal to or larger than the threshold value, the process returns to step S171 and the process from step S171 to the subsequent steps is repeatedly performed.
If it is determined in step S176 that the difference value is equal to or larger than the threshold value, the dividing section 111 divides an image represented by a signal transmitted from the image reducing section 13 in step S177. In the following process from step S178 to step S186, each of the non-image detection section 14A, the computation section 15A, the normalization section 16A, the counter section 17A, and the determination section 18A performs processing upon an image signal corresponding to one of the divided images. In the process from step S187 to step S195, each of the non-image detection section 14B, the computation section 15B, the normalization section 16B, the counter section 17B, and the determination section 18B performs processing upon an image signal corresponding to the other one of the divided images. These processes correspond to the process from step S85 shown in
If it is determined in step S180 or step S189 that the value of the alternating component is not equal to or larger than the threshold value, or in step S186 or step S195 that the counted value is not equal to or larger than the threshold value, the determination section 18B determines that the frame being processed is not a scene change in step S200. Subsequently, the determination section 18B determines whether all frames have already been subjected to detection processing in step S202. If all frames have not yet been subjected to detection processing, the process returns to step S171 and the process from step S171 to the subsequent steps is repeatedly performed.
If it is determined in step S195 that the counted value is equal to or larger than the threshold value, the determination section 18B determines that the frame being processed is a scene change in step S196. In step S197, the storage section 19 stores the result of the determination performed in step S196.
Thus, after the scene change detection has been performed, the vector detection unit 211 extracts a motion vector in step S198. More specifically, the vector detection unit 211 determines which of the outputs of the inverse Fourier transform units 52A and 52B is closer to the origin (which of the motion vectors is smaller), and extracts the determination result as a motion vector. In step S199, the determination unit 212 determines whether the motion vector extracted in step S198 is the minimum motion vector.
If the extracted motion vector is the minimum motion vector, the storage section 19 stores a frame number corresponding to the minimum motion vector in step S201. If the extracted motion vector is not the minimum motion vector, the processing of step S201 is skipped. Subsequently, the determination section 18B determines whether all frames have already been subjected to detection processing. If all frames have not yet been subjected to detection processing, the process returns to step S171 and the process from step S171 to the subsequent steps is repeatedly performed. If it is determined that all frames have already been subjected to detection processing, the scene change detection and the representative image detection end.
Thus, not only an image signal of the image received by the image input section 11 but also a scene change and a representative image (most motionless image) of each scene are stored in the storage section 19.
By reducing the sizes of images, the adverse effect of image rotation on the scene change detection can be prevented. However, the adverse effect can be further prevented by using a configuration shown in
That is, in an image processing apparatus shown in
On the other hand, the output of the Fourier transform unit 31B included in the non-image detection section 14B is supplied to the rotation/scaling transformation section 304. The rotation/scaling transformation section 304 performs rotation or scaling transformation upon the image signal transmitted from the Fourier transform unit 31B in accordance with a control signal transmitted from a rotation/scaling detection section 303, and outputs the processed signal to a Fourier transform unit 341 in the computation section 15. The Fourier transform unit 341 performs a Fourier transform upon the signal transmitted from the rotation/scaling transformation section 304, and supplies the processed signal to the other one of the input terminals of the cross power spectrum detection unit 51.
The output of the Fourier transform unit 31A included in the non-image detection section 14A is also supplied to the amplitude spectrum computation unit 311A included in a computation section 301A. The amplitude spectrum computation unit 311A computes an amplitude spectrum of the signal transmitted from the Fourier transform unit 31A. The log-polar coordinate transformation unit 312A in the computation section 301A transforms the computation result into log-polar coordinates and supplies the processed signal to a Fourier transform unit 331A included in a computation section 302. The operation of the amplitude spectrum computation unit 311A is controlled in accordance with the output of the determination unit 33A included in the non-image detection section 14A.
Similarly, the amplitude spectrum computation unit 311B included in a computation section 301B computes an amplitude spectrum of the output of the Fourier transform unit 31B included in the non-image detection section 14B, and outputs the computation result to the log-polar coordinate transformation unit 312B included in the computation section 301B. The log-polar coordinate transformation unit 312B transforms the signal transmitted from the amplitude spectrum computation unit 311B into log-polar coordinates, and outputs the processed signal to a Fourier transform unit 331B included in the computation section 302. The operation of the Fourier transform unit 331B is controlled in accordance with the output of the determination unit 33B included in the non-image detection section 14B.
The computation section 302 performs computation compliant with SPOMF. A cross power spectrum detection unit 332 included in the computation section 302 detects a cross power spectrum using the outputs of the Fourier transform units 331A and 331B. An inverse Fourier transform unit 333 performs a fast inverse Fourier transform upon the cross power spectrum output from the cross power spectrum detection unit 332. The rotation/scaling detection section 303 detects rotation or scaling of the image from the output of the inverse Fourier transform unit 333, and controls the rotation/scaling transformation section 304 on the basis of the detection result.
Like the case of the image processing apparatus shown in
Next, scene change detection performed by the image processing apparatus shown in
In step S234, the Fourier transform units 31A and 31B perform two-dimensional fast Fourier transforms upon the signals output from the image reducing sections 13A and 13B, respectively. More specifically, the following equations (7) and (8) are computed.
In step S235, the alternating component detection units 32A and 32B detect alternating components from the outputs of the Fourier transform units 31A and 31B, respectively. In step S236, the determination units 33A and 33B determine whether the values of the alternating components detected in step S235 are equal to or larger than a threshold value that has been set in advance. If the values of the alternating components are not equal to or larger than the threshold value, each of the determination units 33A and 33B determines that a frame being processed is not a scene change in step S252. In step S253, the determination units 33A and 33B determine whether all frames have already been subjected to detection processing. If all frames have not yet been subjected to detection processing, the process returns to step S231 and the process from step S231 to the subsequent steps is repeatedly performed.
If it is determined in step S236 that the values of the alternating components are equal to or larger than the threshold value, the amplitude spectrum computation units 311A and 311B compute amplitude spectra of the outputs of the Fourier transform units 31A and 31B, respectively, in step S237. More specifically, the following equations (9) and (10) are computed.
PF(fx,fy)=√(AF(fx,fy)²+BF(fx,fy)²) (9)
PG(fx,fy)=√(AG(fx,fy)²+BG(fx,fy)²) (10)
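Equations (9) and (10) are the magnitudes of the complex Fourier coefficients, with A and B denoting the real and imaginary parts. A minimal Python/NumPy sketch (illustrative only; the function name is an assumption):

```python
import numpy as np

def amplitude_spectrum(img):
    """Compute P(fx, fy) = sqrt(A^2 + B^2), where A and B are the real and
    imaginary parts of the 2-D FFT, as in equations (9) and (10)."""
    F = np.fft.fft2(img)
    return np.sqrt(F.real ** 2 + F.imag ** 2)   # equivalent to np.abs(F)
```

Using the amplitude spectrum discards phase, which is what makes the subsequent rotation/scaling estimate insensitive to translation between the frames.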
Next, in step S238, the log-polar coordinate transformation units 312A and 312B transform the outputs of the amplitude spectrum computation units 311A and 311B into log-polar coordinates, respectively. More specifically, equations (9) and (10) are transformed into PF (ρ, θ) and PG (ρ, θ) using the following equations (11) and (12).
x=ρ cos(θ) (11)
y=ρ sin(θ) (12)
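The log-polar resampling of step S238 can be sketched as follows. This is a simple nearest-neighbor sketch, assuming a centered amplitude spectrum and illustrative grid sizes; the specification does not prescribe these details.

```python
import numpy as np

def to_log_polar(spec, n_rho=64, n_theta=64):
    """Resample a centered amplitude spectrum onto a log-polar grid.
    Each output cell (i, j) reads the input at
        x = rho * cos(theta), y = rho * sin(theta)   # equations (11), (12)
    with rho spaced logarithmically, so that scaling of the image becomes
    a shift along the rho axis and rotation a shift along the theta axis."""
    spec = np.fft.fftshift(spec)                 # put the DC component at the center
    h, w = spec.shape
    cy, cx = h / 2.0, w / 2.0
    max_rho = np.log(min(cx, cy))
    rho = np.exp(np.linspace(0.0, max_rho, n_rho))
    theta = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    out = np.zeros((n_rho, n_theta))
    for i, r in enumerate(rho):
        for j, t in enumerate(theta):
            x = int(round(cx + r * np.cos(t)))
            y = int(round(cy + r * np.sin(t)))
            if 0 <= x < w and 0 <= y < h:
                out[i, j] = spec[y, x]           # nearest-neighbor sampling
    return out
```

Only half the angular range [0, π) is sampled because the amplitude spectrum of a real image is symmetric about the origin.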
In step S239, the Fourier transform units 331A and 331B perform two-dimensional fast Fourier transforms upon the outputs of the log-polar coordinate transformation units 312A and 312B, respectively. More specifically, the following equations (13) and (14) are computed.
In step S240, the cross power spectrum detection unit 332 detects a cross power spectrum using the outputs of the Fourier transform units 331A and 331B. That is, one of the following equations (15) and (16) is computed.
In step S241, the inverse Fourier transform unit 333 performs a two-dimensional fast inverse Fourier transform upon the cross power spectrum output from the cross power spectrum detection unit 332. More specifically, the following equation (17) is computed.
In step S242, the rotation/scaling detection section 303 calculates a scaling ratio and a rotation angle from a signal output from the inverse Fourier transform unit 333. In the output of the inverse Fourier transform unit 333, ρ denotes a scaling ratio, and θ denotes a rotation angle. In step S243, the rotation/scaling transformation section 304 performs scaling and rotation control upon the signal transmitted from the Fourier transform unit 31B on the basis of the scaling ratio ρ and the rotation angle θ which have been transmitted from the rotation/scaling detection section 303. Consequently, the scaling and rotation of the output of the Fourier transform unit 31B is controlled so as to correspond to the output of the Fourier transform unit 31A.
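The extraction of the scaling ratio ρ and rotation angle θ from the peak of the inverse Fourier transform output (steps S239 through S242) can be sketched as follows. This is an illustrative Python/NumPy sketch under assumed grid conventions (log-spaced ρ axis, θ spanning [0, π)); the function name and signature are not from the specification.

```python
import numpy as np

def detect_rotation_scaling(lp_f, lp_g, max_rho):
    """Given two log-polar amplitude spectra (rho along axis 0, theta along
    axis 1), phase correlation yields a peak whose offset along the rho
    axis encodes the scaling ratio and whose offset along the theta axis
    encodes the rotation angle."""
    F = np.fft.fft2(lp_f)
    G = np.fft.fft2(lp_g)
    cps = F * np.conj(G)
    cps /= np.abs(cps) + 1e-12                   # normalized cross power spectrum
    corr = np.real(np.fft.ifft2(cps))
    d_rho, d_theta = np.unravel_index(np.argmax(corr), corr.shape)
    n_rho, n_theta = lp_f.shape
    if d_rho > n_rho // 2:                       # wrap to signed offsets
        d_rho -= n_rho
    if d_theta > n_theta // 2:
        d_theta -= n_theta
    scale = np.exp(d_rho * max_rho / n_rho)      # shift on a log-spaced axis
    angle = d_theta * (np.pi / n_theta)          # theta axis spans [0, pi)
    return float(scale), float(angle)
```

The inverse of the detected scale and angle is then applied to the second image so that it matches the first, which is the role of the rotation/scaling transformation section 304 in step S243.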
In step S244, the Fourier transform unit 341 performs a Fourier transform upon the output of the rotation/scaling transformation section 304. In step S245, the cross power spectrum detection unit 51 detects a cross power spectrum using signals transmitted from the Fourier transform unit 31A and the Fourier transform unit 341. In step S246, the inverse Fourier transform unit 52 performs a two-dimensional fast inverse Fourier transform upon the cross power spectrum output from the cross power spectrum detection unit 51.
In step S247, the normalization section 16 normalizes the output of the inverse Fourier transform unit 52. That is, the following equation (18) is computed.
In step S248, the counter section 17 counts the number of amplitudes having a value equal to or larger than a threshold value. In step S249, the determination section 18 determines whether the value counted in step S248 is equal to or larger than a threshold value that has been set in advance. If the counted value is not equal to or larger than the threshold value, the determination section 18 determines that one of the frames being processed is not a scene change in step S252. Subsequently, in step S253, the determination section 18 determines whether all frames have already been subjected to detection processing. If all frames have not yet been subjected to detection processing, the process returns to step S231 and the process from step S231 to the subsequent steps is repeatedly performed.
If it is determined in step S249 that the counted value is equal to or larger than the threshold value, the determination section 18 determines that one of the frames being processed is a scene change. Subsequently, in step S251, the storage section 19 stores the determination result. In step S253, the determination section 18 determines whether all frames have already been subjected to detection processing. If all frames have not yet been subjected to detection processing, the process returns to step S231. If it is determined that all frames have already been subjected to detection processing, the scene change detection ends.
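The decision rule of steps S247 through S250 can be sketched as follows. The exact form of the normalization in equation (18) is not reproduced here; division by the maximum amplitude is an assumption, as are the function name and threshold parameters.

```python
import numpy as np

def is_scene_change(corr, count_threshold, amp_threshold=0.5):
    """Normalize the inverse-FFT correlation surface, count the cells whose
    normalized amplitude reaches amp_threshold, and declare a scene change
    when the count is at least count_threshold.  A matching frame pair
    concentrates its energy in one sharp peak (small count); unrelated
    frames spread it over many cells (large count)."""
    amp = np.abs(corr)
    amp = amp / (amp.max() + 1e-12)              # assumed max-normalization for eq. (18)
    count = int(np.count_nonzero(amp >= amp_threshold))
    return count >= count_threshold
```

Both thresholds would be tuned in advance, as the specification notes for the threshold value used in step S249.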
Thus, since scaling or a rotation angle of an image is controlled, the image processing apparatus shown in
The average computation units 361A and 361B compute averages of the outputs of the image reducing sections 13A and 13B, respectively. The difference computation units 362A and 362B compute differences between the outputs of the image reducing sections 13A and 13B and the outputs of the average computation units 361A and 361B, respectively.
A computation section 352 performs computation upon the outputs of the difference computation units 362A and 362B. The computation section 352 is provided with the matching unit 371 and a multiplying unit 372. The matching unit 371 computes the sum of absolute differences of outputs of the difference computation units 362A and 362B. The multiplying unit 372 multiplies the sum of absolute differences computed by the matching unit 371 by −1.
Like the above-described cases, the output of the computation section 352 is processed by the normalization section 16, the counter section 17, the determination section 18, and the storage section 19.
Next, scene change detection performed by the image processing apparatus shown in
In step S301, the image input sections 11A and 11B receive images forming different frames. In step S302, the region extraction sections 12A and 12B extract image signals corresponding to predetermined regions of the images represented by image signals transmitted from the image input sections 11A and 11B, respectively. In step S303, the image reducing sections 13A and 13B reduce the sizes of the regions represented by the image signals extracted by the region extraction sections 12A and 12B, respectively.
In step S304, the average computation unit 361A performs computation of an average for a single image corresponding to a size-reduced image represented by the image signal transmitted from the image reducing section 13A. Similarly, the average computation unit 361B performs computation of an average for a single image corresponding to a size-reduced image represented by the image signal transmitted from the image reducing section 13B. These average values are represented by avg(f(x, y)) and avg(g(x, y)), respectively.
In step S305, the difference computation units 362A and 362B compute differences between the outputs of the image reducing sections 13A and 13B and the outputs of the average computation units 361A and 361B, respectively. More specifically, the following equations (19) and (20) are computed. In the following equations, a variable marked with the symbol "′" denotes the mean-subtracted version of the corresponding unmarked variable.
f′(x,y)=f(x,y)−avg(f(x,y)) (19)
g′(x,y)=g(x,y)−avg(g(x,y)) (20)
In step S306, the matching unit 371 calculates the sum of absolute differences of outputs of the difference computation units 362A and 362B. More specifically, the following equation (21) is computed. In step S307, the multiplying unit 372 multiplies the sum of absolute differences calculated by the matching unit 371 by −1. That is, the following equation (22) is computed.
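Steps S304 through S307 can be sketched as follows. This is an illustrative Python/NumPy sketch; the function name is an assumption.

```python
import numpy as np

def negated_sad_score(f, g):
    """Subtract each frame's own mean (equations (19), (20)), take the sum
    of absolute differences between the results (equation (21)), and
    multiply by -1 (equation (22)) so that a larger score indicates a
    better match, as with the correlation-based detectors."""
    f_prime = f - f.mean()                       # equation (19)
    g_prime = g - g.mean()                       # equation (20)
    sad = np.sum(np.abs(f_prime - g_prime))      # equation (21)
    return -sad                                  # equation (22)
```

Subtracting the per-frame mean makes the score insensitive to uniform brightness differences between the two frames, and negating the sum lets the same normalization, counting, and thresholding stages be reused unchanged.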
In step S308, the normalization section 16 normalizes the output of the multiplying unit 372. More specifically, the following equation (23) is computed.
In step S309, the counter section 17 counts the number of amplitudes having a value equal to or larger than a threshold value. In step S310, the determination section 18 determines whether the value counted in step S309 is equal to or larger than a threshold value. If the counted value is not equal to or larger than the threshold value, the determination section 18 determines that one of the frames being processed is not a scene change in step S313. Subsequently, in step S314, the determination section 18 determines whether all frames have already been subjected to detection processing. If all frames have not yet been subjected to detection processing, the process returns to step S301 and the process from step S301 to the subsequent steps is repeatedly performed.
If it is determined in step S310 that the counted value is equal to or larger than the threshold value, the determination section 18 determines that one of the frames being processed is a scene change in step S311. In step S312, the storage section 19 stores the result of the determination performed in step S311. In step S314, the determination section 18 determines whether all frames have already been subjected to detection processing. If all frames have not yet been subjected to detection processing, the process returns to step S301 and the process from step S301 to the subsequent steps is repeatedly performed. If it is determined that all frames have already been subjected to detection processing, the scene change detection ends.
In this image processing apparatus, a non-image detection section, a simplified detection section, a dividing section, or a representative image detection section may be added.
The CPU 421 is also connected to an input/output interface 425 via the bus 424. The input/output interface 425 is connected to an input unit 426 configured with a keyboard, a mouse, and a microphone, and an output unit 427 configured with a display and a speaker. The CPU 421 performs various processing operations in accordance with instructions input from the input unit 426, and outputs the result of processing to the output unit 427.
The storage unit 428 connected to the input/output interface 425 is configured with, for example, a hard disk, and stores a program to be executed by the CPU 421 and various pieces of data. A communication unit 429 communicates with an external apparatus via a network such as the Internet or a local area network. A program may be acquired via the communication unit 429 and may be stored in the storage unit 428.
When a removable medium 431 such as a magnetic disk, an optical disc, a magneto-optical disk, or a semiconductor memory is attached to a drive 430 connected to the input/output interface 425, the drive 430 drives the removable medium 431 to acquire a program or data recorded thereon. The acquired program or data is transferred to the storage unit 428 as appropriate, and is then recorded in the storage unit 428.
If the processing flow is performed by software, a program constituting the software is installed from a program recording medium onto a computer embedded in dedicated hardware or onto, for example, a general-purpose personal computer capable of performing various functions when various programs are installed thereon.
As shown in
In this description, the steps describing the program to be stored in the program recording medium do not have to be executed in the chronological order described above. The steps may be executed concurrently or individually.
In this description, a system denotes an entire apparatus composed of a plurality of devices.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.
Number | Date | Country | Kind |
---|---|---|---|
P2006-132712 | May 2006 | JP | national |