The disclosure of Japanese Patent Application No. 2009-139250, which was filed on Jun. 10, 2009, is incorporated herein by reference.
1. Field of the Invention
The present invention relates to a motion detecting apparatus. More particularly, the present invention relates to a motion detecting apparatus which is applied to a video camera and which detects a motion of an object scene with reference to a repeatedly fetched object scene image.
2. Description of the Related Art
According to one example of this type of apparatus, in order to detect a motion of a subject, four detection blocks, each of which has a representative point, are assigned to an imaging surface. In each of the detection blocks, correlation values of a plurality of pixels forming a current field are calculated with reference to the representative pixel of a previous field. A motion vector expressing the motion of the subject is created based on the thus-calculated correlation values. Herein, the four detection blocks are assigned to the imaging surface in a mutually overlapping manner. Thereby, even when the subject is small, a motion correcting process such as subject tracking can be executed with high accuracy.
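For concreteness, the related-art scheme can be pictured as in the following sketch. This is a minimal illustration rather than the related-art circuit itself: the search range, the function name, and the use of NumPy arrays for the fields are assumptions, and a single representative pixel per detection block is assumed.

```python
import numpy as np

def representative_point_match(prev_field, cur_field, rep_y, rep_x, search=8):
    """Find the displacement whose correlation value (absolute difference
    against the previous field's representative pixel) is smallest."""
    rep_val = int(prev_field[rep_y, rep_x])   # representative pixel of the previous field
    best, best_corr = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            corr = abs(int(cur_field[rep_y + dy, rep_x + dx]) - rep_val)
            if best_corr is None or corr < best_corr:
                best_corr, best = corr, (dy, dx)
    return best   # per-block displacement; the four blocks' results form the motion vector
```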
However, the above-described apparatus does not contemplate a variation in the resolution of the image data; when the resolution of the image data varies, the motion detecting capability may be degraded.
A motion detecting apparatus according to the present invention, comprises: a fetcher which repeatedly fetches an object scene image having a designated resolution; an assigner which assigns a plurality of areas, each of which has a representative point, to the object scene image in a manner to have an overlapping amount that differs depending on a magnitude of the designated resolution; a divider which divides each of a plurality of images respectively corresponding to the plurality of areas assigned by the assigner, into a plurality of partial images, by using the representative point as a base point; a detector which detects a difference in brightness between a pixel corresponding to the representative point and surrounding pixels, from each of the plurality of partial images divided by the divider; and a creator which creates motion information indicating a motion of the object scene image fetched by the fetcher, based on a detection result of the detector.
The above described features and advantages of the present invention will become more apparent from the following detailed description of the embodiment when taken in conjunction with the accompanying drawings.
With reference to
Each of the plurality of images respectively corresponding to the plurality of areas is divided into the plurality of partial images by using each of the representative points as the base point, and the difference in brightness between the pixel corresponding to the representative point and the surrounding pixels is detected from each of the partial images. In this way, a situation in which the load imposed on the detection of the difference in brightness increases due to an increase in the overlapping amount of the plurality of areas can be avoided. Thereby, it becomes possible to inhibit a decrease in motion detecting capability caused by a variation in resolution.
With reference to
When a power source is turned on, a CPU 32 starts a driver 20d in order to execute a moving-image fetching process. In response to a vertical synchronization signal Vsync generated at every 1/60th of a second, the driver 20d exposes the imaging surface and reads out electric charges produced on the imaging surface in a raster scanning manner. From the image sensor 16, raw image data representing the object scene is outputted at a frame rate of 60 fps.
A pre-processing circuit 22 performs various processes, such as digital clamp, pixel defect correction, electronic zoom, and gain control, on the raw image data from the image sensor 16, and outputs raw image data having a resolution corresponding to an electronic zoom magnification. Herein, the electronic zoom process is executed by, in particular, a zoom circuit 22z. The raw image data outputted from the pre-processing circuit 22 is written into a raw image area 36a of an SDRAM 36 through a memory control circuit 34.
When the object scene image corresponding to the raw image data outputted from the pre-processing circuit 22 has expansion shown in
A post-processing circuit 38 detects the attributes of the extraction area EX with reference to the register RGST1, reads out the portion of the raw image data belonging to the extraction area EX, out of the raw image data accommodated in the raw image area 36a, through the memory control circuit 34 at every 1/60th of a second, and performs various processes, such as color separation, white balance adjustment, and YUV conversion, on the read-out raw image data. As a result, image data corresponding to a YUV format is created at every 1/60th of a second. The created image data is written into a YUV image area 36b of the SDRAM 36 through the memory control circuit 34.
An LCD driver 40 repeatedly reads out the image data accommodated in the YUV image area 36b, executes an electronic zoom process in which a difference in resolution between the read-out image data and an LCD monitor 42 is taken into consideration, and drives the LCD monitor 42 based on the image data on which the electronic zoom process has been performed. As a result, a real-time moving image (through image) representing the object scene is displayed on a monitor screen.
A simple Y generating circuit 24 simply converts the raw image data outputted from the pre-processing circuit 22 into Y data. The converted Y data is applied to an AF evaluating circuit 26, an AE evaluating circuit 28, and a motion detecting circuit 30.
Out of the Y data outputted from the simple Y generating circuit 24, the AF evaluating circuit 26 integrates a high-frequency component of the portion belonging to the evaluation area (not shown) at every 1/60th of a second, and applies the integrated value, i.e., a focus evaluation value, to the CPU 32. Based on the applied focus evaluation value, the CPU 32 executes a so-called continuous AF process, and places a focus lens 14 at a focal point. As a result, the clarity of the moving image outputted from the LCD monitor 42 is continuously improved.
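The focus evaluation can be pictured as below. This is a hedged sketch under assumed names: the text does not specify how the high-frequency component is extracted, so a simple horizontal neighbor difference stands in for it.

```python
import numpy as np

def focus_evaluation(y_data, area):
    """Integrate a high-frequency component of Y data over the AF evaluation area."""
    top, left, height, width = area
    patch = y_data[top:top + height, left:left + width].astype(np.int32)
    high_freq = np.abs(np.diff(patch, axis=1))   # neighbor difference as a crude high-pass
    return int(high_freq.sum())                  # larger value = sharper image
```

The continuous AF process would then move the focus lens 14 in whichever direction increases this value, typically by hill climbing, though the text does not name the search strategy.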
Out of the Y data outputted from the simple Y generating circuit 24, the AE evaluating circuit 28 integrates the portion belonging to the evaluation area at every 1/60th of a second, and outputs the integrated value, i.e., a luminance evaluation value. Based on the luminance evaluation value outputted from the AE evaluating circuit 28, the CPU 32 calculates an EV value from which an appropriate exposure amount can be obtained, and sets an aperture amount and an exposure time that define the calculated EV value to the drivers 20c and 20d, respectively. As a result, the brightness of the moving image outputted from the LCD monitor 42 is adjusted appropriately.
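The exposure adjustment can be sketched in the same spirit. The target level and the APEX-style even split below are assumptions; the text states only that an EV value is calculated from the luminance evaluation value and then divided into an aperture amount and an exposure time.

```python
import math

def update_exposure(luminance_eval, target_eval, current_ev):
    """Correct the EV value so the integrated luminance approaches its target."""
    ev = current_ev + math.log2(luminance_eval / target_eval)  # one EV step = factor of two
    av = ev / 2.0          # aperture value, for the driver 20c (assumed split)
    tv = ev - av           # time (shutter) value, for the driver 20d (EV = AV + TV)
    return ev, av, tv
```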
When a zoom operation is performed toward a key input device 48, the CPU 32 calculates a target zoom magnification under a zoom control task. The target zoom magnification is calculated with reference to the zoom magnification at the current time point and a zoom operating manner.
The CPU 32 invalidates the zoom operation when the calculated target zoom magnification cannot be set, and adjusts the zoom magnification by using the zoom lens 12 or the zoom circuit 22z when the calculated target zoom magnification can be set.
When it is possible to adjust the zoom magnification by an optical zoom process, a position of the zoom lens 12 is adjusted to match the target zoom magnification. When it is not possible to adjust the zoom magnification by the optical zoom process, the electronic zoom magnification set to the zoom circuit 22z is adjusted to the target zoom magnification. Thereby, the magnification of the moving image outputted from the LCD monitor 42 is changed. It is noted that an upper limit value of the electronic zoom magnification is “1.5”.
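The division of a target magnification between the two zoom processes might look like the following. The optical range is an assumption introduced for illustration; the electronic upper limit of 1.5 comes from the text.

```python
OPTICAL_MAX = 10.0     # assumed maximum magnification of the zoom lens 12
ELECTRONIC_MAX = 1.5   # upper limit of the electronic zoom magnification (from the text)

def apply_zoom(target_magnification):
    """Return (optical, electronic) settings, or None to invalidate the operation."""
    if target_magnification > OPTICAL_MAX * ELECTRONIC_MAX:
        return None                                   # impossible to set
    optical = min(target_magnification, OPTICAL_MAX)  # prefer the optical zoom process
    electronic = target_magnification / optical       # remainder goes to the zoom circuit 22z
    return optical, electronic
```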
When a recording start operation is performed toward the key input device 48, the CPU 32 starts up an I/F 44 under an imaging task in order to start a recording process. The I/F 44 reads out the image data accommodated in the YUV image area 36b at every 1/60th of a second, and writes the read-out image data into a moving image file within a recording medium 46. In this case, the image data is written in a compressed state. The I/F 44 is stopped by the CPU 32 when a recording end operation is performed toward the key input device 48. As a result, the recording process of the image data is ended.
As described above, the object scene image corresponding to the raw image data outputted from the pre-processing circuit 22 has expansion shown in
According to
The motion detecting circuit 30 detects the motion information of the object scene in each of the motion detection blocks MD_1 to MD_5 at every 1/60th of a second based on the Y data outputted from the simple Y generating circuit 24, and applies the detected motion information to the CPU 32. The CPU 32 executes the following processes under an image stabilizing task.
That is, the CPU 32 creates a total motion vector based on the applied motion information, and determines, based on the created total motion vector, whether a motion on the imaging surface in a direction orthogonal to an optical axis is caused by a camera shake or by a pan/tilt movement. When the motion on the imaging surface is caused by the camera shake, the CPU 32 updates the description in the register RGST1 so that the extraction area EX is assigned to a position at which the total motion vector is compensated. Therefore, if the camera shake occurs on the imaging surface, then the extraction area EX moves on the imaging surface as shown in
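The image stabilizing task can be summarized as below. The shake/pan discrimination rule is an assumption (the text does not give the criterion), so a simple magnitude threshold stands in for it, and the sign convention of the compensation is likewise illustrative.

```python
SHAKE_LIMIT = 16   # assumed threshold in pixels; the actual criterion is not given

def stabilize(ex_position, total_vector):
    """Move the extraction area EX so that a camera-shake motion is compensated."""
    dx, dy = total_vector
    if abs(dx) > SHAKE_LIMIT or abs(dy) > SHAKE_LIMIT:
        return ex_position               # treated as pan/tilt: EX stays put
    x, y = ex_position
    return (x + dx, y + dy)              # shift EX so the total vector is canceled
```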
The motion detecting circuit 30 is configured as shown in
The distributor 52 detects the attributes of the motion detection blocks MD_1 to MD_5 with reference to the register RGST2, and distributes Y_L data to motion-information creating circuits 54 to 58. The Y_L data belonging to the motion detection blocks MD_1 and MD_4 are applied to the motion-information creating circuit 54, the Y_L data belonging to the motion detection block MD_3 is applied to the motion-information creating circuit 56, and the Y_L data belonging to the motion detection blocks MD_2 and MD_5 are applied to the motion-information creating circuit 58.
The motion-information creating circuit 54 monitors the motion of the object scene captured in each of the motion detection blocks MD_1 and MD_4, outputs a minimum correlation value MIN_1 (described later) as motion information of the motion detection block MD_1, and at the same time, outputs a minimum correlation value MIN_4 (described later) as motion information of the motion detection block MD_4.
The motion-information creating circuit 56 monitors the motion of the object scene captured in the motion detection block MD_3, and outputs a minimum correlation value MIN_3 (described later) as motion information of the motion detection block MD_3.
The motion-information creating circuit 58 monitors the motion of the object scene captured in each of the motion detection blocks MD_2 and MD_5, outputs a minimum correlation value MIN_2 (described later) as motion information of the motion detection block MD_2, and at the same time, outputs a minimum correlation value MIN_5 (described later) as motion information of the motion detection block MD_5.
Each of the motion-information creating circuits 54 to 58 is configured as shown in
Furthermore, with reference to
Returning to
On the other hand, the distributor 64 defines a horizontal axis and a vertical axis (where the origin is the representative pixel) in each of the minute blocks SBK, SBK, SBK, . . . specified by referring to the register RGST2, and distributes the Y_L data belonging to the first to fourth quadrants to memories 66 to 72, respectively. As shown in
Each of difference-absolute-value calculating circuits 74 to 80 calculates a difference absolute value between a Y_L data value of each pixel of a current frame stored in each of the memories 66 to 72 and a Y_L data value of the representative pixel of a previous frame stored in the representative pixel memory 62, corresponding to each of the minute blocks SBK, SBK, . . . .
From the difference-absolute-value calculating circuit 74, a difference absolute value between the Y_L data value of each pixel belonging to the first quadrant and the Y_L data value of the representative pixel is outputted, and from the difference-absolute-value calculating circuit 76, a difference absolute value between the Y_L data value of each pixel belonging to the second quadrant and the Y_L data value of the representative pixel is outputted. Similarly, from the difference-absolute-value calculating circuit 78, a difference absolute value between the Y_L data value of each pixel belonging to the third quadrant and the Y_L data value of the representative pixel is outputted, and from the difference-absolute-value calculating circuit 80, a difference absolute value between the Y_L data value of each pixel belonging to the fourth quadrant and the Y_L data value of the representative pixel is outputted.
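A sketch of the quadrant-wise difference calculation follows. Whether the axes themselves belong to a given quadrant is an assumption, as is the use of NumPy slicing in place of the memories 66 to 72.

```python
import numpy as np

def quadrant_differences(cur_block, prev_rep_val, rep_y, rep_x):
    """Difference absolute values between each current-frame pixel of one minute
    block SBK and the previous frame's representative pixel, split by quadrant."""
    b = cur_block.astype(np.int32)
    quadrants = {
        1: b[:rep_y + 1, rep_x:],      # first quadrant  (memory 66)
        2: b[:rep_y + 1, :rep_x + 1],  # second quadrant (memory 68)
        3: b[rep_y:, :rep_x + 1],      # third quadrant  (memory 70)
        4: b[rep_y:, rep_x:],          # fourth quadrant (memory 72)
    }
    return {q: np.abs(quad - int(prev_rep_val)) for q, quad in quadrants.items()}
```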
With reference to
A correlation-value calculating circuit 82 shown in
A minimum-correlation-value extracting circuit 84 extracts a correlation value indicating a minimum value, out of the previously-calculated correlation values CR (1, 1) to CR (64, 64), and outputs the extracted correlation value as a minimum correlation value MIN_N (N: any one of 1 to 5).
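Since the description of the correlation-value calculating circuit 82 is truncated above, the following sketch assumes the common representative-point aggregation: CR(i, j) is taken as the sum, over all minute blocks of one motion detection block, of the difference absolute value at relative position (i, j), and MIN_N is the smallest entry. The 64x64 grid follows from the range CR (1, 1) to CR (64, 64).

```python
import numpy as np

def minimum_correlation(diff_maps):
    """diff_maps: per-minute-block 64x64 arrays of difference absolute values,
    indexed by relative position (i, j) from the representative point."""
    cr = np.zeros((64, 64), dtype=np.int64)
    for d in diff_maps:
        cr += d                                   # CR(i, j): accumulate across minute blocks
    i, j = np.unravel_index(cr.argmin(), cr.shape)
    return int(cr[i, j]), (i + 1, j + 1)          # MIN_N and its 1-indexed position
```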
The CPU 32 updates the description of the registers RGST1 and RGST2 when the electronic zoom magnification is changed under the zoom control task.
The size of the extraction area EX described in the register RGST1 is adjusted to a size shown in
Moreover, the sizes of the motion detection areas MD_1 to MD_5 described in the register RGST2 are adjusted to sizes shown in
Moreover, a manner of assigning the minute block SBK described in the register RGST2 is adjusted to a manner shown in
The minute blocks SBK, SBK, . . . adjacent in each of the horizontal direction and the vertical direction are placed so as not to overlap one another when the electronic zoom magnification is “1.0”, placed in a manner to have an overlapping amount of ¼ in each of the horizontal direction and the vertical direction when the electronic zoom magnification is “1.25”, and placed in a manner to have an overlapping amount of ½ in each of the horizontal direction and the vertical direction when the electronic zoom magnification is “1.5”. Thus, the overlapping amount of two adjacent minute blocks SBK increases along with an increase in the electronic zoom magnification (in other words, along with a decrease in the resolution of the object scene image).
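In code, the assignment rule reads as follows. The overlap fractions come from the text; the block size of 32 pixels is only an illustrative assumption.

```python
# Overlapping amounts stated for the three electronic zoom magnifications.
OVERLAP = {1.0: 0.0, 1.25: 0.25, 1.5: 0.5}

def sbk_pitch(block_size, magnification):
    """Pitch between adjacent minute blocks SBK in each direction."""
    return int(block_size * (1.0 - OVERLAP[magnification]))  # smaller pitch = more overlap

# e.g. with 32-pixel blocks: pitch 32 at x1.0, 24 at x1.25, 16 at x1.5
```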
The CPU 32 processes a plurality of tasks including an imaging task for a moving image shown in
With reference to
With reference to
With reference to
When the adaptable process is the optical zoom process, the process advances to a step S39, and in this step, the zoom lens 12 is moved to a position corresponding to the target zoom magnification. Upon completion of the process in the step S39, the process returns to the step S31. When the adaptable process is the electronic zoom process, the process advances to a step S41, and in this step, the electronic zoom magnification set to the zoom circuit 22z is adjusted to the target zoom magnification. In a step S43, the description (the size of the extraction area EX) of the register RGST1 is updated so as to be adapted to the adjusted electronic zoom magnification. In a step S45, the description (the sizes of the motion detection areas MD_1 to MD_5 and the manner of assigning the minute block SBK) of the register RGST2 is updated so as to be adapted to the adjusted electronic zoom magnification. Upon completion of the updating process in the step S45, the process returns to the step S31.
As can be seen from the above-described explanation, the simple Y generating circuit 24 repeatedly fetches the object scene image having the designated resolution. The register RGST2 assigns a plurality of minute blocks SBK, SBK, . . . , each of which has a representative point, to the object scene image in a manner to have an overlapping amount that differs depending on the magnitude of the designated resolution. The distributor 64 divides each of a plurality of images respectively corresponding to the plurality of minute blocks SBK, SBK, . . . , into a plurality of partial images by using the representative point as the base point. Each of the difference-absolute-value calculating circuits 74 to 80 calculates the difference absolute value expressing a difference in brightness between the pixel corresponding to the representative point and surrounding pixels, from each of the plurality of divided partial images. The minimum-correlation-value extracting circuit 84 creates the minimum correlation value MIN_N equivalent to the motion information of the object scene image, based on the calculated difference absolute values.
Each of a plurality of images respectively corresponding to a plurality of minute blocks SBK, SBK, . . . is divided into a plurality of partial images by using the representative point as the base point, and the difference absolute value expressing the difference in brightness between the pixel corresponding to the representative point and the surrounding pixels is calculated from each of the partial images. In this way, it becomes possible to avoid a situation in which the load imposed on the calculation of the difference absolute value increases due to an increase in the overlapping amount of the plurality of minute blocks SBK, SBK, . . . . Thereby, it becomes possible to inhibit a decrease in motion detecting capability caused by a variation in resolution.
It is noted that in this embodiment, the motion detecting circuit 30 is started up in an imaging mode. However, optionally, the motion detecting circuit 30 may be additionally started up in a reproduction mode in which the moving image recorded on the recording medium 46 is reproduced. In this case, however, the size of the extraction area EX in the reproduction mode needs to be made smaller than the size of the extraction area EX in the imaging mode. Moreover, as shown in
Although the present invention has been described and illustrated in detail, it is clearly understood that the same is by way of illustration and example only and is not to be taken by way of limitation, the spirit and scope of the present invention being limited only by the terms of the appended claims.
| Number | Date | Country | Kind |
| --- | --- | --- | --- |
| 2009-139250 | Jun 2009 | JP | national |
| Number | Name | Date | Kind |
| --- | --- | --- | --- |
| 5247586 | Gobert et al. | Sep 1993 | A |
| 6396538 | Kobayashi et al. | May 2002 | B1 |
| 20080056613 | Hatanaka et al. | Mar 2008 | A1 |
| Number | Date | Country |
| --- | --- | --- |
| 61201587 | Sep 1986 | JP |
| 61269475 | Nov 1986 | JP |
| 06046318 | Feb 1994 | JP |
| 07322126 | Aug 1995 | JP |
| 09062847 | Mar 1997 | JP |
| Number | Date | Country |
| --- | --- | --- |
| 20110103645 A1 | May 2011 | US |