The present application claims priority to Japanese Patent Application JP 2007-239487 filed in the Japan Patent Office on Sep. 14, 2007, the entire contents of which are hereby incorporated herein by reference.
The present application relates to an image processing apparatus and an image processing method wherein a motion vector between two different screens is detected. In the present specification, the term screen is used to signify an image formed from image data for one frame or one field and displayed on a display apparatus.
A block matching technique for determining a motion vector between two screens from the image information itself has a long history. Development of the block matching technique has progressed principally in regard to pan-tilt detection and image pickup object tracking of a television camera and moving picture encoding of the MPEG (Moving Picture Experts Group) system. In the 1990s, application was attempted over a wide range, including sensorless camera shake correction and noise removal or noise reduction (hereinafter referred to simply as NR) by superposition of images upon low illuminance image pickup.
Block matching is a method wherein a motion vector between two screens, namely a reference screen as a noticed screen and a base screen (hereinafter referred to as the target screen) based on which a motion of the reference screen is detected, is calculated by calculating correlations between the reference screen and the target screen with regard to blocks which are rectangular regions of a predetermined size. Two cases are available: a case wherein the target screen precedes the reference screen in time (for example, in the case of motion detection according to the MPEG system) and another case wherein the reference screen precedes the target screen in time (for example, in the case of noise reduction by superposition of image frames hereinafter described).
It is to be noted that, while, in the present specification, the term screen is used to signify an image formed from image data for one frame or one field as described above, for the convenience of description it is assumed in the following description that a screen is formed from one frame, and a screen is hereinafter referred to as a frame. Accordingly, the reference screen is hereinafter referred to as the reference frame, and the target screen is hereinafter referred to as the target frame.
In the block matching, a reference frame 101 is searched to detect a block having a high correlation to a target block 102. A block 103 detected on the reference frame 101 as having the highest correlation is referred to as a motion compensation block.
The motion vector 104 corresponds to the positional displacement between the target block 102 and the motion compensation block 103, including both the amount and the direction of the positional displacement. More specifically, if a projection image block 109 of the target block 102 is supposed to be positioned at the same position on the reference frame 101 as the position of the target block 102 on the target frame 100, then the motion vector 104 corresponds to the positional displacement between the position, for example, the position of the center, of the projection image block 109 and the position, for example, the position of the center, of the motion compensation block 103, and has a positional displacement amount and a positional displacement directional component.
An outline of the block matching process is described. A projection image block 109 of each target block is supposed to be positioned on the reference frame 101 at the same position as the position of the target block 102 on the target frame 100, as indicated by a broken line. The coordinates of the center of the projection image block 109 are determined as an origin 105 for motion detection, and, on the assumption that the motion vector 104 exists within a certain range from the origin 105, a predetermined range centered at the origin 105 is set as a search range 106.
Then, a block 108 hereinafter referred to as reference block having a size equal to that of the target block 102 is set on the reference frame 101. Then, the position of the reference block 108 is shifted by a unit distance of one pixel or a plurality of pixels, for example, in the horizontal direction and the vertical direction within the search range 106. Accordingly, in the search range 106, a plurality of reference blocks 108 are set.
Here, to shift the reference block 108 in the search range 106 in the present example signifies that, since the origin 105 is the position of the center of the target block, the position of the center of the reference block 108 is shifted within the search range 106. Thus, a pixel which composes the reference block 108 may protrude from the search range 106.
Then, a vector 107, hereinafter referred to as a reference vector, which represents the positional displacement amount and the positional displacement direction of the reference block 108 from the position of the target block 102 is set for each position taken by the reference block 108.
The value of each reference vector 107 accordingly depends upon the position which the corresponding reference block 108 assumes on the reference frame 101.
For example, where the reference block 108 is at a position displaced by a one-pixel distance in the X direction from the position of the target block 102, the reference vector 107 is a vector (1, 0). Meanwhile, if the reference block 108 is displaced by a three-pixel distance in the X direction and by a two-pixel distance in the Y direction, then the reference vector 107 is a vector (3, 2).
In particular, if it is assumed that the positions of the target block 102 and the reference block 108 are the positions of the centers thereof, respectively, then each reference vector 107 indicates the positional displacement between the position of the center of the projection image block 109 of the target block 102 and the position of the center of the reference block 108.
When the reference block 108 is moved within the search range 106, it is the position of the center of the reference block 108 that moves within the search range 106. Since each reference block 108 includes a plurality of pixels in the horizontal direction and the vertical direction as described hereinabove, the maximum range over which the pixels of a reference block 108 which is an object of the block matching process with the target block 102 can extend is a matching processing range 110 which is greater than the search range 106.
Then, the position of the reference block 108 detected as the block having the highest correlation with the image contents of the target block 102 is regarded as the position, in the reference frame 101, to which the target block 102 of the target frame 100 has moved; this reference block is the motion compensation block 103 described above. Then, the positional displacement amount between the position of the detected motion compensation block 103 and the position of the target block 102 is detected as the motion vector 104, an amount which includes a directional component.
Here, while the correlation value representative of the degree of correlation between the target block 102 and the reference block 108 which moves within the search range 106 is calculated basically using corresponding pixel values of the target block 102 and the reference block 108, a root mean square method and other various methods are available as the calculation method.
As a correlation value popularly used in order to calculate a motion vector, for example, the sum total of the absolute values of the differences between the luminance values of the pixels in the target block 102 and the luminance values of the corresponding pixels in the reference block 108, taken over all pixels in the target block 102, is used. This sum total is called a difference absolute value sum and is hereinafter referred to as a SAD (Sum of Absolute Difference) value.
Where a SAD value is used as the correlation value, a lower SAD value indicates higher correlation. Accordingly, among the reference blocks 108 moved within the search range 106, the reference block 108 at the position at which the SAD value takes its minimum value is the highest correlation reference block having the highest correlation. This highest correlation reference block is detected as the motion compensation block 103, and the positional displacement amount of the detected motion compensation block 103 from the position of the target block 102 is detected as the motion vector.
As described hereinabove, in block matching, the positional displacement amount of each of the plurality of reference blocks 108 set in the search range 106 from the position of the target block 102 is represented by a reference vector 107 as an amount which includes a directional component. The reference vector 107 of each reference block 108 has a value which depends upon the position of the reference block 108 on the reference frame 101. As described hereinabove, in the block matching, the reference vector of the reference block 108 whose SAD value as a correlation value exhibits the lowest value is detected as the motion vector 104.
Thus, in the block matching, SAD values between a plurality of reference blocks 108 set within the search range 106 and the target block 102 (such SAD values are hereinafter referred to simply as SAD values regarding the reference block 108 for the simplified description) are usually stored in a memory in a corresponding relationship to reference vectors 107 which depend upon the position of the individual reference block 108 (reference vectors 107 which depend upon the position of the reference blocks 108 are hereinafter referred to as reference vectors 107 of the reference blocks 108 for the simplified description). Then, a reference block 108 having the lowest SAD value is detected from among the SAD values regarding all reference blocks 108 stored in the memory to detect the motion vector 104.
A table in which the correlation values, that is, the SAD values, regarding the reference blocks 108 are stored in a corresponding relationship to the reference vectors 107, which depend upon the positions of the plurality of reference blocks 108 set within the search range 106, is hereinafter referred to as a correlation value table. In the present example, since a SAD value, which is a difference absolute value sum, is used as the correlation value, the correlation value table is hereinafter referred to as a difference absolute value sum table (hereinafter referred to as a SAD table).
A SAD table TBL stores, as its table elements, the SAD values calculated for the individual reference blocks 108, and each element of the SAD table is hereinafter referred to as a correlation value table element.
It is to be noted that the positions of the target block 102 and the reference block 108 in the foregoing description signify arbitrary particular positions, for example, the positions of the centers, of the blocks. The reference vector 107 indicates the displacement amount including the direction between the position of the projection image block 109 of the target block 102 in the reference frame 101 and the position of the reference block 108.
Then, since the reference vector 107 corresponding to the reference block 108 is the positional displacement of each reference block 108 from the position of the projection image block 109 corresponding to the target block 102 on the reference frame 101, if the position of the reference block 108 is specified, then also the value of the reference vector is specified in accordance with the specified position. Accordingly, if the address of a correlation value table element of the reference block in the memory of the matching processing range 110 is specified, then the corresponding reference vector is specified.
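For illustration only, the search and the SAD table just described can be put into a short sketch. The following Python fragment is not part of the description above; the function name block_matching_sad, the array types and the frame-boundary handling are assumptions made here, and luminance values are assumed to be held in two-dimensional arrays.

```python
import numpy as np

def block_matching_sad(target_frame, reference_frame, top, left, block, search):
    """Full-search block matching with a SAD table (illustrative sketch only).

    (top, left) : upper-left corner of the target block on the target frame.
    block       : block size in pixels (the target block is block x block).
    search      : reference vectors run from -search to +search in x and y.
    """
    target_block = target_frame[top:top + block, left:left + block].astype(np.int32)

    # SAD table: one correlation value table element per reference vector.
    size = 2 * search + 1
    sad_table = np.full((size, size), np.iinfo(np.int32).max, dtype=np.int64)

    for dy in range(-search, search + 1):        # vertical component of the reference vector
        for dx in range(-search, search + 1):    # horizontal component of the reference vector
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > reference_frame.shape[0] \
                    or x + block > reference_frame.shape[1]:
                continue                          # reference block would leave the frame
            reference_block = reference_frame[y:y + block, x:x + block].astype(np.int32)
            sad_table[dy + search, dx + search] = np.abs(target_block - reference_block).sum()

    # The reference vector with the minimum SAD value is taken as the motion vector.
    iy, ix = np.unravel_index(np.argmin(sad_table), sad_table.shape)
    motion_vector = (ix - search, iy - search)    # (x, y) displacement in pixels
    return motion_vector, sad_table
```

Note that the table index (iy, ix) and the reference vector determine each other, which corresponds to the relationship between the memory address of a correlation value table element and its reference vector described above.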
It is to be noted that the SAD value may be calculated for more than two target blocks at the same time. If the number of target blocks to be processed at the same time increases, then the processing speed increases. However, since the scale of hardware for calculating the SAD value increases, the increase of the speed of the processing and the increase of the circuit scale have a tradeoff relationship to each other.
Incidentally, a technique for correcting camera shake in a sensorless manner using the block matching technique described above is proposed, for example, in Japanese Patent Laid-Open No. Hei 6-086149 (hereinafter referred to as Patent Document 1). According to the technique of Patent Document 1, an effective image area is set within a picked up image and is shifted in response to shake of the screen. The technique has been developed principally for moving pictures.
On the other hand, development of techniques for sensorless camera shake correction of still pictures started around the year 2000. In sensorless camera shake correction, a plurality of images are picked up with such a high shutter speed that camera shake does not occur, and the picked up images of low illuminance are superposed with each other, taking the influence of camera shake into consideration, to produce a single still image of high illuminance. This technique is disclosed, for example, in Japanese Patent Laid-Open No. 2001-86398 (hereinafter referred to as Patent Document 2).
The technique of Patent Document 2 is based on the concept that, while simply applying a gain to images of low illuminance also increases the noise, superposing successively picked up images with each other disperses the noise, which consists of random components. From the viewpoint of noise reduction, the technique can be considered close to a frame NR technique for moving pictures.
The frame NR technique for moving pictures superposes a current frame and a reference frame on the real time basis, and the current frame and the reference frame are superposed always in a 1:1 relationship.
On the other hand, with a sensorless camera shake correction technique for still pictures, the influence of camera shake decreases as the shutter speed increases, and as the number of images to be superposed increases, higher sensitivity can be anticipated. Accordingly, a plurality of reference frames are normally used for one current frame.
When block matching is carried out for a plurality of successively picked up images to detect motion vectors between the images, and the motion vectors are then used to superpose a target frame image and a reference frame image while the superposition position of the images is compensated, in order to obtain a picked up image whose noise is reduced, it is necessary to temporarily store the target frame image and the reference frame image in a frame buffer.
Since, particularly in the case of still pictures described above, higher sensitivity can be anticipated as the number of images to be superposed is increased, an image memory for a plurality of frames for retaining a plurality of images to be superposed is required. Therefore, an increased storage capacity is required for the image memory, and consequently, there is a problem that the cost increases and the circuit scale increases.
Therefore, it is desirable to provide an image processing apparatus and method which can reduce the storage capacity of an image memory where block matching between two screens is carried out.
According to an embodiment there is provided an image processing apparatus wherein a plurality of reference blocks of a size equal to that of a target block which has a predetermined size and includes a plurality of pixels set to a predetermined position in a target screen are set in a search range set on a reference screen and a motion vector is detected based on a positional displacement amount, on the screen, of that one of the reference blocks which has the highest correlation to the target block from the target block, including:
a compression section configured to compress image data of the reference screen in a unit of a divisional block where one screen is divided into a plurality of divisional blocks;
a first storage section configured to store the image data compressed by the compression section;
a decompression decoding section configured to read out, from among the compressed image data of the reference screen stored in the first storage section, those of the compressed image data in the divisional block units which include a matching processing range corresponding to the search range, from the first storage section and decompress and decode the read out image data;
a second storage section configured to store the image data decompressed and decoded by the decompression decoding section; and
a mathematical operation section configured to extract the image data of the reference block from among the image data stored in the second storage section and mathematically operate a correlation value between the reference block and the target block.
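The data flow summarized above, from the compression section and the first storage section through the decompression decoding section and the second storage section to the mathematical operation section, may be easier to follow with a rough sketch. The Python fragment below is only an illustration under assumptions made here: zlib stands in for whatever codec the compression section would actually use, the divisional block size DIV and the function names are invented for this sketch, and frames are assumed to be 8-bit luminance arrays.

```python
import zlib
import numpy as np

DIV = 64  # illustrative divisional block size in pixels (not specified by the summary above)

def compress_reference(frame):
    """Compress an 8-bit reference frame in divisional block units and store the
    result, standing in for the compression section and the first storage section."""
    store = {}
    h, w = frame.shape
    for by in range(0, h, DIV):
        for bx in range(0, w, DIV):
            blk = frame[by:by + DIV, bx:bx + DIV]
            store[(by, bx)] = (blk.shape, zlib.compress(blk.tobytes()))
    return store

def decode_matching_range(store, top, left, bottom, right):
    """Read out only those divisional blocks that overlap the matching processing
    range, decompress them and keep them in a small working buffer, standing in
    for the decompression decoding section and the second storage section."""
    buffer = {}
    for (by, bx), (shape, data) in store.items():
        if by >= bottom or by + shape[0] <= top or bx >= right or bx + shape[1] <= left:
            continue  # this divisional block lies outside the matching processing range
        buffer[(by, bx)] = np.frombuffer(zlib.decompress(data), dtype=np.uint8).reshape(shape)
    return buffer
```

The mathematical operation section would then cut each reference block out of this small decoded buffer rather than out of a full uncompressed reference frame, which is where the saving in storage capacity comes from.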
According to another embodiment there is provided an image processing apparatus wherein a plurality of successive images are superposed in a unit of a first block where one screen is divided into a plurality of first blocks to obtain an image whose noise is reduced and wherein a target block is set as one of the first blocks in a target screen from between two screens to be superposed; a plurality of reference blocks of a size same as that of the target block are set in a search range set to a reference screen which is the other one of the two screens to be superposed; based on a positional displacement amount, on the screen, of that one of the reference blocks which has the highest correlation to the target block from the target block, a motion vector of the first block unit is detected; and while the detected motion vector of the first block unit is used to compensate for a motion of the image for each of the first blocks, superposition of the images is carried out, including:
a compression section configured to compress image data of the reference screen in a unit of a second divisional block where one screen is divided into a plurality of second divisional blocks;
a first storage section configured to store the image data compressed by the compression section;
a decompression decoding section configured to read out, from among the compressed image data of the reference screen stored in the first storage section, those of the compressed image data in the second divisional block units which include a matching processing range corresponding to the search range, from the first storage section and decompress and decode the read out image data;
a second storage section configured to store the image data decompressed and decoded by the decompression decoding section; and
a mathematical operation section configured to extract the image data of the reference block from among the image data stored in the second storage section and mathematically operate a correlation value between the reference block and the target block.
According to yet another embodiment there is provided an image pickup apparatus, including:
an image processing apparatus wherein a plurality of successive images are superposed in a unit of a first block where one screen is divided into a plurality of first blocks to obtain an image whose noise is reduced and wherein a target block is set as one of the first blocks in a target screen from between two screens to be superposed; a plurality of reference blocks of a size same as that of the target block are set in a search range set to a reference screen which is the other one of the two screens to be superposed; based on a positional displacement amount, on the screen, of that one of the reference blocks which has the highest correlation to the target block from the target block, a motion vector of the first block unit is detected; and while the detected motion vector of the first block unit is used to compensate for a motion of the image for each of the first blocks, superposition of the images is carried out, including:
a compression section configured to compress image data of the reference screen in a unit of a second divisional block where one screen is divided into a plurality of second divisional blocks;
a first storage section configured to store the image data compressed by the compression section;
a decompression decoding section configured to read out, from among the compressed image data of the reference screen stored in the first storage section, those of the compressed image data in the second divisional block units which include a matching processing range corresponding to the search range, from the first storage section and decompress and decode the read out image data;
a second storage section configured to store the image data decompressed and decoded by the decompression decoding section;
a mathematical operation section configured to extract the image data of the reference block from among the image data stored in the second storage section and mathematically operate a correlation value between the reference block and the target block; and
a recording section configured to record the data of the image whose noise is reduced by the superposition into a recording medium.
According to yet another embodiment there is provided an image processing method wherein a plurality of reference blocks of a size equal to that of a target block which has a predetermined size and includes a plurality of pixels set to a predetermined position in a target screen are set in a search range set on a reference screen and a motion vector is detected based on a positional displacement amount, on the screen, of that one of the reference blocks which has the highest correlation to the target block from the target block, including the steps of:
compressing image data of the reference screen in a unit of a divisional block where one screen is divided into a plurality of divisional blocks and storing the compressed image data into a first storage section;
reading out, from among the compressed image data of the reference screen stored in the first storage section, those of the compressed image data in the divisional block units which include a matching processing range corresponding to the search range, from the first storage section, decompressing and decoding the read out image data, and storing the decompressed and decoded image data into a second storage section; and
extracting the image data of the reference block from among the image data stored in the second storage section and mathematically operating a correlation value between the reference block and the target block.
According to yet another embodiment there is provided an image processing method wherein a plurality of successive images are superposed in a unit of a first block where one screen is divided into a plurality of first blocks to obtain an image whose noise is reduced and wherein a target block is set as one of the first blocks in a target screen from between two screens to be superposed; a plurality of reference blocks of a size same as that of the target block are set in a search range set to a reference screen which is the other one of the two screens to be superposed; based on a positional displacement amount, on the screen, of that one of the reference blocks which has the highest correlation to the target block from the target block, a motion vector of the first block unit is detected; and while the detected motion vector of the first block unit is used to compensate for a motion of the image for each of the first blocks, superposition of the images is carried out, including the steps of:
compressing image data of the reference screen in a unit of a second divisional block where one screen is divided into a plurality of second divisional blocks and storing the compressed image data into a first storage section;
reading out, from among the compressed image data of the reference screen stored in the first storage section, those of the compressed image data in the second divisional block units which include a matching processing range corresponding to the search range, from the first storage section, decompressing and decoding the read out image data and storing the decompressed and decoded image data into a second storage section; and
extracting the image data of the reference block from among the image data stored in the second storage section and mathematically operating a correlation value between the reference block and the target block.
With the image processing apparatus, the storage capacity of the image memory used for the mathematical operation of a correlation value in block matching can be reduced, and efficient reading out of image data can be anticipated. Therefore, even if the image size increases or the number of images to be added in, for example, an NR process increases, the image processing can be carried out while the cost is suppressed.
Additional features and advantages are described herein, and will be apparent from the following Detailed Description and the figures.
In the following, an image processing apparatus which uses an image processing method according to the present application is described. The image processing apparatus is formed as an image pickup apparatus. Further, the process which is carried out using a detected motion vector by the image processing apparatus is superposition of a plurality of images to achieve noise reduction.
In the image pickup apparatus described below, a plurality of images picked up successively, for example, images P1, P2 and P3, are positioned relative to each other using motion detection and motion compensation and are then superposed with each other.
In the description given below, the operation of superposing a plurality of images using motion detection and motion compensation to reduce noise is referred to as an NR (Noise Reduction) operation, and an image whose noise is reduced by the NR operation is referred to as an NR image.
In the present specification, a screen or image for which noise reduction should be carried out is defined as the target screen or target frame, and a screen to be superposed on it is defined as the reference screen or reference frame. Two images picked up successively are displaced in position by camera shake of the image pickup person. Therefore, in order to superpose the two images, positioning of them is essential. Here, what should be taken into consideration is that not only shake of the entire screen, such as camera shake, but also movement of an image pickup object within the screen exists.
Therefore, in order to raise the noise reduction effect also with regard to an image pickup object, positioning must be carried out for each of a plurality of blocks 102 produced by dividing the target frame 100.
Accordingly, in the present embodiment, a block motion vector 104B, which is a motion vector in a unit of a target block 102, is detected for all of the blocks 102, and, for each of the blocks 102, the corresponding block motion vector 104B is used to carry out positioning before superposition of the images is carried out.
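As a rough sketch of this per-block positioning and superposition, the fragment below fetches, for each target block, the block of the reference frame displaced by that block's motion vector and blends it with the target block. The fixed addition ratio alpha, the block size and the names are illustrative only; the actual handling of the addition ratio is described later.

```python
import numpy as np

def superpose_with_block_vectors(target, reference, block_vectors, block=16, alpha=0.5):
    """Blend each target block with the reference block displaced by that block's
    motion vector 104B. block_vectors maps a block's top-left corner (by, bx) to
    its (dx, dy) motion vector; blocks without a usable vector are left untouched."""
    h, w = target.shape
    out = target.astype(np.float64).copy()
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            dx, dy = block_vectors.get((by, bx), (0, 0))
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue                          # compensation block would leave the frame
            mc = reference[y:y + block, x:x + block].astype(np.float64)
            out[by:by + block, bx:bx + block] = \
                (1 - alpha) * out[by:by + block, bx:bx + block] + alpha * mc
    return out
```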
In the image pickup apparatus of the present embodiment, upon image pickup of a still picture, a plurality of images are picked up at a high speed, and the first image is determined as the target frame 100 while a predetermined number of succeeding picked up images are determined as reference frames 101 to carry out superposition of the images. Then, an image obtained by the superposition is recorded as the still picture picked up image. In particular, if the image pickup person depresses the shutter button of the image pickup apparatus, then the predetermined number of images are picked up at a high speed, and on the image or frame picked up first, a plurality of images or frames picked up later in time are superposed.
On the other hand, upon image pickup of moving pictures, an image of the current frame being currently outputted from the image pickup device is determined as the image of the target frame 100, and an image preceding it in time is determined as the image of a reference frame 101. Accordingly, in order to carry out reduction of noise of the image of the current frame, the image of the preceding frame is superposed on the current frame.
It is to be noted that the superposition methods described above assume that picked up images are outputted from the image pickup device at an ordinary frame rate, for example, 60 fps.
However, where the image pickup apparatus is configured such that picked up images are outputted at a higher speed, for example, at a high frame rate of 240 fps from the image pickup device, it is possible to obtain, also upon moving picture image pickup, a picked up image signal of the frame rate of 60 fps by superposing every four images to produce one moving picture frame. Naturally, also it is possible to superpose two image frames of picked up moving picture images of 240 fps in a similar manner as just described to obtain a picked up image signal, whose noise is reduced, of the frame rate of 240 fps.
It is to be noted that, in the present embodiment, in order to make it possible to carry out superposition of images at a higher degree of accuracy, a block motion vector is detected in higher accuracy than the accuracy of the pixel pitch of a processing object image (the last-mentioned accuracy is hereinafter referred to as pixel accuracy), that is, higher accuracy of a pitch smaller than the pixel pitch of the original screen (target frame) (the higher accuracy is hereinafter referred to as sub pixel accuracy). For motion vector calculation in the sub pixel accuracy, in the present embodiment, a motion vector determined in the pixel accuracy and neighboring reference vectors are used to carry out an interpolation process.
The image pickup apparatus includes a CPU (Central Processing Unit) 1, an image signal processing system 10, a user operation inputting section 3, an image memory section 4 and a recording and reproduction apparatus section 5, which are connected to one another by a system bus 2.
If the image signal processing system 10 of the image pickup apparatus receives an image pickup and recording starting operation through the user operation inputting section 3, then it carries out recording processing of picked up image data as described below.
In the image signal processing system 10, incoming light from an image pickup object through a camera optical system not shown including an image pickup lens 10L is irradiated upon an image pickup device 11 to carry out image pickup. In the present embodiment, the image pickup device 11 is formed from a CCD (Charge Coupled Device) imager. It is to be noted that the image pickup device 11 may be formed from a CMOS (Complementary Metal Oxide Semiconductor) imager.
In the image pickup apparatus, if an image pickup and recording starting operation is carried out, then an image inputted through the image pickup lens 10L is converted into a picked up image signal by the image pickup device 11. Thus, an analog picked up image signal which is a raw signal of the Bayer array formed from the three primary colors of red (R), green (G) and blue (B) is outputted as a signal synchronized with a timing signal from a timing signal generation section 12. The outputted analog picked up image signal is supplied to a pre-processing section 13, by which pre-processes such as defect correction and γ (gamma) correction are carried out. Then, a resulting analog picked up image signal is supplied to a data conversion section 14.
The data conversion section 14 converts the analog picked up image signal as a RAW signal inputted thereto into a digital picked up image signal (YC data) formed from a luminance signal component Y and color difference components Cb/Cr. The digital picked up image signal is supplied to an image correction and resolution conversion section 15. The image correction and resolution conversion section 15 converts the digital picked up image signal into a digital picked up image signal of a resolution designated through the user operation inputting section 3 and supplies the resulting digital picked up image signal to the image memory section 4 through the system bus 2.
If the image pickup instruction received through the user operation inputting section 3 is a still picture image pickup instruction arising from depression of the shutter button, then the digital picked up image signal obtained by the resolution conversion by the image correction and resolution conversion section 15 is written for a plurality of frames described above into the image memory section 4. Then, after the picked up image data of the images for the plural frames are written into the image memory section 4, image data of the target frame and image data of reference frames are read out by a motion detection and motion compensation section 16. Then, such a block matching process as hereinafter described is carried out for the image data read in the motion detection and motion compensation section 16 to detect a motion vector. Then, such an image superposition process as hereinafter described is carried out based on the detected motion vector by an image superposition section 17. As a result of the superposition, image data of an NR image whose noise is reduced is stored into the image memory section 4.
Then, the image data of the NR image of the result of the superposition stored in the image memory section 4 is codec converted by a still picture codec section 18 and stored into a recording medium of the recording and reproduction apparatus section 5 such as, for example, a DVD (Digital Versatile Disk) or a hard disk through the system bus 2. In the present embodiment, the still picture codec section 18 carries out an image compression coding process for still pictures of the JPEG (Joint Photographic Experts Group) system.
Further, upon such still image pickup, before the shutter button is depressed, image data from the image correction and resolution conversion section 15 is supplied to an NTSC (National Television System Committee) encoder 20, by which it is converted into a standard color image signal of the NTSC system. Then, the standard color image signal is supplied to a monitor display apparatus 6 formed, for example, from an LCD (Liquid Crystal Display) apparatus, and a monitor image upon still image pickup is displayed on a display screen of the monitor display apparatus 6.
On the other hand, if the image pickup instruction received through the user operation inputting section 3 is a moving picture image pickup instruction originating from depression of a moving picture recording button, then image data obtained by resolution conversion is written into the image memory section 4 and sent on the real time basis to the motion detection and motion compensation section 16. By the motion detection and motion compensation section 16, such a block matching process as hereinafter described is carried out to detect a motion vector. Then, such a superposition process of images as hereinafter described is carried out based on the detected motion vector by the image superposition section 17. Then, image data of an NR image obtained by noise reduction as a result of the superposition is stored into the image memory section 4.
Then, the image data of the NR image as a result of the superposition stored in the image memory section 4 is outputted to the display screen of the monitor display apparatus 6 through the NTSC encoder 20 and is then codec converted by a moving picture codec section 19. Thereafter, the image data is supplied to the recording and reproduction apparatus section 5 through the system bus 2 and recorded on a recording medium such as a DVD or a hard disk. In the present embodiment, the moving picture codec section 19 carries out an image compression coding process for moving pictures of the MPEG (Moving Picture Experts Group) system.
The picked up image data recorded on the recording medium of the recording and reproduction apparatus section 5 is read out in response to a reproduction starting operation through the user operation inputting section 3 and is supplied to and decoded for reproduction by the moving picture codec section 19. Then, the image data decoded for reproduction is supplied to the monitor display apparatus 6 through the NTSC encoder 20, and a reproduction image of the image data is displayed on the display screen of the monitor display apparatus 6. It is to be noted that, though not shown, an output image signal of the NTSC encoder 20 can be led out to the outside through an image output terminal.
The motion detection and motion compensation section 16 described above can be configured by hardware and also can be configured using a DSP (Digital Signal Processor). Further, the motion detection and motion compensation section 16 may be configured as software processing by the CPU 1.
Motion Detection and Motion Compensation Section 16
In the present embodiment, the motion detection and motion compensation section 16 carries out motion vector detection basically by carrying out a block matching process using SAD values as described hereinabove.
According to a popular motion vector detection process by block matching of the past, a reference block is successively shifted in a unit of a pixel (in a unit of one pixel or in a unit of a plurality of pixels), and the SAD values regarding the reference blocks at the individual shift positions are calculated. Then, a SAD value which has the lowest value from among the thus calculated SAD values is detected, and a motion vector is detected based on the position of the reference block having the lowest SAD value.
However, since, by such a conventional motion vector detection process as described above, the reference block is shifted in a unit of a pixel within a search range, the number of times of the matching process for calculating a SAD value increases in proportion to the search range. This gives rise to a problem that a long matching processing time period is required and an increased capacity is required for the SAD table.
Thus, in the present embodiment, a reduced image is formed from a target image or target frame and used to carry out block matching, and block matching on the original target image is carried out based on a result of the motion detection on the reduced image. It is to be noted that the reduced image is hereinafter referred to as a reduced phase and the original image which is not in a reduced form is hereinafter referred to as a base phase, respectively. Accordingly, in the present embodiment, block matching is carried out on the reduced phase, and then block matching is carried out on the base phase using a result of the block matching on the reduced phase.
Then, the reference frame is reduced in accordance with the same image reduction magnification 1/n as the target frame. In particular, the target frame is reduced to 1/n in each of the horizontal and vertical directions to produce a reduced phase target frame, and the target block 102 is correspondingly reduced to a reduced phase target block 133. Similarly, the reference frame 101 is reduced to 1/n to produce a reduced phase reference frame 135, and the search range on the reference frame is reduced by the same magnification to obtain a reduced phase search range 137 on the reduced phase reference frame 135.
It is to be noted that, while, in the example described above, the image reduction magnifications of the target frame and the reference frame are equal to each other, in order to reduce the amount of mathematical operation, different image reduction magnifications may be applied to the target frame or image and the reference frame or image while the numbers of pixels of the two frames are adjusted to each other by a process such as pixel interpolation to carry out matching.
Further, while the reduction magnifications in the horizontal direction and the vertical direction are set equal to each other in the example described above, they may be made different from each other. For example, if the reduction magnification in the horizontal direction is set to 1/n and the reduction magnification in the vertical direction is set to 1/m (m is a positive number, and n≠m), the reduced screen has a size of 1/n×1/m of the original screen.
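As a rough illustration of how such reduced phase images might be produced, the sketch below uses simple box averaging; the description above does not specify the reduction filter, so the averaging, the function names and the search-range helper are assumptions made here.

```python
import numpy as np

def make_reduced_phase(frame, n, m=None):
    """Produce a reduced phase image by averaging n x m pixel cells
    (1/n horizontally, 1/m vertically). Box averaging is only a stand-in
    for whatever reduction filter would actually be used."""
    m = n if m is None else m
    h, w = frame.shape
    h, w = h - h % m, w - w % n                # crop so the frame divides evenly
    cells = frame[:h, :w].reshape(h // m, m, w // n, n)
    return cells.mean(axis=(1, 3)).astype(frame.dtype)

def reduce_search_range(search_x, search_y, n, m):
    """The search range shrinks by the same magnifications on the reduced phase."""
    return max(1, search_x // n), max(1, search_y // m)
```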
Further, in the present embodiment, reduced phase reference vectors 138 representative of positional displacement amounts from the motion detection origin 105 on the reduced phase reference frame 135 are set within the reduced phase search range 137. Then, the correlation between the reduced phase reference blocks 139 at the positions indicated by the reduced phase reference vectors 138 and the reduced phase target block 133 is evaluated by SAD value calculation, and a reduced phase motion vector 136 is detected based on the position of the reduced phase reference block which exhibits the minimum SAD value.
In this instance, since block matching is carried out on the reduced phase, the number of reduced phase reference block positions (reduced phase reference vectors) with regard to which the SAD value is to be calculated on the reduced phase reference frame 135 can be reduced. Consequently, the processing time can be shortened by an amount corresponding to the reduction in the number of times of calculation of the SAD value, that is, the number of times of matching processing, and the scale of the SAD table can be reduced.
Since the matching is carried out on the reduced phase, the accuracy of the reduced phase motion vector 136 is limited by the reduction magnification; even when the reduced phase motion vector 136 is multiplied by n, the resulting vector has only an accuracy of n pixels on the base phase reference frame 134 and does not directly give a base phase motion vector 104 of the one-pixel accuracy.
However, it is apparent that, in the base phase reference frame 134, a base phase motion vector 104 of the one-pixel accuracy exists in the proximity of a motion vector obtained by increasing the reduced phase motion vector 136 to n times.
Therefore, in the present embodiment, a base phase search range 140 is set within a small range of the base phase reference frame 134 which is supposed to include the base phase motion vector 104, around the position indicated by a motion vector, that is, a base phase reference vector 141, obtained by multiplying the reduced phase motion vector 136 by n, and a base phase matching processing range 144 is set in response to the thus set base phase search range 140.
Then, base phase reference blocks are set within the base phase search range 140, and block matching between the base phase target block and the base phase reference blocks is carried out to detect the base phase motion vector 104 of the one-pixel accuracy.
The base phase search range 140 and the base phase matching processing range 144 thus set may be very small ranges in comparison with a search range 137′ and a matching processing range 143′ obtained by multiplying the reduced phase search range 137 and the reduced phase matching processing range 143 by n, which is the reciprocal of the reduction magnification.
Accordingly, if a block matching process were carried out only on the base phase without carrying out hierarchical matching, it would be necessary to set a large number of reference blocks in the search range 137′ and the matching processing range 143′ on the base phase and to carry out the mathematical operation for determining a correlation value with the target block for each of them. In the hierarchical matching process, however, it is only necessary to carry out the matching process within the very small ranges described above.
Therefore, the number of base phase reference blocks to be set in the base phase search range 140 and the base phase matching processing range 144, which are small ranges, is very small. Consequently, the number of times of execution of the matching process, that is, the number of times of mathematical operation of a correlation value, and the number of SAD values to be retained can be made very small, so the processing speed can be raised and, besides, the scale of the SAD table can be reduced.
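The two-stage search described above, a wide search on the reduced phase followed by a narrow search on the base phase, can be sketched as follows. This is an illustrative outline only: the helper names, the fixed base phase search radius base_radius and the assumption that the reduced phase images were produced with the same magnification 1/n in both directions are choices made here, not details taken from the description above.

```python
import numpy as np

def sad(a, b):
    """Difference absolute value sum between two equally sized blocks."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def local_search(tgt_frame, ref_frame, top, left, block, center, radius):
    """Best (dx, dy) within +/-radius of `center`, together with its SAD value."""
    tgt = tgt_frame[top:top + block, left:left + block]
    best_vec, best_sad = center, None
    for dy in range(center[1] - radius, center[1] + radius + 1):
        for dx in range(center[0] - radius, center[0] + radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > ref_frame.shape[0] or x + block > ref_frame.shape[1]:
                continue
            s = sad(tgt, ref_frame[y:y + block, x:x + block])
            if best_sad is None or s < best_sad:
                best_vec, best_sad = (dx, dy), s
    return best_vec, best_sad

def hierarchical_match(base_tgt, base_ref, red_tgt, red_ref,
                       top, left, block, n, red_radius, base_radius=2):
    """Reduced phase search over the reduced search range, then a narrow base
    phase search around n times the reduced phase motion vector."""
    (rdx, rdy), _ = local_search(red_tgt, red_ref, top // n, left // n,
                                 block // n, (0, 0), red_radius)
    base_center = (rdx * n, rdy * n)          # base phase reference vector
    return local_search(base_tgt, base_ref, top, left, block, base_center, base_radius)
```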
After a base phase motion vector 104 of the pixel accuracy is detected on the base phase reference frame 134 in this manner, the SAD value of the reference block indicated by the base phase motion vector 104, that is, the minimum SAD value, and neighboring SAD values in the proximity of the minimum SAD value are used to carry out a quadratic curve approximation interpolation process to calculate a high-accuracy motion vector of the sub pixel accuracy.
The high-accuracy motion vector of the sub pixel accuracy is described. In the block matching technique described above, a motion vector can be detected only in the pixel accuracy because block matching is carried out in a unit of a pixel. A position at which a matching process is carried out, that is, the position of a reference block, exists in the pixel accuracy, and in order to calculate a motion vector of higher accuracy, a matching process in the sub pixel unit is required.
If the matching process is carried out in a unit as fine as 1/N of a pixel in order to calculate a motion vector with an accuracy N times as high as the pixel accuracy (that is, with the pixel pitch reduced to 1/N), then the size of the SAD table increases to approximately N² times, and a memory of a very great storage capacity comes to be required. Further, for such a block matching process, it is necessary to produce an image up-sampled to N times, so the scale of the hardware increases tremendously.
Therefore, a quadratic curve is used to interpolate the SAD table, for which the matching process has been carried out in a unit of a pixel, to calculate a motion vector of the sub pixel accuracy. In this instance, although linear interpolation or approximation curve interpolation of the third or higher order may be carried out instead, quadratic curve approximation interpolation is used in the present example from the balance between the accuracy and the hardware scale.
In the quadratic curve approximation interpolation, the minimum SAD value Smin of the pixel accuracy and the SAD values at positions neighboring the position of the minimum SAD value are used.
In the X or horizontal direction, the minimum SAD value Smin and the neighboring SAD values Sx1 and Sx2 at the two points neighboring the minimum SAD value Smin are used to apply a second-order approximation curve, and the coordinate at which the fitted curve takes its minimum value SXmin is determined. The quadratic curve approximation interpolation is represented by the following expression (1):
SXmin=1/2×(Sx2−Sx1)/(Sx2−2Smin+Sx1) (1)
The X coordinate on the SAD table at which the minimum value SXmin of the SAD values of the sub pixel accuracy determined by the calculation expression (1) above is taken is the X coordinate Vx which provides the minimum value among the SAD values of the sub pixel accuracy.
The division in the calculation expression (1) can be implemented by a small number of subtractions. If the sub pixel accuracy to be determined is, for example, an accuracy of a pixel pitch equal to 1/4 the original pixel pitch, then the division can be carried out with only two subtractions. Therefore, the circuit scale is small, the time required for the mathematical operation is short, and a performance almost equal to that of cubic curve interpolation, which is considerably more complicated than second-order approximation curve interpolation, can be achieved.
Similarly, the minimum SAD value Smin among the SAD values and the neighboring SAD values Sy1 and Sy2 at the two points neighboring the minimum SAD value Smin in the Y or vertical direction are used to apply a second-order approximation curve. The Y coordinate which takes the minimum value SYmin of the fitted curve is the Y coordinate Vy which provides the minimum value among the SAD values of the sub pixel accuracy. The quadratic curve approximation interpolation in this instance is represented by the following expression (2):
SYmin=1/2×(Sy2−Sy1)/(Sy2−2Smin+Sy1) (2)
By carrying out approximation to a quadratic curve twice for the X direction and the Y direction in such a manner as described above, a motion vector (Vx, Vy) of high accuracy of the sub pixel accuracy is determined.
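A compact sketch of the interpolation follows. Expressions (1) and (2) do not state here which of the two neighboring points corresponds to Sx1 and which to Sx2, so the sketch uses the conventional parabola-vertex form, which agrees with the expressions up to that labelling; the quarter-pixel helper illustrates the remark that the division can be replaced by two comparisons (subtractions). All names are illustrative.

```python
def parabolic_offset(s_minus, s_min, s_plus):
    """Vertex of the parabola through the SAD values at -1, 0 and +1 pixel;
    this is expression (1)/(2) up to the labelling of the two neighbors."""
    denom = s_plus - 2 * s_min + s_minus
    if denom <= 0:
        return 0.0                     # degenerate case: keep the integer-accuracy position
    return 0.5 * (s_minus - s_plus) / denom

def subpixel_vector(vx, vy, sad_table, cx, cy):
    """Refine an integer-accuracy motion vector (vx, vy) to sub pixel accuracy,
    assuming the minimum SAD value sits at table index (cx, cy) away from the border."""
    s_min = sad_table[cy][cx]
    dx = parabolic_offset(sad_table[cy][cx - 1], s_min, sad_table[cy][cx + 1])
    dy = parabolic_offset(sad_table[cy - 1][cx], s_min, sad_table[cy + 1][cx])
    return vx + dx, vy + dy

def quarter_pel_offset(s_minus, s_min, s_plus):
    """Same offset rounded to 1/4-pixel steps with two comparisons and no division."""
    denom = s_plus - 2 * s_min + s_minus
    if denom <= 0:
        return 0.0
    diff = s_minus - s_plus            # offset = diff / (2 * denom), |offset| <= 1/2
    sign = 1 if diff >= 0 else -1
    a = 2 * abs(diff)                  # equals 4 * |offset| * denom
    if 2 * a < denom:                  # |offset| < 1/8 -> 0
        quarters = 0
    elif 2 * a < 3 * denom:            # |offset| < 3/8 -> 1/4
        quarters = 1
    else:                              # otherwise      -> 1/2
        quarters = 2
    return sign * quarters / 4.0
```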
While, in the description above, the minimum value among the SAD values and the SAD values at two neighboring points in each of the X or horizontal direction and the Y or vertical direction are used, the number of neighboring SAD values in each of the X and Y directions may be more than two. Further, approximation to a quadratic curve may be applied not only to the X and Y directions but also to oblique directions.
The motion detection and motion compensation section 16 includes a target block buffer unit 161 which retains image data of a target block, a reference block buffer unit 162 which retains image data of reference blocks, a matching processing unit 163 which calculates SAD values between the target block and the reference blocks, a motion vector calculation unit 164 which calculates a motion vector from the SAD value information outputted from the matching processing unit 163, and a control unit 165 which controls these units.
The motion detection and motion compensation section 16 and the image memory section 4 are connected to each other by the system bus 2. In particular, in the example shown, a bus interface 21 and another bus interface 22 are connected between the system bus 2 and the target block buffer unit 161 and reference block buffer unit 162, respectively. Here, the AXI interconnect is used as the protocol for the system bus 2.
Upon still picture image pickup, a reduced phase target block or a base phase target block from a reduced phase target image Prt or a base phase target image Pbt stored in the image memory section 4 is written into the target block buffer unit 161. For the reduced phase target image Prt or the base phase target image Pbt, an image of the first picked up image frame after depression of the shutter button is written as the target frame 100. Then, when image superposition is carried out based on block matching with a reference image, the reduced phase target image Prt or the base phase target image Pbt is rewritten with the NR image obtained by the image superposition. In the present embodiment, the image data of the base phase target image Pbt and the reduced phase target image Prt are not compressed.
Into the reference block buffer unit 162, a reduced phase reference block or a base phase reference block from an image frame of a reduced phase reference image Prr or a base phase reference image Pbr stored in the image memory section 4 is written. For the reduced phase reference image Prr or the base phase reference image Pbr, the image pickup frames after the first image pickup frame are written as reference frames 101.
In this instance, where a plurality of picked up images picked up successively are fetched while a superposition process of images is carried out (this is hereinafter referred to as in-pickup addition), image pickup frames after the first image pickup frame are successively fetched one by one as the base phase reference image and the reduced phase reference image. Accordingly, it is necessary to retain only one image pickup frame as the base phase reference image and the reduced phase reference image.
However, where, after a plurality of picked up images picked up successively are fetched, the motion detection and motion compensation section 16 and the image superposition section 17 carry out motion vector detection and then execute superposition of the images (this is called after-pickup addition), it is necessary to store and retain all of the plural picked up images after the first picked up image as the base phase reference image and the reduced phase reference images.
Although the image pickup apparatus can use both of the in-pickup addition and the after-pickup addition, in the present embodiment, a process of the after-pickup addition is adopted for the still picture NR process taking it into consideration that fine images from which noise is reduced are required although rather long processing time is required. Detailed description of the still picture NR process in the present embodiment is hereinafter described.
On the other hand, upon moving picture image pickup, a picked up image frame is inputted as the target frame 100 from the image correction and resolution conversion section 15 to the motion detection and motion compensation section 16. Into the target block buffer unit 161, a target block extracted from the target frame from the image correction and resolution conversion section 15 is written. Meanwhile, the picked up image frame immediately preceding the target frame and stored in the image memory section 4 is determined as the reference frame 101, and a reference block from this reference frame (base phase reference image Pbr or reduced phase reference image Prr) is written into the reference block buffer unit 162.
Upon moving picture image pickup, it is only necessary to retain the immediately preceding picked up image frame to be used for block matching with the target frame from the image correction and resolution conversion section 15 as the base phase reference image Pbr or reduced phase reference image Prr. The image information to be retained in the image memory section 4 may be only for one frame. Therefore, in the present embodiment, the base phase reference image Pbr or the reduced phase reference image Prr is not compressed.
The matching processing unit 163 carries out the block matching process described hereinabove using the image data of the target block supplied from the target block buffer unit 161 and the image data of the reference blocks supplied from the reference block buffer unit 162.
Here, in order to detect the degree of correlation between the target block and the reference block in the block matching, also in this embodiment, luminance information of image data is used to carry out SAD value calculation, and a minimum SAD value is detected and a reference block having the minimum SAD value is detected as the highest correlation reference block.
It is to be noted that, for calculation of SAD values, not luminance information but information of color difference signals or of the three primary color signals R, G and B may be used. Further, upon calculation of SAD values, although all pixels in a block are normally used, in order to reduce the amount of mathematical operation, only the pixel values of pixels at sub-sampled (decimated) positions may be used.
The motion vector calculation unit 164 detects a motion vector of a reference block with respect to the target block from results of the matching process of the matching processing unit 163. In the present embodiment, the motion vector calculation unit 164 further has a function of detecting and retaining a minimum value among the SAD values and besides retaining a plurality of SAD values of different reference vectors in the proximity of the reference vector which exhibits the minimum SAD value and carrying out, for example, a quadratic curve approximation interpolation process to detect a high-accuracy motion vector of the sub pixel accuracy.
The control unit 165 controls the processing operation of the hierarchical block matching process of the motion detection and motion compensation section 16 under the control of the CPU 1.
The target block buffer unit 161 includes a base phase buffer device 1611, a reduced phase buffer device 1612, a reduction processing device 1613 and selectors 1614, 1615 and 1616.
The base phase buffer device 1611 is provided for temporarily storing a base phase target block. The base phase buffer device 1611 sends the base phase target block to the image superposition section 17 and supplies the base phase target block to the selector 1616.
The reduced phase buffer device 1612 is provided for temporarily storing a reduced phase target block. The reduced phase buffer device 1612 supplies the reduced phase target block to the selector 1616.
The reduction processing device 1613 is provided for producing a reduced phase target block from the target block which, upon moving picture image pickup, is sent from the image correction and resolution conversion section 15 as described hereinabove. The reduced phase target block from the reduction processing device 1613 is supplied to the selector 1615.
The selector 1614 selects and outputs, upon moving picture image pickup, a target block (a base phase target block) from the image correction and resolution conversion section 15, but selects and outputs, upon still picture image pickup, a base phase target block or a reduced phase target block from the image memory section 4, in response to a selection control signal from the control unit 165. The output of the selector 1614 is supplied to the base phase buffer device 1611, the reduction processing device 1613 and the selector 1615.
The selector 1615 selectively outputs, upon moving picture image pickup, a reduced phase target block from the reduction processing device 1613, but selectively outputs, upon still picture image pickup, a reduced phase target block from the image memory section 4, in response to a selection control signal from the control unit 165. The output of the selector 1615 is supplied to the reduced phase buffer device 1612.
The selector 1616 selectively outputs, upon block matching between reduced phases, a reduced phase target block from the reduced phase buffer device 1612, but selectively outputs, upon block matching between base phases, a base phase target block from the base phase buffer device 1611, in response to a selection control signal from the control unit 165. The reduced phase target block or base phase target block outputted from the selector 1616 is sent to the matching processing unit 163.
The reference block buffer unit 162 includes a base phase buffer device 1621, a reduced phase buffer device 1622 and a selector 1623.
The base phase buffer device 1621 temporarily stores a base phase reference block from the image memory section 4 and supplies the base phase reference block to the selector 1623 and besides sends the base phase reference block as a motion compensation block to the image superposition section 17.
The reduced phase buffer device 1622 temporarily stores a reduced phase reference block from the image memory section 4. The reduced phase buffer device 1622 supplies the reduced phase reference block to the selector 1623.
The selector 1623 selectively outputs, upon block matching between reduced phases, a reduced phase reference block from the reduced phase buffer device 1622, but selectively outputs, upon block matching between base phases, a base phase reference block from the base phase buffer device 1621, in response to a selection control signal from the control unit 165. The reduced phase reference block or base phase reference block outputted from the selector 1623 is sent to the matching processing unit 163.
The image superposition section 17 includes an addition ratio calculation unit 171, an addition unit 172, a base phase output buffer unit 173, a reduced phase production unit 174 and a reduced phase output buffer unit 175.
The image superposition section 17 and the image memory section 4 are connected to each other by the system bus 2. In particular, in the present example, a bus interface section 23 and another bus interface section 24 are connected between the system bus 2 and the base phase output buffer unit 173 and the reduced phase output buffer unit 175, respectively.
The addition ratio calculation unit 171 receives a target block and a motion compensation block from the motion detection and motion compensation section 16 and decides the addition ratio of the two blocks depending upon whether the addition method adopted is a simple addition method or an average addition method. Then, the addition ratio calculation unit 171 supplies the determined addition ratio to the addition unit 172 together with the target block and the motion compensation block.
A base phase NR image of a result of the addition by the addition unit 172 is written into the image memory section 4 through the base phase output buffer unit 173 and the bus interface section 23. Further, the base phase NR image of the result of addition by the addition unit 172 is converted into a reduced phase NR image by the reduced phase production unit 174. The reduced phase NR image from the reduced phase production unit 174 is written into the image memory section 4 through the reduced phase output buffer unit 175 and the bus interface section 24.
Now, the simple addition method and the average addition method in image superposition are described.
Where a plurality of images are to be superposed, if they are superposed in a luminance relationship of 1:1, then the dynamic range increases to twice. Accordingly, if it is intended to superpose images of low illuminance to raise the sensitivity of the image while NR (noise reduction) is applied, the method of adding the luminance in the relationship of 1:1 is preferably used. This method is the simple addition method.
On the other hand, where NR is applied to images picked up in a condition that the illuminance can be assured, a method of such addition that the total luminance becomes 1 without increasing the dynamic range is preferably used. This is the average addition method.
In the simple addition method, as shown in
In the average addition method, where the addition ratio of the motion compensation image MCi (i=1, 2, . . . , K) is represented by α, the multiplication factor for the target image becomes 1−α.
In the average addition method, a maximum addition ratio A is set for the addition ratio α. The maximum addition ratio A has a value of 1/2 as an initial value and thereafter changes to 1/K in accordance with the addition number K of motion compensation images.
An image obtained by adding images while the addition ratio α of the motion compensation images is fixed to the maximum addition ratio does not exhibit an increased dynamic range, and all of the images are added at a ratio of 1/K (refer to
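As a rough numerical sketch of the two addition methods (the sequential update with α = 1/k, and the reading of K as the total number of superposed images, are assumptions made for illustration only):

def simple_addition(target, mc_images):
    # Luminance is accumulated in the relationship 1:1, so the output level
    # (and hence the dynamic range) grows with every added image.
    out = target
    for mc in mc_images:
        out = out + mc
    return out

def average_addition(target, mc_images):
    # Each addition uses the maximum addition ratio alpha = 1/k, where k is the number of
    # images superposed so far, so every image ends up contributing 1/K of the result
    # and the dynamic range does not increase.
    out = target
    for k, mc in enumerate(mc_images, start=2):
        alpha = 1.0 / k
        out = (1.0 - alpha) * out + alpha * mc
    return out

print(simple_addition(100.0, [104.0, 96.0, 108.0]))    # 408.0: the level rises with each addition
print(average_addition(100.0, [104.0, 96.0, 108.0]))   # 102.0: the level stays near the inputs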
In the image pickup apparatus of the present embodiment, the image superposition section 17 selectively uses the simple addition method and the average addition method depending upon the image pickup conditions or upon setting information. For example, in response to the illuminance upon image pickup or to a result of detection of luminance information of a picked up image, the image superposition section 17 uses the simple addition method when it is desired to increase the dynamic range but uses the average addition method when sufficient illuminance is assured.
A flow chart of a noise reduction process by superposition of images upon still picture image pickup by the image pickup apparatus of the present embodiment having the configuration described above is shown in
First, if the shutter button is depressed, then the image pickup apparatus carries out high-speed image pickup of a plurality of images under the control of the CPU 1. In particular, picked up image data for M frames (M is an integer equal to or greater than 2) to be superposed upon still picture image pickup are fetched at a high speed and placed into the image memory section 4 (step S1).
Then, the reference frame is set to the Nth one (N is an integer equal to or greater than 2 and equal to or less than M) of the M image frames stored in the image memory section 4. In this instance, the control unit 165 sets the initial value of N to N=2 (step S2). Then, the control unit 165 sets the first image frame as the target image or frame and sets the Nth, that is, the second, image as the reference image or frame (step S3).
Then, the control unit 165 sets a target block in the target frame (step S4), and the motion detection and motion compensation section 16 reads the target block from the image memory section 4 into the target block buffer unit 161 (step S5). Further, the motion detection and motion compensation section 16 reads pixel data within a matching processing range into the reference block buffer unit 162 (step S6).
Thereafter, the control unit 165 reads out the reference blocks within the search range from the reference block buffer unit 162, and the matching processing unit 163 carries out a hierarchical matching process. It is to be noted that, in the present example, those SAD values hereinafter described, from among the SAD values calculated by the matching processing unit 163, are sent to the motion vector calculation unit 164 so that the minimum value and the neighboring SAD values are retained for a quadratic curve approximation interpolation process. This is repeated for all reference vectors within the search range. Thereafter, the quadratic curve approximation interpolation processing device carries out the interpolation process described hereinabove and outputs a motion vector of high accuracy (step S7).
Then, the control unit 165 reads out a motion compensation block from the reference block buffer unit 162 in accordance with the high-accuracy motion vector detected in such a manner as described above (step S8) and sends the read out motion compensation block to the image superposition section 17 in synchronism with the target block (step S9).
Thereafter, the image superposition section 17 carries out superposition of the target block and the motion compensation block and writes the NR image data of the superposed block into the image memory section 4 (step S10).
Referring now to the flow chart, the control unit 165 decides whether or not the block matching described above is completed for all of the target blocks in the target frame (step S11). If it is decided that an unprocessed target block still remains, then the processing returns to step S4, and the processes at the steps beginning with step S4 are repeated for the next target block.
On the other hand, if it is decided at step S11 that the block matching is completed for all target blocks in the target frame, then the control unit 165 decides whether or not the processing for all reference frames to be superposed is completed, that is, whether or not M=N (step S12).
If it is decided at step S12 that M≠N, then N is incremented by one (N=N+1) (step S13). Then, the control unit 165 sets the NR image produced by the superposition at step S10 as the target image or target frame and sets the Nth image as the reference image or reference frame (step S14). Thereafter, the processing returns to step S4 of
It is to be noted that image data of an NR image of a result of superposition of M picked up images is compression encoded by the still picture codec section 18 and then supplied to the recording and reproduction apparatus section 5, by which it is recorded on the recording medium.
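The overall control flow of steps S1 to S14 might be summarized as in the following sketch; detect_and_compensate() and superpose() are hypothetical placeholders standing in for the hierarchical block matching and the block-by-block superposition described above, not the actual processing of the respective sections.

def detect_and_compensate(target, reference):
    # Placeholder for the hierarchical block matching and motion compensation (steps S4 to S9).
    return reference

def superpose(target, mc_image):
    # Placeholder for the block-by-block image superposition (steps S10, S11).
    return [(t + m) / 2.0 for t, m in zip(target, mc_image)]

def still_picture_nr(frames):
    # frames: the M picked up images fetched into the image memory section (step S1).
    target = frames[0]                              # the first image is the initial target frame (step S3)
    for n in range(2, len(frames) + 1):             # N = 2 .. M (steps S2, S12, S13)
        reference = frames[n - 1]                   # the Nth image becomes the reference frame
        mc_image = detect_and_compensate(target, reference)
        target = superpose(target, mc_image)        # the NR result becomes the next target (step S14)
    return target                                   # NR image of the M superposed images

print(still_picture_nr([[100.0, 50.0], [102.0, 48.0], [98.0, 52.0]]))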
It is to be noted that, while the noise reduction processing method for still images described above involves storage of M image data in the image memory section 4, superposition may instead be carried out every time one image is picked up. In this instance, although a longer image pickup interval is required, only one image frame needs to be stored in the image memory section 4, and therefore, the memory cost can be minimized in comparison with the noise reduction processing method of the processing routine of
A flow chart of a noise reduction process by superposition of images upon moving picture image pickup by the image pickup apparatus of the present embodiment is shown in
In the present embodiment, the motion detection and motion compensation section 16 is configured so as to carry out a matching process in units of a target block. Thus, referring to
The image data of the target block sent to the motion detection and motion compensation section 16 is stored into the target block buffer unit 161. Then, the control unit 165 sets a reference vector corresponding to the target block (step S22) and reads image data within a range for a matching process from the image memory section 4 into the reference block buffer unit 162 (step S23).
Then, the matching processing unit 163 and the motion vector calculation unit 164 carry out a motion detection process by hierarchical block matching (step S24). In particular, the matching processing unit 163 first calculates a SAD value between pixel values of a reduced phase target block and pixel values of a reduced phase reference block on the reduced phase and sends the calculated SAD value to the motion vector calculation unit 164. The matching processing unit 163 repeats the processes described for all of the reduced phase reference blocks in the search range. If the calculation of the SAD value is completed for all reduced phase reference blocks in the search range, then the motion vector calculation unit 164 specifies the lowest SAD value to detect a reduced phase motion vector.
The control unit 165 multiplies the reduced phase motion vector detected by the motion vector calculation unit 164 by a reciprocal to the reduction ratio to convert the reduced phase motion vector into a motion vector on the base phase. Then, the control unit 165 determines a region centered at a position indicated by the vector obtained by the conversion on the base phase as a search range on the base phase. Then, the control unit 165 controls the matching processing unit 163 to carry out a block matching process on the base phase within the search range. The matching processing unit 163 calculates the SAD value between pixel values of the base phase target block and pixel values of the base phase reference block, and sends the calculated SAD value to the motion vector calculation unit 164.
After the calculation of the SAD value is completed for all base phase reference blocks in the search range, the motion vector calculation unit 164 specifies the lowest SAD value to detect a base phase motion vector and specifies the SAD values of the neighboring base phase reference blocks. Then, the motion vector calculation unit 164 uses the SAD values to carry out the quadratic curve approximation interpolation process described hereinabove and outputs a high-accuracy motion vector of the sub pixel accuracy.
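A compact sketch of this hierarchical (two-stage) matching is given below. The SAD computation, the search helper and the fixed search radii are simplifications assumed for the sketch and do not reproduce the actual hardware; the essential point illustrated is that a wide search on the reduced phase is followed by a narrow search on the base phase around the scaled-up vector.

import numpy as np

def sad(a, b):
    # Sum of absolute differences between two blocks of equal size.
    return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def search(target_blk, ref_img, center, radius):
    # Full search of a square neighbourhood around "center"; returns the best offset (vy, vx).
    blk = target_blk.shape[0]
    best_sad, best_v = None, (0, 0)
    for vy in range(-radius, radius + 1):
        for vx in range(-radius, radius + 1):
            y, x = center[0] + vy, center[1] + vx
            cand = ref_img[y:y + blk, x:x + blk]
            if cand.shape != target_blk.shape:
                continue                            # candidate falls outside the reference image
            s = sad(target_blk, cand)
            if best_sad is None or s < best_sad:
                best_sad, best_v = s, (vy, vx)
    return best_v

def hierarchical_match(target_blk_r, ref_img_r, target_blk, ref_img, pos, n=4):
    # 1) wide but coarse search on the reduced phase
    vy_r, vx_r = search(target_blk_r, ref_img_r, (pos[0] // n, pos[1] // n), radius=8)
    # 2) multiply the reduced phase vector by the reciprocal of the reduction ratio (n)
    #    and search only a small base phase range around that position
    vy_b, vx_b = search(target_blk, ref_img, (pos[0] + vy_r * n, pos[1] + vx_r * n), radius=n)
    return (vy_r * n + vy_b, vx_r * n + vx_b)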
Thereafter, the control unit 165 reads out image data of the motion compensation block from the reference block buffer unit 162 in accordance with the high-accuracy motion vector calculated at step S24 (step S25) and sends the image data to the image superposition section 17 at the succeeding stage in synchronism with the target block (step S26).
The image superposition section 17 carries out superposition of the target block and the motion compensation block. Then, the image superposition section 17 outputs image data of an NR image of a result of the superposition to the monitor display apparatus 6 through the NTSC encoder 20 so that moving picture recording monitoring is carried out. Further, the image superposition section 17 sends the image data of the NR image to the recording and reproduction apparatus section 5 through the moving picture codec section 19 so that the image data are recorded on the recording medium (step S27).
Further, the image obtained by the superposition by the image superposition section 17 is stored into the image memory section 4 so as to be used as a reference frame for a next frame, that is, for a next target frame (step S28).
Then, the CPU 1 decides whether or not a moving picture recording stopping operation is carried out by the user (step S29). If it is decided that the moving picture recording stopping operation is not carried out by the user, then the CPU 1 issues an instruction to return the processing to step S21 and repeat the processes at the steps beginning with step S21. On the other hand, if it is decided at step S29 that the moving picture recording stopping operation is carried out by the user, then the CPU 1 ends this processing routine.
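The moving picture flow of steps S21 to S29 might be summarized as follows; nr_one_frame() is a hypothetical stand-in for the per-block matching and superposition, and the essential point illustrated is that the NR output of one frame becomes the reference frame for the next frame.

def nr_one_frame(target_frame, reference_frame):
    # Placeholder for the per-block motion compensation and superposition (steps S22 to S27).
    return [(t + r) / 2.0 for t, r in zip(target_frame, reference_frame)]

def moving_picture_nr(frame_source):
    reference = None
    for target in frame_source:                     # one target frame per pickup period (step S21)
        nr_frame = target if reference is None else nr_one_frame(target, reference)
        yield nr_frame                              # monitoring / recording output (step S27)
        reference = nr_frame                        # the NR result is stored as the next reference frame (step S28)

for out in moving_picture_nr([[100.0, 102.0], [101.0, 103.0], [99.0, 101.0]]):
    print(out)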
While, in the processing routine of the noise reduction process for moving pictures described above, an image frame preceding by one frame is set as the reference frame, a frame preceding by more than one frame may be used as the reference frame instead. Also, it is possible to store an image preceding by one frame and another image preceding by two frames in the image memory section 4 and to selectively use one of the two image frames based on the contents of the image information of the two images.
By using such means, procedures and system configuration as described above, the still picture noise reduction process and the moving picture noise reduction process can be carried out by a single piece of common hardware for the block matching process.
Now, examples of a configuration and operation of the motion vector calculation unit 164 are described.
In a known example of a motion vector calculation unit, a SAD table TBL which stores all SAD values calculated within a search range is produced, and a minimum SAD value is detected from within the SAD table TBL. Then, a reference vector corresponding to the position of a reference block which exhibits the minimum SAD value is calculated as a motion vector. Then, where quadratic curve approximation interpolation is carried out, a plurality of SAD values, four values in the example illustrated, in the proximity of the minimum SAD value are extracted from the SAD table TBL, and the SAD values are used to carry out an interpolation process. Accordingly, this method requires a memory having a great storage capacity for the SAD table TBL.
In examples described below, the SAD table TBL which stores all SAD values calculated within a search range is not produced so that the circuit scale can be reduced and the processing time can be reduced.
As described above, the block matching process involves setting the position indicated by a reference vector as the position of a reference block and calculating the SAD value between the pixels of the reference block and the pixels of the target block. This calculation is carried out for the reference blocks at the positions indicated by all reference vectors in the search range.
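Expressed as a sketch (the array layout of the frames and the ordering of the reference vectors are assumptions made for illustration), the SAD value of one reference block and the loop over the search range look as follows.

import numpy as np

def sad_value(target_blk, ref_blk):
    # Sum of absolute differences between corresponding pixels of the two blocks.
    return int(np.abs(target_blk.astype(np.int64) - ref_blk.astype(np.int64)).sum())

def sad_values_in_search_range(target_blk, ref_frame, top, left, range_h, range_w):
    # Yield (reference vector, SAD value) for every reference block position in the search range.
    bh, bw = target_blk.shape
    for vy in range(range_h):
        for vx in range(range_w):
            ref_blk = ref_frame[top + vy:top + vy + bh, left + vx:left + vx + bw]
            yield (vx, vy), sad_value(target_blk, ref_blk)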
Here, where the position of a reference block within a search range is changed to search for a motion compensation block, various methods are available such as a method wherein the search is carried out in order beginning with an end of the screen or frame and another method wherein the search is carried out toward the outside from the center of the screen or frame. However, in the present embodiment, a search method in which the following procedures are repeated is adopted wherein the search direction is set as indicated by an arrow mark 120 in
In particular, as seen in the figure, the reference blocks within the search range are searched in the horizontal direction in order along one line, and every time the search for one line is completed, the search advances to the next line, in which the horizontal search is repeated.
As described hereinabove with reference to
When the SAD value regarding each reference block is calculated, the calculated SAD value is compared with the minimum value among the SAD values calculated till then. If the calculated SAD value is lower than the minimum value till then, then the calculated SAD value is retained as the new minimum value together with the reference vector at that time. By repeating this, the minimum value among the SAD values and the position information, that is, the reference vector, of the reference block which exhibits the minimum value can be determined without producing any SAD table.
Then, if the SAD values of the reference blocks in the proximity of the position of the reference block which exhibits the minimum SAD value are retained as neighboring SAD values, then also the neighboring SAD values can be retained without producing any SAD table.
At this time, since, in the present example, such a search method as described above is adopted, the SAD values and the corresponding position information calculated for the most recent one line of the SAD table, that is, those calculated immediately before the SAD value being calculated at present, are retained in a line memory.
Therefore, if the SAD value of a reference block newly calculated is detected as a minimum SAD value, then the SAD value 123 (neighboring SAD value (Sy1)) of the reference block at a position higher by one line than the position of the reference block which exhibits the minimum SAD value 121 and the SAD value 124 (neighboring SAD value (Sx1)) of the reference block at a position on the left side of the position of the reference block which exhibits the minimum SAD value 121 on the SAD table TBL can be acquired from the line memory described hereinabove.
Then, the SAD value of the reference block at a position on the right side of the position of the reference block which exhibits the minimum SAD value is retained later as a neighboring SAD value (Sx2) (refer to reference numeral 125), and the SAD value of the reference block at a position just below that position is retained later as a neighboring SAD value (Sy2), at the time when those reference blocks are subsequently processed in the search.
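A software sketch of this scheme is given below. It assumes the SAD values arrive in raster (search) order over the search range and uses a deque as the one-line memory; the exact registers and timing of the hardware described below are not reproduced.

from collections import deque

def track_min_and_neighbours(sad_rows):
    # sad_rows: SAD values over the search range, supplied line by line in raster (search) order.
    width = len(sad_rows[0])
    line_mem = deque(maxlen=width)        # one-line memory: the most recent "width" SAD values
    smin, min_pos = float("inf"), None
    sx1 = sx2 = sy1 = sy2 = None          # left, right, upper and lower neighbouring SAD values
    for y, row in enumerate(sad_rows):
        for x, sin in enumerate(row):
            if sin < smin:
                smin, min_pos = sin, (x, y)
                sy1 = line_mem[0] if len(line_mem) == width else None   # oldest entry: one line above
                sx1 = line_mem[-1] if x > 0 else None                   # newest entry: immediately left
                sx2 = sy2 = None          # re-acquired when the right / lower neighbours are reached
            elif min_pos == (x - 1, y):
                sx2 = sin                 # reference block immediately right of the minimum
            elif min_pos == (x, y - 1):
                sy2 = sin                 # reference block immediately below the minimum
            line_mem.append(sin)
    return smin, min_pos, (sx1, sx2, sy1, sy2)

print(track_min_and_neighbours([[9, 8, 7], [6, 2, 5], [4, 3, 9]]))
# (2, (1, 1), (6, 5, 8, 3)) : minimum 2 at (1, 1) together with its four neighbouring SAD values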
Taking the foregoing description into consideration, the first example of the motion vector calculation unit 164 has such a hardware configuration as shown in
Referring to the figure, the motion vector calculation unit 164 of the first example includes a SAD value writing device 1641, a SAD value comparison device 1642, a SAD value retaining device 1643, an X direction (horizontal direction) neighboring value extraction device 1644, a Y direction (vertical direction) neighboring value extraction device 1645, a quadratic curve approximation interpolation processing device 1646 and a line memory 1647.
The SAD value retaining device 1643 includes retaining devices or memory for a minimum SAD value Smin and neighboring SAD values Sx1, Sx2, Sy1 and Sy2. In the present first example, the SAD value retaining device 1643 supplies the minimum SAD value Smin from the minimum SAD value retaining portion to the SAD value comparison device 1642. The SAD value retaining device 1643 further supplies position information (reference vectors) of the reference blocks of, from among the retained neighboring SAD values, the neighboring SAD value Sx2 on the right side of the minimum SAD value Smin and the neighboring SAD value Sy2 on the lower side of the minimum SAD value Smin to the SAD value writing device 1641.
In the present first example, the SAD value comparison device 1642 receives the position information or reference vector of the reference block and the SAD value Sin of the reference block from the matching processing unit 163 and further receives the minimum SAD value Smin from the minimum SAD value retaining portion of the SAD value retaining device 1643.
Then, the SAD value comparison device 1642 compares the SAD value Sin calculated at the point of time from the matching processing unit 163 and the minimum SAD value Smin from the minimum SAD value retaining portion of the SAD value retaining device 1643 with each other. Then, if the SAD value Sin calculated at the point of time from the matching processing unit 163 is lower, then the SAD value comparison device 1642 detects that the SAD value is the minimum SAD value at the point of time. On the other hand, where the SAD value Sin is higher, then the SAD value comparison device 1642 detects that the minimum SAD value Smin from the minimum SAD value retaining portion of the SAD value retaining device 1643 still remains the minimum value at the present point of time. Then, the SAD value comparison device 1642 supplies information DET of a result of the detection to the SAD value writing device 1641 and the SAD value retaining device 1643.
The SAD value writing device 1641 includes a buffer memory for one pixel for temporarily storing the calculated SAD value Sin and the position information or reference vector of the same from the matching processing unit 163. In the present first example, the SAD value writing device 1641 writes the position information or reference vector of the reference block from the matching processing unit 163 and the SAD value Sin of the reference block into the line memory 1647. In this instance, the line memory 1647 carries out operation similar to that of a shift register. When the line memory 1647 does not have a free space, if new position information and SAD value are stored, then the oldest position information and SAD value are abandoned from the line memory 1647.
Further, before the SAD value writing device 1641 writes the calculated SAD value Sin and corresponding position information into the line memory 1647, it carries out the following process.
In particular, if the information DET of a result of the comparison detection from the SAD value comparison device 1642 indicates that the SAD value Sin is the minimum value, then the SAD value writing device 1641 sends the position information or reference vector of the reference block from the matching processing unit 163 and the SAD value Sin of the reference block to the SAD value retaining device 1643.
The SAD value retaining device 1643 detects from the information DET of a result of the comparison detection from the SAD value comparison device 1642 that the SAD value Sin is the minimum value, and stores the position information or reference vector of the reference block sent thereto from the SAD value writing device 1641 and the SAD value Sin of the reference block into the minimum SAD value retaining device.
Further, also when the position information or reference vector of the reference block from the matching processing unit 163 coincides with the position information of the neighboring SAD value Sx2 or Sy2 received from the SAD value retaining device 1643, the SAD value writing device 1641 sends the position information or reference vector of the reference block from the matching processing unit 163 and the SAD value Sin of the reference block to the SAD value retaining device 1643.
The SAD value retaining device 1643 recognizes, from the received position information or reference vector of the reference block, to which neighboring SAD value the information relates, and stores the position information or reference vector of the reference block and the SAD value Sin of the reference block into the corresponding neighboring SAD value retaining device.
If the processes described above are completed for all reference blocks in the search range, then the minimum SAD value and the position information as well as the four neighboring SAD values and corresponding position information are retained into the SAD value retaining device 1643.
Thus, the X direction (horizontal direction) neighboring value extraction device 1644 and the Y direction (vertical direction) neighboring value extraction device 1645 read out the detected minimum SAD value Smin, the corresponding neighboring SAD values Sx1, Sx2, Sy1 and Sy2 retained in the SAD value retaining device 1643 and corresponding position information and send the read out information to the quadratic curve approximation interpolation processing device 1646. The quadratic curve approximation interpolation processing device 1646 receives the information and carries out interpolation by a quadratic curve twice for the X direction and the Y direction to calculate a high-accuracy motion vector of the sub pixel accuracy as described hereinabove.
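One common form of this parabola fit, offered only as an illustration (the exact interpolation formula used by the quadratic curve approximation interpolation processing device 1646 is not reproduced here), is the three-point fit along each axis:

def parabola_offset(s_prev, s_min, s_next):
    # Offset of the true minimum from the integer minimum position, in the range -0.5 .. +0.5.
    denom = s_prev - 2.0 * s_min + s_next
    return 0.0 if denom == 0 else 0.5 * (s_prev - s_next) / denom

def subpixel_vector(vx, vy, smin, sx1, sx2, sy1, sy2):
    # Refine the integer motion vector (vx, vy) with the four neighbouring SAD values.
    return vx + parabola_offset(sx1, smin, sx2), vy + parabola_offset(sy1, smin, sy2)

print(subpixel_vector(3, -2, 30, 40, 36, 35, 45))   # (3.125, -2.25)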
In the first example, as described above, a motion vector of the sub pixel accuracy can be detected by using a line memory for one line of the SAD table TBL in place of the SAD table TBL.
An example of a flow upon block matching processing on the reduced phase in the present first example is illustrated in a flow chart of
First, the example of the flow of the block matching process on the reduced phase is described with reference to
First, an initial value for the minimum SAD value Smin of the SAD value retaining device 1643 of the motion vector calculation unit 164 is set (step S31). The initial value for the minimum SAD value Smin may be, for example, a maximum value of the difference between pixels.
Then, the matching processing unit 163 sets a reference vector (Vx, Vy) of the reduced phase to set the reduced phase reference block position for calculating the SAD value (step S32), and reads in image data of the set reduced phase reference block from the reference block buffer unit 162 (step S33). Further, the matching processing unit 163 reads in pixel data of the reduced phase target block from the target block buffer unit 161 and determines the sum total of absolute values of differences between pixel data of the reduced phase target block and the reduced phase reference block, that is, the SAD value. Then, the matching processing unit 163 signals the determined SAD value to the motion vector calculation unit 164 (step S34).
In the motion vector calculation unit 164, the SAD value writing device 1641 writes the SAD value into the line memory 1647 (step S35).
Then, in the motion vector calculation unit 164, the SAD value comparison device 1642 compares the SAD value Sin calculated by the matching processing unit 163 and the minimum SAD value Smin retained in the SAD value retaining device 1643 with each other to decide whether or not the calculated SAD value Sin is lower than the minimum SAD value Smin retained till then (step S36).
If it is decided at step S36 that the calculated SAD value Sin is lower than the minimum SAD value Smin, then the processing advances to step S37, at which the information of the minimum SAD value Smin retained in the SAD value retaining device 1643 and the position information (reduced phase reference vector) of the minimum SAD value Smin are updated.
On the other hand, if it is decided at step S36 that the calculated SAD value Sin is not lower than the minimum SAD value Smin, then the updating process of the retained information at step S37 is not carried out, but the processing advances to step S38. At step S38, the matching processing unit 163 decides whether or not the matching process is completed at all of the positions (reduced phase reference vectors) of the reduced phase reference blocks in the search range. If it is decided that an unprocessed reference block still remains in the search range, then the processing returns to step S32 and the processes at the steps beginning with step S32 are repeated.
On the other hand, if it is decided at step S38 that the matching process is completed at all of the positions or reduced phase reference vectors of the reduced phase reference blocks in the search range, then the matching processing unit 163 notifies the motion vector calculation unit 164 of such decision.
The motion vector calculation unit 164 receives the notification from the matching processing unit 163 and outputs the position information or reduced phase reference vector of the minimum SAD value Smin retained in the SAD value retaining device 1643 to the control unit 165 (step S39).
The block matching process on the reduced phase in the present example ends therewith.
Now, an example of the flow of the block matching process on the base phase is described with reference to
First, an initial value for the minimum SAD value Smin of the SAD value retaining device 1643 of the motion vector calculation unit 164 is set (step S41). The initial value for the minimum SAD value Smin may be a maximum value of the difference between pixels.
The matching processing unit 163 sets a reference vector (Vx, Vy) on the base phase to set the base phase reference block position for calculating the SAD value (step S42) and reads in pixel data of the set base phase reference block from the reference block buffer unit 162 (step S43).
Then, the matching processing unit 163 reads in pixel data of the base phase target block from the target block buffer unit 161 and determines the sum total of absolute values of differences between the pixel data of the base phase target block and the base phase reference block, that is, the SAD value. Then, the matching processing unit 163 signals the determined SAD value to the motion vector calculation unit 164 (step S44).
In the motion vector calculation unit 164, the SAD value writing device 1641 writes the SAD value into the line memory 1647 (step S45). Then, in the motion vector calculation unit 164, the SAD value comparison device 1642 compares the SAD value Sin calculated by the matching processing unit 163 and the minimum SAD value Smin retained in the SAD value retaining device 1643 with each other to decide whether or not the calculated SAD value Sin is lower than the minimum SAD value Smin retained till then (step S46).
If it is decided at step S46 that the calculated SAD value Sin is lower than the minimum SAD value Smin, then the processing advances to step S47. At step S47, the information of the minimum SAD value Smin, the SAD values of the reference blocks at positions just above and just leftwardly of the reference block position at which the minimum SAD value Smin is exhibited and the position information or base phase reference vectors of the SAD values, all retained in the SAD value retaining device 1643, are updated.
In particular, the SAD value comparison device 1642 sends the information DET of the result of comparison, indicating that the calculated SAD value Sin is lower than the minimum SAD value Smin, to the SAD value writing device 1641. Consequently, the SAD value writing device 1641 sends the calculated SAD value Sin and the position information or base phase reference vector of the SAD value Sin as new information of the minimum SAD value Smin to the SAD value retaining device 1643. Further, the SAD value writing device 1641 sends, to the SAD value retaining device 1643, the oldest SAD value in the line memory 1647 together with its position information or base phase reference vector as information of the SAD value Sy1 of the base phase reference block at the position just above the position of the minimum SAD value, and the newest SAD value together with its position information or base phase reference vector as information of the SAD value Sx1 of the base phase reference block at the position just leftwardly of the position of the minimum SAD value, as recognized from
Then, the processing advances from step S47 to step S51 of
At step S51, the SAD value writing device 1641 decides whether or not the position indicated by the position information or reference vector of the calculated SAD value Sin is the position of the base phase reference block just below the position of the base phase reference block which exhibits the minimum SAD value Smin. If it is decided that the indicated position is the position just below the position of that base phase reference block, then the SAD value writing device 1641 sends the calculated SAD value Sin and the position information or reference vector of the SAD value Sin to the SAD value retaining device 1643. The SAD value retaining device 1643 updates the retained information of the neighboring SAD value Sy2 of the base phase reference block at the position just below with the received SAD value and position information (step S52).
If it is decided at step S51 that the position indicated by the position information or base phase reference vector regarding the calculated SAD value is not the position of the base phase reference block at the position just below the position of the base phase reference block which indicates the minimum SAD value Smin, then the SAD value writing device 1641 decides whether or not the position indicated by the position information or base phase reference vector regarding the calculated SAD value Sin is the position of the base phase reference block immediately rightwardly of the position of the base phase reference block with regard to which the minimum SAD value Smin is retained (step S53).
If it is decided at step S53 that the position indicated by the position information or base phase reference vector regarding the calculated SAD value Sin is the position of the base phase reference block immediately rightwardly of the position of the base phase reference block with regard to which the minimum SAD value Smin is retained, then the SAD value writing device 1641 sends the calculated SAD value Sin and the position information or base phase reference vector of the SAD value Sin to the SAD value retaining device 1643. The SAD value retaining device 1643 updates the retained information of the neighboring SAD value Sx2 regarding the base phase reference block at the position immediately rightwardly of the position of the base phase reference block which exhibits the minimum SAD value with the received SAD value and position information (step S54).
On the other hand, if it is decided at step S53 that the position indicated by the position information or base phase reference vector regarding the calculated SAD value Sin is not the position of the base phase reference block immediately rightwardly of the position of the base phase reference block with regard to which the minimum SAD value Smin is retained, then the SAD value writing device 1641 writes the calculated SAD value Sin and position information or base phase reference vector of the SAD value Sin only into the line memory 1647 (step S45 described hereinabove) but does not send the SAD value Sin and the position information to the SAD value retaining device 1643.
Then, the matching processing unit 163 decides whether or not the matching process is completed at all positions or reference vectors of the base phase reference blocks in the search range (step S55). Then, if it is decided that an unprocessed base phase reference block still remains in the search range, then the processing returns to step S42 of
On the other hand, if it is decided at step S55 that the matching process is completed at all positions or base phase reference vectors of the base phase reference blocks in the search range, then the matching processing unit 163 notifies the motion vector calculation unit 164 of such decision.
The motion vector calculation unit 164 receives the notification, and the X direction (horizontal direction) neighboring value extraction device 1644 and the Y direction (vertical direction) neighboring value extraction device 1645 read out the detected minimum SAD value Smin, neighboring SAD values Sx1, Sx2, Sy1 and Sy2 of the minimum SAD value Smin and position information of them, all stored in the SAD value retaining device 1643 and send the read out information to the quadratic curve approximation interpolation processing device 1646. The quadratic curve approximation interpolation processing device 1646 receives the information and carries out interpolation with a quadratic curve twice for the X direction and the Y direction to calculate a high-accuracy motion vector of the sub pixel accuracy in such a manner as described hereinabove (step S56). The block matching process of the first example for a base phase reference frame ends therewith.
As described above, in the present first example, motion vector detection of the sub pixel accuracy by an interpolation process can be carried out with a configuration wherein, in place of a SAD table which retains all of the calculated SAD values, only a memory for one line of the SAD table is provided and a SAD value for one pixel is retained in the SAD value writing device 1641.
The method of the first example requires the same number of block matching operations as the technique of the past wherein all SAD values of the SAD table TBL are retained, and therefore exhibits an effect that the hardware scale can be reduced while the processing time remains the same.
It is to be noted that, while, in the description of the first example above, the SAD value comparison device 1642 compares a calculated SAD value Sin from the matching processing unit 163 and a minimum SAD value Smin retained in the SAD value retaining device 1643 with each other, the SAD value comparison device 1642 may include a retaining portion for a minimum SAD value. In this instance, the retained minimum SAD value and the calculated SAD value Sin are compared with each other, and when the calculated SAD value Sin is lower, the retained minimum SAD value is updated with the calculated SAD value Sin. Further, the calculated SAD value Sin is sent together with position information thereof to the SAD value retaining device 1643 through the SAD value writing device 1641 so that it is retained into the minimum SAD value retaining portion of the SAD value retaining device 1643.
<Second Example of the Motion Vector Calculation Unit 164>
In the present second example of the motion vector calculation unit 164, the line memory 1647 in the first example is omitted to achieve further reduction of the hardware scale.
In the present second example, detection and retention of a minimum SAD value Smin among the SAD values of reference blocks in a search range and position information or a reference vector of the minimum SAD value Smin are carried out quite similarly to that in the first example described hereinabove. However, acquisition and retention of neighboring SAD values and position information of them are not carried out simultaneously with detection of the minimum SAD value Smin, different from the first example, but the position information of the detected minimum SAD value Smin is returned to the matching processing unit 163. Thus, the matching processing unit 163 calculates the SAD values regarding the reference positions at the four neighboring positions around the minimum SAD value Smin again and then supplies the calculated SAD values to the motion vector calculation unit 164.
The motion vector calculation unit 164 receives the SAD values of the four neighboring points calculated by the second-time block matching process from the matching processing unit 163 and the position information or reference vectors of the SAD values and stores the received information into respective retaining portions of the SAD value retaining device 1643.
An example of a hardware configuration of the second example of the motion vector calculation unit 164 is shown in
Also in the present second example, the X direction (horizontal direction) neighboring value extraction device 1644 and Y direction (vertical direction) neighboring value extraction device 1645 and the quadratic curve approximation interpolation processing device 1646 carry out operations similar to those of the first example described hereinabove. However, the SAD value writing device 1641, SAD value comparison device 1642 and SAD value retaining device 1643 operate in a different manner from that in the first example described hereinabove.
The SAD value retaining device 1643 includes retaining portions or memories for the minimum SAD value Smin and the neighboring SAD values Sx1, Sx2, Sy1 and Sy2 similarly as in the first example. Also in the second example, the SAD value retaining device 1643 supplies the minimum SAD value Smin from the minimum SAD value retaining portions to the SAD value comparison device 1642. However, in the present second example, the SAD value retaining device 1643 does not supply the position information of the neighboring SAD values to the SAD value writing device 1641, different from the first example.
Also in the present second example, the SAD value comparison device 1642 compares the SAD value Sin from the matching processing unit 163 calculated at the current point of time and the minimum SAD value Smin from the minimum SAD value retaining portion of the SAD value retaining device 1643 with each other. Then, if the SAD value Sin calculated at the current point of time is lower, then the SAD value comparison device 1642 detects that the SAD value Sin is the minimum value at the current point of time; if the SAD value Sin is higher, then it detects that the minimum SAD value Smin from the minimum SAD value retaining portion of the SAD value retaining device 1643 still remains the minimum value at the current point of time. Then, the SAD value comparison device 1642 supplies information DET of the result of detection to the SAD value writing device 1641 and the SAD value retaining device 1643.
Similarly as in the example described above, the SAD value writing device 1641 includes a buffer memory for one pixel for temporarily storing the calculated SAD value Sin and position information of the SAD value Sin from the matching processing unit 163. Further, in the present second example, when the information DET of the result of comparison detection from the SAD value comparison device 1642 indicates that the SAD value Sin is the minimum value, the SAD value writing device 1641 sends the position information or reference vector of the reference block from the matching processing unit 163 and the SAD value Sin of the reference block to the SAD value retaining device 1643.
The SAD value retaining device 1643 recognizes that the SAD value Sin is the minimum value from the information DET of the result of comparison detection from the SAD value comparison device 1642 and stores the position information or reference vector of the reference block and the SAD value Sin of the reference block from the SAD value writing device 1641 into the minimum SAD value retaining portion.
The processes described above are carried out for the SAD values calculated by the matching processing unit 163 from all reference blocks in the search range. Then, in the second example, when the calculation of the SAD value regarding all reference blocks in the search range is completed, the SAD value retaining device 1643 supplies the position information or reference vector Vmin of the minimum SAD value Smin retained therein to the matching processing unit 163 and requests the matching processing unit 163 to re-calculate the SAD value regarding reference blocks at four neighboring points around the position of the minimum SAD value Smin.
When the matching processing unit 163 receives, from the SAD value retaining device 1643, the request for re-calculation of the SAD value regarding the neighboring reference blocks together with the position information or reference vector Vmin of the minimum SAD value Smin, it detects the positions of the neighboring reference blocks at the four neighboring points from the position information or reference vector Vmin of the minimum SAD value Smin and then carries out calculation of the SAD values regarding the reference blocks at the detected positions. Then, the matching processing unit 163 successively supplies the calculated SAD values together with the position information or reference vectors of the SAD values to the SAD value writing device 1641.
In this instance, since the matching processing unit 163 carries out the block matching process in order in the search direction, the neighboring SAD values are successively calculated in the order of the SAD values Sy1, Sx1, Sx2 and Sy2. The SAD value writing device 1641 successively supplies the received re-calculated SAD values and the position information or reference vectors of the SAD values to the SAD value retaining device 1643.
The SAD value retaining device 1643 successively writes the re-calculated SAD values and position information or reference vectors of the SAD values into the corresponding storage portions.
When the re-calculation of the SAD value regarding the neighboring reference blocks is completed in this manner, the X direction (horizontal direction) neighboring value extraction device 1644 and the Y direction (vertical direction) neighboring value extraction device 1645 read out the detected minimum SAD value Smin, neighboring SAD values Sx1, Sx2, Sy1 and Sy2 and position information of them, all retained in the SAD value retaining device 1643, and send the read out information to the quadratic curve approximation interpolation processing device 1646. The quadratic curve approximation interpolation processing device 1646 receives the information and carries out interpolation with a quadratic curve twice for the X direction and the Y direction to calculate a high-accuracy motion vector of the sub pixel accuracy in such a manner as described hereinabove.
In this manner, in the present second example, a motion vector of the sub pixel accuracy can be detected without using the SAD table TBL or a line memory.
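The second example might be sketched as the following two-pass procedure; calc_sad() is a hypothetical stand-in for the matching processing unit 163, and the neighbour positions are listed in the order Sy1, Sx1, Sx2 and Sy2 mentioned above.

def second_example(positions, calc_sad):
    # positions: all reference block positions (x, y) in the search range, in search order.
    smin, min_pos = float("inf"), None
    for pos in positions:                             # first pass: track only the minimum SAD value
        s = calc_sad(pos)
        if s < smin:
            smin, min_pos = s, pos
    x, y = min_pos
    neighbours = [(x, y - 1), (x - 1, y), (x + 1, y), (x, y + 1)]   # Sy1, Sx1, Sx2, Sy2
    re_calculated = {p: calc_sad(p) for p in neighbours}            # second pass: four points only
    return smin, min_pos, re_calculated

sad_table = {(x, y): abs(x - 2) + abs(y - 1) + 1 for x in range(5) for y in range(4)}
print(second_example(sorted(sad_table), sad_table.__getitem__))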
An example of a flow of the block matching process on the reduced phase in the present second example is similar to that in the first example described hereinabove, and therefore, overlapping description of it is omitted. Now, an example of the flow of the block matching process on the base phase in the present second example is described.
First, an initial value for the minimum SAD value Smin of the SAD value retaining device 1643 of the motion vector calculation unit 164 is set (step S61). The initial value for the minimum SAD value Smin may be, for example, a maximum value of the difference between pixels.
Then, the matching processing unit 163 sets a reference vector (Vx, Vy) on the base phase to set a base phase reference block position for calculating the SAD value (step S62). Then, the matching processing unit 163 reads in pixel data of the set base phase reference block from the reference block buffer unit 162 (step S63) and reads out pixel data of the base phase target block from the target block buffer unit 161. Then, the matching processing unit 163 determines the sum total of absolute values of differences between pixel data of the base phase target block and the base phase reference block, that is, the SAD value. Then, the matching processing unit 163 signals the determined SAD value to the motion vector calculation unit 164 (step S64).
In the motion vector calculation unit 164, the SAD value comparison device 1642 compares the SAD value Sin calculated by the matching processing unit 163 and the minimum SAD value Smin retained in the SAD value retaining device 1643 with each other to decide whether or not the calculated SAD value Sin is lower than the minimum SAD value Smin retained till then (step S65).
If it is decided at step S65 that the calculated SAD value Sin is lower than the minimum SAD value Smin, then the processing advances to step S66, at which the minimum SAD value Smin and the position information of the minimum SAD value Smin retained in the SAD value retaining device 1643 are updated.
In particular, the SAD value comparison device 1642 sends information DET of the result of detection that the calculated SAD value Sin is lower than the minimum SAD value Smin to the SAD value writing device 1641. Consequently, the SAD value writing device 1641 sends the calculated SAD value Sin and the position information or base phase reference vector of the SAD value Sin as information of the new minimum SAD value Smin to the SAD value retaining device 1643. The SAD value retaining device 1643 updates the minimum SAD value Smin and the position information of the minimum SAD value Smin retained therein with the received new SAD value Sin and position information, respectively.
Thereafter, the processing advances from step S66 to step S67. On the other hand, if it is decided at step S65 that the calculated SAD value Sin is not lower than the minimum SAD value Smin, then the processing advances to step S67 without carrying out the retaining information updating process at step S66.
At step S67, the matching processing unit 163 decides whether or not the matching process is completed at all of the positions or base phase reference vectors of the base phase reference blocks in the search range. If it is decided that an unprocessed base phase reference block still remains in the search range, then the processing returns to step S62 and the processes at the steps beginning with step S62 are repeated.
On the other hand, if it is decided at step S67 that the matching process is completed at all of the positions or base phase reference vectors of the base phase reference blocks in the search range, then the matching processing unit 163 receives the position information of the minimum SAD value Smin from the SAD value retaining device 1643 and carries out re-calculation of the SAD value regarding the base phase reference blocks at the positions of the four neighboring points. Then, the matching processing unit 163 supplies the re-calculated neighboring SAD values to the SAD value retaining device 1643 through the SAD value writing device 1641 so as to be retained into the SAD value retaining device 1643 (step S68).
Then, in the motion vector calculation unit 164, the X direction (horizontal direction) neighboring value extraction device 1644 and the Y direction (vertical direction) neighboring value extraction device 1645 read out the detected minimum SAD value Smin, neighboring SAD values Sx1, Sx2, Sy1 and Sy2 and position information of them, all retained in the SAD value retaining device 1643, and send the read out information to the quadratic curve approximation interpolation processing device 1646. The quadratic curve approximation interpolation processing device 1646 receives the information and carries out interpolation with a quadratic curve twice for the X direction and the Y direction to calculate a high-accuracy motion vector of the sub pixel accuracy in such a manner as described above (step S69). The block matching process on the base phase of the second example regarding one reference frame is completed therewith.
Where the present second example is compared with the first example described above, although the processing time increases by an amount corresponding to the re-calculation of the SAD value, the line memory is not required, and therefore the circuit scale can be reduced further from that of the first example. Besides, since the re-calculation of the SAD value is carried out only for the neighboring SAD values, it is carried out four times at most in the example described above, and therefore, the processing time does not increase very much.
It is to be noted that, while, in the description of the second example above, the minimum SAD value is detected and retained into the SAD value retaining device 1643, the SAD value comparison device 1642 may detect and retain position information or a reference vector of a reference block which exhibits the minimum SAD value. In this instance, when the first time block matching comes to an end, the SAD value comparison device 1642 may supply the position information of the minimum SAD value to the matching processing unit 163.
In this instance, in the re-calculation of the SAD value by the matching processing unit 163, the minimum SAD value is re-calculated in addition to the SAD values at the four neighboring points. Although the number of times of re-calculation of the SAD value thus becomes five, that is, increases by one, only the SAD value comparison device 1642 needs to operate during the first-time block matching, and the SAD value writing device 1641 and the SAD value retaining device 1643 need operate only to retain the re-calculated SAD values. Therefore, there is a merit that the processing operation is simplified.
Further, the processing by the motion detection and motion compensation section 16 can be executed parallelly and concurrently with regard to a plurality of target blocks set in a target frame. In this instance, it is necessary to provide a number of hardware systems for the motion detection and motion compensation section 16 equal to the number of target blocks to be processed parallelly and concurrently.
In the case of a method wherein a SAD table TBL is produced as in the example of the related art, it is necessary to produce a number of SAD tables equal to the number of target blocks, and a memory of a very great capacity is required. However, in the first example, only the capacity for one line of a SAD table is required per target block, and therefore, the memory capacity can be reduced significantly. Further, in the second example, since not even a line memory is required, the memory capacity can be reduced even further.
An example of operation of the hierarchical block matching process by the motion detection and motion compensation section 16 in the present embodiment is illustrated in flow charts of
It is to be noted that, while the flow of the process illustrated in
First, the motion detection and motion compensation section 16 reads in a reduction image of a target block, that is, a reduced phase target block, from the target block buffer unit 161 (step S71 of
Then, the matching processing unit 163 sets a reduced phase search range, sets a reduced phase reference vector (Vx/n, Vy/n: 1/n is the reduction ratio) within the set reduced phase search range, and then sets the reduced phase reference block position for calculating the SAD value (step S73). Then, pixel data of the set reduced phase reference block are read in from the reference block buffer unit 162 (step S74). Then, the matching processing unit 163 determines the sum total of the absolute values of the differences between the pixel data of the reduced phase target block and the reduced phase reference block, that is, the reduced phase SAD value, and signals the determined reduced phase SAD value to the motion vector calculation unit 164 (step S75).
In the motion vector calculation unit 164, the SAD value comparison device 1642 compares the SAD value Sin calculated by the matching processing unit 163 and the reduced phase minimum SAD value Smin retained in the SAD value retaining device 1643 with each other to decide whether or not the calculated SAD value Sin is lower than the reduced phase minimum SAD value Smin retained till then (step S76).
If it is decided at step S76 that the calculated SAD value Sin is lower than the reduced phase minimum SAD value Smin, then the processing advances to step S77, at which the reduced phase minimum SAD value Smin and the position information of the reduced phase minimum SAD value Smin retained in the SAD value retaining device 1643 are updated.
In particular, the SAD value comparison device 1642 sends information DET of the result of comparison that the calculated SAD value Sin is lower than the reduced phase minimum SAD value Smin to the SAD value writing device 1641. Consequently, the SAD value writing device 1641 sends the calculated SAD value Sin and the position information or reduced phase reference vector of the SAD value Sin as new reduced phase minimum SAD value Smin to the SAD value retaining device 1643. The SAD value retaining device 1643 uses the received new SAD value Sin and position information to update the reduced phase minimum SAD value Smin and the position information retained therein.
Then, the processing advances from step S77 to step S78. Meanwhile, if it is decided at step S76 that the calculated SAD value Sin is not lower than the reduced phase minimum SAD value Smin, the processing advances to step S78 without carrying out the updating process of the retained information at step S77.
At step S78, the matching processing unit 163 decides whether or not the matching process is completed at all of the positions or reduced phase reference vectors of the reduced phase reference blocks in the reduced phase search range. If it is decided that an unprocessed reduced phase reference block still remains in the reduced phase search range, then the processing returns to step S73 such that the processes at the steps beginning with step S73 are repeated.
On the other hand, if it is decided at step S78 that the matching process is completed at all of the positions or reduced phase reference vectors of the reduced phase reference blocks in the reduced phase search range, then the matching processing unit 163 receives the position information or reduced phase motion vector of the reduced phase minimum SAD value Smin from the SAD value retaining device 1643. Then, the matching processing unit 163 sets a base phase target block at a position centered at the position coordinates indicated by a vector obtained by multiplying the received reduced phase motion vector by a reciprocal to the reduction ratio, that is, by n, on the base phase target frame. Further, the matching processing unit 163 sets the base phase search range as a comparatively small range centered at the position coordinates indicated by the vector multiplied by n on the base phase target frame (step S79). Then, the matching processing unit 163 reads in pixel data of the base phase target block from the target block buffer unit 161 (step S80).
Then, the matching processing unit 163 sets an initial value for the base phase minimum SAD value Smin into the SAD value retaining device 1643 of the motion vector calculation unit 164, similarly as on the reduced phase (step S81 of
Then, the matching processing unit 163 sets a reference vector (Vx, Vy) within the base phase search range set at step S79 to set the base phase reference block position for calculating the SAD value (step S82). Then, the matching processing unit 163 reads in pixel data of the set base phase reference block from the reference block buffer unit 162 (step S83) and determines the sum total of absolute values of differences between pixel data of the base phase target block and the base phase reference block, that is, the base phase SAD value. Then, the matching processing unit 163 signals the determined base phase SAD value to the motion vector calculation unit 164 (step S84).
In the motion vector calculation unit 164, the SAD value comparison device 1642 compares the SAD value Sin calculated by the matching processing unit 163 and the base phase minimum SAD value Smin retained in the SAD value retaining device 1643 with each other to decide whether or not the calculated SAD value Sin is lower than the base phase minimum SAD value Smin which has been retained till then (step S85).
If it is decided at step S85 that the calculated SAD value Sin is lower than the base phase minimum SAD value Smin, then the processing advances to step S86, at which the base phase minimum SAD value Smin and the position information of the base phase minimum SAD value Smin retained in the SAD value retaining device 1643 are updated.
In particular, the SAD value comparison device 1642 sends information DET of the result of comparison that the calculated SAD value Sin is lower than the base phase minimum SAD value Smin to the SAD value writing device 1641. Consequently, the SAD value writing device 1641 sends the calculated SAD value Sin and the position information or base phase reference vector of the SAD value Sin as new information of the base phase minimum SAD value Smin to the SAD value retaining device 1643. The SAD value retaining device 1643 updates the base phase minimum SAD value Smin and the position information of the base phase minimum SAD value Smin retained therein with the received new SAD value Sin and position information.
Then, the processing advances from step S86 to step S87. On the other hand, if it is decided at step S85 that the calculated SAD value Sin is not lower than the base phase minimum SAD value Smin, then the processing advances to step S87 without carrying out the updating process of the retained information at step S86.
At step S87, the matching processing unit 163 decides whether or not the matching process is completed at all of the positions of the base phase reference blocks or base phase reference vectors in the base phase search range. If it is decided that an unprocessed base phase reference block still remains in the base phase search range, then the processing returns to step S82 such that the processes at the steps beginning with step S82 are repeated.
On the other hand, if it is decided at step S87 that the matching process is completed at all of the positions of the base phase reference blocks or base phase reference vectors in the base phase search range, then the matching processing unit 163 receives the position information or base phase motion vector of the base phase minimum SAD value Smin from the SAD value retaining device 1643. Then, the matching processing unit 163 carries out re-calculation of the base phase SAD values regarding the base phase reference blocks at the positions of the neighboring four points and supplies the re-calculated neighboring base phase SAD values to the SAD value retaining device 1643 through the SAD value writing device 1641 so as to be retained in the SAD value retaining device 1643 (step S88).
Then, in the motion vector calculation unit 164, the X direction (horizontal direction) neighboring value extraction device 1644 and the Y direction (vertical direction) neighboring value extraction device 1645 read out the detected base phase minimum SAD value Smin, the neighboring SAD values Sx1, Sx2, Sy1 and Sy2 and the position information of them, all retained in the SAD value retaining device 1643, and send them to the quadratic curve approximation interpolation processing device 1646. The quadratic curve approximation interpolation processing device 1646 receives the information and carries out interpolation with a quadratic curve twice, for the X direction and for the Y direction, to calculate a high-accuracy base phase motion vector of the sub pixel accuracy in such a manner as described hereinabove (step S89). The block matching process of the present example regarding one reference frame ends therewith.
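To make the flow of steps S79 to S89 concrete, the following Python sketch illustrates one way to realize the hierarchical matching described above: a wide SAD search on the reduced phase, a narrow re-search on the base phase centered at the reduced phase motion vector multiplied by n, and quadratic curve approximation interpolation for sub pixel accuracy. The function names, the block size and the search radii are illustrative assumptions, not values from the embodiment, and the sketch assumes that the minimum SAD position does not lie on the edge of the base phase search range and that all search positions stay inside the frames.

```python
import numpy as np

def sad(target, reference, top, left):
    # Sum of absolute differences between the target block and the reference
    # block whose top-left corner is at (top, left).
    h, w = target.shape
    ref = reference[top:top + h, left:left + w].astype(np.int32)
    return int(np.abs(target.astype(np.int32) - ref).sum())

def search_min_sad(target, reference, center, radius):
    # Full search of a (2*radius+1) x (2*radius+1) range around `center`;
    # returns the minimum SAD, its offset and the whole SAD table.
    cy, cx = center
    table, best = {}, (None, (0, 0))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            s = sad(target, reference, cy + dy, cx + dx)
            table[(dy, dx)] = s
            if best[0] is None or s < best[0]:
                best = (s, (dy, dx))
    return best[0], best[1], table

def quadratic_peak(s_prev, s_min, s_next):
    # Sub pixel offset of the vertex of the quadratic curve through three
    # SAD samples spaced one pixel apart.
    denom = s_prev - 2 * s_min + s_next
    return 0.0 if denom == 0 else 0.5 * (s_prev - s_next) / denom

def hierarchical_motion_vector(target_frame, reference_frame, pos,
                               block=16, n=4, reduced_radius=9, base_radius=4):
    # Reduced phase: crude 1/n decimation and a wide search.
    tgt_red, ref_red = target_frame[::n, ::n], reference_frame[::n, ::n]
    ry, rx = pos[0] // n, pos[1] // n
    tgt_red_block = tgt_red[ry:ry + block // n, rx:rx + block // n]
    _, (dy_r, dx_r), _ = search_min_sad(tgt_red_block, ref_red, (ry, rx), reduced_radius)
    # Base phase: small search centered at the reduced vector multiplied by n.
    tgt_block = target_frame[pos[0]:pos[0] + block, pos[1]:pos[1] + block]
    center = (pos[0] + dy_r * n, pos[1] + dx_r * n)
    s_min, (dy, dx), tbl = search_min_sad(tgt_block, reference_frame, center, base_radius)
    # Sub pixel refinement from the four neighboring SAD values (Sx1, Sx2, Sy1, Sy2).
    fx = quadratic_peak(tbl[(dy, dx - 1)], s_min, tbl[(dy, dx + 1)])
    fy = quadratic_peak(tbl[(dy - 1, dx)], s_min, tbl[(dy + 1, dx)])
    return (center[0] - pos[0] + dy + fy, center[1] - pos[1] + dx + fx)
```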
Now, effects of the image processing method which uses the hierarchical block matching technique according to the present embodiment are described in connection with particular examples.
As a comparative example, for example, as shown in
Consequently, in a reduced phase wherein an image is reduced to 1/n=1/4 in both of the horizontal direction and the vertical direction, as shown in
Where block matching is carried out on the reduced phase in which an image is reduced to 1/4 in the horizontal direction and the vertical direction, the reduced phase motion vector is a motion vector of 4-pixel accuracy, and if only the reduced phase motion vector is quadrupled simply, then an error occurs with the motion vector of the one-pixel accuracy. In particular, where the pixels on the base phase are such as shown in
However, at least it can be estimated that a motion vector of the one-pixel accuracy will exist within a range of 4 pixels around the matching processing point on the reduced phase indicated by the reduced phase motion vector.
Therefore, in the present embodiment, in base phase matching wherein a base phase search range is determined based on the calculated reduced phase motion vector, a base phase target block is set such that it is centered at a pixel position indicated by a reference vector obtained by multiplying the reduced phase motion vector by four, which is the reciprocal of the reduction ratio, and a search range for four pixels, that is, a base phase search range 140, is determined to carry out base phase block matching to calculate a motion vector again.
Accordingly, as seen in
The SAD table TBL in the case of the search range of 144×64 pixels before reduction as seen in
In contrast, the reduced phase SAD table in the case of the reduced phase search range of 36×16 pixels as seen in
Further, the base phase SAD table in the case of the base phase search range of 4×4 pixels as seen in
Accordingly, the number of times of a matching process where no hierarchical matching is applied is 145×65=9,425, but the number of times of matching where hierarchical matching is applied is 37×17+5×5=654. Consequently, it can be recognized that the processing time can be reduced significantly.
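The counts quoted above can be checked with a short calculation; a minimal sketch, assuming a 144×64 pixel base phase search range and a reduction ratio of 1/4 as in the comparative example:

```python
# Matching positions without hierarchical matching: every one-pixel position in
# the 144 x 64 pixel search range, inclusive of both ends.
full = (144 + 1) * (64 + 1)                              # 145 x 65 = 9,425

# Hierarchical matching: a 36 x 16 pixel reduced phase search range plus a
# 4 x 4 pixel base phase re-search.
hierarchical = (36 + 1) * (16 + 1) + (4 + 1) * (4 + 1)   # 37 x 17 + 5 x 5 = 654

print(full, hierarchical)                                # 9425 654
```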
Then, since the line memory in the case of the first example of the motion vector detection method described above may have a storage capacity for one line of the reduced phase SAD table, a memory having a memory capacity sufficient to store 37 SAD values and position information of them can be used for the line memory. Therefore, the storage capacity of the memory is very small in comparison with that of a memory used for the SAD table TBL which stores 9,425 SAD values and position information of them.
On the other hand, in the case of the second example of the configuration of the motion vector calculation unit 164 described hereinabove, since even the memory for one line of the reduced phase SAD table which stores 37 SAD values and position information of them becomes unnecessary, the circuit scale can be further reduced.
In this manner, according to the embodiment described above, by carrying out an interpolation process on the base phase after a hierarchical matching process is carried out, motion vector detection of the sub pixel accuracy can be carried out within a wide search range.
It is to be noted that
Then, the size of a base phase target block 131 and the size and the base phase reference vector number of the base phase matching processing range 144 are such as illustrated in
[Memory Capacity Reduction and Efficient Memory Accessing upon Still Picture NR Processing]
As described hereinabove, the present embodiment is ready for a still picture NR process and a moving picture NR process. The moving picture NR process requires the real-time property, that is, the speed, rather than the accuracy, whereas the still picture NR process requires a clean image free from noise even if a somewhat longer processing time is required.
As described hereinabove, where image superposition methods are roughly classified from the point of view of the system architect, two methods are available including the in-pickup addition method which is a technique of carrying out superposition of images on the real time basis during high speed successive image pickup upon still picture image pickup and the after-pickup addition method which is a technique of carrying out superposition of images after all reference images are prepared after high speed successive image pickup. In order to carry out a NR process in sufficient processing time, not the in-pickup addition method but the after-pickup addition method is desirable, and in the present embodiment, the after-pickup addition method is adopted.
However, according to the after-pickup addition method, reference images must be retained in an image memory, and there is a problem that, as the number of images to be superposed increases, an increased capacity is required for the image memory.
Taking the problem just described into consideration, the present embodiment takes a countermeasure which reduces, when the after-pickup addition method is used upon still picture NR processing, the capacity of the image memory section 4 as much as possible and besides can carry out memory accessing efficiently. This is described below.
First a data format upon memory accessing to the image memory section 4 in the present embodiment is described.
As seen in
The system bus 2 in the present embodiment has a bus width of, for example, 64 bits and has a burst length (which is the number of times by which burst transfer can be carried out successively in a unit of a predetermined plurality of pixel data) which is, for example, 16 bursts in the maximum.
In response to a reading-in request from the target block buffer unit 161 and the reference block buffer unit 162 of the motion detection and motion compensation section 16, the bus interface sections 21 and 22 produce a bus protocol including a start address and a burst length for a predetermined memory of the image memory section 4 and access the system bus 2 (AXI interconnect).
Image data for one pixel used in the image pickup apparatus of the present embodiment includes a luminance Y and a chroma C (Cr, Cb). The image data has the format of Y:Cr:Cb=4:2:2, and the luminance Y and the chroma C are each represented by 8 bits without a sign, so that the YC pixel data for one pixel includes 16 bits. Accordingly, four YC pixels are juxtaposed in the memory access width of 64 bits, and since the data processing unit of the system bus 2 is 64 bits, the data processing unit of the system bus 2 corresponds to 4 pixels.
Where the processing sections of the image pickup apparatus are connected to such a system bus 2 as shown in
Incidentally, in the present embodiment, the maximum burst length which can be processed by the memory controller (not shown) is 16 bursts as described above. For example, even in writing into the memory of the image memory section 4, the more transfers of 16 bursts can be used, the smaller the number of times of arbitration of the bus becomes and the higher the processing efficiency of the memory controller becomes; consequently, the bus band can be reduced.
Accordingly, in the present embodiment, the efficiency is high where YC pixel data for four pixels (64 bits) are bus accessed by 16 bursts. If this is converted into the number of pixels, then this corresponds to data for 4×16=64 pixels.
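The bus arithmetic above can be summarized as follows; this is only a sketch, and the constant names are illustrative, but the values (a 64-bit bus, 16-bit YC pixels and a maximum burst length of 16) are those assumed in the present embodiment.

```python
BUS_WIDTH_BITS = 64   # data processing unit of the system bus 2
PIXEL_BITS = 16       # one YC pixel: 8-bit luminance plus 8-bit chroma
MAX_BURST = 16        # maximum burst length handled by the memory controller

pixels_per_word = BUS_WIDTH_BITS // PIXEL_BITS      # 4 pixels per 64-bit bus word
pixels_per_transfer = pixels_per_word * MAX_BURST   # 64 pixels per 16-burst transfer
print(pixels_per_word, pixels_per_transfer)         # 4 64
```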
Therefore, in the present embodiment, an image for one screen is divided in a one-burst transfer unit of 64 pixels in the horizontal direction so that the image for one screen is composed of a plurality of image units (hereinafter referred to as image rectangles) of the divisional blocks as seen in
It is to be noted that, where the number of image data in the horizontal direction cannot be divided by 64 pixels, a dummy region is provided on the right side of the image in the horizontal direction as indicated by slanting lines in
The related-art raster scanning method is suitable for reading data line by line, since the addresses of the image memory appear successively in the horizontal direction upon accessing to the image memory. In contrast, the image rectangle accessing method is suitable for reading data of blocks each including 64 pixels or less in the horizontal direction, because the address is incremented in the vertical direction in a one-burst transfer unit of 64 pixels.
For example, when a block of the image rectangle type including 64 pixels×64 lines is to be read in, if the memory controller (not shown) of the image memory section 4 bus accesses YC image data for four pixels (64 bits) in 16 bursts, then data for 4×16=64 pixels are read by the 16 bursts. Therefore, after the address for the first line including 64 pixels in the horizontal direction is set, address setting for the pixel data of the remaining 63 lines can be carried out only by successively incrementing the address in the vertical direction.
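A hypothetical address generator for the image rectangle accessing method might look like the sketch below; the base address, the line stride and the byte layout are assumptions made only for illustration and do not describe the actual address generation unit.

```python
RECT_WIDTH_PIXELS = 64   # horizontal size of one image rectangle
BYTES_PER_PIXEL = 2      # one 16-bit YC pixel

def rect_line_address(base_addr, stride_bytes, rect_index, line):
    # Start address of one 64-pixel line inside image rectangle `rect_index`;
    # a single 16-burst transfer of 64-bit words covers exactly this line.
    return (base_addr
            + rect_index * RECT_WIDTH_PIXELS * BYTES_PER_PIXEL  # select the strip
            + line * stride_bytes)                              # step down one line

# Reading a 64 pixel x 64 line block: set the address of the first line, then
# simply keep adding the stride for the remaining 63 lines.
addresses = [rect_line_address(0x10000000, stride_bytes=4096, rect_index=4, line=ln)
             for ln in range(64)]
```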
The image memory section 4 in the present embodiment includes the memory controller (not shown) which in turn includes an address generation unit (not shown) for executing such an image rectangle accessing method as described above.
In the present embodiment, also the image correction and resolution conversion section 15 provided at the preceding stage to the motion detection and motion compensation section 16 is ready for accessing of the image rectangle accessing method described above so that the motion detection and motion compensation section 16 can access the image memory section 4 in accordance with the image rectangle accessing method. Further, in the present embodiment, also the still picture codec section 18 is ready for the image rectangle accessing method so that, in a still picture NR process upon still picture image pickup, a plurality of picked up images obtained by high speed successive image pickup and stored in a compressed form in the image memory section 4 can be read out and decompression decoded as occasion demands to allow the motion detection and motion compensation section 16 to carry out a block matching process.
In the case of the reduction ratio of 1/4 described hereinabove with reference to
When a base phase target block is to be read in from the image memory section 4, in order to make the most of the advantage of the image rectangle accessing method, it is accessed in units of 64 pixels×one line to raise the bus efficiency.
Accordingly, as seen in
The image rectangle accessing method is very useful not only in terms of the bus band but also in the internal processing of the circuits. First, the size of the internal line memory used to carry out a filter process in the vertical direction and so forth can be reduced. Further, for a circuit which carries out processing in a unit of a block, such as a resolution conversion process, there is an advantage that conversion from the image rectangle form into an arbitrary block form is more efficient than conversion from the raster scan form into an arbitrary block form.
As described hereinabove, in the present embodiment, the after-pickup addition method is adopted for the addition process in the still picture NR process upon still picture image pickup. Therefore, in the still picture NR process upon still picture image pickup, it is necessary to retain all reference images in the image memory section 4. In other words, in the present embodiment, if the shutter button is depressed, then a plurality of, that is, more than two, picked up images are captured by high speed successive image pickup and stored into the image memory section 4.
While, in the in-pickup addition method, it is necessary to retain reference images for two frames at most in the image memory section 4, in the after-pickup addition method, it is necessary to retain a greater number of reference images in the image memory section 4. Therefore, in the present embodiment, in order to reduce the storage capacity of the image memory section 4, picked up images captured by high speed successive image pickup are compression coded by the still picture codec section 18, and the compression coded image data are stored into the image memory section 4. Then, when the thus captured images are to be used as reference images, the image data in the compression coded form are decompressed and decoded and then used.
An outline of a flow of the still picture NR process upon still picture image pickup is described hereinabove with reference to
At step S1 of
Referring to
Thereafter, the picked up images are read out in the image rectangle form from the image memory section 4 and converted from the RAW signal into YC pixel data by the data conversion section 14, whereafter they are subjected to image correction, resolution conversion and so forth by the image correction and resolution conversion section 15. Thereafter, the picked up images are transferred directly to the still picture codec section 18 without the intervention of the image memory section 4, compression coded by the JPEG system, and written in the compression coded form into the image memory section 4. Accordingly, the unit in which the images are compressed here is not an entire image but an image rectangle. Upon high speed image pickup, the procedure described above is carried out repetitively for the plurality of picked up images, and compressed data for the number of picked up images are stored into and retained in the image memory section 4.
After compression coding and storage into the image memory section 4 of all picked up images captured upon high speed image pickup are completed, the target image setting process in the process at step S3 of
In particular, compressed image data of a picked up image captured first after the shutter button is depressed is read out in a unit of an image rectangle from the image memory section 4 and supplied to and decompression decoded by the still picture codec section 18. The decompressed decoded image data is written as a base phase target image Pbt in a unit of an image rectangle into the image memory section 4.
Then, the image data of the base phase target image Pbt in the decompressed decoded form written in the image memory section 4 is read out in a unit of an image rectangle and supplied to the image superposition section 17 through the motion detection and motion compensation section 16. Then, the image data is reduced by the reduced phase production unit 174 of the image superposition section 17 shown in
After the process for the target image is completed in such a manner as described above, a setting process for a reference image in the process at step S3 of
In particular, from among the plurality of picked up images in the compression coded form stored in the image memory section 4, the second picked up image, which is to be used as a reference image in block matching, is read out in a unit of an image rectangle from the image memory section 4 and is supplied to and decompressed and decoded by the still picture codec section 18. The decompressed and decoded image data is written in a unit of an image rectangle as a base phase reference image Pbr into the image memory section 4.
Then, the image data of the base phase reference image Pbr in the decompressed and decoded form written in the image memory section 4 is read out in a unit of an image rectangle and supplied to the image superposition section 17 through the motion detection and motion compensation section 16. Then, the image data is reduced by the reduced phase production unit 174 of the image superposition section 17 shown in
Thereafter, motion detection and motion compensation are carried out between the decompressed decoded target image and the decompressed decoded reference image. This corresponds to the processes at steps S4 to S9 of
First, a reduced phase block matching process is carried out. In particular, image data of a set reduced phase target block is read out from the reduced phase target image Prt of the image memory section 4 and stored into the target block buffer unit 161 of the motion detection and motion compensation section 16. Meanwhile, a reduced phase matching processing range corresponding to the search range of the reduced phase target block is read out from the reduced phase reference image Prr and stored into the reference block buffer unit 162.
Then, the image data of the reduced phase target block and the reduced phase reference block are read out from the buffer units 161 and 162, respectively, and are subjected to a reduced phase block matching process by the matching processing unit 163 to carry out detection of a reduced phase motion vector regarding the set reduced phase target block.
Then, a base phase target block and a base phase search range are set in such a manner as described hereinabove based on the reduced phase motion vector. Then, the set base phase target block is read out from the base phase target image Pbt and stored into the target block buffer unit 161, and the base phase matching processing range corresponding to the set base phase search range is read out and stored into the reference block buffer unit 162.
Then, the image data of the base phase target block and the base phase reference block are read out from the buffer units 161 and 162, respectively, and are subjected to a base phase block matching process by the matching processing unit 163. Thereafter, detection of a base phase motion vector of the pixel accuracy regarding the set base phase target block is carried out by the motion vector calculation unit 164.
Then, the motion detection and motion compensation section 16 reads out a motion compensation block from the base phase reference image based on the detected base phase motion vector and supplies the motion compensation block to the image superposition section 17 together with the base phase target block. The image superposition section 17 carries out image superposition in a unit of a block and places a base phase NR image Pbnr of a unit of a block of the result of the superposition into the image memory section 4. Further, the image superposition section 17 places also a reduced phase NR image Prnr produced by the reduced phase production unit 174 into the image memory section 4.
The procedure described above is carried out for all target blocks of the target image. Then, if the block matching for all target blocks is completed, then a block matching process similar to that described above is carried out repetitively (step S4 of
As described above, in the present embodiment, a NR image obtained by superposition of a target image and a reference image is used as a target image when subsequent superposition is carried out. Further, the base phase target block is not needed any more after the block matching process is completed.
From the characteristics described above, in the present embodiment, the image superposition section 17 overwrites a block after NR obtained by superposition of a base phase target block and a motion compensation block into an address in which the original base phase target block is placed.
For example, as seen in
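The overwrite can be pictured with the following minimal sketch; the 1:1 averaging used in superpose() is an assumption for illustration and may differ from the weighting actually applied by the image superposition section 17.

```python
import numpy as np

def superpose(a, b):
    # Simple 1:1 averaging as a stand-in for the superposition carried out by
    # the image superposition section 17.
    return ((a.astype(np.float64) + b.astype(np.float64)) / 2.0).astype(a.dtype)

def overwrite_nr_block(target_image, mc_block, top, left):
    # The block after NR is written back to the very region from which the
    # original base phase target block was read, so no separate output frame
    # is required for the still picture NR pass.
    h, w = mc_block.shape
    region = target_image[top:top + h, left:left + w]
    target_image[top:top + h, left:left + w] = superpose(region, mc_block)
```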
By carrying out memory accessing of the image rectangle accessing method by means of the image correction and resolution conversion section 15 and setting also the unit of processing in compression coding regarding still pictures to a form of an image rectangle in such a manner as described above, image data can be transferred directly from the image correction and resolution conversion section 15 to the still picture codec section 18 without the intervention of the image memory section 4. Consequently, images picked up by high-speed successive image pickup can be compression coded efficiently and stored into the image memory section 4.
Now, reduction of the storage capacity of the image memory upon decompression decoding of reference images is described.
In the still picture NR process upon still picture image pickup in the present embodiment, the target image from among the images captured in the compression coded form and stored in the image memory section 4 is decompressed and decoded in its entirety. However, as seen from the image data flow illustrated in
This is described in connection with an example. For example, it is assumed that a base phase target image is divided into image rectangles T0 to T9 of 64 pixels in the horizontal direction as seen in
Now, if it is assumed that block matching is being carried out for a target block in the image rectangle T4, then those image rectangles which may possibly be accessed as a matching processing range from among the image rectangles R0 to R9 of the base phase reference image can be restricted to several image rectangles around the image rectangle R4.
For example, if it is assumed that the reduction ratio is 1/4, then in order to take a matching processing range (refer to
Here, it is assumed that the matching process for all target blocks included in the image rectangle T4 is completed and a matching process for target blocks included in the image rectangle T5 is started as seen in
Therefore, in the present embodiment, when the image rectangle R7 is decompressed and decoded, the decompressed decoded image rectangle R7 is overwritten into the address at which the image rectangle R2 has been placed on the reference image memory of the image memory section 4. Consequently, the reference image data in the decompressed and decoded form on the reference image memory of the image memory section 4 are always only for five image rectangles. Consequently, the reference image memory of the image memory section 4, that is, the working area of the image memory section 4, can be efficiently saved.
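The window of decoded reference image rectangles can be sketched as follows; decode_rectangle() and the window size of five rectangles (corresponding to the 1/4 reduction ratio) are assumptions made only for illustration.

```python
class ReferenceRectangleWindow:
    # Keeps only the decoded image rectangles needed for the current matching
    # processing range; a newly required rectangle is decoded into the storage
    # freed by a rectangle that is no longer needed (for example, R7 overwrites
    # R2 when the target moves from the image rectangle T4 to T5).
    def __init__(self, decode_rectangle, num_slots=5):
        self.decode = decode_rectangle   # decompresses and decodes one rectangle
        self.num_slots = num_slots
        self.slots = {}                  # rectangle index -> decoded image data

    def prepare(self, needed):
        # Drop rectangles that fell out of the new matching processing range.
        for idx in [i for i in self.slots if i not in needed]:
            del self.slots[idx]
        # Decode the rectangles that newly entered the range.
        for idx in needed:
            if idx not in self.slots:
                assert len(self.slots) < self.num_slots
                self.slots[idx] = self.decode(idx)

# For the target rectangle T4 the window holds R2..R6; moving to T5 requires
# R3..R7, so only R7 is decoded and it reuses the storage freed by R2.
```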
Where decoding of reference image data is carried out in accordance with the method described above, the line of the address on the reference image memory of the image memory section 4 varies in such a manner as seen from
On the other hand, where the reference images in the reference image memory corresponding to the target block in the image rectangle T5 are those among which the image rectangle R5 is positioned centrally as seen in
It is to be noted that the reference coordinates Sc0 to Sc9 of the image rectangles R0 to R9 are an example of address pointers to be used when the reference image memory is to be accessed. The memory address pointers are not limited to such address positions as in the example of
Where an image rectangle is used also as a unit for compression coding of still images as described above, pixels only within a range necessary for a block matching process can be decoded, and the working area of the image memory section 4 can be cut down. Further, by decoding an image rectangle of a reference image to be used next and overwriting the decoded image rectangle at the address of an image rectangle of an unnecessary reference image in the image memory, the size of the decoded reference image in the working area can always be fixed.
It is to be noted that, in the example described hereinabove with reference to
Then, if the size of the matching processing range in the horizontal direction is equal to an integral number of times the size of an image rectangle in the horizontal direction and the matching processing range and the delimiting positions of an image rectangle coincide with each other, then useless data accessing is eliminated, and the efficiency is further raised.
If the size in the horizontal direction of the matching processing range set on the reference images is smaller than the size in the horizontal direction of an image rectangle which is a unit of compression, then the matching processing range may be included in one image rectangle, and in that case it is naturally necessary to decode only the one image rectangle. However, where the matching processing range extends over two image rectangles, the two image rectangles should be decoded.
Also in this instance, if the position of the target block changes to change the position of the matching processing range until the matching processing range comes to extend over two image rectangles, only that one of the image rectangles which newly enters the matching processing range may be decoded similarly as in the example described hereinabove.
Various still picture NR processes are available depending upon in what manner a target image and a reference image are selectively set or in what order different images are superposed. Several examples are described below.
Where a plurality of images are to be superposed, an image to be used as a reference for motion compensation is required. In the embodiment described above, the first picked up image is determined as a reference image considering that an image at an instant at which the shutter is operated is an image intended by the image pickup person. In other words, on a photograph picked up first when the image pickup person operates the shutter, an image picked up later in time is used to carry out superposition as described hereinabove with reference to
In the process described hereinabove with reference to the flow charts of
A concept of image superposition of the target addition method described above is illustrated in
The picked up images are denoted by Org0, Org1, Org2 and Org3 in an ascending order of the time interval from the point of time at which the shutter is depressed. First, the picked up image Org0 is set to a target image and the picked up image Org1 is set to a reference image, and a motion compensation image MC1 is produced from them as seen in
Then, the picked up image Org0 and the motion compensation image MC1 are superposed to produce a NR image NR1. Then, the NR image NR1 is determined as a target image and the picked up image Org2 is determined as a reference image, and a motion compensation image MC2 is produced from them. Then, the NR image NR1 and the motion compensation image MC2 are superposed to produce a NR image NR2.
Then, the NR image NR2 is determined as a target image and the picked up image Org3 is determined as a reference image, and a motion compensation image MC3 is produced from them. Then, the NR image NR2 and the motion compensation image MC3 are superposed to produce a NR image NR3. The NR image NR3 is a finally synthesized NR image.
On the other hand, according to the reference addition method, first, the picked up image Org2 is determined as a target image and the picked up image Org3 is determined as a reference image to produce a motion compensation image MC3. Then, the picked up image Org2 and the motion compensation image MC3 are superposed to produce a NR image NR3.
Then, the picked up image Org1 is determined as a target image and the NR image NR3 is determined as a reference image to produce a motion compensation image MC2. The picked up image Org1 and the motion compensation image MC2 are superposed to produce a NR image NR2.
Thereafter, the picked up image Org0 is determined as a target image and the NR image NR2 is determined as a reference image to produce a motion compensation image MC1. Then, the picked up image Org0 and the motion compensation image MC1 are superposed to produce a NR image NR1. The NR image NR1 is a finally synthesized NR image.
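The two superposition orders can be contrasted in a short sketch. motion_compensate() is only a placeholder for the block matching and motion compensation described in this document, and the 1:1 averaging in superpose() is an assumption made for illustration.

```python
import numpy as np

def motion_compensate(target, reference):
    # Placeholder: a real implementation would detect motion vectors between
    # `target` and `reference` and assemble the motion compensation image.
    return reference

def superpose(a, b):
    return ((a.astype(np.float64) + b.astype(np.float64)) / 2.0).astype(a.dtype)

def target_addition(images):             # images = [Org0, Org1, Org2, Org3]
    nr = images[0]                        # the first picked up image is the target
    for ref in images[1:]:                # Org1, Org2, Org3
        mc = motion_compensate(target=nr, reference=ref)
        nr = superpose(nr, mc)            # NR1, NR2, NR3 ...
    return nr

def reference_addition(images):
    ref = images[-1]                      # start from the last picked up image
    for tgt in reversed(images[:-1]):     # Org2, Org1, Org0
        mc = motion_compensate(target=tgt, reference=ref)
        ref = superpose(tgt, mc)          # NR3, NR2, NR1 ...
    return ref
```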
A processing procedure according to the reference addition method is illustrated in flow charts of
First, if the shutter button is depressed, then in the image pickup apparatus of the present example, a plurality of images are picked up at a high speed under the control of the CPU 1. In the present example, picked up image data of M images or M frames (M is an integer equal to or greater than 2) to be superposed upon still picture image pickup are captured and placed into the image memory section 4 (step S91).
Then, the reference frame is set to an Nth (N is an integer equal to or greater than 2 and has a maximum value of M) one of the M image frames accumulated in the image memory section 4, and to this end, the control unit 165 sets the initial value of the value N to N=M−1 (step S92). Then, the control unit 165 determines the Nth image frame as a target image or target frame and determines the (N+1)th image as a reference image or reference frame (step S93).
Then, the control unit 165 sets a target block in the target frame (step S94), and the motion detection and motion compensation section 16 reads the target block from the image memory section 4 into the target block buffer unit 161 (step S95). Then, the motion detection and motion compensation section 16 reads pixel data within a matching processing range into the reference block buffer unit 162 (step S96).
Thereafter, the control unit 165 reads out a reference block in the search range from the reference block buffer unit 162, and the hierarchical matching process of the present embodiment is carried out. It is to be noted that, in the present example, the SAD values calculated by the matching processing unit 163 are sent to the motion vector calculation unit 164, which retains the minimum value of the SAD values and the SAD values in the proximity of the minimum value so that a quadratic curve approximation interpolation process can be carried out. After this process is repeated for all reference vectors in the search range, the interpolation process described hereinabove is executed by the quadratic curve approximation interpolation processing device, and a high-accuracy motion vector is outputted (step S97).
Then, the control unit 165 reads out a motion compensation block from the reference block buffer unit 162 in accordance with the high-accuracy motion vector detected in such a manner as described above (step S98) and sends the motion compensation block to the image superposition section 17 in synchronism with the target block (step S99).
Then, the image superposition section 17 carries out superposition of the target block and the motion compensation block and places NR image data of the block obtained by the superposition into the image memory section 4 under the control of the CPU 1. In particular, the image superposition section 17 writes the NR image data of the superposed block into the image memory section 4 (step S100).
Then, the control unit 165 decides whether or not the block matching is completed for all target blocks in the target frame (step S101). If it is decided that the block matching process for all target blocks is not completed as yet, then the processing returns to step S94, at which a next target block in the target frame is set so that the processes at steps S94 to S101 are repeated.
If it is decided at step S101 that the block matching regarding all target blocks in the target frame is completed, then the control unit 165 decides whether or not the process for all reference frames to be superposed is completed (step S102).
If it is decided at step S102 that the process for all reference frames is not completed as yet, then the value N is decremented to N=N−1 (step S103). Then, the NR image produced by the superposition at step S100 is determined as a reference image or reference frame, and the Nth image after the decrement is determined as a target image or target frame (step S104). Thereafter, the processing returns to step S94 so that the processes at the steps beginning with step S94 are repeated. On the other hand, if it is decided at step S102 that the process for all reference frames is completed, then the present processing routine is ended.
It is to be noted that the image data of the NR image of the result of the superposition of the M picked up images are compression coded by the still picture codec section 18 and supplied to the recording and reproduction apparatus section 5, by which they are recorded on the recording medium.
The target addition method is advantageous in that it can be applied to both of the in-pickup addition method and the after-pickup addition method. On the other hand, while the reference addition method can be applied only to the after-pickup addition, since the distance in the temporal direction between the target image and the reference image is fixed, the reference addition method is advantageous in that, even if the search range is same, the range over which a motion can be covered is wider than that by the target addition method.
Now, a unit of superposition is described.
In the still picture processing technique illustrated in the flow charts of
On the other hand, it is possible to set a target block first, carry out a block matching process for all reference images and the target block and select, after the target block and motion compensation blocks of all reference images are superposed, a next target block (hereinafter referred to as collective addition method).
First, a target block TB0 is set within the picked up image Org0, and a motion compensation block MCB1 is produced from a matching processing range of the picked up image Org1. The target block TB0 and the motion compensation block MCB1 are superposed to produce a NR image block NRB1.
Then, the NR image block NRB1 is determined as a target block, and a motion compensation block MCB2 is produced from the matching processing range of the picked up image Org2. Then, the NR image block NRB1 and the motion compensation block MCB2 are superposed to produce a NR image block NRB2.
Then, the NR image block NRB2 is determined as a target block, and a motion compensation block MCB3 is produced from the matching processing range of the picked up image Org3. Then, the NR image block NRB2 and the motion compensation block MCB3 are superposed to produce a NR image block NRB3. This NR image block NRB3 is one block finally obtained by the synthesis of the NR images. The procedure described above is carried out for each target block to complete one NR image.
Since the sequential addition method described above involves accessing to reference images one by one, it is compatible with the method of decompressing and decoding a reference image in a unit of an image rectangle in the embodiment described hereinabove and has an advantage that the image memory capacity can be reduced.
While the sequential addition method involves rewriting of an image into the image memory section 4 every time superposition of images is carried out, the collective addition method can successively carry out superposition of all images without writing back any image into the image memory section 4. Accordingly, there is an advantage that the word length of addition data can be set long within the motion detection and motion compensation section 16 to carry out addition of high accuracy.
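The collective addition for one target block can be sketched as follows. find_mc_block() stands in for the matching processing unit 163 and the motion vector calculation unit 164, and the running-average superposition is an illustrative assumption; the point is that the wide accumulator stays inside the motion detection and motion compensation section 16 and only the final result is written back.

```python
import numpy as np

def collective_addition_for_block(target_block, reference_images, find_mc_block):
    # The block after NR so far serves as the target for the next matching, but
    # it is kept in a wide internal word length and is not written back to the
    # image memory until all reference images have been superposed.
    nr_block = target_block.astype(np.float64)
    weight = 1.0
    for ref in reference_images:                 # Org1, Org2, Org3, ...
        mc = find_mc_block(nr_block, ref).astype(np.float64)
        weight += 1.0
        nr_block += (mc - nr_block) / weight     # running-average superposition
    return nr_block.astype(target_block.dtype)   # single write-back per block
```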
A processing procedure of operation according to the collective addition method is illustrated in
First, if the shutter button is depressed, then in the image pickup apparatus in the present example, image pickup of a plurality of images is carried out at a high speed under the control of the CPU 1. In the present example, picked up image data of M images or M frames (M is an integer equal to or greater than 2) to be superposed upon still picture image pickup are captured and placed into the image memory section 4 (step S111).
Then, the control unit 165 sets a target block in the target frame (step S112). Then, since the reference frame is set to the Nth image frame (N is an integer equal to or greater than 2 and has a maximum value of M) in time from among the M image frames accumulated in the image memory section 4, the control unit 165 sets the initial value of the value N to N=2 (step S113).
Then, the control unit 165 sets the first image frame as a target image or target frame and sets the N=2nd image as a reference image or reference frame (step S114).
Then, the control unit 165 reads in the target block from the image memory section 4 into the target block buffer unit 161 (step S115) and reads pixel data in the matching processing range into the reference block buffer unit 162 (step S116).
Then, the control unit 165 reads out the reference block in the search range from the reference block buffer unit 162, and the matching processing unit 163 carries out a hierarchical matching process. It is to be noted that, in the present example, the SAD values calculated by the matching processing unit 163 are sent to the motion vector calculation unit 164 in order that the motion vector calculation unit 164 retains the minimum value among the SAD values and the SAD values in the proximity of the minimum value and carries out a quadratic curve approximation interpolation process. After this is repeated for all reference vectors in the search range, the quadratic curve approximation interpolation processing device carries out the above-described interpolation process and outputs a high-accuracy motion vector (step S117).
Then, the control unit 165 reads out a motion compensation block from the reference block buffer unit 162 in accordance with the high-accuracy motion vector detected in such a manner as described above (step S118) and sends the motion compensation block to the image superposition section 17 at the succeeding stage in synchronism with the target block (step S119).
Then, the image superposition section 17 carries out superposition of the target block and the motion compensation block and places the NR image data of the superposed block into the image memory section 4 under the control of the CPU 1. In particular, the image superposition section 17 writes the NR image data of the superposed block into the image memory section 4 (step S121 of
Then, the control unit 165 decides whether or not the process is completed for all reference frames to be superposed, that is, whether or not M=N is satisfied (step S122). If it is decided at step S122 that M=N is not satisfied, then the value N is incremented to N=N+1 (step S123). Then, the control unit 165 sets the NR image produced by the superposition at step S121 as a target image or target frame, and sets the Nth image after the increment as a reference image or reference frame (step S124). Thereafter, the processing returns to step S115 so that the processes at the steps beginning with step S115 are repeated.
Then, if it is decided at step S122 that M=N is satisfied, then it is decided whether or not the block matching is completed for all target blocks in the target frame (step S125). Then, if it is decided that the block matching process is not completed for all target blocks, then the processing returns to step S112, at which a next target block in the target frame is set. Then, the processes at the steps beginning with step S112 are repeated.
Further, if it is decided at step S125 that the block matching for all target blocks in the target frame is completed, then the control unit 165 ends this processing routine.
Then, image data of the NR image of a result of the superposition of the M picked up images are compression coded by the still picture codec section 18 and supplied to the recording and reproduction apparatus section 5, by which they are recorded on the recording medium.
If the image pickup condition, the required NR accuracy, the setting information and so forth are changed to change the selection method for a target image and a reference image, the order of superposition or the calculation method of superposition described hereinabove with reference to
According to the embodiment described above, since the image correction and resolution conversion section 15 carries out memory accessing in a format wherein an image is divided into image rectangles and also the compression coding by the still picture codec section 18 is carried out in a unit of an image rectangle similarly, data can be transferred directly from the image correction and resolution conversion section 15 to the still picture codec section 18 without the intervention of a memory. Consequently, images picked up successively at a high speed can be compression coded efficiently and stored into the image memory section 4.
Further, since, when the compression coded images stored in the image memory section 4 are used as reference images, each image is decompressed and decoded in a unit of an image rectangle obtained by division of the image, data only within a range necessary for a block matching process can be decoded. Consequently, the working area of the image memory section 4 can be cut down. Further, since a reference image rectangle to be used subsequently is decoded and overwritten at an address of an unnecessary reference image rectangle of the image memory section 4, the size of decoded reference image in the working area can always be kept fixed.
Further, as described hereinabove, according to the present embodiment, where the selection method for a target image and a reference image, the order of superposition or the calculation method of superposition is changed depending upon setting information or the like, a desired NR image can be obtained.
While, in the embodiment described above, the JPEG system is used as the compression coding method in the still picture codec section 18, this is a mere example, and any image compression coding system may naturally be applied.
Further, while, in the embodiment described above, compression coding of picked up images upon block matching is carried out by the compression coding processing section for compression coding images recorded upon still picture image pickup, different methods may be used for the compression method for the recording process and the compression method for picked up images used upon block matching.
Further, while, in the embodiment described above, the present application is applied where three or more picked up images are superposed in the still picture NR process, the present application can be applied also where two picked up images are superposed. Accordingly, the present application may be applied also where a moving picture NR process is carried out. Although the compression coding method for the reference image in this instance may be the JPEG system, a simpler image data compression method may be used instead.
Further, while an image is divided into units each including a plural number of pixels in the horizontal direction so as to allow memory accessing to the image as data in a unit of an image rectangle, this is because the direction of reading out and writing of image data from and into the image memory is determined with reference to the horizontal direction. However, where the direction of reading out and writing of image data from and into the image memory is the vertical direction, if an image is divided into units each including a plural number of lines in the vertical direction so as to allow memory accessing to the image as data in a unit of a horizontal image rectangle, then similar operation and working effects to those described above can naturally be achieved. It is also a matter of course that an image may be divided in both of the horizontal direction and the vertical direction.
Further, while, in the embodiment described above, not only a reference image but also a target image are compression coded and fetched once into the image memory, the target image may not necessarily be compression coded because it is necessary for the image processing apparatus to compression code and retain at least a reference image.
Further, in the first example of the motion vector calculation unit in the embodiment described above, the searching direction in the search range is set to the horizontal line direction, the search is carried out such that the reference block is moved in order, for example, from the upper corner of the search range, and the memory for one line of the SAD table is provided. However, a different search method may be adopted wherein the searching direction in the search range is set to the vertical direction and the following procedure is repeated: the search is started in the vertical direction, for example, from the left upper corner of the search range; after the search for one column in the vertical direction comes to an end, the position of the reference block is shifted by a one-unit distance, for example, by a one-pixel distance, to the vertical column on the right side; and the search is carried out in the vertical direction from the top end of that column. Where the search is carried out in such a manner that the reference block is shifted in the vertical direction in order from the left upper corner of the search range in this manner, a memory for one vertical column of the SAD table may be provided.
Here, whether the searching direction should be set to a horizontal direction or a vertical direction is preferably determined such that the matching processing unit and the motion vector calculation unit can be achieved in a comparatively reduced circuit scale.
It is to be noted that, as described hereinabove, the movement of the reference block may be carried out not only for each one pixel or each one line but for each plurality of pixels or each plurality of lines. Accordingly, the memory for one line in the horizontal direction in the former case may have a capacity for moving positions of the reference block in the horizontal direction, and the memory for one vertical column in the latter case may have a capacity for moving positions of the reference block in the vertical direction. In particular, where the movement of the reference block is carried out for each one pixel or for each one line, the memory for one line must have a capacity for the number of pixels of one line, but the memory for one vertical column must have a capacity for the number of the lines. However, where the reference block is moved for each plurality of pixels or for each plurality of lines, the memory for one line or the memory for one column may have a smaller capacity than that required where the reference block is moved for each one pixel or for each one line.
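As a small illustration of this sizing rule (the search extents and steps below are illustrative assumptions, not values from the embodiment):

```python
def line_memory_entries(search_extent, step):
    # Number of reference block positions, i.e. SAD entries, to be retained for
    # one row (horizontal search) or one column (vertical search).
    return search_extent // step + 1

print(line_memory_entries(144, 1))   # 145 entries when moving one pixel at a time
print(line_memory_entries(64, 1))    # 65 entries for one vertical column
print(line_memory_entries(144, 4))   # 37 entries when moving four pixels at a time
```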
Further, the method for an interpolation process is not limited to the quadratic curve approximation interpolation process described hereinabove, but an interpolation process wherein a cubic curve or a higher order curve is used may be carried out.
Further, while, in the embodiment described above, the image processing apparatus according to the present application is applied to an image pickup apparatus, the present application can be applied not only to an image pickup apparatus but also to any apparatus wherein a motion between image frames is detected.
Further, while, in the embodiment described above, the present application is applied where a motion vector is detected in a unit of a block in order to carry out a noise reduction process by superposition of images, the application is not limited to this, but the present application can naturally be applied to detection of a motion vector, for example, by camera shake upon image pickup. A motion vector by camera shake can be determined, for example, as an average value of a plurality of block motion vectors.
It should be understood that various changes and modifications to the presently preferred embodiments described herein will be apparent to those skilled in the art. Such changes and modifications can be made without departing from the spirit and scope of the present subject matter and without diminishing its intended advantages. It is therefore intended that such changes and modifications be covered by the appended claims.