IMAGE CAPTURING APPARATUS HAVING SUBJECT CUT-OUT FUNCTION

Information

  • Patent Application
    20100149210
  • Publication Number
    20100149210
  • Date Filed
    December 14, 2009
  • Date Published
    June 17, 2010
Abstract
An image capturing apparatus includes: a first image capturing unit configured to capture a subject-including image including a subject image under a first capturing condition; a second image capturing unit configured to capture a background image under a second capturing condition that is substantially the same as the first capturing condition; a positioning unit configured to perform positioning of the subject-including image and the background image; a difference generating unit configured to generate differential information indicating a difference between each corresponding pair of pixels in the subject-including image and in the background image as positioned by the positioning unit; and a subject extracting unit configured to extract a subject region including the subject image from the subject-including image based on the differential information generated by the difference generating unit.
Description
CROSS-REFERENCE TO THE RELATED APPLICATION(S)

The present application is based upon and claims priority from prior Japanese Patent Application No. 2008-319633, filed on Dec. 16, 2008, from prior Japanese Application No. 2009-001484, filed on Jan. 7, 2009, and from prior Japanese Application No. 2009-278378, filed on Dec. 8, 2009, the entire contents of which are incorporated herein by reference.


TECHNICAL FIELD

The present invention relates to an image capturing apparatus, an image processing method, and a computer program for extracting a subject region from a captured image.


BACKGROUND

Conventionally, there is known an image capturing apparatus provided with an application program for extracting a subject image mainly including an image of a subject. The application program causes the image capturing apparatus to capture an image of the subject against a background, and then to capture a background image without the subject against the same background while the image capturing apparatus is kept unmoved. The application program then generates differential information from the captured image including the subject and the captured background image, and extracts the subject image from the captured image. An example of such an image capturing apparatus is disclosed in JP-A-10-021408.


However, if the background image is taken with the image capturing apparatus held in a hand after the image including the subject is taken, the angle of view tends to shift between the two shots. As a result, differences arise between the pixel values of the background portions of the two images when a subject region is extracted, and a portion of the background is likely to be erroneously recognized as the subject image.


SUMMARY

An object of the present invention is to provide an image capturing apparatus, an image processing method, and a computer readable medium including a program capable of increasing the accuracy of extraction of a subject region.


According to a first aspect of the present invention, there is provided an image capturing apparatus including: a first image capturing unit configured to capture a subject-including image including a subject image being an image of a subject and an image of a background under a first capturing condition; a second image capturing unit configured to capture a background image including the image of the background with no image of the subject under a second capturing condition that is substantially the same as the first capturing condition; a positioning unit configured to perform positioning of the subject-including image and the background image; a difference generating unit configured to generate differential information indicating a difference between each corresponding pair of pixels in the subject-including image and in the background image as positioned by the positioning unit; and a subject extracting unit configured to extract a subject region including the subject image from the subject-including image based on the differential information generated by the difference generating unit.


According to a second aspect of the present invention, there is provided an image processing method including: capturing a subject-including image including a subject image being an image of a subject and an image of a background under a first capturing condition; capturing a background image including the image of the background with no image of the subject under a second capturing condition that is substantially the same as the first capturing condition; performing positioning of the subject-including image and the background image; generating differential information indicating a difference between each corresponding pair of pixels in the subject-including image and in the background image as positioned; and extracting a subject region including the subject image from the subject-including image based on the differential information.


According to a third aspect of the present invention, there is provided a computer readable medium storing a software program that causes a computer to perform image processing including: capturing a subject-including image including a subject image being an image of a subject and an image of a background under a first capturing condition; capturing a background image including the image of the background with no image of the subject under a second capturing condition that is substantially the same as the first capturing condition; performing positioning of the subject-including image and the background image; generating differential information indicating a difference between each corresponding pair of pixels in the subject-including image and in the background image as positioned; and extracting a subject region including the subject image from the subject-including image based on the differential information.





BRIEF DESCRIPTION OF THE DRAWINGS

A general configuration that implements the various features of the present invention will be described with reference to the drawings. The drawings and the associated descriptions are provided to illustrate embodiments of the invention and not to limit the scope of the invention.



FIG. 1 is a block diagram showing a general configuration of an image capturing apparatus according to an embodiment of the present invention.



FIG. 2 is a flowchart of an example subject cutting-out process of the image capturing apparatus of FIG. 1.



FIG. 3 is a flowchart that follows the flowchart of FIG. 2.



FIGS. 4A and 4B show example image conversion models used in the subject cutting-out process of FIGS. 2 and 3.



FIG. 5 is a flowchart of an example subject extracting process which is part of the subject cutting-out process of FIGS. 2 and 3.



FIG. 6 is a flowchart of an example subject cut-out image generating process which is part of the subject cutting-out process of FIGS. 2 and 3.



FIGS. 7A-7C show schematic images for description of the subject cutting-out process of FIGS. 2 and 3.



FIGS. 8A-8C show other schematic images for description of the subject cutting-out process of FIGS. 2 and 3.



FIG. 9 is a flowchart of an example subject combined image generating process of the image capturing apparatus of FIG. 1.



FIG. 10 is a flowchart of an example image combining process which is part of the subject combined image generating process of FIG. 9.



FIG. 11 is a flowchart of another example subject cutting-out process of the image capturing apparatus of FIG. 1.



FIG. 12 is a flowchart that follows the flowchart of FIG. 11.





DETAILED DESCRIPTION

The embodiments according to the present invention will be described in detail with reference to the accompanying drawings. The scope of the claimed invention should not be limited to the examples illustrated in the drawings and those described below.



FIG. 1 is a block diagram showing a general configuration of an image capturing apparatus 100 according to an embodiment of the invention.


The image capturing apparatus 100 according to the embodiment captures (takes) a subject-including image P1 (see FIG. 7A) in which a subject image S is located on a background, under given capturing conditions that are suitable for the situation in which the subject S is present. Then, the image capturing apparatus 100 captures a background image P2 (see FIG. 7C) including no subject image S under the same capturing conditions as those used for the subject-including image P1. After positioning of the subject-including image P1, differential information is generated for each corresponding pair of pixels of the subject-including image P1 and the background image P2, and a subject region including the subject image S is extracted from the subject-including image P1 based on the pieces of differential information.


A more specific description will be made below. As shown in FIG. 1, the image capturing apparatus 100 is provided with a lens unit 1, an image capturing unit 2, a capturing control unit 3, an image data generating unit 4, a framebuffer 5, a feature amount calculating unit 6, a block matching unit 7, an image processing unit 8, a recording medium 9, a display control unit 10, a display unit 11, a user interface 12, a gyro sensor 14, and a processor (CPU) 13.


The capturing control unit 3, the feature amount calculating unit 6, the block matching unit 7, the image processing unit 8, and the CPU 13 may be designed as a single custom LSI 1A.


The lens unit 1 is provided with a plurality of lenses such as a zoom lens and a focusing lens.


Although not shown in any drawing, the lens unit 1 may be equipped with, for example, a zoom actuator and a focus actuator for actuating the zoom lens and the focusing lens, respectively, in the optical-axis direction in capturing the subject image S of a subject.


The image capturing unit 2, which is an image sensor such as a CCD (charge-coupled device) or a CMOS (complementary metal-oxide-semiconductor) sensor, converts an optical image obtained after passage of light through the various lenses of the lens unit 1 into a two-dimensional image signal.


Although not shown in the drawings, the capturing control unit 3 is provided with a timing generator and a driver. The capturing control unit 3 causes the image capturing unit 2 to convert an optical image into a two-dimensional image signal every given cycle by scan-driving the image capturing unit 2 using the timing generator and the driver. The capturing control unit 3 thus causes an image frame to be read from the imaging area of the image capturing unit 2 (one image frame at a time) and output to the image data generating unit 4.


The capturing control unit 3 also performs controls for adjusting the subject capturing conditions. More specifically, the capturing control unit 3 is provided with an AF module 3a which performs an automatic focusing control for adjusting the focusing conditions by moving the lens unit 1 in the optical-axis direction.


Furthermore, the capturing control unit 3 performs, as other controls for adjusting the capturing conditions, an AE (automatic exposure) control and an AWB (automatic white balance) control.


Where a subject cutting-out mode (described later) is set as a shooting mode, in response to a first shooting operation on a shutter button 12a by the user, the capturing control unit 3 causes the image capturing unit 2 to convert an optical image (that has passed through the lens unit 1) for a subject-including image P1 (see FIG. 7A), in which a subject image S (e.g., an automobile image) is located on a background, into a two-dimensional image signal under given capturing conditions and causes an image frame of the subject-including image P1 to be read from the imaging area of the image capturing unit 2.


After the subject-including image P1 is taken, the capturing control unit 3 keeps a state in which the capturing conditions are fixed to those of the subject-including image P1. Then, in response to a second shooting operation on the shutter button 12a by the user, the capturing control unit 3 causes the image capturing unit 2 to convert an optical image (that has passed through the lens unit 1) for a background image P2 (see FIG. 7C) having no subject image S on the same background as the background of the subject-including image P1 into a two-dimensional image signal under the above-mentioned fixed conditions and causes an image frame of the background image P2 to be read from the imaging area of the image capturing unit 2.


The lens unit 1, the image capturing unit 2, and the capturing control unit 3 are configured to serve as a first image capturing unit for capturing, under given capturing conditions, a subject-including image P1 in which a subject image S is located on a background, and as a second image capturing unit for capturing a background image P2 having no subject image S on the same background as the background of the subject-including image P1.


After performing appropriate gain adjustments on respective color components (R, G, and B) of an analog-value signal of the image frame transferred from the image capturing unit 2, the image data generating unit 4 subjects resulting signals to sample-and-hold by sample-hold circuits (not shown), conversion into digital data by A/D converters (not shown), and color processing including pixel interpolation and γ-correction by a color processing circuit (not shown). Then, the image data generating unit 4 generates a digital-value luminance signal Y and color-difference signals Cb and Cr (YUV data).


The luminance signal Y and the color-difference signals Cb and Cr which are output from the color processing circuit are DMA-transferred to the framebuffer 5 (used as a buffer memory) via a DMA controller (not shown).


A demosaicing module (not shown) for developing A/D-converted digital data may be incorporated in the custom LSI 1A.


The framebuffer 5, which is configured by a DRAM, for example, temporarily stores data to be processed by the feature amount calculating unit 6, the block matching unit 7, the image processing unit 8, and the CPU 13.


The feature amount calculating unit 6 performs feature extraction processing for extracting feature points from the background image P2 using the background image P2 itself as a reference. More specifically, the feature amount calculating unit 6 selects a given number (or more) of feature-rich blocks (feature points) based on YUV data of the background image P2 and extracts the contents of those blocks as templates (e.g., squares of 16×16 pixels).


The feature extraction processing is processing of selecting blocks that are suitable for tracking from a large number of candidate blocks. More specifically, this processing calculates a gradient covariance matrix of each candidate block, employs, as an evaluation value, the minimum eigenvalue of the gradient covariance matrix or the result of applying a Harris operator to it, and selects candidate blocks having large evaluation values using an absolute threshold value or according to relative ranks. In this manner, the feature extraction processing excludes regions that are not suitable for tracking because they are flat or high in randomness, and selects, as templates, regions that are suitable for tracking such as corners of objects or patterns.
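For illustration only (not part of the patent disclosure), the block selection described above might be sketched roughly as follows in Python with NumPy; the 16×16 block size follows the text, while the ranking by minimum eigenvalue, the number of blocks kept, and the function name are assumptions.

```python
import numpy as np

def select_feature_blocks(gray, block=16, max_blocks=64):
    """Pick blocks whose gradient covariance matrix has a large minimum
    eigenvalue (corner-like, easy-to-track regions)."""
    gy, gx = np.gradient(gray.astype(np.float64))
    h, w = gray.shape
    candidates = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            ix = gx[y:y + block, x:x + block]
            iy = gy[y:y + block, x:x + block]
            a = float((ix * ix).sum())
            b = float((ix * iy).sum())
            c = float((iy * iy).sum())
            # minimum eigenvalue of the 2x2 gradient covariance matrix [[a, b], [b, c]]
            lmin = 0.5 * ((a + c) - np.sqrt((a - c) ** 2 + 4.0 * b * b))
            candidates.append((lmin, x, y))
    candidates.sort(reverse=True)  # rank candidate blocks by evaluation value
    return [(x, y) for _, x, y in candidates[:max_blocks]]
```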


The block matching unit 7 performs block matching processing for positioning between a background image (reference image) P2 and a subject-including image (target image) P1. More specifically, the block matching unit 7 determines what region of the subject-including image P1 each template that has been extracted by the feature extraction processing corresponds to, that is, searches the subject-including image P1 for a position (corresponding region) that gives a best match to the pixel values of each template. The block matching unit 7 determines, as a motion vector of each template, a most appropriate offset between the background image P2 and the subject-including image P1 that provides a best evaluation value (e.g., the sum of squares of differences (SSD) or the sum of absolute values of differences (SAD)) of the degree of pixel value difference.
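A minimal sketch of such an SSD-based template search, assuming NumPy; the search radius, the grayscale inputs, and the function name are illustrative assumptions, and a practical implementation would restrict and optimize the search.

```python
import numpy as np

def match_template_ssd(target, template, x0, y0, radius=16):
    """Search `target` around (x0, y0) for the offset that minimizes the
    sum of squared differences (SSD) against `template`."""
    th, tw = template.shape
    tpl = template.astype(np.float64)
    best = (0, 0, float("inf"))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + th > target.shape[0] or x + tw > target.shape[1]:
                continue  # offset would fall outside the target image
            diff = target[y:y + th, x:x + tw].astype(np.float64) - tpl
            ssd = float((diff * diff).sum())
            if ssd < best[2]:
                best = (dx, dy, ssd)
    return best  # (dx, dy) is the motion vector of this template
```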


The image processing unit 8 is provided with a positioning module 8a for performing positioning between a subject-including image P1 and a background image P2.


The positioning module 8a performs positioning between a subject-including image P1 and a background image P2 based on feature points extracted from the background image P2. That is, the positioning module 8a is provided with a coordinate conversion formula calculating module 8b for calculating a coordinate conversion formula for coordinate conversion, in relation to the background image P2, of the pixels of the subject-including image P1 based on the feature points extracted from the background image P2. The positioning module 8a positions the subject-including image P1 with respect to the background image P2 by coordinate-converting the subject-including image P1 according to the coordinate conversion formula calculated by the coordinate conversion formula calculating module 8b.


More specifically, the coordinate conversion formula calculating module 8b employs, as a motion vector of the entire image, a motion vector that is statistically determined to cover a given percentage (e.g., 50%) or more of the motion vectors of the templates selected by the block matching unit 7 by performing a majority-decision operation on the motion vectors, and calculates a projective transformation matrix for the subject-including image P1 using a feature point correspondence relating to the motion vector thus employed. Then, the positioning module 8a positions the subject-including image P1 with respect to the background image P2 by coordinate-converting the subject-including image P1 according to the calculated projective transformation matrix.
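The following sketch illustrates one possible reading of this majority-decision step, assuming NumPy and OpenCV; the exact-match voting on rounded motion vectors and the use of cv2.findHomography as the least-squares fit are assumptions, not the patent's stated implementation.

```python
import numpy as np
import cv2

def estimate_projection(bg_pts, fg_pts, min_ratio=0.5):
    """bg_pts: feature points in the background image P2; fg_pts: the matched
    positions of the same templates in the subject-including image P1."""
    bg_pts = np.float32(bg_pts)
    fg_pts = np.float32(fg_pts)
    mv = np.round(fg_pts - bg_pts).astype(int)      # motion vector per template
    votes = {}
    for i, v in enumerate(map(tuple, mv)):
        votes.setdefault(v, []).append(i)
    winner = max(votes.values(), key=len)           # majority-decision operation
    if len(winner) < max(4, int(min_ratio * len(mv))):
        return None                                 # treat as calculation failure
    src = fg_pts[winner]                            # points in P1
    dst = bg_pts[winner]                            # corresponding points in P2
    H, _ = cv2.findHomography(src, dst, 0)          # least-squares fit
    return H  # cv2.warpPerspective(P1, H, (w, h)) aligns P1 with P2
```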


The coordinate conversion formula calculating module 8b is configured to serve as a coordinate conversion formula calculating unit for determining a coordinate conversion formula for the pixels of a subject-including image P1 based on feature points extracted from a background image P2. The positioning module 8a is configured to serve as a positioning unit for performing positioning between a subject-including image P1 and a background image P2.


The positioning method described above is an example of methods that are implemented in the image capturing apparatus 100. For example, the reliability of a motion vector of the entire image may be increased by performing processing of determining the effectiveness of a tracking result (feature point correspondence) of each template in calculating a motion vector of the entire image.


More specifically, the block matching unit 7 sets two sub-templates from each template extracted by the feature extraction processing by, for example, direct-sum-dividing it into a checkered pattern. The block matching unit 7 determines what regions of the subject-including image P1 the two sub-templates correspond to; that is, the block matching unit 7 evaluates the degree of pixel value difference for each offset while doing coordinate offsetting in the search region and determines an offset that is evaluated as a best-match offset. The block matching unit 7 calculates an evaluation value of the template by adding together the calculated evaluation values of the two sub-templates, and employs, as sub-motion vectors of the sub-templates and a motion vector of the template, optimum offsets of the two sub-templates and the template between the background image P2 and the subject-including image P1 that provide best evaluation values of the degree of difference.


Then, the positioning module 8a determines the degree of coincidence between the motion vector of the template and the motion vectors of the sub-templates that have been determined by the block matching unit 7. The feature point correspondence is regarded as effective if the vectors are determined to be close to each other, and is regarded as ineffective and eliminated otherwise. In this manner, the reliability of the motion vector of the entire image, which is calculated statistically, is increased.


The image processing unit 8 is provided with a difference generating module 8c for generating differential information of each corresponding pair of pixels of the subject-including image P1 and the background image P2.


The difference generating module 8c applies a lowpass filter to each of the YUV data of the subject-including image P1, whose pixels have been coordinate-converted by the positioning module 8a according to the projective transformation matrix, and the YUV data of the background image P2, thereby eliminating their high-frequency components. Then, the difference generating module 8c generates a degree-of-difference map by calculating the degree of difference D between each corresponding pair of pixels of the two images P1 and P2 according to the following equation (1).





Degree of difference D = (Y − Y′)² + {(U − U′)² + (V − V′)²} × k   (1)


In the equation (1), Y, U, and V represent the YUV data of the background image P2, Y′, U′, and V′ represent the YUV data of the coordinate-converted subject-including image P1, and k is a coefficient for changing the contribution ratio between the luminance signal Y and the color difference signals U and V.
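As a concrete illustration, equation (1) could be evaluated over whole images as sketched below, assuming NumPy and OpenCV; the box-filter size and the value of k are placeholders.

```python
import numpy as np
import cv2

def degree_of_difference_map(yuv_bg, yuv_fg_aligned, k=0.25, ksize=5):
    """yuv_bg: YUV data of the background image P2; yuv_fg_aligned: YUV data of
    the coordinate-converted subject-including image P1 (both H x W x 3)."""
    bg = cv2.blur(yuv_bg.astype(np.float32), (ksize, ksize))          # remove high frequencies
    fg = cv2.blur(yuv_fg_aligned.astype(np.float32), (ksize, ksize))
    d_y = (bg[..., 0] - fg[..., 0]) ** 2
    d_u = (bg[..., 1] - fg[..., 1]) ** 2
    d_v = (bg[..., 2] - fg[..., 2]) ** 2
    return d_y + (d_u + d_v) * k   # equation (1), evaluated per pixel
```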


The difference generating module 8c is configured to serve as a difference generating unit for generating differential information between each corresponding pair of pixels of a subject-including image P1 and a background image P2.


The image processing unit 8 is provided with a subject region extracting module 8d for extracting a subject region that includes a subject image S from the subject-including image P1 based on the generated pieces of differential information between the corresponding pairs of pixels of the subject-including image P1 and the background image P2.


The subject region extracting module 8d binarizes the degree-of-difference map using a given threshold value. Then, if the background image P2 has features of a given amount or more, the subject region extracting module 8d performs erosion processing to eliminate pixel sets that are smaller than a given value and thin-line pixel sets, that is, regions where fine noise or differences due to a camera shake exist. Then, the subject region extracting module 8d extracts a largest island pattern as a subject region by performing labeling processing for assigning the same number to pixel sets that constitute the same connected component. Subsequently, the subject region extracting module 8d performs dilation processing to compensate for the above erosion, and then performs hole-filling processing by performing labeling processing only within the subject region and merging into the subject region any labeled pixel set whose number of constituent pixels is smaller than a given ratio of the number of constituent pixels of the subject region.
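A rough sketch of this binarization, erosion, labeling, dilation, and hole-filling sequence, assuming OpenCV; the threshold, kernel size, iteration counts, and hole-size ratio are placeholders rather than values taken from the text.

```python
import numpy as np
import cv2

def extract_subject_region(diff_map, thresh, erode_iter=2, dilate_iter=2, hole_ratio=0.1):
    """Binarize a degree-of-difference map and keep the largest island as the subject region."""
    mask = (diff_map > thresh).astype(np.uint8)                   # binarization
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.erode(mask, kernel, iterations=erode_iter)         # drop fine noise / shake differences
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)  # labeling
    if n <= 1:
        return np.zeros_like(mask)
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))     # largest island pattern
    mask = (labels == largest).astype(np.uint8)
    mask = cv2.dilate(mask, kernel, iterations=dilate_iter)       # compensate for the erosion
    # hole filling: label the complement and merge small enclosed holes back in
    n_h, lab_h, stats_h, _ = cv2.connectedComponentsWithStats(1 - mask)
    area = int(mask.sum())
    for i in range(1, n_h):
        if stats_h[i, cv2.CC_STAT_AREA] < hole_ratio * area:
            mask[lab_h == i] = 1
    return mask
```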


On the other hand, if the features of the background image P2 are less than the given amount, it is considered that a subject region can be determined properly in the subject-including image P1. Therefore, the subject region extracting module 8d does not perform erosion processing, dilation processing, labeling processing, or other processing on the binarized degree-of-difference map.


The subject region extracting module 8d is configured to serve as a subject extracting unit for extracting a subject region including a subject image S from a subject-including image P1 based on pieces of differential information of corresponding pairs of pixels of the subject-including image P1 and a background image P2.


The image processing unit 8 is also provided with a position information generating module 8e for determining a position of the subject region extracted from the subject-including image P1 and generating an alpha map (position information) M indicating the position of the subject region in the subject-including image P1.


The alpha map M is a map of alpha values (0≦α≦1) for the respective pixels included in the subject-including image P1. The alpha values are used as weighted values in a process for alpha-blending the image of the subject region with a given background.


The position information generating module 8e generates alpha values by applying a lowpass filter to a binarized degree-of-difference map in which the largest island is given a value “1” and the other portion is given a value “0” and thereby generating intermediate values in a boundary portion. In this case, in the subject region which is given an alpha value “1,” the transmittance of the subject-including image P1 for a given background is 0%. On the other hand, in the background portion which is given an alpha value “0,” the transmittance of the subject-including image P1 is 100%. In the boundary portion where the alpha value satisfies 0<α<1, the subject-including image P1 and the background image are blended together.
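A minimal sketch of such alpha map generation, assuming OpenCV; the filter size is a placeholder.

```python
import numpy as np
import cv2

def make_alpha_map(subject_mask, ksize=7):
    """subject_mask: uint8 map with 1 inside the subject region (largest island)
    and 0 elsewhere. A lowpass filter produces intermediate alpha values
    (0 < alpha < 1) only in the boundary portion."""
    alpha = cv2.blur(subject_mask.astype(np.float32), (ksize, ksize))
    return np.clip(alpha, 0.0, 1.0)
```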


The position information generating module 8e is configured to serve as a position information generating unit for determining a position of a subject region in a subject-including image P1 and generating position information.


The image processing unit 8 is further provided with an image combining module 8f for generating image data of a subject cut-out image P4 by combining a subject image S with a given single-color background image P3 based on the generated alpha map M in such a manner that those pixels of the subject-including image P1 which have the alpha value “1” do not transmit the corresponding pixels of the single-color background image P3 and those pixels which have the alpha value “0” transmit the corresponding pixels of the single-color background image P3.


The image combining module 8f generates an image as obtained by cutting away the subject region from the single-color background image P3 using 1's complements (1−α) in the alpha map M and generates a subject cut-out image P4 by combining the above image with the subject image S that has been cut out from the subject-including image P1 using the alpha map M.
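The per-pixel combination described above amounts to alpha blending the subject-including image P1 against the single-color background P3; a sketch follows, assuming NumPy, with the single color chosen arbitrarily.

```python
import numpy as np

def make_subject_cutout(subject_including, alpha, single_color=(0, 255, 0)):
    """subject_including: image P1 (H x W x 3); alpha: alpha map M (H x W, 0..1)."""
    a = alpha.astype(np.float32)[..., None]
    background = np.empty_like(subject_including, dtype=np.float32)
    background[...] = single_color                 # single-color background P3
    # alpha = 1 keeps P1, alpha = 0 shows P3, and 0 < alpha < 1 blends the two
    out = a * subject_including.astype(np.float32) + (1.0 - a) * background
    return out.astype(np.uint8)
```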


The image combining module 8f is configured to serve as a combining unit for generating a subject cut-out image P4 by combining an image of a subject region with a given background based on position information.


The image combining module 8f generates a subject combined image (not shown) by combining the subject cut-out image P4 with a background-use image (not shown) based on the generated alpha map M. The image combining module 8f allows transmission of those pixels of the background-use image which have an alpha value “0” and overwrites the pixel values of its pixels having an alpha value “1” with the pixel values of the corresponding pixels of the subject cut-out image P4. As for each pixel of the background-use image whose alpha value satisfies 0<α<1, the image combining module 8f generates an image ((background-use image)×(1−α)) as obtained by cutting away the subject region from the background-use image using the 1's complement (1−α) in the alpha map M, calculates the value of the single background color that was blended in when the subject cut-out image P4 was generated, again using the 1's complement (1−α), subtracts that value from the subject cut-out image P4, and combines the resulting value with the subject-region-cut-away image ((background-use image)×(1−α)).
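One way to read this un-blending step: because the boundary pixels of the subject cut-out image P4 already contain a (1−α) share of the single background color, that share is subtracted before the new background contribution is added. A sketch under that reading, assuming NumPy; the names and the single color are placeholders.

```python
import numpy as np

def combine_with_background(cutout, alpha, background_use, single_color=(0, 255, 0)):
    """cutout: subject cut-out image P4; alpha: alpha map M; background_use:
    background-use image (all H x W x 3 and mutually registered)."""
    a = alpha.astype(np.float32)[..., None]
    single = np.empty_like(cutout, dtype=np.float32)
    single[...] = single_color
    # subtract the single color that was blended into P4's boundary when P4 was
    # generated, then composite the remaining subject contribution over the background
    subject_part = cutout.astype(np.float32) - (1.0 - a) * single
    out = (1.0 - a) * background_use.astype(np.float32) + subject_part
    return np.clip(out, 0, 255).astype(np.uint8)
```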


The display control unit 10 performs a control for reading out display image data that is temporarily stored in the framebuffer 5 and causing the display unit 11 to display the display image data.


The display control unit 10 is provided with a VRAM, a VRAM controller, a digital video encoder, etc. Under the control of the CPU 13, the digital video encoder regularly reads, from the VRAM (not shown), via the VRAM controller, a luminance signal Y and color-difference signals Cb and Cr that have been read from the framebuffer 5 and are stored in the VRAM, generates a video signal based on the data of those signals, and causes the display unit 11 to display the video signal.


The display unit 11 may be a liquid crystal display device, for example. The display unit 11 displays, for example, an image taken by the image capturing unit 2 on the display screen based on a video signal supplied from the display control unit 10. More specifically, the display unit 11 displays a live-view image based on plural image frames generated by capturing an image of the subject S with the image capturing unit 2 and the capturing control unit 3 or a recording-view image taken as a main captured image.


In taking a subject-including image P1 in a state that a subject cutting-out mode (described later) is set as a shooting mode, the display unit 11 displays a message (e.g., “Take a subject image to be cut out.”; see FIG. 7A) as an instruction to take a subject-including image P1 while overlapping the message on the live-view image.


In taking a background image P2 in a state that the subject cutting-out mode is set as a shooting mode, the display unit 11 displays a message (e.g., “Take a background image without a subject by positioning the camera using the semi-transparent image.”; see FIG. 7C) as an instruction to take a background image P2, together with a subject-including image P1 in semi-transparent display form to be overlapped on a live-view image.


The term “subject-including image in semi-transparent display form” means a subject-including image that is displayed at a transparency between fully transparent and fully opaque, that is, a subject-including image that lets the outlines, colors, and shading of the live-view image displayed behind it show through.


Controlled by the display control unit 10, the display unit 11 displays a subject cut-out image P4 in which a subject image S is overlapped on a given single-color background image P3, based on the image data of the subject cut-out image P4 generated by the image combining module 8f of the image processing unit 8.


The display unit 11 is configured to serve as a display unit for displaying a subject cut-out image P4 generated by the image combining module 8f.


The recording medium 9, which is a nonvolatile memory (flash memory), for example, stores recording image data of a captured image that has been encoded by a JPEG compressing module (not shown) of the image processing unit 8.


The recording medium 9, which serves as a storage unit, stores compressed versions of an alpha map M generated by the position information generating module 8e of the image processing unit 8 and image data of a subject cut-out image P4 in such a manner that they are correlated with each other and the image data of the subject cut-out image P4 is given an extension “.jpe”, for example. The alpha map M, which is data having gradations of about 8 bits, for example, is higher in compression efficiency (because it has many regions each having the same value) and hence can be stored with a smaller capacity than the image data of the subject cut-out image P4.


The user interface 12 allows the user to perform an operation on the image capturing apparatus 100. More specifically, the user interface 12 is provided with the shutter button 12a through which to give a command to shoot a subject S, a mode button 12b through which to give an instruction relating to selection of a shooting mode, a function, or the like through a menu picture, a zoom button (not shown) through which to give an instruction relating to adjustment of the zoom amount, and other buttons. The user interface 12 outputs an operation signal to the CPU 13 in response to a manipulation on each of those buttons.


The gyro sensor 14 detects an angular velocity of the image capturing apparatus 100 and outputs resulting gyro data to the CPU 13. Based on the gyro data, the CPU 13 determines whether the image capturing apparatus 100 has been kept unmoved with its angle of view kept unchanged (i.e., kept in a stationary state) from an instant of taking of a subject-including image P1 to an instant of taking of a background image P2.
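A rough sketch of such a stationarity check (plain Python); the per-frame sampling follows the text, while the threshold value is an arbitrary placeholder.

```python
def is_stationary(gyro_samples, threshold=0.02):
    """gyro_samples: angular-velocity magnitudes acquired once per frame sync
    between the first shot (P1) and the second shot (P2). The apparatus is
    treated as having stayed stationary only if every sample is below the
    threshold (0.02 rad/s is an arbitrary placeholder value)."""
    return all(abs(w) < threshold for w in gyro_samples)
```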


The gyro sensor 14 and the CPU 13 are configured to serve as a position determining unit for determining whether the position of the main body of the image capturing apparatus 100 has varied relatively between an instant of taking of a subject-including image P1 and an instant of taking of a background image P2.


The CPU 13 serves to control other components provided in the image capturing apparatus 100. The CPU 13 performs various control operations according to various processing programs (not shown) for the image capturing apparatus 100.


Next, a subject cutting-out process of an image processing method of the image capturing apparatus 100 will be described below with reference to FIGS. 2-8C.



FIGS. 2 and 3 are a flowchart of an example subject cutting-out process. FIGS. 4A and 4B show example image conversion models of projective transformation; FIG. 4A shows an example similarity transformation model and FIG. 4B shows an example congruent transformation model. FIG. 5 is a flowchart of an example subject extracting process which is part of the subject cutting-out process. FIG. 6 is a flowchart of an example subject cut-out image generating process which is part of the subject cutting-out process. FIGS. 7A-7C and 8A-8C show schematic images for description of the subject cutting-out process.


The subject cutting-out process is a process that is executed when a subject cutting-out mode is selected from plural shooting modes displayed in a menu picture in response to a given manipulation on the mode button 12b of the user interface 12 by the user.


As shown in FIG. 2, first, at step S1, the CPU 13 controls the display control unit 10 to display a live-view image on the display screen of the display unit 11 based on plural image frames generated by shooting a subject S with the lens unit 1, the image capturing unit 2, and the capturing control unit 3, and displays a message (e.g., “Take a subject image to be cut out.”; see FIG. 7A) as an instruction to take a subject-including image P1 on the display screen of the display unit 11 to be overlapped on the live-view image.


If the shutter button 12a of the user interface 12 is half-pressed by the user, at step S2 the CPU 13 controls the AF module 3a of the capturing control unit 3 to adjust the focusing position of the focusing lens and calculate a subject distance, and acquires the calculated subject distance. At this time, the CPU 13 may cause the capturing control unit 3 to adjust the capturing conditions such as the exposure conditions (shutter speed, aperture, amplification factor, etc.) and the white balance.


At step S3, the CPU 13 controls the image capturing unit 2 to shoot an optical image to take a subject-including image P1 under given capturing conditions at the timing when the user performs a shooting operation (full press) on the shutter button 12a of the user interface 12. Immediately before the taking of the subject-including image P1, the CPU 13 calls an initialization function for a stationary state determination and starts acquisition of gyro data from the gyro sensor 14. More specifically, every time a frame sync signal is received, the CPU 13 calls a stationary state determination monitoring task and acquires gyro data that is output from the gyro sensor 14. In this manner, the CPU 13 continuously monitors whether the image capturing apparatus 100 is in a stationary state.


At step S4, the CPU 13 generates YUV data of the subject-including image P1 (see FIG. 7B) based on an image frame of the subject-including image P1 transferred from the image capturing unit 2 and stores the generated YUV data in the framebuffer 5 (temporary storage).


At step S5, the CPU 13 controls the capturing control unit 3 to maintain a state that the capturing conditions such as the focusing position, exposure conditions, and white balance that were employed when the subject-including image P1 was taken are fixed.


At step S6, the CPU 13 controls the display control unit 10 to display a live-view image on the display screen of the display unit 11 based on plural image frames generated by shooting the subject S with the lens unit 1, the image capturing unit 2, and the capturing control unit 3, and displays a subject-including image P1 in semi-transparent display form and a message (e.g., “Take a background image without a subject image in such a manner that it is registered with the semi-transparent image.”; see FIG. 7C) as an instruction to take a background image P2 on the display screen of the display unit 11 to be overlapped on the live-view image. Then, the user takes a background image P2 after letting the subject S move out of the angle of view or waiting for the subject S to move out.


At step S7, the CPU 13 controls the image capturing unit 2 to shoot an optical image of a background image P2 under the capturing conditions that have been fixed since the taking of the subject-including image P1, at the timing when the camera position has been adjusted by the user so that the background is registered with the semi-transparent version of the subject-including image P1 and the shutter button 12a of the user interface 12 is pressed by the user.


At step S8, the CPU 13 acquires, from the stationary state determination monitoring task, determination information indicating whether the stationary state of the image capturing apparatus 100 has continued from the instant of taking of the subject-including image P1 to the instant of taking of the background image P2.


At step S9, the CPU 13 controls the image data generating unit 4 to generate YUV data of the background image P2 (see FIG. 8A) based on an image frame of the background image P2 transferred from the image capturing unit 2 and stores the generated YUV data in the framebuffer 5 (temporary storage).


At step S10, the CPU 13 acquires a subject distance from the AF module 3a and determines whether the subject distance is shorter than one meter. That is, the CPU 13 determines, as a distance determining unit, whether the distance to the subject S is longer than or equal to a given value (one meter in this example).


If determined that the subject distance is shorter than one meter (S10: yes), at step S11 the CPU 13 designates, as an image deformation model of projective transformation, a similarity transformation model (see FIG. 4A) of similarity transformation (deformation having degrees of freedom of enlargement/reduction, rotation, horizontal translation, and vertical translation (four parameters)). The reason why a similarity transformation model is used for projective transformation when the subject distance is shorter than one meter is as follows: when the user lets the subject move out of the angle of view after the subject-including image P1 is taken (first shot), the image capturing apparatus 100 is typically moved so much that it becomes difficult to restore the original position (particularly in the front-rear direction) even with the semi-transparent image as a reference.


On the other hand, if determined that the subject distance is not shorter than one meter (S10: no), at step S12 the CPU 13 designates, as an image deformation model of projective transformation, a congruent transformation model (see FIG. 4B) of congruent transformation (deformation having degrees of freedom of rotation (the rotation angle is assumed to be about zero to two degrees and cos θ is approximated at one), horizontal translation, and vertical translation (three parameters)). The reasons why a congruent transformation model is used for projective transformation when the subject distance is not shorter than one meter are as follows. When the subject distance is long, a movement of the image capturing apparatus 100 in the front-rear direction has almost no influence on shooting. Avoiding use of an unnecessary degree of freedom makes it possible to eliminate influences of low-reliability motion vectors produced by noise or by a movement of the subject and thereby obtain a positioning transformation matrix that is higher in accuracy.
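For reference, the two image deformation models might be parameterized as sketched below, assuming NumPy; the parameter names are illustrative. The similarity model has four parameters (scale, rotation, horizontal translation, vertical translation), while the congruent model has three, with cos θ approximated as 1.

```python
import numpy as np

def similarity_matrix(scale, theta, tx, ty):
    """Similarity transformation: four parameters (enlargement/reduction,
    rotation, horizontal translation, vertical translation)."""
    c, s = scale * np.cos(theta), scale * np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

def congruent_matrix(theta, tx, ty):
    """Congruent transformation: three parameters (rotation, horizontal
    translation, vertical translation), with cos(theta) approximated as 1
    for the small angles (about 0 to 2 degrees) assumed in the text."""
    s = np.sin(theta)
    return np.array([[1.0, -s, tx],
                     [s,  1.0, ty],
                     [0.0, 0.0, 1.0]])
```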


At step S13, the CPU 13 controls the feature amount calculating unit 6, the block matching unit 7, and the image processing unit 8 to calculate a projective transformation matrix for projective transformation of the YUV data of the subject-including image P1 according to the designated image conversion model using, as a reference, the YUV data of the background image P2 which is stored in the framebuffer 5.


More specifically, the feature amount calculating unit 6 selects a given number (or more) of feature-rich blocks (feature points) based on the YUV data of the background image P2 and extracts the contents of those blocks as templates. The block matching unit 7 searches the subject-including image P1 for a position that gives a best match to the pixel values of each template extracted by the feature extraction processing and calculates, as a motion vector of the template, an optimum offset between the background image P2 and the subject-including image P1 that provides a best evaluation value of the degree of pixel value difference. The coordinate conversion formula calculating module 8b of the image processing unit 8 statistically calculates a motion vector of the entire image based on the motion vectors of the plural templates calculated by the block matching unit 7, and calculates a projective transformation matrix for the subject-including image P1 using a feature point correspondence of the calculated motion vector.


At step S14 (see FIG. 3), the CPU 13 controls the image processing unit 8 to determine whether the calculation of a projective transformation matrix has succeeded. That is, the image processing unit 8 determines whether step S13 succeeded in statistically calculating a motion vector of the entire image from the motion vectors of the plural templates and in calculating a projective transformation matrix for the subject-including image P1 using a feature point correspondence of the calculated motion vector.


If determined that the calculation of a projective transformation matrix has succeeded (S14: yes), at step S15 the CPU 13 controls the positioning module 8a of the image processing unit 8 to perform positioning between the YUV data of the subject-including image P1 and that of the background image P2 by performing projective transformation on the subject-including image P1 according to the calculated projective transformation matrix.


On the other hand, if determined that the calculation of a projective transformation matrix has failed (S14: no), the CPU 13 determines at step S16 whether the stationary state of the image capturing apparatus 100 has continued from the instant of taking of the subject-including image P1 to the instant of taking of the background image P2 based on determination information acquired from the stationary state determination monitoring task.


If determined that the stationary state has continued from the instant of taking of the subject-including image P1 to the instant of taking of the background image P2 (S16: yes) because, for example, the subject-including image P1 and the background image P2 were taken in a state that the image capturing apparatus 100 is fixed to a tripod, the CPU 13 recognizes that almost no change has occurred in the background position and omits the projective transformation processing for positioning to be performed by the positioning module 8a of the image processing unit 8.


On the other hand, if determined that the stationary state has not continued (S16: no) because, for example, the subject-including image P1 and the background image P2 were taken with the image capturing apparatus 100 held by the user with a hand, at step S17 the CPU 13 controls the feature amount calculating unit 6 to determine whether the background is featureless or not based on the image data of the background image P2. More specifically, the feature amount calculating unit 6 determines the featurelessness of the background image P2 according to the ratio of the total area of blocks whose feature quantities are greater than or equal to a given value to the area of the entire background image P2. If the featurelessness is higher than or equal to a given value, the feature amount calculating unit 6 determines that the background is featureless. That is, the feature amount calculating unit 6 determines, as a featurelessness determining unit, whether the featurelessness of the background image P2 is higher than or equal to the given value.


If determined that the background is featureless (S17: yes), the positioning module 8a of the image processing unit 8 does not perform projective transformation processing for positioning. For example, if the background is flat and has no pattern, positioning is difficult to perform, but no disadvantage arises even if the processing of positioning between the subject-including image P1 and the background image P2 is omitted, because positional deviations, if any, have only small influences on the extraction of a subject region. That is, where the background has patterns, positional deviations produce differences at the deviated positions and discrimination of a subject region becomes difficult. In contrast, where the background has no pattern, even if positional deviations occur, featureless portions are compared with each other in the background and no difference values appear.


On the other hand, if determined that the background is not featureless (S17: no), at step S18 the CPU 13 displays a given message indicating a failure in the cutting-out of a subject region (e.g., “The cutting-out of a subject region has failed.”) on the display screen of the display unit 11. Then, the CPU 13 finishes the subject cutting-out process.


At step S19, the CPU 13 controls the image processing unit 8 to execute a subject extracting process for extracting a subject region that includes a subject image S from the subject-including image P1.


The subject extracting process will be described below in detail with reference to FIG. 5.


As shown in FIG. 5, at step S211, for the purpose of extracting a subject region, the difference generating module 8c of the image processing unit 8 applies a lowpass filter to each of the YUV data of the subject-including image P1 and that of the background image P2 to eliminate their high-frequency components. At step S212, the difference generating module 8c generates a degree-of-difference map by calculating the degree of difference D for each corresponding pair of pixels of the lowpass-filtered subject-including image P1 and background image P2 according to the following equation (2).





Degree of difference D = (Y − Y′)² + {(U − U′)² + (V − V′)²} × k   (2)


At step S213, the subject region extracting module 8d of the image processing unit 8 binarizes the generated degree-of-difference map using a given threshold value. At step S214, the CPU 13 determines whether the background image P2 has features that are greater than or equal to a given amount based on the feature points extracted by the feature amount calculating unit 6.


If determined that the background image P2 has features that are greater than or equal to the given amount (S214: yes), at step S215 the subject region extracting module 8d calculates an erosion amount to be employed in erosion processing based on a given value or the projective transformation matrix for the subject-including image P1. Difference regions may be caused not only by original image noise but also by a camera shake. In that event, the erosion amount is changed according to the camera shake. That is, the erosion amount is increased if the output of the gyro sensor 14 is large, and vice versa.


At step S216, the subject region extracting module 8d performs erosion processing according to the calculated erosion amount to eliminate regions where differences have been caused by fine noise or a camera shake from the degree-of-difference map.


In this manner, the degree-of-difference map can be eroded and dilated taking a camera shake amount into consideration, whereby regions where differences have been caused by a camera shake can be eliminated properly from the degree-of-difference map.


The subject region extracting module 8d eliminates regions that are smaller than or equal to a given value or regions other than a maximum region by performing labeling processing at step S217, and determines a largest island pattern as a subject region at step S218.


At step S219, the subject region extracting module 8d performs dilation processing to compensate for the erosion.


At step S220, the subject region extracting module 8d performs labeling processing only within the subject region and replaces, with effective regions, labeled regions whose numbers of constituent pixels are smaller than or equal to a given ratio of the number of constituent pixels of the subject region.


At step S221, the subject region extracting module 8d applies an averaging filter to the subject region to give a combining gradation to a peripheral portion of the subject region. If determined that the background image P2 does not have features that are greater than or equal to the given amount (S214: no), the subject region extracting module 8d executes step S221 directly. Then, the subject extracting process is finished.


Returning to FIG. 3, at step S20, the CPU 13 controls the position information generating module 8e of the image processing unit 8 to generate an alpha map M indicating a position of the extracted subject region in the subject-including image P1 (see FIG. 8B).


At step S21, the CPU 13 controls the image processing unit 8 to execute a subject cut-out image generating process for generating a subject cut-out image P4 by combining a subject image S with a given single-color background image P3.


The subject cut-out image generating process will be described below in detail with reference to FIG. 6.


As shown in FIG. 6, at step S231, the image combining module 8f of the image processing unit 8 reads the subject-including image P1, a single-color background image P3, and the alpha map M and develops them in the framebuffer 5.


The image combining module 8f specifies one pixel (e.g., the top-left corner pixel) of the subject-including image P1 at step S232, and causes the process to branch off according to the alpha value of the specified pixel in the alpha map M at step S233. More specifically, if the alpha value of the specified pixel of the subject-including image P1 is “0” (S233: α=0), at step S234 the image combining module 8f gives that pixel the given single color, that is, allows transmission of the given single color. If the alpha value of the specified pixel satisfies 0<α<1 (S233: 0<α<1), at step S235 the image combining module 8f blends the pixel value of that pixel with the given single color. If the alpha value of the specified pixel is “1” (S233: α=1), the image combining module 8f does nothing, that is, does not allow transmission of the given single color.


At step S236, the image combining module 8f determines whether all pixels of the subject-including image P1 have been processed or not.


If determined that not all pixels have been processed (S236: no), at step S237 the image combining module 8f moves to the next pixel to be processed. Then, the process returns to step S233.


The above-described loop is performed repeatedly until it is determined that all pixels have been processed (S236: yes), whereby the image combining module 8f generates image data of a subject cut-out image P4 which is a combined image of the subject image S and the given single-color background image P3. Then, the subject cut-out image generating process is finished.


Returning to FIG. 3, at step S22, the CPU 13 controls the display control unit 10 to display the subject cut-out image P4 in which the subject image S is overlapped on the given single-color background image P3 on the display screen of the display unit 11 based on the image data of the subject cut-out image P4 generated by the image combining module 8f (see FIG. 8C).


At step S23, the CPU 13 stores, in a given storage area of the recording medium 9, as a single file (extension: .jpe), the alpha map M generated by the position information generating module 8e of the image processing unit 8 and the image data of the subject cut-out image P4 in such a manner that they are correlated with each other. Then, the subject cutting-out process is finished.


Next, a subject combined image generating process will be described in detail with reference to FIGS. 9 and 10. FIG. 9 is a flowchart of an example subject combined image generating process. The subject combined image generating process is a process in which the CPU 13 controls the image combining module 8f of the image processing unit 8 to generate a subject combined image by combining a background-use image (not shown) with a subject cut-out image P4.


As shown in FIG. 9, at step S31, a background-use image (not shown) to be used as a background is specified (selected) in response to a given manipulation on the user interface 12 by the user. At step S32, the image combining module 8f reads out the image data of the specified background-use image and develops it in the framebuffer 5.


At step S33, a subject cut-out image P4 which is stored so as to have an extension “.jpe” is specified (selected) in response to a given manipulation on the user interface 12 by the user. At step S34, the image combining module 8f reads out the image data of the specified subject cut-out image P4 and develops it in the framebuffer 5.


At step S35, the image combining module 8f executes an image combining process using the background-use image and the subject cut-out image P4 which are developed in the framebuffer 5.


The image combining process will be described below in detail with reference to FIG. 10. FIG. 10 is a flowchart of an example image combining process.


As shown in FIG. 10, at step S351, the image combining module 8f reads out an alpha map M which is stored so as to have an extension “.jpe” and develops it in the framebuffer 5.


The image combining module 8f specifies one pixel (e.g., the top-left corner pixel) of the background-use image at step S352, and causes the process to branch off according to the alpha value of the specified pixel in the alpha map M at step S353. More specifically, if the alpha value of the specified pixel of the background-use image is “1” (S353: α=1), at step S354 the image combining module 8f overwrites the pixel value of that pixel with the pixel value of the corresponding pixel of the subject cut-out image P4, that is, does not allow transmission of the background-use image. If the alpha value of the specified pixel satisfies 0<α<1 (S353: 0<α<1), at step S355 the image combining module 8f generates an image ((background-use image)×(1−α)) that is obtained by cutting away the subject region from the background-use image using the 1's complement (1−α) in the alpha map M, calculates the value of the single background color that was blended in when the subject cut-out image P4 was generated, again using the 1's complement (1−α), subtracts that value from the subject cut-out image P4, and combines the resulting value with the subject-region-cut-away image ((background-use image)×(1−α)). If the alpha value of the specified pixel is “0” (S353: α=0), the image combining module 8f does nothing, that is, allows transmission of the background-use image.


At step S356, the image combining module 8f determines whether all pixels of the background-use image have been processed or not.


If determined that not all pixels have been processed (S356: no), at step S357 the image combining module 8f moves to the next pixel to be processed. Then, the process returns to step S353.


The above loop is executed repeatedly until it is determined that all pixels have been processed (S356: yes), whereby the image combining module 8f generates image data of a subject combined image which is a combined image of the subject cut-out image P4 and the background-use image. Then, the image combining process is finished.


Returning to FIG. 9, at step S36, the CPU 13 controls the display control unit 10 to display the subject combined image in which the subject image S is overlapped on the background-use image on the display screen of the display unit 11 based on the image data of the subject combined image generated by the image combining module 8f.


As described above, in the image capturing apparatus 100 according to the embodiment, first, a subject-including image P1 in which a subject image S is located on a background is taken under given capturing conditions that are suitable for a state in which the subject S exists. Then, with the capturing conditions fixed to the given ones that were employed when the subject-including image P1 was taken, a background image P2 having no subject image S is taken. After positioning of the subject-including image P1 with respect to the background image P2, a degree-of-difference map between corresponding pairs of pixels of the subject-including image P1 and the background image P2 is generated, and a subject region including the subject image S is extracted from the subject-including image P1 based on the degree-of-difference map.


Since a background image P2 is taken under the same given capturing conditions as those under which a subject-including image P1 was taken, the subject-including image P1 and the background image P2 can be made approximately identical in the brightness and hue of the background portion, as well as in the degree of defocusing of the background portion and the ratio of its size to the size of the entire image, whereby the accuracy of extraction of a subject region from the subject-including image P1 can be increased. Since the fixed capturing conditions are set when a subject-including image P1 is taken, a cut-out subject image is made more appropriate than in the case where the capturing conditions are set for a background image P2.


When a background image P2 is taken, a subject-including image P1 is displayed on the display unit 11 in semi-transparent display form. Therefore, the user can more easily take a background image P2 which is the same as the background image of the subject-including image P1 by adjusting the camera position so that the background is registered with that of the semi-transparent version of the subject-including image P1.


Since positioning between the subject-including image P1 and the background image P2 is performed based on feature points that are extracted from the background image P2, an event that feature points are selected in a subject image S can be avoided.


That is, it is possible to calculate a projective transformation matrix for the pixels of the subject-including image P1 based on feature points extracted from the background image P2 and generate a degree-of-difference map between corresponding pairs of pixels of the background image P2 and the subject-including image P1 whose pixels have been subjected to projective transformation according to the calculated projective transformation matrix. Since feature points are extracted from the background image P2, an event that feature points are selected in a subject image S can be avoided and an event that coordinate conversion is performed based on apparently different image portions of the two images can also be avoided. Since highly accurate coordinate conversion can be performed, the pixels of the subject-including image P1 can properly be correlated with those of the background image P2, whereby a subject region can be extracted more properly.
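
As a hedged illustration of this positioning, the following OpenCV-based sketch extracts feature points from the background image only, tracks them into the subject-including image, and warps the subject-including image with the resulting projective transformation matrix. Pyramidal Lucas-Kanade tracking stands in here for the block matching of the embodiment, and all parameter values and variable names are illustrative assumptions.

```python
import cv2
import numpy as np

def align_subject_image(subject_img, background_img):
    """Warp the subject-including image onto the background image's coordinates.

    Feature points are taken from the background image only, so none of them
    can fall inside the subject region of the subject-including image.
    """
    bg_gray = cv2.cvtColor(background_img, cv2.COLOR_BGR2GRAY)
    sub_gray = cv2.cvtColor(subject_img, cv2.COLOR_BGR2GRAY)

    # Feature points are extracted from the background image only.
    pts_bg = cv2.goodFeaturesToTrack(bg_gray, maxCorners=200,
                                     qualityLevel=0.01, minDistance=10)
    if pts_bg is None:
        return None  # featureless background: positioning is skipped

    # Track each background feature into the subject-including image.
    pts_sub, status, _ = cv2.calcOpticalFlowPyrLK(bg_gray, sub_gray, pts_bg, None)
    good_bg = pts_bg[status.ravel() == 1]
    good_sub = pts_sub[status.ravel() == 1]

    # Projective transformation matrix mapping subject-image pixels onto the background.
    H, _ = cv2.findHomography(good_sub, good_bg, cv2.RANSAC, 3.0)
    if H is None:
        return None  # calculation of the projective transformation matrix failed

    h, w = background_img.shape[:2]
    return cv2.warpPerspective(subject_img, H, (w, h))
```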


Since a subject region is determined and extracted from a subject-including image P1 by eliminating pixel sets that are smaller than a given value by performing binarization processing, erosion/dilation processing, and labeling processing on a degree-of-difference map, the largest island pattern in the degree-of-difference map can be determined as a subject region. A subject region can thus be extracted more properly.
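
A minimal sketch of this sequence is given below, assuming an 8-bit degree-of-difference map and using OpenCV for the morphology and labeling; the threshold and kernel size are illustrative values, not values from the embodiment.

```python
import cv2
import numpy as np

def extract_subject_mask(diff_map, threshold=45):
    """Turn a degree-of-difference map into a binary subject mask.

    Small pixel sets are removed by erosion/dilation and labeling so that the
    largest remaining island is treated as the subject region.
    """
    # Binarization of the degree-of-difference map.
    _, binary = cv2.threshold(diff_map.astype(np.uint8), threshold, 255,
                              cv2.THRESH_BINARY)

    # Erosion followed by dilation suppresses isolated small differences
    # (e.g. those caused by residual camera shake).
    kernel = np.ones((3, 3), np.uint8)
    cleaned = cv2.dilate(cv2.erode(binary, kernel, iterations=1),
                         kernel, iterations=1)

    # Labeling: keep only the largest connected pixel set as the subject region.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(cleaned)
    if n <= 1:
        return np.zeros_like(cleaned)
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return np.where(labels == largest, 255, 0).astype(np.uint8)
```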


Whether to perform erosion/dilation processing and labeling processing on a binarized degree-of-difference map is determined depending on whether a background image P2 has features that are greater than or equal to a given amount. Therefore, erosion/dilation processing etc. are not performed if the background image P2 does not have features that are greater than or equal to the given amount, because in that case a subject image S should be determined properly in the subject-including image P1. The processing speed of the subject extracting process can thus be increased.


Since in the subject cutting-out process an alpha map M is generated by determining a position of an extracted subject region in the subject-including image P1, a position of the extracted subject region in the subject-including image P1 can thereafter be determined properly using the alpha map M.


Since a subject cut-out image P4 which is a combined image of a given single-color background image P3 and the image of the subject region is generated based on the alpha map M and displayed on the display screen of the display unit 11, the user can check a cut-out subject image S and hence can recognize whether or not the subject image S has been cut out properly. Since only the subject image S is cut out from the subject-including image P1 and the subject cut-out image P4 is composed using, as a background, the single-color background image P3 which is entirely different from the actual background, the user can enjoy a change from photographs already taken and stored. Displaying the composed subject cut-out image P4 on the display screen of the display unit 11 allows the user to recognize whether or not the cutting-out of the subject image S has been performed properly.


Since the subject cut-out image P4 is generated in such a manner that a boundary portion between the background region and the subject region is blended with the single color of the single-color background image P3, the cut-out subject image S can be made a natural image in which its boundary portion is blended with the single color properly.
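
The boundary blending can be sketched as follows, under the assumption that the intermediate alpha values of the alpha map M are obtained by low-pass filtering the binary subject mask; the embodiment does not specify this particular method, and the filter size and single color are illustrative.

```python
import cv2
import numpy as np

def make_cutout(subject_img, subject_mask, single_color=(0, 255, 0)):
    """Compose a subject cut-out image on a single-color background.

    subject_mask is a 0/255 binary mask; it is softened into an alpha map so
    that the boundary of the subject region is blended into the single color.
    """
    # Alpha map M: 1 inside the subject region, 0 outside, intermediate values
    # at the boundary (here produced by low-pass filtering the binary mask).
    alpha = cv2.GaussianBlur(subject_mask.astype(np.float32) / 255.0, (7, 7), 0)
    a = alpha[..., None]

    color_bg = np.empty_like(subject_img)
    color_bg[:] = single_color
    cutout = subject_img.astype(np.float32) * a + color_bg.astype(np.float32) * (1 - a)
    return cutout.astype(np.uint8), alpha
```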


Since the subject cut-out image P4 and the alpha map M are correlated with each other and stored as one file, it is not necessary to determine a subject region every time cutting-out/composing is performed using a background image that is entirely different from the single-color background image P3 that was used for the composing. This makes it possible to shorten the processing time.


If the subject-including image P1 in which the subject image S is located on the background and the background image P2 having no subject image S on the background were stored separately in the recording medium 9, a disadvantage would arise in combining the subject cut-out image P4 with one of the images already taken and stored. That is, a subject image S would need to be cut out every time such combining is performed, and hence the cutting-out processing would take time. There would be another disadvantage that the quality of a cut-out subject image S is not known in advance.


In contrast, in the embodiment, since the subject cut-out image P4 and the alpha map M are correlated with each other and stored as one file, it is not necessary to determine a subject region every time cutting-out/composing is performed using a given background image as a background and hence the processing time can be shortened. Furthermore, since the subject cut-out image P4 is stored as a file that is different from the file of the subject-including image P1 from which the subject region was extracted, the quality of the cut-out subject image S can be checked by displaying, before composing, the subject cut-out image P4 or the subject-including image P1 on the display screen of the display unit 11. This increases the convenience of the subject cutting-out/composing.


The invention is not limited to the above embodiment and various improvements and design modifications may be made without departing from the spirit and scope of the invention.


For example, although in the subject cutting-out process (see FIGS. 2 and 3) of the embodiment the determination as to a subject distance (step S10), the determination as to continuation of a stationary state (step S16), and the determination as to whether the background is featureless (step S17) are performed in this order, the order of execution of these determination steps is not limited to this.


That is, as shown in FIGS. 11 and 12, the order of execution may be such that determination as to continuation of a stationary state (step S102) is performed after determination as to whether the background is featureless (step S101) and determination as to a subject distance (step S103) is performed last.


More specifically, as shown in FIG. 11, after the YUV data of the background image P2 (see FIG. 8A) has been stored in the framebuffer 5 (temporary storage) at step S9, at step S101 the CPU 13 controls the feature amount calculating unit 6 to determine whether the background is featureless or not based on the image data of the background image P2.


If determined that the background is featureless (S101: yes), the CPU 13 omits projective transformation processing for positioning to be performed by the positioning module 8a of the image processing unit 8 and causes the image processing unit 8 to execute the subject extracting process (step S19 in FIG. 12) for extracting a subject region that includes a subject image S from the subject-including image P1.


On the other hand, if determined that the background is not featureless (S101: no), at step S102 the CPU 13 determines whether the stationary state has continued from the instant of taking of the subject-including image P1 to the instant of taking of the background image P2 based on determination information acquired from the stationary state determination monitoring task.


If determined that the stationary state has continued (S102: yes), the CPU 13 omits projective transformation processing for positioning to be performed by the positioning module 8a of the image processing unit 8 and causes the process to proceed to step S19.


On the other hand, if determined that the stationary state has not continued (S102: no), at step S103 (see FIG. 12) the CPU 13 acquires a subject distance from the AF module 3a and determines whether the subject distance is shorter than one meter.


If determined that the subject distance is shorter than one meter (S103: yes), at step S104 the CPU 13 designates a similarity transformation model (see FIG. 4A) as an image conversion model of projective transformation. On the other hand, if determined that the subject distance is not shorter than one meter (S103: no), at step S105 the CPU 13 designates a congruent transformation model (see FIG. 4B) as an image conversion model of projective transformation.
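
A sketch of this model selection is given below, assuming that matched feature-point pairs are already available from the block matching stage. cv2.estimateAffinePartial2D is used as a stand-in for the similarity estimation, and the congruent (rigid) case is fitted in the least-squares sense with the scale held at 1; function and variable names are illustrative.

```python
import cv2
import numpy as np

def estimate_motion(src_pts, dst_pts, subject_distance_m):
    """Pick the image conversion model from the subject distance.

    A similarity transformation (rotation, translation, scale) is used for a
    near subject; a congruent (rigid) transformation is used otherwise.
    """
    src = np.asarray(src_pts, np.float32).reshape(-1, 2)
    dst = np.asarray(dst_pts, np.float32).reshape(-1, 2)

    if subject_distance_m < 1.0:
        # Similarity model: rotation, translation, and scale (4 degrees of freedom).
        M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
        return M

    # Congruent model: rotation + translation only, fitted in the least-squares
    # sense (Kabsch-style), since the scale is assumed not to change.
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return np.hstack([R, t[:, None]]).astype(np.float32)
```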


At step S106, the CPU 13 controls the block matching unit 7 and the image processing unit 8 to calculate a projective transformation matrix for projective transformation of the YUV data of the subject-including image P1 using, as a reference, the YUV data of the background image P2 which is stored in the framebuffer 5.


At step S107, the CPU 13 controls the image processing unit 8 to determine whether the calculation of a projective transformation matrix has succeeded. If determined that the calculation of a projective transformation matrix has succeeded (S107: yes), at step S108 the CPU 13 controls the positioning module 8a of the image processing unit 8 to perform processing for positioning between the YUV data of the subject-including image P1 and that of the background image P2 by performing projective transformation on the subject-including image P1 according to the calculated projective transformation matrix.


Then, the CPU 13 controls the process to proceed to step S19.


On the other hand, if determined that the calculation of a projective transformation matrix has failed (S107: no), at step S109 the CPU 13 displays a given message indicating a failure in the cutting-out of a subject region (e.g., “The cutting-out of a subject region has failed.”) on the display screen of the display unit 11. Then, the CPU 13 finishes the subject cutting-out process.


Since the determination as to whether the background is featureless is made first, it is not necessary to perform projective transformation on the subject-including image P1 if determined that the background is featureless. As a result, the subject cutting-out process can be performed faster.


The determination as to continuation of a stationary state of the image capturing apparatus 100 may be made before the determination as to whether the background is featureless. As in the above case, it is not necessary to perform projective transformation on the subject-including image P1 if determined that the stationary state has continued. The subject cutting-out process can be performed faster.


Although in the above embodiment the determination as to whether the image capturing apparatus 100 is in a stationary state is made based on gyro data that is output from the gyro sensor 14, the invention is not limited to such a case. For example, this determination may be made based on whether a tripod is fixed to a tripod fixing portion (not shown) of the image capturing apparatus 100. More specifically, a determination "the stationary state of the image capturing apparatus 100 has continued" is made if a tripod was fixed to the tripod fixing portion when the subject-including image P1 and the background image P2 were taken. And a determination "the stationary state of the image capturing apparatus 100 has not continued" is made if a tripod was not fixed to the tripod fixing portion.


The above-described featurelessness may be calculated by measuring the variation, such as the variance or standard deviation, of each color component of the pixels in each block. The featurelessness may also be calculated from a sum of absolute values or squared values of differences in the color components of pixels that are adjacent to one another.


The featurelessness may also be calculated on grayscale data having no color components. In this case, the grayscale featurelessness may be calculated by measuring the variation, such as the variance or standard deviation, of the pixel values in each block, or may be calculated from a sum of absolute values or squared values of differences in the pixel values of pixels that are adjacent to one another.


When the amount of noise included in the image is predictable or definable, adverse effects of the noise may be eliminated by ignoring values lower than the noise level when calculating the featurelessness.
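
One possible form of such a featurelessness test is sketched below; the block size, the noise floor, and the featured-block count threshold are purely illustrative assumptions.

```python
import numpy as np

def is_featureless(image, block=16, noise_floor=4.0, feature_thresh=2):
    """Judge whether a background image is (nearly) featureless.

    The image is divided into blocks; the standard deviation of each component
    within each block is used as the feature amount, and deviations below an
    assumed noise floor are ignored.
    """
    h, w = image.shape[:2]
    featured_blocks = 0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = image[y:y + block, x:x + block].astype(np.float32)
            # Variation of each component within the block
            # (works for grayscale or multi-channel images).
            sigma = patch.reshape(block * block, -1).std(axis=0)
            # Deviations below the assumed noise floor are ignored.
            if sigma.max() >= noise_floor:
                featured_blocks += 1
    return featured_blocks < feature_thresh
```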


Although in the above embodiment a still image of a subject S (automobile) is taken by two rounds of still image taking, the invention is not limited to such a case. For example, subject-including images P1 may be taken by consecutive shooting. In this case, a fast-moving subject, such as a person making a golf swing, is shot consecutively, and a background image P2 is then taken after the person goes out of the angle of view. The process may be modified in such a manner that a subject region is extracted for each of the subject-including images P1 taken consecutively (step S19) and an alpha map M is generated for each subject region (step S20). The consecutively taken images are combined with a given single-color background image sequentially in order of taking, and a motion JPEG image is generated to sequentially display the images of the person making the golf swing that have been cut out from the consecutively taken images. Alternatively, an image like what is called a strobe shot may be composed by superimposing, on each other, the images of the person making the golf swing that have been cut out from the consecutively taken images, to form a single image.
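
The strobe-shot composition can be sketched as below, assuming that each consecutively taken frame has already been cut out together with its alpha map M. For brevity the single-color un-blending shown earlier is omitted, so the cut-outs are treated as subject pixels directly; all names are illustrative.

```python
import numpy as np

def compose_strobe(background, cutouts, alpha_maps):
    """Superimpose consecutively cut-out subject images onto one background.

    cutouts and alpha_maps are lists ordered by shooting time; each cut-out is
    composited over the result of the previous ones, producing a single
    strobe-shot-like image.
    """
    result = background.astype(np.float32)
    for cutout, alpha in zip(cutouts, alpha_maps):
        a = alpha[..., None]
        result = result * (1 - a) + cutout.astype(np.float32) * a
    return result.astype(np.uint8)
```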


Although in the embodiment an alpha map M and image data of a subject cut-out image P4 are correlated with each other and stored as one file, the alpha map M and image data of a subject-including image P1 may be correlated with each other and stored as one file in the recording medium (storage unit) 9. In this case, two modes may be provided for reproduction of this file, that is, a mode in which the subject-including image P1 is reproduced and a mode in which a subject cut-out image P4 is composed at the time of reproduction by using the alpha map M.


A face detecting module may be provided in the image capturing apparatus 100 and the process may be modified in the following manner. Face detection is performed for each of a subject-including image P1 and a background image P2. If a face is detected only in the subject-including image P1, an offset value may be added so that the degree of difference (the degree of non-coincidence) of the degree-of-difference map is made high in the face-detected region. With this measure, an image of the face, which is the most important part of a subject, is included in the cut-out region more reliably. Furthermore, a face cutting-out mode may be provided and the process may be modified in such a manner that only a face-detected region is cut out by labeling and the conventional labeling process is omitted. This makes it possible to obtain a face cut-out image with lighter processing.
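
If such a face detecting module is approximated with an off-the-shelf Haar cascade (an assumption; the embodiment does not specify the detector), the offset addition to the degree-of-difference map might look like the sketch below, with the offset value purely illustrative.

```python
import cv2
import numpy as np

def boost_face_region(diff_map, subject_img, offset=64):
    """Raise the degree of difference inside face regions detected in the
    subject-including image, so that the face is cut out more reliably."""
    # Haar cascade used here only as a stand-in for the face detecting module.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(subject_img, cv2.COLOR_BGR2GRAY)

    boosted = diff_map.astype(np.float32)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 4):
        # Add an offset so the face-detected region exceeds the binarization threshold.
        boosted[y:y + h, x:x + w] += offset
    return boosted
```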


Although in the above embodiment the erosion processing (step S216) and the dilation processing (step S219) are performed after the binarization processing (step S213) on a degree-of-difference map, the invention is not limited to such a case. The binarization processing may be performed after the erosion processing and the dilation processing.


That is, the subject extracting process of the above embodiment is just an example and the subject extracting process may be modified arbitrarily as appropriate as long as a subject image S can be extracted properly from a subject-including image P1.


Although in the above embodiment the display unit 11 is employed as an example of output unit, the invention is not limited to such a case. For example, the output unit may be an external output terminal (not shown) to which a printer or the like can be connected. In this case, image data of a combined image is output to the printer via the external output terminal to print the combined image.


The configuration of the image capturing apparatus 100 described in the above embodiment is just an example and the invention is not limited to it.


For example, where the image capturing unit 2 is a CMOS image sensor, focal plane distortion may occur when a fast-moving subject is shot because the individual pixels are different from each other in the start timing of charge storage. To prevent this phenomenon, a mechanical shutter (not shown) which is drive-controlled by a shutter control section (not shown) may be provided to control the exposure time.


In addition, in the above embodiment, the functions of the first image capturing unit and the second image capturing unit are realized in such a manner that the lens unit 1, the image capturing unit 2, and the capturing control unit 3 operate under the control of the CPU 13, and the functions of the positioning unit, difference generating unit, subject extracting unit, coordinate conversion formula calculating unit, position information generating unit, and combining unit are realized in such a manner that the image processing unit 8 operates under the control of the CPU 13. However, the invention is not limited to such a case. They may be realized in such a manner that a given program or the like is run by the CPU 13.


More specifically, a program including a first shooting control routine, a second shooting control routine, a positioning routine, a difference generating routine, a subject extracting routine, a coordinate conversion formula calculating routine, a position information generating routine, and a combining routine may be stored in advance in a program memory (not shown) for storing a program. The first shooting control routine causes the CPU 13 to function as the first shooting control unit for causing the image capturing unit to take a subject-including image P1 in which a subject image S is located on a background. The second shooting control routine causes the CPU 13 to function as the second shooting control unit for causing the image capturing unit to take a background image P2 having no subject image S on the same background as that of the subject-including image P1. The positioning routine causes the CPU 13 to function as the positioning unit for performing positioning between the subject-including image P1 and the background image P2. The difference generating routine causes the CPU 13 to function as the difference generating unit for generating differential information between each corresponding pair of pixels of the subject-including image P1 and the background image P2. The subject extracting routine causes the CPU 13 to function as the subject extracting unit for extracting a subject region including the subject image S from the subject-including image P1 based on the generated pieces of differential information. The coordinate conversion formula calculating routine causes the CPU 13 to function as the coordinate conversion formula calculating unit for calculating a coordinate conversion formula for the pixels of the subject-including image P1 based on feature points extracted from the background image P2. The position information generating routine causes the CPU 13 to function as the position information generating unit for generating position information by determining a position of the extracted subject region in the subject-including image P1. The combining routine causes the CPU 13 to function as the combining unit for generating a subject cut-out image P4, which is a combined image of an image of the subject region and a given background, based on the generated position information.


It is to be understood that the present invention is not limited to the specific embodiments described above and that the invention can be embodied with the components modified without departing from the spirit and scope of the invention. The invention can be embodied in various forms according to appropriate combinations of the components disclosed in the embodiments described above. For example, some components may be deleted from all of the components shown in each embodiment. Further, components in different embodiments may be used appropriately in combination.

Claims
  • 1. An image capturing apparatus comprising: a first image capturing unit configured to capture a subject-including image including a subject image being an image of a subject and an image of a background under a first capturing condition;a second image capturing unit configured to capture a background image including the image of the background with no image of the subject under a second capturing condition that is substantially the same with the first capturing condition;a positioning unit configured to perform positioning of the subject-including image and the background image;a difference generating unit configured to generate differential information indicating difference between each corresponding pair of pixels in the subject-including image and in the background image as positioned by the positioning unit; anda subject extracting unit configured to extract a subject region including the subject image from the subject-including image based on the differential information generated by the difference generating unit.
  • 2. The apparatus of claim 1 further comprising: a coordinate conversion formula calculating unit configured to calculate a coordinate conversion formula for pixels included in the subject-including image based on feature points that are extracted from the background image,wherein the difference generating unit is configured to generate the differential information indicating difference between each corresponding pair of pixels of the background image and the subject-including image being applied with a coordinate conversion for each pixels using the coordinate conversion formula calculated by the coordinate conversion formula calculating unit.
  • 3. The apparatus of claim 1, wherein the subject extracting unit is configured to determine the subject region when extracting the subject region from the subject-including image by eliminating pixel sets having regions being smaller than a threshold based on the differential information generated by the difference generating unit.
  • 4. The apparatus of claim 3, wherein the subject extracting unit is configured to determine whether or not to perform eliminating the pixel sets having regions being smaller than the threshold in accordance with a feature amount of the background image when extracting the subject region from the subject-including image.
  • 5. The apparatus of claim 3, wherein the subject extracting unit is configured to eliminate the pixel sets having regions being smaller than the threshold by performing erosion and dilation in accordance with an amount of a camera shake when extracting the subject region from the subject-including image.
  • 6. The apparatus of claim 1, wherein the subject extracting unit is configured to determine, as an effective region of the subject region, a region in the subject region having the number of pixels less than the number of constituent pixels of the subject region by a given ratio.
  • 7. The apparatus of claim 1 further comprising: a distance determining unit configured to determine a distance to the subject,wherein the positioning unit is configured to perform the positioning of the subject-including image and the background image in a simplified mode when the distance determined by the distance determining unit is shorter than a threshold.
  • 8. The apparatus of claim 1 further comprising: a position determining unit configured to determine a change in a relative position between a first position of the apparatus at which the subject-including image is captured and a second position of the apparatus at which the background image is captured,wherein the positioning unit is configured to perform the positioning of the subject-including image and the background image in a simplified mode when the position determining unit determines that the relative position is unchanged.
  • 9. The apparatus of claim 1 further comprising: a featurelessness determining unit configured to determine a featurelessness of the background image,wherein the positioning unit is configured to omit performing the positioning of the subject-including image and the background image when the featurelessness is higher than a threshold.
  • 10. The apparatus of claim 1 further comprising: a position information generating unit configured to generate position information indicating a position of the subject region in the subject-including image.
  • 11. The apparatus of claim 10 further comprising: a storage unit configured to store the subject-including image with the position information.
  • 12. The apparatus of claim 1 further comprising: a combining unit configured to generate a combined image of an alternative background image and an image of the subject region based on subject region information of the subject region extracted by the subject extracting unit.
  • 13. The apparatus of claim 10 further comprising: a combining unit configured to generate a combined image of an alternative background image and an image of the subject region based on subject region information of the subject region extracted by the subject extracting unit,wherein the combining unit is configured to generate the combined image based on the position information generated by the position information generating unit.
  • 14. The apparatus of claim 13, wherein the combining unit is configured to generate, as the combined image, a subject cut-out image including the image of the subject region on a single-colored background.
  • 15. The apparatus of claim 14, wherein the combining unit is configured to generate, as the combined image, a subject combined image being combined with the subject cut-out image and a given background image.
  • 16. The apparatus of claim 13 further comprising: a storage unit configured to store the combined image with the position information.
  • 17. The apparatus of claim 1 further comprising: a display unit configured to display the subject-including image being overlapped in a semi-transparent mode on a live-view image of the background image being captured by the second image capturing unit.
  • 18. The apparatus of claim 12 further comprising: a display unit configured to display the combined image generated by the combining unit.
  • 19. An image processing method comprising: capturing a subject-including image including a subject image being an image of a subject and an image of a background under a first capturing condition;capturing a background image including the image of the background with no image of the subject under a second capturing condition that is substantially the same with the first capturing condition;performing positioning of the subject-including image and the background image;generating differential information indicating difference between each corresponding pair of pixels in the subject-including image and in the background image as being positioned; andextracting a subject region including the subject image from the subject-including image based on the differential information.
  • 20. A computer readable medium storing a software program that causes a computer to perform image processing comprising: capturing a subject-including image including a subject image being an image of a subject and an image of a background under a first capturing condition;capturing a background image including the image of the background with no image of the subject under a second capturing condition that is substantially the same with the first capturing condition;performing positioning of the subject-including image and the background image;generating differential information indicating difference between each corresponding pair of pixels in the subject-including image and in the background image as being positioned; andextracting a subject region including the subject image from the subject-including image based on the differential information.
Priority Claims (3)
Number Date Country Kind
P2008-319633 Dec 2008 JP national
P2009-001484 Jan 2009 JP national
P2009-278378 Dec 2009 JP national