This application claims the benefit of priority under 35 U.S.C. §119 of Japanese application no. 2008-038778, filed on Feb. 20, 2008, which is incorporated herein by reference.
The present invention relates to an image processing technique that can be used, for example, for the selection of a photo image.
A large number of images may conventionally be stored in a digital camera, a personal computer, and the like. A user sometimes wishes to select some of these stored images as target images for the purpose of saving, printing, and the like. In an effort to facilitate image selection, various kinds of techniques have been proposed. An example of image selection techniques of the related art is described in JP-A-2007-334594.
A photo image often includes the face of a human. However, image selection techniques of the related art, including that of JP-A-2007-334594, do not take full advantage of the presence of a face in the image for easier image selection.
The present invention provides a technique for selecting an image from among a plurality of images in a reliable manner by utilizing a face that is included in the image.
The invention provides, as various aspects thereof, an image processing apparatus, an image processing method, and a computer program having the following novel and inventive features, the non-limiting exemplary configuration and operation of which is described in detail in the DESCRIPTION OF EXEMPLARY EMBODIMENTS.
Application Example 1 (First Aspect of the Invention): An image processing apparatus that selects at least one photo image out of a plurality of photo images includes: a face area determining section that detects whether or not there is a face in each photo image and determines the face area of the face, if any, detected in each photo image; an image evaluation processing section that calculates a first edge amount pertaining to the face area detected in each photo image; and an image selecting section that selects a photo image from among the plurality of photo images on the basis of the first edge amount of each photo image. Since the image processing apparatus according to the first aspect of the invention selects a photo image or images based on the first edge amount pertaining to a face area or areas, it is possible to select at least one photo image that contains a large edge amount pertaining to the face area in a good in-focus state.
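By way of non-limiting illustration, the face-area-based selection of Application Example 1 can be sketched as follows. The gradient-sum edge measure, the array shapes, and the function names are assumptions introduced for illustration only; the face areas are assumed to be supplied as rectangles by an external face area determining section (face detector).

```python
import numpy as np

def edge_amount(region):
    # Edge amount as the sum of absolute horizontal and vertical
    # luminance differences (a simple gradient-based measure; assumed).
    region = region.astype(float)
    return (np.abs(np.diff(region, axis=1)).sum()
            + np.abs(np.diff(region, axis=0)).sum())

def select_sharpest(images, face_rects):
    # face_rects[i] = (top, left, height, width) of the face detected
    # in images[i] by an external face area determining section (assumed).
    scores = [edge_amount(img[t:t + h, l:l + w])
              for img, (t, l, h, w) in zip(images, face_rects)]
    return int(np.argmax(scores))  # index of the selected photo image
```

A photo image whose face area is in a good in-focus state yields large luminance differences, hence a large first edge amount, and is therefore selected.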
Application Example 2: In the image processing apparatus according to the first aspect of the invention, the image evaluation processing section preferably calculates a second edge amount pertaining to either an area other than the detected face area in each photo image or the entire area of each photo image and further calculates a total edge amount by weighted averaging the first and second edge amounts; and the image selecting section performs the selection using the calculated total edge amount. Since the image processing apparatus performs the selection using the total edge amount, which is calculated for each candidate photo image in consideration of the contributions of both the first edge amount of the face area and the second edge amount of the other area, that is, an area other than the face area, it is possible to select at least one photo image that is in a good in-focus state not only in the face area but also in the other area.
Application Example 3: In the image processing apparatus described above, the second edge amount is preferably an edge amount pertaining to an area other than the face area detected in each photo image; and the weight that is applied to the first edge amount is larger than that applied to the second edge amount in the weighted average calculation. In this configuration, because the weight applied to the first edge amount, which pertains to the face area, is larger than that applied to the second edge amount, which pertains to the other area, at least one photo image can be selected with greater importance placed on a good in-focus state of the face area than on that of the other area while still taking the in-focus states of both areas into consideration.
Application Example 4: In the image processing apparatus according to the first aspect of the invention, the image evaluation processing section preferably divides the face area into a plurality of sub areas and calculates an edge amount for each of the divided sub areas; and a larger or largest value of the edge amounts of the sub areas is used as the first edge amount if a difference between the calculated edge amounts of the sub areas is not smaller than a predetermined threshold value, whereas the average value of the edge amounts of the sub areas is used as the first edge amount if the difference is smaller than the predetermined threshold value. In an image processing apparatus having this configuration, if there is a large difference between the in-focus state of a certain sub area of the face area and the in-focus state of the other sub area(s), the edge amount of the sub area that is in a better or best in-focus state, that is, the larger or largest edge amount value, is used as the first edge amount, which pertains to the face area. For this reason, for example, when a face shown in a photo image is in profile, that is, for a half-faced image, the edge amount value of a sub area that represents the accurate in-focus state of the face area is used as the first edge amount, which makes it possible to select an appropriate image with improved reliability.
Application Example 5: In the image processing apparatus according to the first aspect of the invention, the image evaluation processing section preferably further calculates one or both of luminance average and luminance variance values for each photo image or each of preselected photo images; and the image selecting section performs the selection on the basis of either one or both of the luminance average and luminance variance values as well as on the basis of the edge amount. With this configuration, since image selection is performed not only with the use of the edge amount but also with the use of the luminance average value and/or the luminance variance value each as an index of image quality, it is possible to select an image or images having preferred image quality.
Application Example 6: In the image processing apparatus according to the first aspect of the invention, if the number of photo images in which a face was detected is N or smaller, where N is a predetermined natural number, all of the photo images in which a face was detected are preferably selected regardless of the values of the first edge amounts. With this configuration, it is possible to perform image selection with preference being given to an image that includes a face or faces.
Application Example 7: In the image processing apparatus according to the first aspect of the invention, if a difference in the size of the face areas between the photo images is not smaller than a predetermined threshold value, the image selecting section preferably selects a photo image that has a larger or largest face area regardless of the values of the first edge amounts. With this configuration, image selection can be performed with preference being given to a well-photographed image that includes a face shot in a large size.
Application Example 8: In the image processing apparatus according to the first aspect of the invention, where there is more than one face in one photo image, the face area determining section preferably determines a plurality of face areas for the plurality of faces; and the image evaluation processing section uses either the sum of the edge amounts of the plurality of face areas or the average thereof as the first edge amount. With this configuration, the edge amount for an image including more than one face can be appropriately determined.
The present invention can be implemented and/or embodied in a variety of modes. As a few non-limiting examples thereof, the invention can be implemented and/or embodied as, and/or in the form of, an image selection method and/or an image selection apparatus, a method for performing image processing and/or other related processing on a selected image(s), and/or an apparatus for performing image processing and/or other related processing on a selected image(s). As another non-limiting example thereof, the invention can be implemented and/or embodied as, and/or in the form of, a computer program that realizes functions made available by these apparatuses and/or methods, and/or a storage medium that stores such a computer program. In addition, as still another non-limiting example thereof, the invention can be actually implemented and/or embodied as, and/or in the form of, a data signal that contains the content of the computer program and is transmitted via or in the form of a carrier. The above description is provided as non-limiting enumeration for the sole purpose of facilitating the understanding of the invention.
The invention will be described with reference to the accompanying drawings, wherein like numbers reference like elements.
With reference to the accompanying drawings, exemplary embodiments of the invention are explained in the following five sections A, B, C, D, and E.
A. First Embodiment
B. Second Embodiment
C. Third Embodiment
D. Fourth Embodiment
E. Variation Examples
The inner components that make up the image-selection processing unit 400 are illustrated in a tree structure in the lower part of
In step S20, the image selection unit 490 makes a judgment as to whether or not, among the plurality of candidate images, a face(s) was detected in one candidate image only. If a face(s) was detected in one candidate image only, in step S30, the image selection unit 490 selects the one candidate image in which a face(s) was detected and the processing of
On the other hand, if it is judged in the step S20 that there is more than one candidate image in which a face was detected, the image selection unit 490 makes a judgment as to whether or not a difference in the size of face areas between or among the candidate images is greater than a predetermined threshold value (step S40). A more detailed explanation of the face areas is given later. If the difference in the size of face areas between the candidate images is greater than the predetermined threshold value, the image selection unit 490 selects the candidate image that has the larger or largest face area in step S50 and the processing of
On the other hand, if it is judged in the step S40 shown in
For example, the total edge amount, which is denoted as “EdgeAll” in the following description and drawings, can be calculated using the following formula (1).
EdgeAll=Edge 1×W1+Edge 2×W2 (1)
In formula (1), Edge 1 denotes the face area edge amount, that is, the edge amount of a face area, whereas Edge 2 denotes the entire image edge amount, that is, the edge amount of the entire image. Each of W1 and W2 denotes a weight.
The values of the weights W1 and W2 may be the same or different. In a case where different values are used as the weights W1 and W2, the weight W1 that is applied to the face area edge amount Edge 1 is preferably a relatively large value. The reason that a larger weight value should be used for the face area is as follows. Usually, a face area has a flesh color with a gentle rise and fall. Accordingly, the edge amount of the face area tends to be smaller than that of other areas. Therefore, if a relatively larger weight is applied to the face area than to the other area in the calculation of the total edge amount, it is possible to obtain a desirable total edge amount that faithfully represents the image quality of the face area, especially the in-focus state of the face area. For this reason, a larger weight value is preferably used for the face area.
Note that, even when W1 is equal to W2, the actual weight applied to the face area is effectively about twice as large as that applied to the other area. This is because the entire image edge amount Edge 2 includes the face area edge amount Edge 1. In a case where there is more than one face in one image, it is preferable to use either the sum of the edge amounts of the plurality of face areas or the average thereof as the face area edge amount Edge 1.
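A minimal sketch of the total edge amount calculation of formula (1), including the handling of more than one face, might look as follows; the default weight values are illustrative assumptions only, not values prescribed by any embodiment.

```python
def total_edge_amount(edge_face, edge_whole, w1=0.7, w2=0.3):
    # Formula (1): EdgeAll = Edge1 x W1 + Edge2 x W2.
    # Edge2 (the entire image) already contains Edge1, so even with
    # W1 == W2 the face area effectively receives about twice the weight.
    return edge_face * w1 + edge_whole * w2

def face_edge_amount(face_edges, use_sum=True):
    # More than one face: use either the sum or the average of the
    # per-face edge amounts as the face area edge amount Edge1.
    return sum(face_edges) if use_sum else sum(face_edges) / len(face_edges)
```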
As a non-limiting modification of the calculation explained above, the total edge amount EdgeAll may be found on the basis of the face area edge amount Edge 1 only, which means that the edge amount of any area other than the face area is not used in the total edge amount calculation. Although it is possible to adopt such a modified calculation method, it is advantageous to include the edge amount of other area in the calculation of the total edge amount EdgeAll because, if so included, the calculated total edge amount EdgeAll further ensures a good in-focus state of a background image part, which is an image part other than the face part of an image. This means that the calculated total edge amount EdgeAll ensures a good in-focus state of both the face and background parts of an image, thereby making it possible to appropriately select an image having a good overall in-focus state.
EdgeAll=EdgeFace×Wa+EdgeNoFace×Wb (2)
In formula (2), EdgeFace denotes the face block edge amount, that is, the edge amount of blocks that include an area part of a face, whereas EdgeNoFace denotes the non-face block edge amount, that is, the edge amount of blocks that do not include any area part of the face. Wa and Wb denote weights.
In
The weights W1 and W2 of formula (1) or the weights Wa and Wb of formula (2) may be varied depending on the ratio of the area size of the face area(s) to the area size of the entire image. Specifically, for example, the weight W1 or Wa, which is applied to the face area, may be decreased as a percentage value calculated by dividing the area size of the face area(s) by the area size of the entire image increases. In other words, the weight W1 or Wa may be decreased as a face-area occupancy factor increases. With such a variable weight, the contribution of the edge amount of the face area or the face area blocks to the total edge amount can be prevented from being excessively large when the face area occupies a substantially large area part of the image.
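The variable weight described above can be sketched, for example, as a linear decrease with the face-area occupancy factor; the endpoint values w_min and w_max are hypothetical parameters introduced for illustration.

```python
def face_weight(face_area, image_area, w_min=0.5, w_max=0.9):
    # Decrease the face-area weight (W1 or Wa) as the face-area occupancy
    # increases, so that a face occupying a large part of the image does
    # not contribute excessively to the total edge amount.
    occupancy = face_area / image_area  # 0.0 .. 1.0
    return w_max - (w_max - w_min) * occupancy
```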
Referring back to
In the calculation of the total edge amount EdgeAll according to the present embodiment of the invention, the value of a weight that is multiplied by the edge amount of the face area/blocks is substantially larger than the value of a weight that is multiplied by the edge amount of the non-face area/blocks. For this reason, an image having a relatively large face area/block edge amount is selected in step S80 of the edge-based image selection. Specifically, an image having a relatively good in-focus face state, an image having relatively large face area occupancy, or the like is selected. In most cases, a user chooses such an image as a preferable image. Therefore, image selection according to the present embodiment of the invention has an advantage in that it makes it possible to automatically select a preferable image that is likely to be chosen by a user.
If a difference in the total edge amounts between or among the plurality of candidate images is smaller than a predetermined threshold value, it is possible to adopt, for example, any of the following selection methods.
A1: The first image or the last image of the plurality of candidate images is selected.
A2: When there are three or more candidate images, the center one is selected.
A3: When there are three or more candidate images, the left one and the right one are selected.
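The fallback selection methods A1 to A3 above can be sketched as follows; the string method labels and the convention of returning zero-based candidate indices are assumptions for illustration.

```python
def fallback_select(n, method):
    # Tie-break selection used when the total edge amounts of the
    # n candidate images differ by less than the threshold value.
    if method == "first":              # A1: the first image
        return [0]
    if method == "last":               # A1: the last image
        return [n - 1]
    if method == "center" and n >= 3:  # A2: the center one
        return [n // 2]
    if method == "ends" and n >= 3:    # A3: the left one and the right one
        return [0, n - 1]
    raise ValueError("method not applicable for %d candidates" % n)
```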
As explained in detail above, in the image selection according to the present embodiment of the invention, a predetermined number of images is automatically selected out of a plurality of candidate images on the basis of the presence/absence of a face area, the size of the face area, and the edge amount of the face area, though not necessarily limited thereto. Therefore, a desirable image(s) that is/are suited for subsequent processing can be easily obtained.
In the step T100, the luminance average value calculation unit 470 of
As explained in detail above, in the image selection according to the second embodiment of the invention, final image selection is performed on the basis of luminance average and luminance variance values after the preliminary selection of images on the basis of edge amounts. Thus, it is possible to select an image(s) having preferred image quality.
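A non-limiting sketch of this two-stage selection follows. Because the embodiment does not fix the exact luminance-based criterion, the use of the luminance variance as a simple contrast ranking in the final stage is an assumption made here for illustration.

```python
def two_stage_select(edge_scores, variance_scores, k, m):
    # Stage 1: preliminary selection of the K candidates with the
    # largest edge amounts.
    prelim = sorted(range(len(edge_scores)),
                    key=lambda i: edge_scores[i], reverse=True)[:k]
    # Stage 2: final selection of M images by luminance variance,
    # used here as a contrast index (an assumed criterion).
    return sorted(prelim, key=lambda i: variance_scores[i], reverse=True)[:m]
```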
In the step T200, the image-evaluation processing unit 450 of
Etotal=f(EdgeAll, Lave, Ldiv) (3)
In formula (3), “f (EdgeAll, Lave, Ldiv)” indicates that the total evaluation value Etotal is a function that depends on the total edge amount EdgeAll, the luminance average value, which is denoted as “Lave” herein, and the luminance variance value (i.e., luminance “divergence” value), which is denoted as “Ldiv” herein. The face area edge amount Edge 1 or the face block edge amount EdgeFace may be used in place of the total edge amount EdgeAll.
The following formula (3a) is a specific example of formula (3) shown above.
Etotal=α×EdgeAll+β×Lave+γ×Ldiv (3a)
In formula (3a), each of α, β, and γ is a constant (weight).
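Formula (3a) and the subsequent selection of M images can be sketched as follows; the default constants α, β, and γ are illustrative placeholders, not values prescribed by the embodiment.

```python
def total_evaluation(edge_all, l_ave, l_div, alpha=1.0, beta=0.5, gamma=0.5):
    # Formula (3a): Etotal = alpha*EdgeAll + beta*Lave + gamma*Ldiv
    return alpha * edge_all + beta * l_ave + gamma * l_div

def select_top_m(candidates, m):
    # candidates: list of (EdgeAll, Lave, Ldiv) tuples, one per image;
    # returns the indices of the M images with the largest Etotal.
    scores = [total_evaluation(*c) for c in candidates]
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return order[:m]
```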
In the step T220, the image selection unit 490 selects M images using the calculated total evaluation values Etotal, where M is a natural number. As explained in detail above, in the third embodiment of the invention, final image selection is performed using the calculated total evaluation values Etotal. Therefore, it is possible to select an image(s) having preferred image quality on the basis of the edge amounts of images and luminance distribution. As a modification example, either one of the luminance average value Lave and the luminance variance value Ldiv may be used in the calculation of the total evaluation value Etotal shown in formula (3). Moreover, other image quality evaluation values may be used in addition to or in place of the image quality evaluation values described above.
In step S100 of
As explained above, in the face area edge amount calculation according to the fourth embodiment, the edge amount of the entire face area FA is determined depending on the result of a judgment as to whether or not a difference among the calculated edge amounts of the plurality of sub areas SC1-SC4 is not smaller than the predetermined threshold value. This is because the fact that the in-focus state of a certain face sub area could be different from the in-focus state of another face sub area is taken into consideration. For example, when a face is in profile as shown in
As explained in detail above, in the face area edge amount calculation according to the fourth embodiment of the invention, when a local area part of a face is in a good in-focus state, the edge amount of the face area can be determined on the basis of an edge amount that reflects the in-focus state of the local sub area mentioned above. Thus, it is possible to select an image(s) having preferred image quality.
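The sub-area decision of the fourth embodiment can be sketched as follows; the threshold value and the use of four sub areas (corresponding to SC1-SC4) are assumptions for illustration.

```python
def face_area_edge(sub_edges, threshold):
    # sub_edges: edge amount of each divided sub area of the face area.
    # If the sub areas differ greatly in focus (e.g. a face in profile),
    # use the largest value; otherwise use the average.
    if max(sub_edges) - min(sub_edges) >= threshold:
        return max(sub_edges)
    return sum(sub_edges) / len(sub_edges)
```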
Although various exemplary embodiments of the present invention are described above, needless to say, the invention is in no case restricted to these exemplary embodiments; the invention may be embodied in a variety of variations and/or modifications without departing from the spirit thereof. Non-limiting variation examples thereof are explained below.
In the foregoing first embodiment of the invention, image selection is performed with the use of a total edge amount calculated for each candidate photo image in consideration of both the contribution of the edge amount of a face area(s) and the contribution of the edge amount of other area, that is, an area other than the face area. However, the scope of this aspect of the invention is not so limited. For example, image selection may be performed on the basis of the face area edge amount only without using the edge amount of other area at all. Even with such a modification, it is possible to perform image selection that reflects the in-focus state of the face area. Although it is possible to adopt such a modification, it is advantageous to include the edge amount of other area in the calculation of the total edge amount because, if so included, the calculated total edge amount further ensures a good in-focus state of a background image part, which is an image part other than the face part of an image.
In the steps S20 and S30 of the image selection processing flow of