This nonprovisional application claims priority under 35 U.S.C. §119(a) on Patent Application No. 2010-168699 filed in Japan on Jul. 27, 2010, the entire contents of which are hereby incorporated by reference.
1. Field of the Invention
The present invention relates to electronic equipment such as a digital camera.
2. Description of Related Art
There is proposed a function of adjusting an in-focus state of a target image by image processing, and a type of processing for realizing this function is also called a digital focus. The in-focus state of the target image means, for example, a depth of field of the target image or an in-focus distance of the target image (e.g., a center of the depth of field).
As a method of determining the in-focus state, there are considered a complete automatic setting method, a method of designating a subject by a user, and a complete manual setting method.
In the complete automatic setting method, the digital camera automatically performs detection of a main subject and setting of an in-focus distance and a depth of field adapted to the main subject.
In the method of designating a subject by a user, the user selects and designates a subject to be focused using a touch panel or the like, and the digital camera sets the in-focus distance and the depth of field in accordance with the designation.
In the complete manual setting method, the user manually inputs both the in-focus distance and the depth of field.
However, in the complete automatic setting method, as illustrated in FIG. 18A, an actual in-focus distance and an actual depth of field may be largely different from the in-focus distance and the depth of field desired by the user.
In addition, in the method of designating a subject by a user, a result image that substantially meets the user's intention can be obtained, but fine adjustment of the in-focus state is difficult. For instance, in a result image obtained as illustrated in
On the other hand, in the complete manual setting method, the in-focus distance and the depth of field can be designated as the user wants. In this case, the digital camera cannot recognize a user's intention unless the user performs an input operation. Therefore, as illustrated in
Electronic equipment according to an aspect of the present invention includes a digital focus portion that adjusts an in-focus state of a target image by image processing, and a unit adjustment amount setting portion that sets a unit adjustment amount in adjustment of the in-focus state based on distance information of the target image.
Electronic equipment according to another aspect of the present invention includes a digital focus portion that adjusts an in-focus state of a target image including a depth of field of the target image by image processing, and a unit adjustment amount setting portion that sets a unit adjustment amount in adjustment of the depth of field based on an in-focus distance as a reference distance in the depth of field.
Hereinafter, examples of embodiments of the present invention are described in detail with reference to the attached drawings. In the diagrams that are referred to, the same portion is denoted by the same numeral or symbol, and overlapping description of the same portion is omitted as a rule.
A first embodiment of the present invention is described.
The imaging apparatus 1 includes an imaging portion 11, an AFE 12, an image processing portion 13, a microphone portion 14, a sound signal processing portion 15, a display portion 16, a speaker portion 17, an operating portion 18, a recording medium 19, and a main control portion 20.
The image sensor 33 performs photoelectric conversion of an optical image expressing a subject, which has entered through the optical system 35 and the aperture stop 32, and outputs an electric signal obtained by the photoelectric conversion to the AFE 12. The AFE 12 amplifies an analog signal output from the imaging portion 11 (image sensor 33) and converts the amplified analog signal into a digital signal. The AFE 12 outputs the digital signal as RAW data to the image processing portion 13. An amplification degree of the signal amplification in the AFE 12 is controlled by the main control portion 20.
The image processing portion 13 generates image data expressing an image taken by the imaging portion 11 (hereinafter referred to also as a taken image) based on the RAW data from the AFE 12. The image data generated here contains, for example, a luminance signal and a color difference signal. However, the RAW data is also one type of the image data, and the analog signal output from the imaging portion 11 is also one type of the image data.
The microphone portion 14 converts ambient sound around the imaging apparatus 1 into a sound signal and outputs the result. The sound signal processing portion 15 performs necessary sound signal processing on the output sound signal of the microphone portion 14.
The display portion 16 is a display device having a display screen such as a liquid crystal display panel, and displays a taken image or an image recorded in the recording medium 19 under control of the main control portion 20. The display and the display screen in the following description mean the display and the display screen of the display portion 16 unless otherwise noted. The speaker portion 17 is constituted of one or more speakers, which reproduce and output sounds of various sound signals such as a sound signal generated by the sound signal processing portion 15 and a sound signal read out from the recording medium 19. The operating portion 18 is a part that receives various operations from a user. An operation content of the operating portion 18 is transmitted to the main control portion 20 and the like. The recording medium 19 is a nonvolatile memory such as a card-like semiconductor memory or a magnetic disk, which records the taken image and the like under control of the main control portion 20. The main control portion 20 integrally controls operations of the individual parts in the imaging apparatus 1 in accordance with operation content of the operating portion 18.
Operation modes of the imaging apparatus 1 include an imaging mode in which a still image or a moving image can be taken, and a reproduction mode in which a still image or a moving image recorded in the recording medium 19 can be reproduced on the display portion 16. In the imaging mode, images of the subject are periodically taken at a predetermined frame period, and the imaging portion 11 (more specifically, the AFE 12) outputs the RAW data expressing a series of taken images of the subject. A series of images, such as a series of taken images, means a set of images arranged in time series. The image data of one frame period expresses one image. One taken image expressed by the image data of one frame period from the AFE 12 is referred to also as a frame image. It can be interpreted that the frame image is an image obtained by performing predetermined image processing (a demosaicing process, a noise reduction process, a color correction process, or the like) on the taken image of the RAW data.
The imaging apparatus 1 has a function of adjusting the in-focus state of the target input image by image processing after obtaining the image data of the target input image. The process of realizing this function is referred to as a digital focus. The digital focus can be performed in the reproduction mode. The target input image is a frame image obtained as a still image or a frame image constituting a moving image.
The target input image after the adjustment of the in-focus state by the digital focus is referred to as a target output image. The in-focus state to be adjusted by the digital focus includes the in-focus distance of the target input image and a depth value of the depth of field of the target input image.
The in-focus distance of the target image as the target input image or the target output image is a reference distance in the depth of field of the target image, and is typically a distance at the center of the depth of field of the target image. As is well known, subjects having subject distances within the depth of field are focused in the target image. When the in-focus state of the target input image is adjusted by the digital focus, each time the user performs the unit operation one time, the in-focus state of the target input image is changed by the unit adjustment amount. For instance, if the user performs the unit operation one time, the in-focus distance is increased or decreased by the unit adjustment amount, or the depth of field is increased or decreased by the unit adjustment amount. In other words, the unit adjustment amount indicates an adjustment amount of the in-focus distance or the depth of field by the unit operation (per unit operation). The unit adjustment amount and the adjustment amount can be read as a unit change amount and a change amount, respectively. In the first embodiment, the unit adjustment amount is set by using subject distance information. Hereinafter, operations of individual parts illustrated in
The subject distance information generating portion 51 detects subject distances of subjects within an imaging range of the imaging portion 11, so as to generate subject distance information indicating the subject distances of subjects at individual positions on the target input image. The subject distance of a subject means a distance in real space between the subject and the imaging apparatus 1 (more specifically, the image sensor 33). The image sensor 33 that takes the target input image corresponds to a viewpoint of the target input image. Therefore, the subject distance information of the target input image is information indicating a distance between the subject at each position on the target input image and the viewpoint of the target input image (i.e., information indicating a distance between the subject at each position on the target input image and the imaging apparatus 1 as the apparatus used for taking the target input image). Note that in this specification, the expression that a distance is large and the expression that a distance is long have the same meaning, while the expression that a distance is small and the expression that a distance is short have the same meaning. The subject distance information is a distance image (in other words, a range image) in which each pixel value is a measured value (i.e., a detected value) of the subject distance. As a method of detecting the subject distance and a method of generating the subject distance information, any known method (e.g., a method described in JP-A-2009-272799) can be used.
It is possible to generate the subject distance information from image data of the target input image, or to generate it from information other than the image data of the target input image. For instance, it is possible to generate the subject distance information using a distance measuring sensor (not shown) that measures the subject distance of the subject at each position on the target input image. As the distance measuring sensor, any known distance measuring sensor, such as one based on triangulation, can be used. Alternatively, for example, a compound eye camera may be used for generating the distance image. Alternatively, for example, a contrast detection method may be used. In this case, for example, a position of the focus lens 31 is moved step by step by a predetermined amount while image data are obtained sequentially from the AFE 12. The distance image can then be generated from high frequency components of the spatial frequency components of the obtained image data.
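For illustration only, the contrast detection approach just described might be sketched as follows in Python. The function name, the squared-Laplacian sharpness measure, the window size, and the use of NumPy/SciPy are assumptions of the sketch, not the apparatus's actual implementation.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def depth_from_focus(frames, focus_distances):
    """Hypothetical sketch of the contrast detection method: the focus lens
    is stepped through known positions, one frame per step, and each pixel
    is assigned the focus distance at which its local high-frequency energy
    (a proxy for contrast) peaks.

    frames          : array of shape (K, H, W), grayscale frames.
    focus_distances : length-K sequence, the in-focus distance of each frame.
    Returns an (H, W) distance image usable as subject distance information.
    """
    stack = np.asarray(frames, dtype=np.float64)
    # Local high-frequency energy: squared Laplacian averaged over a window.
    sharpness = np.stack(
        [uniform_filter(laplace(f) ** 2, size=9) for f in stack])
    best = sharpness.argmax(axis=0)          # index of sharpest frame per pixel
    return np.asarray(focus_distances)[best]
```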
Alternatively, for example, the imaging portion 11 may be formed so that the RAW data contains information indicating the subject distance, and the subject distance information may be generated from the RAW data. In order to realize this, for example, a method called “Light Field Photography” (e.g., a method described in WO/06/039486 Pamphlet or JP-A-2009-224982, hereinafter, referred to as a light field method) may be used. In the light field method, an imaging lens having an aperture stop and a micro lens array are used so that the image signal obtained from the image sensor also contains light propagation direction information in addition to light intensity distribution on a light receiving surface of the image sensor. Therefore, although not illustrated in
A generation timing of the subject distance information is arbitrary. For instance, it is possible to generate the subject distance information when the target input image is taken or just after taking the same. In this case, the image data of the target input image and the subject distance information are associated with each other and are recorded in the recording medium 19. Then, before performing the digital focus, the image data of the target input image together with the subject distance information are read out from the recording medium 19. Alternatively, for example, it is possible to generate the subject distance information just before the digital focus is performed. In this case, after the target input image is taken, the image data of the target input image and information to be a base of the subject distance information are associated with each other and are recorded in the recording medium 19. When the digital focus is performed, the image data of the target input image and the information to be a base of the subject distance information are read out from the recording medium 19. Then, the subject distance information is generated from the read information. Note that as understood from the above-mentioned description, the information to be a base of the subject distance information can be the same as the image data of the target input image or can include the image data of the target input image.
As illustrated in
An image 300 illustrated in
When the distance distribution 320 is formed, each pixel value (i.e., subject distance) in the distance image is classified into one of the classes, and each class width of the distance distribution 320 can be a predetermined distance width (e.g., a few centimeters to a few tens of centimeters). The i-th class forming the distance distribution 320 is expressed by C[i] (see
It is supposed that, among all the classes of the distance distribution 320, only the classes C[10] to C[15] and C[50] to C[55] have frequencies equal to or larger than a predetermined threshold value FTH (FTH denotes a natural number). A class having a frequency of the threshold value FTH or larger is referred to as a subject presence class, while a class having a frequency smaller than the threshold value FTH is referred to as a subject absence class. Therefore, in the distance distribution 320, the classes C[10] to C[15] and C[50] to C[55] are subject presence classes, while the other classes are subject absence classes.
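As a minimal sketch of how the distance distribution and the presence/absence classification might be computed, assuming a NumPy distance image and hypothetical names (classify_distance_classes, class_width, f_th):

```python
import numpy as np

def classify_distance_classes(distance_image, class_width=0.2, f_th=50):
    """Build a distance distribution (histogram) from a distance image and
    mark each class as a subject presence class (frequency >= F_TH) or a
    subject absence class (frequency < F_TH).

    distance_image : 2-D array of subject distances, one per pixel.
    class_width    : class width (e.g. a few centimeters to a few tens of
                     centimeters, as in the text).
    f_th           : frequency threshold F_TH (a natural number).
    Returns (edges, counts, presence); presence[i] is True when the i-th
    class is a subject presence class.
    """
    d = distance_image.ravel()
    # Class boundaries at a fixed class width; the extra width guarantees
    # the last edge covers the maximum distance.
    edges = np.arange(d.min(), d.max() + 2 * class_width, class_width)
    counts, _ = np.histogram(d, bins=edges)
    presence = counts >= f_th
    return edges, counts, presence
```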
In addition, as illustrated in
In addition, a minimum distance and a maximum distance among subject distances of the target input image 300 are denoted by dMIN and dMAX, respectively.
The subject distance information of the target input image 300 is given to the adjustment map generating portion 52 of
Based on the distance distribution, the adjustment map generating portion 52 sets the unit adjustment amount and generates an adjustment map on which content of the setting is reflected. The setting of the unit adjustment amount can be performed by the unit adjustment amount generating portion 53 that is a part of the adjustment map generating portion 52.
The setting method of the unit adjustment amount and the generating method of the adjustment map based on the distance distribution will be described. Meanings of the unit adjustment amount and the adjustment map will be apparent from the following description.
The adjustment map generating portion 52 splits the distance range from the minimum distance dMIN to the maximum distance dMAX into a plurality of unit distance ranges. In this case, the adjustment map generating portion 52 sets a width of the unit distance range for the subject presence distance range to be smaller than a width of the unit distance range for the subject absence distance range. For instance, in the example of the distance distribution 320, step positions d[1] to d[14] as illustrated in
The subject distance at a step position d[i+1] for an arbitrary integer i is larger than the subject distance at a step position d[i], and subject distances at step positions d[1] and d[14] are equal to the minimum distance dMIN and the maximum distance dMAX, respectively. The subject distance at the step position d[i] means a subject distance of a subject that is supposed to be positioned at the step position d[i]. In addition, step positions d[1] to d[6] belong to the presence range 341, step positions d[7] and d[8] belong to the absence range 342, and step positions d[9] to d[14] belong to the presence range 343. A range from the step position d[i] to the step position d[i+1] for an arbitrary integer i is the unit distance range. Therefore, in the example of
The adjustment map generating portion 52 sets a width of each unit distance range belonging to the presence range 341 and a width of each unit distance range belonging to the presence range 343 to be smaller than a width of each unit distance range belonging to the absence range 342. As for a given unit distance range, if at least a part of the unit distance range belongs to the absence range 342, the unit distance range is considered to belong to the absence range 342. Therefore, the unit distance range (d[6], d[7]) can be said to belong to both the presence range 341 and the absence range 342, but the unit distance range (d[6], d[7]) is regarded to belong to the absence range 342. Similarly, the unit distance range (d[8], d[9]) is also regarded to belong to the absence range 342.
Then, the unit distance ranges belonging to the presence range 341 are unit distance ranges (d[1], d[2]), (d[2], d[3]), (d[3], d[4]), (d[4], d[5]) and (d[5], d[6]). The unit distance ranges belonging to the absence range 342 are unit distance ranges (d[6], d[7]), (d[7], d[8]) and (d[8], d[9]). The unit distance ranges belonging to the presence range 343 are unit distance ranges (d[9], d[10]), (d[10], d[11]), (d[11], d[12]), (d[12], d[13]) and (d[13], d[14]).
As illustrated in
The width of each unit distance range can work as the unit adjustment amount. Then, a process of setting the step positions d[1] to d[14] satisfying the above-mentioned relationship corresponds to the setting process of the unit adjustment amount, and a map of the step positions d[1] to d[14] arranged in a predetermined space corresponds to the adjustment map. In other words, for example, when a user performs the unit operation of instructing an increase of the in-focus distance of the target input image 300 once, the in-focus distance of the target input image 300 is changed by the digital focus from the subject distance at the step position d[i] to the subject distance at the step position d[i+1]. When the user performs that unit operation twice, the in-focus distance of the target input image 300 is changed by the digital focus from the subject distance at the step position d[i] to the subject distance at the step position d[i+2]. In this way, the unit adjustment amount of the in-focus state is set finely in the distance ranges (341 and 343) in which the subject presence degree is high, while the unit adjustment amount of the in-focus state is set roughly in the distance range (342) in which the subject presence degree is low (see
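A sketch of how such step positions could be generated is shown below. The rule that a unit range partly overlapping an absence range is treated as belonging to the absence range follows the text; the function name and the fixed fine/coarse widths are assumptions of the sketch.

```python
def build_step_positions(d_min, d_max, presence_ranges,
                         fine_step, coarse_step):
    """Split [d_min, d_max] into unit distance ranges: a fine width inside
    subject presence distance ranges, a coarse width elsewhere.

    presence_ranges : list of (lo, hi) subject presence distance ranges
                      (e.g. the ranges 341 and 343).
    Returns the ordered step positions d[1], d[2], ..., ending at d_max.
    """
    steps = [d_min]
    while steps[-1] < d_max:
        nxt_fine = steps[-1] + fine_step
        # A unit range counts as a presence range only if it lies entirely
        # inside some presence range; otherwise it is an absence range.
        if any(lo <= steps[-1] and nxt_fine <= hi
               for lo, hi in presence_ranges):
            steps.append(min(nxt_fine, d_max))
        else:
            steps.append(min(steps[-1] + coarse_step, d_max))
    return steps
```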
The in-focus distance and depth of field designation portion 54 sets a designated in-focus distance and a designated depth of field using the adjustment map in accordance with an adjustment instruction operation issued by a user. The adjustment instruction operation is an operation to instruct adjustment (i.e., change) of the in-focus state of the target input image, and is a predetermined operation to the operating portion 18 or a touch panel operation to the display portion 16, for example.
The designated in-focus distance is an in-focus distance after the in-focus state adjustment by the digital focus. In other words, the designated in-focus distance is a target value of the in-focus distance of the target output image obtained by the digital focus. The designated depth of field is a depth of field after the in-focus state adjustment by the digital focus. In other words, the designated depth of field is a target value of the depth of field of the target output image obtained by the digital focus.
The designated depth of field is information designating the maximum distance and the minimum distance in the depth of field. The maximum distance and the minimum distance in the designated depth of field correspond to the maximum distance and the minimum distance in the depth of field of the target output image, respectively. Because a difference between the maximum distance and the minimum distance in the depth of field is a depth value of the depth of field, the designated depth of field specifies the depth value of the depth of field of the target output image. Note that the designated depth of field may be information specifying only a depth value of the depth of field. In this case, the depth of field having a depth value specified by the designated depth of field with the center at the designated in-focus distance is the depth of field after the in-focus state adjustment.
The designated in-focus distance and the designated depth of field at a time point when the adjustment instruction operation has not yet been performed are referred to as an initial value of the designated in-focus distance and an initial value of the designated depth of field, respectively. It is preferable that the initial value of the designated in-focus distance and the initial value of the designated depth of field are respectively the same as the in-focus distance and the depth of field before the in-focus state adjustment, namely the in-focus distance and the depth of field of the target input image 300. The in-focus distance of the target input image 300 can be determined from a state of each lens of the optical system 35 when the target input image 300 is taken (in particular, a position of the focus lens 31). The depth value of the depth of field of the target input image 300 can be obtained from an aperture stop value and a focal length when the target input image 300 is taken. When the in-focus distance and the depth value of the depth of field are known, the maximum distance and the minimum distance in the depth of field can be determined.
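For reference, one standard way to obtain the depth of field from the aperture stop value and the focal length uses the hyperfocal distance. The formulas below are the usual photographic thin-lens approximations, with an assumed circle of confusion c; they are not a method prescribed by this description.

```python
def depth_of_field(f, n, s, c):
    """Standard thin-lens approximation of the depth of field.

    f : focal length, n : aperture stop value (F-number),
    s : in-focus distance (subject distance), c : circle of confusion,
    all in the same length unit.
    Returns (near_limit, far_limit); far_limit is infinite once the
    in-focus distance reaches the hyperfocal distance."""
    h = f * f / (n * c) + f                  # hyperfocal distance
    near = s * (h - f) / (h + s - 2 * f)     # minimum distance in the depth of field
    far = s * (h - f) / (h - s) if s < h else float('inf')
    return near, far
```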
In addition, although not considered particularly in the above description, it is preferable to dispose an in-focus state detection portion 56 for detecting the in-focus distance and the depth of field of the target input image 300 in the imaging apparatus 1 as illustrated in
Note that it is possible that the in-focus state detection portion 56 also detects an in-focus position of the target input image 300. The in-focus position of the target input image 300 means a position, on the target input image 300, in an in-focus area included in the whole image area of the target input image 300. The in-focus area means an image area in which image data of a focused subject exists. The in-focus area and the in-focus position of the target input image 300 can be detected by using the spatial frequency components or the like of the target input image 300.
The digital focus portion 55 performs the digital focus on the target input image so that the in-focus distance of the target output image agrees with the designated in-focus distance and that the depth of field of the target output image agrees with the designated depth of field. The obtained target output image is displayed on the display portion 16.
With reference to
If the designated in-focus distance at the present time point is the subject distance at the step position d[i], and if the unit operation to instruct an increase of the in-focus distance is performed j times by a user, the designated in-focus distance is changed to the subject distance at the step position d[i+j]. If the designated in-focus distance at the present time point is the subject distance at the step position d[i], and if the unit operation to instruct a decrease of the in-focus distance is performed j times by a user, the designated in-focus distance is changed to the subject distance at the step position d[i−j] (j denotes an integer). However, an upper limit of the designated in-focus distance is a subject distance at the step position d[14] that agrees with the distance dMAX, and a lower limit of the designated in-focus distance is a subject distance at the step position d[1] that agrees with the distance dMIN. Therefore, the adjustment instruction operation to instruct an increase of the designated in-focus distance to be larger than the subject distance at the step position d[14] and the adjustment instruction operation to instruct a decrease of the designated in-focus distance to be smaller than the subject distance at the step position d[1] are neglected. When the adjustment instruction operation is neglected, a warning display indicating the neglect may be performed.
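In code form, the clamping behavior for the designated in-focus distance might look like the following sketch; the index convention and names are assumptions, with indices corresponding to the step positions d[1] to d[14] (so n_steps would be 14 here).

```python
def step_in_focus_index(index, j, increase, n_steps):
    """Move the designated in-focus distance by j unit operations along the
    step positions d[1..n_steps]. Operations that would leave the range
    [d[1], d[n_steps]] are neglected; the caller may then show a warning.
    Returns (new_index, accepted)."""
    new_index = index + j if increase else index - j
    if new_index < 1 or new_index > n_steps:
        return index, False          # adjustment instruction neglected
    return new_index, True
```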
If the maximum distance and the minimum distance in the designated depth of field are the subject distances of step positions d[iA] and d[iB], respectively, and if the unit operation to instruct an increase of the depth of field is performed j times by a user, the maximum distance and the minimum distance in the designated depth of field are changed to the subject distances of step positions d[iA+j] and d[iB−j], respectively. If the maximum distance and the minimum distance in the designated depth of field are the subject distances of step positions d[iA] and d[iB], respectively, and if the unit operation to instruct a decrease of the depth of field is performed j times by a user, the maximum distance and the minimum distance in the designated depth of field are changed to the subject distances of step positions d[iA−j] and d[iB+j]. Symbols iA and iB denote integers that satisfy “iA>iB>0”.
Alternatively, if the maximum distance and the minimum distance in the designated depth of field are the subject distances at the step positions d[iA] and d[iB], respectively, and if the unit operation to instruct an increase of the depth of field is performed j times, the minimum distance in the designated depth of field may be maintained at the subject distance at the step position d[iB] while the maximum distance in the designated depth of field is changed to the subject distance at the step position d[iA+j] or, alternatively, the maximum distance in the designated depth of field may be maintained at the subject distance at the step position d[iA] while the minimum distance in the designated depth of field is changed to the subject distance at the step position d[iB−j]. Similarly, if the maximum distance and the minimum distance in the designated depth of field are the subject distances at the step positions d[iA] and d[iB], respectively, and if the unit operation to instruct a decrease of the depth of field is performed j times, the minimum distance in the designated depth of field may be maintained at the subject distance at the step position d[iB] while the maximum distance in the designated depth of field is changed to the subject distance at the step position d[iA−j] or, alternatively, the maximum distance in the designated depth of field may be maintained at the subject distance at the step position d[iA] while the minimum distance in the designated depth of field is changed to the subject distance at the step position d[iB+j].
However, an upper limit of the maximum distance in the designated depth of field is the subject distance at the step position d[14] that agrees with the distance dMAX, and a lower limit of the minimum distance in the designated depth of field is the subject distance at the step position d[1] that agrees with the distance dMIN. Therefore, the adjustment instruction operation to instruct an increase of the maximum distance in the designated depth of field beyond the subject distance at the step position d[14] and the adjustment instruction operation to instruct a decrease of the minimum distance in the designated depth of field below the subject distance at the step position d[1] are neglected. Further, the adjustment instruction operation that would decrease the depth value of the designated depth of field to zero or smaller is also neglected. For instance, if iA=4 and iB=3 are satisfied, the adjustment instruction operation to instruct a decrease of the depth of field is neglected. When the adjustment instruction operation is neglected, a warning display indicating the neglect may be performed.
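The depth-of-field limits can be sketched similarly. This hypothetical helper implements only the variant in which both ends of the designated depth of field move, and neglects operations that would leave the step range or reduce the depth value to zero or smaller.

```python
def step_depth_of_field(i_a, i_b, j, widen, n_steps):
    """Adjust the designated depth of field, held as step indices
    (i_a, i_b) with i_a > i_b > 0 for its maximum / minimum distance.
    widen=True moves both ends outward by j steps, widen=False inward.
    Returns ((i_a, i_b), accepted); rejected operations are neglected."""
    if widen:
        i_a2, i_b2 = i_a + j, i_b - j
    else:
        i_a2, i_b2 = i_a - j, i_b + j
    # Neglect: out of [d[1], d[n_steps]], or the depth value would become
    # zero or smaller (e.g. i_a=4, i_b=3 with a decrease instruction).
    if i_b2 < 1 or i_a2 > n_steps or i_a2 <= i_b2:
        return (i_a, i_b), False
    return (i_a2, i_b2), True
```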
After Step S14 illustrated in
After that, in Step S16, the main control portion 20 illustrated in
If the adjustment confirming operation is performed in Step S16, the image data of the currently displayed target output image is recorded in the recording medium 19 (Step S17). In this case, additional data may be recorded in the recording medium 19 in association with the image data of the target output image. The additional data may contain a latest designated in-focus distance and a latest designated depth of field.
As illustrated in
Now, it is supposed that the in-focus distance of the target input image 300 agrees with the subject distance at the step position d[4]. In this situation, if the button 61 is pressed once, twice, three times, four times, five times, six times or seven times, the designated in-focus distance is changed to the subject distance of the step position d[5], d[6], d[7], d[8], d[9], d[10] or d[11], respectively. In addition, although different from the situation illustrated in
In addition, it is supposed that an operation of pressing the button 63 one time is the one unit operation to instruct an increase of the depth of field, and that an operation of pressing the button 64 one time is the one unit operation to instruct a decrease of the depth of field. The user can designate by another operation whether only the maximum distance in the depth of field should be changed, or only the minimum distance in the depth of field should be changed, or both of them should be changed by an operation of the button 63. Similarly, the user can designate by another operation whether only the maximum distance in the depth of field should be changed, or only the minimum distance in the depth of field should be changed, or both of them should be changed by an operation of the button 64. Here, for convenience of description, an operational example is described in which only the maximum distance in the depth of field is changed by the operation of the button 63, and only the minimum distance in the depth of field is changed by the operation of the button 64.
It is supposed that the maximum distance in the depth of field of the target input image 300 agrees with the subject distance at the step position d[5]. In this situation, when the button 63 is pressed once, twice, three times, four times, five times or six times, the maximum distance in the designated depth of field is changed to the subject distance of the step position d[6], d[7], d[8], d[9], d[10] or d[11], respectively. In addition, although different from the situation illustrated in
Note that an operation of pressing the button 61 continuously (a so-called long press operation) may be regarded as a plurality of times of the unit operation. The same is true for the buttons 62 to 64.
When trying to adjust the in-focus state by the digital focus, the user usually pays attention to any one subject for adjusting the in-focus state. For instance, the in-focus distance is adjusted in detail so that a specific person in the subject 301 is best focused, or the in-focus distance is adjusted in detail so that a specific tree in the subject 302 is best focused (see
Based on this consideration, in this embodiment, the unit adjustment amount is set finely in the distance ranges (341 and 343) having a high subject (i.e., object) presence degree so that fine adjustment of the in-focus state can be performed. In the distance range (342) having a low subject presence degree, the unit adjustment amount is set roughly so that rough adjustment of the in-focus state can be performed. Thus, adjustment in accordance with the user's intention is realized, so that the user's operation for the in-focus state adjustment can be simplified.
During execution of the process of Steps S11 to S17 illustrated in
The fine adjustment mode means an adjustment mode in the state where the designated in-focus distance belongs to the subject presence distance range, and the rough adjustment mode means an adjustment mode in the state where the designated in-focus distance belongs to the subject absence distance range (see
During execution of the process of Steps S11 to S17 illustrated in
A second embodiment of the present invention is described. The second embodiment is based on the first embodiment, and the description in the first embodiment applies also to the second embodiment unless otherwise noted, as long as no contradiction arises.
In the second embodiment, the structure illustrated in
Similarly to the first embodiment, it is supposed that the target input image is the image 300 illustrated in
As illustrated in
Note that if an inequality "dO−dMIN<dW/2" is satisfied for any reason, such as the in-focus distance dO being close to the minimum distance dMIN, the fine adjustment range that is once set as the distance range having the center at the in-focus distance dO and having the distance width dW is reduced and corrected so as not to include distances smaller than the minimum distance dMIN. Similarly, if an inequality "dMAX−dO<dW/2" is satisfied for any reason, such as the in-focus distance dO being close to the maximum distance dMAX, the fine adjustment range that is once set as the distance range having the center at the in-focus distance dO and having the distance width dW is reduced and corrected so as not to include distances larger than the maximum distance dMAX. In the example illustrated in
The adjustment map generating portion 52 sets the width of each unit distance range belonging to the fine adjustment range 402 to be smaller than the width of each unit distance range belonging to the rough adjustment ranges 401 and 403.
The unit distance ranges belonging to the rough adjustment range 401 are the unit distance ranges (d[1], d[2]), (d[2], d[3]) and (d[3], d[4]); the unit distance ranges belonging to the fine adjustment range 402 are the unit distance ranges (d[4], d[5]), (d[5], d[6]), (d[6], d[7]), (d[7], d[8]), (d[8], d[9]) and (d[9], d[10]); and the unit distance ranges belonging to the rough adjustment range 403 are the unit distance ranges (d[10], d[11]), (d[11], d[12]), (d[12], d[13]) and (d[13], d[14]).
As illustrated in
The width of each unit distance range can work as the unit adjustment amount. Then, the process of setting the step positions d[1] to d[14] satisfying the above-mentioned relationship corresponds to the setting process of the unit adjustment amount, and a map in which the step positions d[1] to d[14] are arranged in a predetermined space corresponds to the adjustment map. However, the step positions d[1] to d[14] described in this embodiment are used only for adjustment of the depth of field. In other words, the map in which the step positions d[1] to d[14] are arranged as described in this embodiment is the adjustment map for the depth of field. The adjustment map for the in-focus distance is set based on the distance distribution 320 as described above in the first embodiment (see
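A sketch of generating this depth-of-field adjustment map from the in-focus distance dO is given below. The piecewise splitting and the fixed fine/coarse widths are assumptions of the sketch, and boundary step positions are not forced to coincide exactly with d[4] and d[10] as in the illustrated example.

```python
def build_dof_step_positions(d_min, d_max, d_o, d_w,
                             fine_step, coarse_step):
    """Depth-of-field adjustment map of this embodiment: a fine adjustment
    range of width d_w centered at the in-focus distance d_o (clipped to
    [d_min, d_max]) is split with a fine width, and the rough adjustment
    ranges on either side are split with a coarse width."""
    lo = max(d_o - d_w / 2.0, d_min)     # lower end of fine adjustment range
    hi = min(d_o + d_w / 2.0, d_max)     # upper end of fine adjustment range

    def split(a, b, step):
        pts, x = [], a
        while x < b:
            pts.append(x)
            x += step
        return pts

    return (split(d_min, lo, coarse_step)    # rough adjustment range (401)
            + split(lo, hi, fine_step)       # fine adjustment range (402)
            + split(hi, d_max, coarse_step)  # rough adjustment range (403)
            + [d_max])
```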
An operation after generating the adjustment map is the same as that described above in the first embodiment, and the operation flow of generating the target output image from the target input image 300 is the same as that in the first embodiment (see
In other words, in Step S12 of
The setting method of the designated depth of field is also the same as that in the first embodiment, but the step positions illustrated in
When trying to adjust the in-focus state by the digital focus, the user usually pays attention to any one of the subjects for adjusting the in-focus state. For instance, the depth of field is decreased by a slight amount so that only the specific person in the subject 301 is focused, or the depth of field is increased by a slight amount so that a person positioned close to the specific person is also focused. On the other hand, such a specific person is usually positioned in the vicinity of the in-focus distance dO when the target input image 300 is taken. Therefore, in the distance range (402) near the in-focus distance dO, it is preferable to set the unit adjustment amount of the depth of field finely so as to satisfy the user's intention. On the other hand, even if the unit adjustment amount of the depth of field is set finely in the distance ranges (401 and 403) that are far from the in-focus distance dO, it is of little use. It is rather preferable to set it roughly so that the adjustment operation for bringing a subject at a long distance into the depth of field can be performed quickly.
Based on this consideration, in this embodiment, the unit adjustment amount for the depth of field is set finely in the distance range (402) close to the in-focus distance dO, so that fine adjustment of the depth of field can be performed. In the distance ranges (401 and 403) that are far from the in-focus distance dO, the unit adjustment amount for the depth of field is set roughly so that the depth of field is roughly adjusted. Thus, adjustment in accordance with the user's intention is realized, and the user's operation for the in-focus state adjustment can be simplified.
Note that the display illustrated in
In addition, the in-focus distance dO and the subject distance information are used for generating the adjustment map for the depth of field in this embodiment, but the fine adjustment range and the rough adjustment range are determined depending mainly on the in-focus distance dO, and the subject distance information is merely used for setting the upper and lower limits of the fine adjustment range or the upper and lower limits of the rough adjustment range. Therefore, if the step positions d[1] and d[14] are determined in advance, it is possible to generate the adjustment map for the depth of field based on only the in-focus distance dO.
A third embodiment of the present invention is described. In the third embodiment, an example of a method of the digital focus performed by the digital focus portion 55 is described.
As a method of changing the in-focus distance and the depth of field of the target input image, the digital focus portion 55 can use any method such as a known method. For instance, the above-mentioned light field method can be used. Using the light field method, a target output image having any in-focus distance and depth of field can be generated from the target input image based on the output signal of the image sensor 33. In this case, a known method based on the light field method (e.g., the method described in WO/06/039486 Pamphlet or JP-A-2009-224982) can be used. As described above, in the light field method, by using an imaging lens having an aperture stop and a micro lens array, the image signal (image data) obtained from the image sensor contains information of the light propagation direction in addition to light intensity distribution on the light receiving surface of the image sensor. The imaging apparatus adopting the light field method performs image processing based on the image signal from the image sensor and can reconstruct an image having any in-focus distance and depth of field. In other words, if the light field method is used, a target output image in which any subject is focused can be constructed freely after taking the target input image.
Therefore, although not illustrated in
The digital focus portion 55 can also perform the digital focus by a method that is not based on the light field method. For instance, the digital focus may include an image restoring process to eliminate deterioration due to a blur in the target input image. As a method of the image restoring process, a known method can be used. When the image restoring process is performed, it is possible to use not only the target input image but also image data of one or more frame images taken at timing close to the target input image. The target input image after the image restoring process is referred to as a whole in-focus image. In the whole in-focus image, the whole image area is the in-focus area.
After the whole in-focus image is generated, the digital focus portion 55 can generate the target output image by performing a filtering process on the whole in-focus image using the designated in-focus distance, the designated depth of field and the subject distance information. In other words, any one pixel in the whole in-focus image is set as a noted pixel, the subject distance of the noted pixel is extracted from the subject distance information, and a distance difference between the subject distance of the noted pixel and the designated in-focus distance is determined. Then, the filtering process is performed on a micro image area having the center at the noted pixel so that the image in the micro image area is blurred more heavily as the distance difference determined for the noted pixel becomes larger. However, if the distance difference is smaller than or equal to a half of the designated depth value of the depth of field, the filtering process can be omitted for the noted pixel. The above-mentioned process is performed on the pixels in the whole in-focus image sequentially, each as the noted pixel, and a result image obtained after the whole filtering process is finished can be used as the target output image.
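As an illustrative approximation of this filtering process, the following sketch groups pixels into bands by their distance difference and blurs each band with a Gaussian whose strength grows with the difference. The banding, the Gaussian kernel, and the grayscale NumPy input are simplifications assumed for the sketch, rather than the actual per-pixel filter of the apparatus.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def digital_focus(all_in_focus, distance_image, d_focus, depth_value,
                  blur_gain=1.5, n_bands=8):
    """Banded approximation of the described filtering process on a
    grayscale whole in-focus image: the larger the difference between a
    pixel's subject distance and the designated in-focus distance, the
    more heavily it is blurred; pixels whose difference is at most half
    the designated depth value are left sharp."""
    img = all_in_focus.astype(np.float64)
    diff = np.abs(distance_image - d_focus)
    out = img.copy()
    max_diff = float(diff.max())
    if max_diff == 0.0:
        return out              # everything lies at the in-focus distance
    for k in range(1, n_bands + 1):
        lo, hi = max_diff * (k - 1) / n_bands, max_diff * k / n_bands
        band = (diff > lo) & (diff <= hi) & (diff > depth_value / 2.0)
        if band.any():
            # One uniformly blurred copy per band; blur grows with the
            # distance difference of the band.
            out[band] = gaussian_filter(img, sigma=blur_gain * k)[band]
    return out
```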
<<Variations>>
The embodiments of the present invention can be modified variously as necessary within the scope of the technical concept described in the claims. The embodiments are merely examples of embodying the present invention, and meanings of terms in the present invention and elements thereof should not be limited to those described in the embodiments. The specific values shown in the above-mentioned description are merely examples and can be changed variously as a matter of course. As annotations that can be applied to the above-mentioned embodiments, Note 1 to Note 3 are described below. The contents of the notes can be combined freely as long as no contradiction arises.
[Note 1]
The setting method of the unit adjustment amount according to the present invention is used for both the in-focus distance and the depth of field in the above-mentioned embodiments, but the setting method of the unit adjustment amount according to the present invention may be used for only one of the in-focus distance and the depth of field.
[Note 2]
The imaging apparatus 1 accepts the adjustment instruction operation and performs the digital focus in the above-mentioned embodiments, but it is possible that electronic equipment (not shown) other than the imaging apparatus 1 accepts the adjustment instruction operation and performs the digital focus. Here, the electronic equipment is, for example, a personal computer or an information terminal such as a personal digital assistant (PDA). Note that the imaging apparatus 1 is a type of the electronic equipment. The electronic equipment is equipped with the portions illustrated in
[Note 3]
The imaging apparatus 1 illustrated in