This nonprovisional application claims priority under 35 U.S.C. §119(a) on Patent Application No. 2010-241969 filed in Japan on Oct. 28, 2010, the entire contents of which are hereby incorporated by reference.
1. Field of the Invention
The present invention relates to electronic equipment such as an image pickup apparatus, a mobile information terminal, and a personal computer.
2. Description of Related Art
A function of adjusting a focus state of a photographed image by image processing has been proposed, and the type of processing that realizes this function is called digital focus.
A depth of field of an output image obtained through the digital focus should satisfy the user's desire. However, a sufficient user interface for assisting the operation of setting the depth of field and confirming that setting has not yet been provided. If such assistance is performed appropriately, a desired depth of field can be set easily.
An electronic equipment according to a first aspect of the present invention includes a target output image generating portion that generates a target output image by changing a depth of field of a target input image by image processing, a monitor that displays on a display screen a distance histogram indicating a distribution of distance between an object at each position in the target input image and an apparatus that photographed the target input image, and displays on the display screen a selection index that is movable along a distance axis in the distance histogram, and a depth of field setting portion that sets a depth of field of the target output image based on a position of the selection index determined by an operation for moving the selection index along the distance axis.
An electronic equipment according to a second aspect of the present invention includes a target output image generating portion that generates a target output image by changing a depth of field of a target input image by image processing, a touch panel monitor having a display screen that accepts a touch panel operation when a touching object touches the display screen, and accepts a designation operation as the touch panel operation for designating a plurality of specific objects on the display screen in a state where the target input image or an image based on the target input image is displayed on the display screen, and a depth of field setting portion that sets a depth of field of the target output image based on the designation operation.
An electronic equipment according to a third aspect of the present invention includes a target output image generating portion that generates a target output image by changing a depth of field of a target input image by image processing, a touch panel monitor having a display screen that accepts a touch panel operation when a touching object touches the display screen, and accepts a designation operation as the touch panel operation for designating a specific object on the display screen in a state where the target input image or an image based on the target input image is displayed on the display screen, and a depth of field setting portion that sets a depth of field of the target output image so that the specific object is included in the depth of field of the target output image. The depth of field setting portion sets a width of the depth of field of the target output image in accordance with a time length while the touching object is touching the specific object on the display screen in the designation operation.
An electronic equipment according to a fourth aspect of the present invention includes a target output image generating portion that generates a target output image by changing a depth of field of a target input image by image processing, a depth of field setting portion that sets a depth of field of the target output image in accordance with a given operation, and a monitor that displays information indicating the set depth of field.
Hereinafter, examples of an embodiment of the present invention are described in detail with reference to the attached drawings. In the referenced drawings, the same part is denoted by the same numeral or symbol, and overlapping description of the same part is omitted as a rule. Examples 1 to 6 will be described later. First, matters common to the examples, or matters referred to in the examples, are described.
The image pickup apparatus 1 includes an imaging portion 11, an analog front end (AFE) 12, a main control portion 13, an internal memory 14, a monitor 15, a recording medium 16, and an operating portion 17. Note that the monitor 15 may be also considered to be a monitor of a display apparatus disposed externally of the image pickup apparatus 1.
The image sensor 33 performs photoelectric conversion of an optical image of a subject entering via the optical system 35 and the aperture stop 32, and outputs an electric signal obtained by the photoelectric conversion to the AFE 12. More specifically, the image sensor 33 includes a plurality of light receiving pixels arranged like a matrix in a two-dimensional manner. In each photographing operation, each of the light receiving pixels stores a signal charge whose quantity corresponds to the exposure time. An analog signal having an amplitude proportional to the quantity of the stored signal charge is output from each light receiving pixel to the AFE 12 sequentially, in accordance with a driving pulse generated in the image pickup apparatus 1.
The AFE 12 amplifies the analog signal output from the imaging portion 11 (image sensor 33) and converts the amplified analog signal into a digital signal. The AFE 12 outputs this digital signal as RAW data to the main control portion 13. An amplification degree of signal amplification in the AFE 12 is controlled by the main control portion 13.
The main control portion 13 is constituted of a central processing unit (CPU), a read only memory (ROM), a random access memory (RAM), and the like. The main control portion 13 generates image data indicating the image photographed by the imaging portion 11 (hereinafter, referred to also as a photographed image), based on the RAW data from the AFE 12. The image data generated here contains, for example, a luminance signal and a color difference signal. However, the RAW data itself is one type of the image data, and the analog signal output from the imaging portion 11 is also one type of the image data. In addition, the main control portion 13 also has a function as a display control portion that controls display content of the monitor 15 and performs control of the monitor 15 that is necessary for display.
The internal memory 14 is constituted of a synchronous dynamic random access memory (SDRAM) or the like and temporarily stores various data generated in the image pickup apparatus 1. The monitor 15 is a display apparatus having a display screen of a liquid crystal display panel or the like and displays a photographed image, an image recorded in the recording medium 16 or the like, under control of the main control portion 13.
The recording medium 16 is a nonvolatile memory such as a card-like semiconductor memory or a magnetic disk and stores photographed images and the like under control of the main control portion 13. The operating portion 17 includes a shutter button 20 and the like for accepting an instruction to photograph a still image, and accepts various external operations. An operation to the operating portion 17 is also referred to as a button operation so as to distinguish from the touch panel operation. The operation content to the operating portion 17 is sent to the main control portion 13.
The monitor 15 is equipped with the touch panel.
As illustrated in
The image pickup apparatus 1 has a function of changing a depth of field of the photographed image after obtaining image data of the photographed image. Here, this function is referred to as a digital focus function.
The photographed image before changing the depth of field is referred to as a target input image and the photographed image after changing the depth of field is referred to as a target output image. The target input image is a photographed image based on RAW data, and an image obtained by performing a predetermined image processing (for example, a demosaicing process or a noise reduction process) on the RAW data may be the target input image. In addition, it is possible to temporarily store image data of the target input image in the recording medium 16 and afterward to read the image data of the target input image from the recording medium 16 at an arbitrary timing so as to impart the image data of the target input image to the individual portions illustrated in
[Distance Map Obtaining Portion]
The distance map obtaining portion 61 performs a subject distance detecting process of detecting subject distances of individual subjects within a photographing range of the image pickup apparatus 1, and thus generates a distance map (subject distance information) indicating subject distances of subjects at individual positions on the target input image. The subject distance of a certain subject means a distance between the subject and the image pickup apparatus 1 (more specifically, the image sensor 33) in real space. The subject distance detecting process can be performed periodically or at desired timing. The distance map can be said to be a range image in which individual pixels constituting the image have detected values of the subject distance. An image 310 of
It is possible to adopt a structure in which the subject distance detecting process is performed when the target input image is photographed, and the distance map obtained by the process is associated with the image data of the target input image and is recorded in the recording medium 16 together with the image data of the target input image. By this method, the distance map obtaining portion 61 can obtain the distance map from the recording medium 16 at an arbitrary timing. Note that the above-mentioned association is realized by storing the distance map in the header region of the image file storing the image data of the target input image, for example.
As a detection method of the subject distance and a generation method of the distance map, an arbitrary method including a known method can be used. The image data of the target input image may be used for generating the distance map, or information other than the image data of the target input image may be used for generating the distance map. For instance, the distance map may be generated by a stereo method (stereo vision method) from images photographed using two imaging portions. One of two imaging portions can be the imaging portion 11. Alternatively, for example, the distance map may be generated by using a distance sensor (not shown) for measuring subject distances of individual subjects. It is possible to use a distance sensor based on a triangulation method or an active type distance sensor as the distance sensor. The active type distance sensor includes a light emitting element and measures a period of time after light is emitted from the light emitting element toward a subject within the photographing range of the image pickup apparatus 1 until the light is reflected by the subject and comes back, so that the subject distance of each subject can be detected based on the measurement result.
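As an illustration only, a minimal Python sketch of the stereo (triangulation) relation that could be used to obtain subject distances from two imaging portions is given below; the function name, the parameter names, and the use of NumPy are assumptions for illustration and do not form part of the apparatus described above.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Classic stereo relation Z = f * B / d (a sketch, not the apparatus itself).

    disparity_px    : per-pixel disparity (in pixels) between the two images
    focal_length_px : focal length expressed in pixels
    baseline_m      : distance between the two imaging portions, in meters
    Returns a distance map in meters; zero-disparity pixels become infinity.
    """
    d = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        return np.where(d > 0, focal_length_px * baseline_m / d, np.inf)
```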
Alternatively, for example, the imaging portion 11 may be constituted so that the RAW data contains information of the subject distances, and the distance map may be generated from the RAW data. In order to realize this, it is possible to use, for example, a method called “Light Field Photography” (for example, a method described in WO 06/039486 or JP-A-2009-224982; hereinafter, referred to as Light Field method). In the Light Field method, an imaging lens having an aperture stop and a micro-lens array are used so that the image signal obtained from the image sensor contains information in a light propagation direction in addition to light intensity distribution in a light receiving surface of the image sensor. Therefore, though not illustrated in
Still alternatively, for example, it is possible to generate the distance map from the image data of the target input image (RAW data) using axial color aberration of the optical system 35 as described in JP-A-2010-81002.
[Depth of Field Setting Portion]
The depth of field setting portion 62 illustrated in
The depth setting information contains information designating the depth of field of the target output image, and a focus reference distance, a near point distance, and a far point distance included in the depth of field of the target output image are designated by the information. A difference between the near point distance of the depth of field and the far point distance of the depth of field is referred to as a width of the depth of field. Therefore, the width of the depth of field in the target output image is also designated by the depth setting information. As illustrated in
With reference to
Considering in the same manner, as illustrated in
As illustrated in
[Focus State Confirmation Image Generating Portion]
A focus state confirmation image generating portion 64 (hereinafter, may be referred to as a confirmation image generating portion 64 or a generating portion 64 shortly) illustrated in
[Digital Focus Portion]
A digital focus portion (target output image generating portion) 65 illustrated in
The target input image is an ideal or pseudo pan-focus image. The pan-focus image means an image in which all subjects having image data in the pan-focus image are in focus. If all subjects in the noted image are the focused subjects, the noted image is the pan-focus image. Specifically, for example, using so-called pan-focus (deep focus) in the imaging portion 11 for photographing the target input image, the target input image can be the ideal or pseudo pan-focus image. In other words, when the target input image is photographed, the depth of field of the imaging portion 11 should be sufficiently deep so as to photograph the target input image. If all subjects included in the photographing range of the imaging portion 11 are within the depth of field of the imaging portion 11 when the target input image is photographed, the target input image works as the ideal pan-focus image. In the following description, it is supposed that all subjects included in the photographing range of the imaging portion 11 are within the depth of field of the imaging portion 11 when the target input image is photographed, unless otherwise noted.
In addition, when simply referred to as a depth of field, a focus reference distance, a near point distance, or a far point distance in the following description, they are supposed to indicate a depth of field, a focus reference distance, a near point distance, or a far point distance of the target output image, respectively. In addition, it is supposed that the near point distance and the far point distance corresponding to inner and outer boundary distances of the depth of field are distances within the depth of field (namely, they belong to the depth of field).
The digital focus portion 65 extracts the subject distances corresponding to the individual pixels of the target input image from the distance map. Then, based on the depth setting information, the digital focus portion 65 classifies the individual pixels of the target input image into blurring target pixels corresponding to subject distances outside the depth of field of the target output image and non-blurring target pixels corresponding to subject distances within the depth of field of the target output image. An image region including all the blurring target pixels is referred to as a blurring target region, and an image region including all the non-blurring target pixels is referred to as a non-blurring target region. In this way, the digital focus portion 65 can classify the entire image region of the target input image into the blurring target region and the non-blurring target region based on the distance map and the depth setting information. For instance, in the target input image 310 of
The blurring processing is a process of blurring images in an image region on which the blurring processing is performed (namely, the blurring target region). The blurring processing can be realized by two-dimensional spatial domain filtering. The filter used for the spatial domain filtering of the blurring processing is an arbitrary spatial domain filter suitable for smoothing of an image (for example, an averaging filter, a weighted averaging filter, or a Gaussian filter).
Specifically, for example, the digital focus portion 65 extracts a subject distance LBLUR corresponding to the blurring target pixel from the distance map for each blurring target pixel, and sets a blurring amount based on the extracted subject distance LBLUR and the depth setting information for each blurring target pixel. Concerning a certain blurring target pixel, if the extracted subject distance LBLUR is smaller than the near point distance Ln, the blurring amount is set so that the blurring amount for the blurring target pixel is larger as a distance difference (Ln-LBLUR) is larger. In addition, if the extracted subject distance LBLUR is larger than the far point distance Lf, the blurring amount is set so that the blurring amount for the blurring target pixel is larger as a distance difference (LBLUR-Lf) is larger. Then, for each blurring target pixel, the pixel signal of the blurring target pixel is smoothed by using the spatial domain filter corresponding to the blurring amount. Thus, the blurring processing can be realized.
In this case, as the blurring amount is larger, a filter size of the spatial domain filter to be used may be larger. Thus, as the blurring amount is larger, the corresponding pixel signal is blurred more. As a result, a subject that is not within the depth of field of the target output image is blurred more as the subject is farther from the depth of field.
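For illustration, a minimal sketch of the depth-dependent blurring processing described above is shown below, assuming the image and the distance map are NumPy arrays of the same spatial size; the constant sigma_per_meter, the quantization of blur amounts into a few levels, and the use of a Gaussian filter from SciPy are illustrative assumptions rather than the specific implementation of the digital focus portion 65.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_dependent_blur(image, distance_map, near_point, far_point,
                         sigma_per_meter=2.0, n_levels=4):
    """Blur pixels whose subject distance lies outside [near_point, far_point]
    more strongly the farther they lie from the depth of field; pixels inside
    the depth of field (the non-blurring target region) are left untouched."""
    img = image.astype(float)
    # Distance of each pixel's subject distance from the depth of field
    # (zero for non-blurring target pixels).
    outside = np.maximum(near_point - distance_map, 0.0) + \
              np.maximum(distance_map - far_point, 0.0)
    sigma = sigma_per_meter * outside            # per-pixel blurring amount
    out = img.copy()
    if not np.any(sigma > 0):
        return out                               # everything is in focus
    # Quantize the blurring amounts into a few levels and blur each level at once.
    levels = np.linspace(0.0, sigma.max(), n_levels + 1)
    for lo, hi in zip(levels[:-1], levels[1:]):
        mask = (sigma > lo) & (sigma <= hi)
        if mask.any():
            spatial_sigma = (hi, hi, 0) if img.ndim == 3 else hi
            out[mask] = gaussian_filter(img, sigma=spatial_sigma)[mask]
    return out
```

Quantizing the blur amounts into a few levels, as in this sketch, simply avoids applying a different filter kernel at every pixel; a per-pixel filter size, as described above, would be equally valid.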
Note that the blurring processing can be realized also by frequency filtering. The blurring processing may be a low pass filtering process for reducing relatively high spatial frequency components among spatial frequency components of the images within the blurring target region.
In the next Step S14, the target input image is displayed on the display screen 51. It is possible to display an arbitrary index together with the target input image. This index is, for example, a file name, a photographing date, or the setting UI generated by the setting UI generating portion 63 (a specific example of the setting UI will be described later). In Step S14, it is possible to display not the target input image itself but an image based on the target input image. Here, the image based on the target input image includes an image obtained by performing a resolution conversion on the target input image or an image obtained by performing a specific image processing on the target input image.
Next in Step S15, the image pickup apparatus 1 accepts a user's adjustment instruction (change instruction) instructing to change the depth of field or a confirmation instruction instructing to complete the adjustment of the depth of field. Each of the adjustment instruction and the confirmation instruction is performed by a predetermined touch panel operation or button operation. If the adjustment instruction is performed, the process flow goes to Step S16 from Step S15. If the confirmation instruction is performed, the process flow goes to Step S18 from Step S15.
In Step S16, the depth of field setting portion 62 changes the depth setting information in accordance with the adjustment instruction. In the next Step S17, the confirmation image generating portion 64 generates the confirmation image that is an image based on the target input image using the changed depth setting information (a specific example of the confirmation image will be described later in Example 4 and the like). The confirmation image generated in Step S17 is displayed on the display screen 51 and the process flow goes back to Step S15 with this display sustained. In other words, in the state where the confirmation image is displayed, the adjustment operation in Step S15 is received again. In this case, when the confirmation instruction is issued, the process of Steps S18 and S19 is performed. When the adjustment instruction is performed again, the process of Steps S16 and S17 is performed again in accordance with the repeated adjustment instruction. Note that it is possible to display the setting UI generated by the setting UI generating portion 63 together with the confirmation image on the display screen 51.
In Step S18, the digital focus portion 65 generates the target output image from the target input image by the digital focus based on the depth setting information. The generated target output image is displayed on the display screen 51. If the adjustment instruction is never issued in Step S15, the target input image itself can be generated as the target output image. If the adjustment instruction is issued in Step S15, the target output image is generated based on the depth setting information that is changed in accordance with the adjustment instruction. After that, in Step S19, the image data of the target output image is recorded in the recording medium 16. If the image data of the target input image is recorded in the recording medium 16, the image data of the target input image may be erased from the recording medium 16 when recording the image data of the target output image. Alternatively, the record of the image data of the target input image may be maintained.
Note that it is possible to generate the target output image without waiting for an input of the confirmation instruction after receiving the adjustment instruction. Similarly, after changing the depth setting information in Step S16, instead of generating and displaying the confirmation image, it is possible to generate and display the target output image based on the changed depth setting information without delay, and to accept the adjustment operation of Step S15 again in the state where the target output image is displayed.
Hereinafter, Examples 1 to 6 are described as specific examples for realizing the digital focus and the like. As long as no contradiction arises, description in one example and description in another example can be combined. Unless otherwise noted, it is supposed that the target input image 310 of
Example 1 of the present invention is described.
When the slider bar 410 is displayed, the user can move the bar icons 412 and 413 on the distance axis icon 411 by the touch panel operation or the button operation. For instance, after touching the bar icon 412 with a finger, while maintaining the contact state between the finger and the display screen 51, the user can move the finger on the display screen 51 along the extending direction of the distance axis icon 411 so that the bar icon 412 can move on the distance axis icon 411. The same is true for the bar icon 413. In addition, if a cross-shaped key (not shown) constituted of first to fourth direction keys is disposed in the operating portion 17, it is possible, for example, to move the bar icon 412 toward the end 415 by pressing the first direction key, or to move the bar icon 412 toward the end 416 by pressing the second direction key, or to move the bar icon 413 toward the end 415 by pressing the third direction key, or to move the bar icon 413 toward the end 416 by pressing the fourth direction key. In addition, for example, if a dial button is disposed in the operating portion 17, it is possible to move the bar icons 412 and 413 by dial operation of the dial button.
As illustrated in
Note that the longitudinal direction of the slider bar 410 is the horizontal direction of the display screen 51 in
When confirming that the bar icons 412 and 413 are at desired positions, the user can issue the above-mentioned confirmation instruction. When the confirmation instruction is issued, the target output image is generated based on the depth setting information at time point when the confirmation instruction is issued (Step S18 of
In addition, a histogram obtained by using the subject distances at the pixel positions of the target input image as a variable is referred to as a distance histogram.
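By way of illustration, the distance histogram can be computed from the distance map as in the following minimal sketch (NumPy is assumed, and the bin count is an arbitrary illustrative choice):

```python
import numpy as np

def distance_histogram(distance_map, n_bins=64):
    """Histogram of subject distances over all pixel positions of the
    target input image (the 'distance histogram')."""
    counts, bin_edges = np.histogram(distance_map.ravel(), bins=n_bins)
    bin_centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    return bin_centers, counts
```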
The distance histogram 430 may be included in the setting UI. When the slider bar 410 of
When the target input image 310 or an image based on the target input image 310 is displayed, the image pickup apparatus 1 may also display the setting UI including the distance histogram 430 and the slider bar 410. In this state, the image pickup apparatus 1 can accept the user's adjustment instruction or confirmation instruction of the depth of field (see
Using the slider bar as described above, it is possible to set the depth of field by an intuitive and simple operation. In this case, by displaying the distance histogram together, the user can set the depth of field while grasping the distribution of the subject distance. For instance, it becomes easy to make an adjustment such as including a typical subject distance that is positioned close to the image pickup apparatus 1 and has high frequency (for example, the subject distance L1 corresponding to the subject SUB1) in the depth of field, or excluding a considerably large subject distance having high frequency (for example, the subject distance L3 corresponding to the subject SUB3 like a background) from the depth of field. Thus, the user can easily set the desired depth of field.
When the touch panel operation or the button operation is performed to move the bar icons 412 and 413 on the distance axis icon 411 or on the distance axis 431 of the distance histogram 430, the positions of the bar icons 412 and 413 may be moved continuously. However, it is also possible to change the positions of the bar icons 412 and 413 step by step on the distance axis icon 411 or on the distance axis 431, from one typical distance existing discretely to another typical distance. Thus, particularly when instructing to move the bar icons 412 and 413 by the button operation, it is possible to set the depth of field more easily and promptly. For instance, it is supposed that only the subject distances L1 to L3 are set as the typical distances with respect to the distance histogram 430. In this case, first to third typical positions corresponding to the first to third typical distances L1 to L3 are set on the distance axis icon 411 or on the distance axis 431. Further, when the bar icon 412 is positioned at the second typical position, if the user performs the operation for moving the bar icon 412 by one unit amount, the position of the bar icon 412 moves to the first or the third typical position (the same is true for the bar icon 413).
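A minimal sketch of this step-by-step movement, assuming the typical distances have already been determined, might look as follows (the function name and the direction convention are illustrative assumptions):

```python
def step_to_next_typical(current, typical_distances, direction):
    """Move a bar icon by one 'unit amount': jump from its current typical
    distance to the neighboring one in the given direction
    (+1 toward the far end, -1 toward the near end)."""
    positions = sorted(typical_distances)
    # Index of the typical position closest to the bar icon's current position.
    idx = min(range(len(positions)), key=lambda i: abs(positions[i] - current))
    idx = max(0, min(len(positions) - 1, idx + direction))
    return positions[idx]
```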
The setting UI generating portion 63 can set the typical distances from the frequencies of the subject distances in the distance histogram 430. For instance, in the distance histogram 430, the subject distance at which the frequencies are concentrated can be set as the typical distance. More specifically, for example, in the distance histogram 430, the subject distance having a frequency of a predetermined threshold value or higher can be set as the typical distance. In the distance histogram 430, if subject distances having a frequency of a predetermined threshold value or higher exist continuously in a certain distance range, a center distance of the certain distance range can be set as the typical distance. It is possible to adopt a structure in which a window having a certain distance range is set on the distance histogram 430, and if a sum of frequencies within the window is a predetermined threshold value or higher, a center distance of the window is set as the typical distance.
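As an illustration of the windowed-frequency approach described above, the following sketch sets a typical distance wherever the summed frequency within a sliding window reaches a threshold; the window size and the threshold are arbitrary illustrative values, not values prescribed by the embodiment.

```python
import numpy as np

def typical_distances(bin_centers, counts, window_bins=5, threshold=500):
    """Pick 'typical distances' where the distance histogram frequencies
    concentrate: slide a window along the distance axis and, wherever the
    summed frequency reaches the threshold and the window center is a local
    peak, record that center distance (a simplified sketch)."""
    half = window_bins // 2
    found = []
    for i in range(half, len(counts) - half):
        window = counts[i - half:i + half + 1]
        if window.sum() >= threshold and counts[i] == window.max():
            found.append(bin_centers[i])
    return found
```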
In addition, it is possible to adopt a structure as below. The depth of field setting portion 62 (for example, the setting UI generating portion 63) extracts image data of a subject having a typical distance as the subject distance from image data of the target input image 310. When the adjustment instruction or the confirmation instruction is accepted, the image based on the extracted image data (hereinafter referred to as a typical distance object image) is displayed in association with the typical distance on the distance histogram 430. The typical distance object image may also be considered to be included in the setting UI.
Supposing that the subject distances L1 to L3 are set to the first to third typical distances, a method of generating and displaying the typical distance object image is described. The setting UI generating portion 63 detects an image region having the typical distance L1 or a distance close to the typical distance L1 as the subject distance based on the distance map, and extracts image data in the detected image region from the target input image 310 as image data of a first typical distance object image. The distance close to the typical distance L1 means, for example, a distance having a distance difference with the typical distance L1 that is a predetermined value or smaller. In the same manner, the setting UI generating portion 63 also extracts image data of the second and third typical distance object images corresponding to the typical distances L2 and L3. The typical distances L1 to L3 are associated with the first to third typical distance object images, respectively. Then, as illustrated in
By displaying the typical distance object images together with the slider bar 410 and the distance histogram 430, the user can intuitively and easily recognize the subjects to be positioned within the depth of field of the target output image and the subjects to be positioned outside the depth of field of the target output image. Thus, the depth of field can be set to a desired one more easily.
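For illustration, extracting the image data of a typical distance object image as described above could be sketched as follows, where `tolerance` corresponds to the predetermined value of the distance difference; the names and the default value are illustrative assumptions.

```python
import numpy as np

def extract_typical_distance_object(image, distance_map, typical_distance,
                                    tolerance=0.3):
    """Cut out the image region whose subject distance equals the typical
    distance or is close to it (within `tolerance`), for display next to
    the distance histogram."""
    mask = np.abs(distance_map - typical_distance) <= tolerance
    object_image = np.zeros_like(image)
    object_image[mask] = image[mask]
    return object_image, mask
```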
Note that it is possible to include the slider bar 410 and the typical distance object images in the setting UI and to exclude the distance histogram 430 from the setting UI. Thus, in the same manner as illustrated in
In addition, a display position of the setting UI is arbitrary. The setting UI may be displayed so as to be superimposed on the target input image 310, or the setting UI and the target input image 310 may be displayed side by side on the display screen. In addition, the longitudinal direction of the distance axis icon 411 and the direction of the distance axis 431 may be other than the horizontal direction of the display screen 51.
A method of calculating the focus reference distance Lo is described below. It is known that the focus reference distance Lo of the noted image obtained by photographing satisfies the following expressions (1) and (2). Here, δ denotes a predetermined permissible circle of confusion of the image sensor 33, f denotes a focal length of the imaging portion 11 when the noted image is photographed, and F is an f-number (in other words, f-stop number) of the imaging portion 11 when the noted image is photographed. Ln and Lf in the expressions (1) and (2) are the near point distance and the far point distance of the noted image, respectively.
δ=(f²·(Lo−Ln))/(F·Lo·Ln) (1)
δ=(f²·(Lf−Lo))/(F·Lo·Lf) (2)
From the expressions (1) and (2), the following expression (3) is obtained.
Lo=2·Ln·Lf/(Ln+Lf) (3)
Therefore, after setting the near point distance Ln and the far point distance Lf of the target output image, the depth of field setting portion 62 can determine the focus reference distance Lo of the target output image by substituting the set distances Ln and Lf into the expression (3). Note that after setting the near point distance Ln and the far point distance Lf of the target output image, the depth of field setting portion 62 may simply set the distance ((Ln+Lf)/2) as the focus reference distance Lo of the target output image.
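As a worked illustration of expression (3), the focus reference distance can be computed from the set near point and far point distances as follows; the simpler arithmetic-mean alternative mentioned above is also included, and the function name is illustrative.

```python
def focus_reference_distance(near_point, far_point, harmonic=True):
    """Lo from Ln and Lf: expression (3) gives Lo = 2*Ln*Lf/(Ln+Lf);
    the simpler arithmetic mean (Ln+Lf)/2 may be used instead."""
    if harmonic:
        return 2.0 * near_point * far_point / (near_point + far_point)
    return 0.5 * (near_point + far_point)
```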
Example 2 of the present invention is described below. Example 2 describes another specific method of the adjustment instruction that can be performed in Step S15 of
The adjustment instruction in Example 2 is realized by designation operation of designating a plurality of specific objects on the display screen 51, and the user can perform the designation operation as one type of the touch panel operation. The depth of field setting portion 62 generates the depth setting information so that the plurality of specific objects designated by the designation operation are included within the depth of field of the target output image. More specifically, the depth of field setting portion 62 extracts the subject distances of the designated specific objects from the distance map of the target input image 310, and sets the distances of both ends (namely, the near point distance Ln and the far point distance Lf) in the depth of field of the target output image based on the extracted subject distances so that all extracted subject distances are included within the depth of field of the target output image. Further, in the same manner as Example 1, the depth of field setting portion 62 sets the focus reference distance Lo based on the near point distance Ln and the far point distance Lf. The set content is reflected on the depth setting information.
Specifically, for example, the user can designate the subjects SUB1 and SUB2 as the plurality of specific objects by touching a display position 501 of the subject SUB1 and a display position 502 of the subject SUB2 on the display screen 51 with a finger (see
When the subjects SUB1 and SUB2 are designated as a plurality of specific objects, subject distances of the pixel positions corresponding to the display positions 501 and 502, namely the subject distances L1 and L2 of the subjects SUB1 and SUB2 are extracted from the distance map, and the near point distance Ln and the far point distance Lf are set and the focus reference distance Lo is calculated so that the extracted subject distances L1 and L2 belong to the depth of field of the target output image. Because L1<L2 is satisfied, the subject distances L1 and L2 can be set to the near point distance Ln and the far point distance Lf, respectively. Thus, the subjects SUB1 and SUB2 are included within the depth of field of the target output image. Alternatively, distances (L1−ΔLn) and (L2+ΔLf) may be set to the near point distance Ln and the far point distance Lf. Here, ΔLn>0 and ΔLf>0 are satisfied.
If three or more subjects are designated as a plurality of specific objects, it is preferred to set the near point distance Ln based on the minimum distance among the subject distances corresponding to the three or more specific objects, and to set the far point distance Lf based on the maximum distance among subject distances corresponding to the three or more specific objects. For instance, when the user touches a display position 503 of the subject SUB3 on the display screen 51 in addition to the display positions 501 and 502 by a finger, the subjects SUB1 to SUB3 are designated as the plurality of specific objects. When the subjects SUB1 to SUB3 are designated as the plurality of specific objects, the subject distances of the pixel positions corresponding to the display positions 501 to 503, namely the subject distances L1 to L3 of the subjects SUB1 to SUB3 are extracted from the distance map. Among the extracted subject distances L1 to L3, the minimum distance is the subject distance L1 while the maximum distance is the subject distance L3. Therefore, in this case, the subject distances L1 and L3 can be set to the near point distance Ln and the far point distance Lf, respectively. Thus, the subjects SUB1 to SUB3 are included within the depth of field of the target output image. Alternatively, distances (L1−ΔLn) and (L3+ΔLf) may be set to the near point distance Ln and the far point distance Lf, respectively.
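For illustration, the setting of the near point and far point distances from a plurality of designated specific objects in Example 2 could be sketched as below, assuming the touched display positions have already been converted to pixel positions of the target input image; the margin parameters correspond to ΔLn and ΔLf, and all names are illustrative assumptions.

```python
def depth_from_designated_objects(distance_map, touch_positions,
                                  margin_near=0.0, margin_far=0.0):
    """Set Ln and Lf so that every designated specific object falls inside
    the depth of field: Ln from the minimum and Lf from the maximum of the
    subject distances at the touched pixel positions, optionally widened
    by the margins (Ln = Lmin - dLn, Lf = Lmax + dLf)."""
    distances = [distance_map[y, x] for (x, y) in touch_positions]
    near_point = min(distances) - margin_near
    far_point = max(distances) + margin_far
    return near_point, far_point
```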
According to Example 2, the depth of field of the target output image can be easily and promptly set so that a desired subject is included within the depth of field.
Note that when the designation operation of designating the plurality of specific objects is accepted, it is possible to display the slider bar 410 (see
In addition, in order to facilitate the user's designation operation, it is possible to determine the typical distance by the method described above in Example 1, and to display the subject positioned at the typical distance in an emphasized manner, when accepting the designation operation of designating the plurality of specific objects. For instance, when the subject distances L1 to L3 are set to the first to third typical distances, the subjects SUB1 to SUB3 corresponding to the typical distances L1 to L3 may be displayed in an emphasized manner on the display screen 51 where the target input image 310 is displayed. The emphasizing display of the subject SUB1 can be realized by increasing luminance of the subject SUB1 on the display screen 51 or by enhancing the edge of the subject SUB1 (the same is true for the subjects SUB2 and SUB3).
Example 3 of the present invention is described below. Example 3 describes still another specific method of the adjustment instruction that can be performed in Step S15 of
The adjustment instruction in Example 3 is realized by the designation operation of designating a specific object on the display screen 51, and the user can perform the designation operation as one type of the touch panel operation. The depth of field setting portion 62 generates the depth setting information so that the specific object designated by the designation operation is included within the depth of field of the target output image. In this case, the depth of field setting portion 62 determines the width of the depth of field of the target output image in accordance with a time length TL during which the specific object on the display screen 51 is being touched by the finger in the designation operation.
Specifically, for example, in order to obtain a target output image in which the subject SUB1 is within the depth of field, the user can designate the subject SUB1 as the specific object by touching the display position 501 of the subject SUB1 on the display screen 51 by a finger (see
When the subject SUB1 is designated as the specific object, the depth of field setting portion 62 extracts the subject distance at the pixel position corresponding to the display position 501, namely the subject distance L1 of the subject SUB1 from the distance map, and sets the near point distance Ln, the far point distance Lf, and the focus reference distance Lo in accordance with the time length TL so that the extracted subject distance L1 belongs to the depth of field of the target output image. The set content is reflected on the depth setting information. Thus, the subject SUB1 is within the depth of field of the target output image.
The distance difference (Lf−Ln) between the near point distance Ln and the far point distance Lf indicates the width of the depth of field of the target output image. In Example 3, the distance difference (Lf−Ln) is determined in accordance with the time length TL. Specifically, for example, as the time length TL increases from zero, the distance difference (Lf−Ln) should be increased from an initial value larger than zero. In this case, as the time length TL increases from zero, the far point distance Lf is increased, or the near point distance Ln is decreased, or the far point distance Lf is increased while the near point distance Ln is decreased simultaneously. On the contrary, it is possible to decrease the distance difference (Lf−Ln) from a certain initial value to a lower limit value as the time length TL increases from zero. In this case, as the time length TL increases from zero, the far point distance Lf is decreased, or the near point distance Ln is increased, or the far point distance Lf is decreased while the near point distance Ln is increased simultaneously.
If the subject SUB1 is designated as the specific object, it is possible to set the near point distance Ln and the far point distance Lf so that L1=(Lf+Ln)/2 is satisfied, and to determine the focus reference distance Lo based on the set distances Ln and Lf. Alternatively, the focus reference distance Lo may be made to agree with the subject distance L1. However, as long as the subject distance L1 belongs to the depth of field of the target output image, the subject distance L1 does not have to coincide with (Lf+Ln)/2 or with the focus reference distance Lo.
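A minimal sketch of Example 3, in which the width of the depth of field grows with the touch time length TL and the designated subject distance is kept at the center (Lf+Ln)/2, is given below; the base width and growth rate are arbitrary illustrative constants, and a mapping that instead decreases the width with TL, as also described above, would be equally possible.

```python
def depth_from_touch_duration(subject_distance, touch_seconds,
                              base_width=0.5, width_per_second=1.0):
    """Width of the depth of field (Lf - Ln) grows with the touch duration
    TL; the designated subject distance is kept at the center (Lf+Ln)/2."""
    width = base_width + width_per_second * touch_seconds
    near_point = max(subject_distance - width / 2.0, 0.0)   # Ln
    far_point = subject_distance + width / 2.0              # Lf
    return near_point, far_point
```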
According to Example 3, it is possible to generate the target output image having a desired width of the depth of field in which a desired subject is within the depth of field by an easy and prompt operation.
Note that when the designation operation of designating the specific object is accepted, it is possible to display the slider bar 410 (see
In addition, in order to facilitate the user's designation operation, it is possible to determine the typical distance by the method described above in Example 1, and to display the subject positioned at the typical distance in an emphasized manner by the method similar to Example 2, when accepting the designation operation of designating the specific object.
Example 4 of the present invention is described below. Example 4 and Example 5 that is described later can be performed in combination with Examples 1 to 3. Example 4 describes the confirmation image that can be generated by the confirmation image generating portion 64 illustrated in
In Example 4, information JJ indicating the depth of field of the target output image defined by the depth setting information is included in the confirmation image. The information JJ is, for example, the f-number corresponding to the depth of field of the target output image. Supposing that the image data of the target output image were obtained not by the digital focus but only by sampling of the optical image on the image sensor 33, an f-number FOUT that would be used in photographing the target output image can be determined as the information JJ.
The distances Ln, Lf, and Lo determined by the above-mentioned method are included in the depth setting information, which is sent to the confirmation image generating portion 64. The generating portion 64 substitutes the distances Ln, Lf, and Lo included in the depth setting information into the above expression (1) or (2) so as to calculate the value of F of the expression (1) or (2), and determines the calculated value as the f-number FOUT in photographing the target output image (namely, as the information JJ). In this case, a value of the focal length f in the expression (1) or (2) can be determined from a lens design value of the imaging portion 11 and the optical zoom magnification in photographing the target input image, and a value of the permissible circle of confusion δ in the expression (1) or (2) is set in advance. Note that when the value of F in the expression (1) or (2) is calculated, the units of the focal length f and the permissible circle of confusion δ must be matched with each other (for example, both should be expressed in a 35 mm film equivalent unit or in a real scale unit).
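As a worked illustration, solving expression (1) for F gives the f-number FOUT shown in the following sketch; the focal length and permissible circle of confusion must share the same unit, as noted above, and the function and parameter names are illustrative.

```python
def output_f_number(focal_length, near_point, focus_reference,
                    circle_of_confusion):
    """Expression (1) solved for F:
    F = f^2 * (Lo - Ln) / (delta * Lo * Ln).
    Expression (2) with the far point distance Lf would give the same value."""
    f, Ln, Lo = focal_length, near_point, focus_reference
    delta = circle_of_confusion
    return (f * f) * (Lo - Ln) / (delta * Lo * Ln)
```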
When the depth setting information is given, the confirmation image generating portion 64 determines the f-number FOUT and can generate the image in which the f-number FOUT is superimposed on the target input image as the confirmation image. The confirmation image illustrated in Example 4 can be generated and displayed in Step S17 of
In addition, an image in which the f-number FOUT is superimposed on the target output image based on the depth setting information may be generated and displayed as the confirmation image. Instead of superimposing and displaying the f-number FOUT on the target output image, it is possible to display the target output image and the f-number FOUT side by side.
Note that in Step S19 of
Because the f-number FOUT is displayed, the user can grasp a state of the depth of field of the target output image in relationship with normal photography conditions of the camera, and can easily decide whether or not the depth of field of the target output image is set to a desired depth of field. In other words, the setting of the depth of field of the target output image is assisted.
Example 5 of the present invention is described below. Example 5 describes another example of the confirmation image that can be generated by the confirmation image generating portion 64 of
In Example 5, when the depth setting information is supplied to the confirmation image generating portion 64, the confirmation image generating portion 64 classifies the pixels of the target input image, by the above-mentioned method using the distance map and the depth setting information, into pixels outside the depth, which correspond to subject distances outside the depth of field of the target output image, and pixels within the depth, which correspond to subject distances within the depth of field of the target output image. By the same method, the pixels of the target output image can also be classified into the pixels outside the depth and the pixels within the depth. An image region including all pixels outside the depth is referred to as a region outside the depth, and an image region including all pixels within the depth is referred to as a region within the depth. The pixels outside the depth and the region outside the depth correspond to the blurring target pixels and the blurring target region in the digital focus. The pixels within the depth and the region within the depth correspond to the non-blurring target pixels and the non-blurring target region in the digital focus.
The confirmation image generating portion 64 can perform image processing IPA for changing luminance, hue, or chroma saturation of the image in the region outside the depth, or image processing IPB for changing luminance, hue, or chroma saturation of the image in the region within the depth, on the target input image. Then, the target input image after the image processing IPA, the target input image after the image processing IPB, or the target input image after the image processings IPA and IPB can be generated as the confirmation image.
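For illustration, the image processings IPA and IPB could be sketched as a simple luminance change applied to the regions outside and within the depth, using a classification mask obtained as described above; 8-bit pixel values and the gain constants are illustrative assumptions.

```python
import numpy as np

def emphasize_depth_regions(image, in_depth_mask,
                            outside_gain=0.4, inside_gain=1.0):
    """IPA/IPB sketch: lower the luminance of the region outside the depth
    and leave (or adjust) the region within the depth, so the user can
    confirm which subjects will be within the depth of field."""
    out = image.astype(float).copy()
    out[~in_depth_mask] *= outside_gain   # IPA: change the region outside the depth
    out[in_depth_mask] *= inside_gain     # IPB: change the region within the depth
    # Assumes 8-bit pixel values; clip back to the valid range.
    return np.clip(out, 0, 255).astype(image.dtype)
```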
The confirmation image of Example 5 can be generated and displayed in Step S17 of
Note that the confirmation image generating portion 64 can generate the confirmation image based on the target output image instead of the target input image. In other words, it is possible to perform at least one of the above-mentioned image processings IPA and IPB on the target output image, so as to generate the target output image after the image processing IPA, the target output image after the image processing IPB, or the target output image after the image processings IPA and IPB, as the confirmation image.
Example 6 of the present invention is described below. The method of using so-called pan-focus for obtaining the target input image as the pan-focus image is described above, but the method of obtaining the target input image is not limited to this.
For instance, it is possible to constitute the imaging portion 11 so that the RAW data contains information indicating the subject distance, and to construct the target input image as the pan-focus image from the RAW data. In order to realize this, the above-mentioned Light Field method can be used. According to the Light Field method, the output signal of the image sensor 33 contains information of the incident light propagation direction to the image sensor 33 in addition to the light intensity distribution in the light receiving surface of the image sensor 33. It is possible to constitute the target input image as the pan-focus image from the RAW data containing this information. Note that when the Light Field method is used, the digital focus portion 65 generates the target output image by the Light Field method. Therefore, the target input image based on the RAW data may not be the pan-focus image. This is because, when the Light Field method is used, a target output image having an arbitrary depth of field can be constituted freely after the RAW data is obtained, even if the pan-focus image does not exist.
In addition, it is possible to generate the ideal or pseudo pan-focus image as the target input image from the RAW data using a method that is not classified into the Light Field method (for example, the method described in JP-A-2007-181193). For instance, it is possible to generate the target input image as the pan-focus image using a phase plate (or a wavefront coding optical element), or to generate the target input image as the pan-focus image using an image restoration process of eliminating blur of an image on the image sensor 33.
<<Variations>>
The embodiment of the present invention can be modified variously as necessary within the scope of the technical concept described in the claims. The embodiment is merely an example of the embodiment of the present invention, and meanings of the present invention and terms of elements are not limited to those described in the above-mentioned embodiment. The specific numeric values mentioned in the above description are merely examples and can be changed to various numeric values as a matter of course. As annotations that can be applied to the above-mentioned embodiment, Notes 1 to 4 are described below. The descriptions in the Notes can be combined freely as long as no contradiction arises.
[Note 1]
There is described the method of setting the blurring amount for every subject distance to zero as the initial setting in Step S13 of
[Note 2]
The individual portions illustrated in
[Note 3]
In the embodiment described above, actions of the image pickup apparatus 1 are mainly described. Therefore, an object in the image or on the display screen is mainly referred to as a subject. It can be said that a subject in the image or on the display screen has the same meaning as an object in the image or on the display screen.
[Note 4]
The image pickup apparatus 1 of
Number | Date | Country | Kind |
---|---|---|---|
2010-241969 | Oct 2010 | JP | national |