The present invention relates to a focus detection apparatus, focus detection method, and image sensing apparatus, and more particularly, to a focus detection apparatus and focus detection method used in an image sensing apparatus, such as a digital still camera, capable of detecting an in-focus state of a photographing lens based on an image obtained from an image sensor for image sensing.
Regarding systems for detecting an in-focus state of a photographing lens in a digital camera which photographs using an image sensor, an apparatus which performs pupil division-based focus detection using a two-dimensional sensor is disclosed in Japanese Patent Laid-Open No. 58-24105, where the two-dimensional sensor has a microlens formed in each pixel. In the apparatus disclosed in Japanese Patent Laid-Open No. 58-24105, a photoelectric converter in each pixel of the image sensor is divided into multiple parts and the divided photoelectric converter is configured to receive a luminous flux passing different areas of a pupil of the photographing lens via the microlens.
Also, Japanese Patent No. 2959142 discloses a solid-state image sensing apparatus, which also serves as an image sensor, in which pixels whose microlens and photoelectric converter are shifted relative to each other are arranged two-dimensionally. The solid-state image sensing apparatus disclosed in Japanese Patent No. 2959142 detects the in-focus state of the photographing lens based on images generated in pixel columns which differ in the relative shift direction of the microlens and photoelectric converter. On the other hand, when capturing an ordinary image, the solid-state image sensing apparatus generates an image by adding signals from pixels which differ in the relative shift direction of the microlens and photoelectric converter.
Also, in Japanese Patent Laid-Open No. 2005-106994, the present inventor discloses a solid-state image sensing apparatus which performs pupil division-based focus detection using a CMOS image sensor (solid-state image sensing apparatus) used in a digital still camera. With the solid-state image sensing apparatus disclosed in Japanese Patent Laid-Open No. 2005-106994, in some of a large number of pixels in the solid-state image sensing apparatus, the photoelectric converter is divided into two parts to detect focus state of the photographing lens. The photoelectric converter is configured to receive a luminous flux passing a predetermined area of a pupil of the photographing lens via the microlens.
Cameras detect the focus state of the photographing lens using correlation calculation between an image generated by a luminous flux passing through the area Sα and an image generated by a luminous flux passing through the area Sβ on the pupil of the photographing lens. A method for detecting focus using correlation calculation between images generated by luminous fluxes passing through different pupil areas of a photographing lens is disclosed in Japanese Patent Laid-Open No. 5-127074.
Also, Japanese Patent Laid-Open No. 5-127074 discloses a technique for detecting focus state after deforming a specific filter contained in a camera according to an aperture ratio, exit pupil position, and amount of image displacement and adapting the deformed filter to a subject image.
When detecting focus state, it is common practice to detect focus of not only a subject located at the center of a photographic screen, but also subjects located on peripheries of the photographic screen. However, on the peripheries of the photographic screen, the areas Sα and Sβ on the pupil of the photographing lens become asymmetric because of vignetting of a luminous flux caused by a lens frame or the like of the photographing lens. This results in low agreement between the image generated by the luminous flux passing through the area Sα on the pupil of the photographing lens and the image generated by the luminous flux passing through the area Sβ on the pupil of the photographing lens. Thus, the inventions disclosed in Japanese Patent Laid-Open No. 58-24105, Japanese Patent No. 2959142, and Japanese Patent Laid-Open No. 2005-106994 have a problem in that accurate focus detection is not possible on peripheries of the photographic screen if correlation calculation is performed based on the image generated by the luminous flux passing through the area Sα on the pupil of the photographing lens and the image generated by the luminous flux passing through the area Sβ on the pupil of the photographing lens.
Also, the technique disclosed in Japanese Patent Laid-Open No. 5-127074 has the disadvantage that images cannot be restored according to vignetting state of the luminous flux even if the specific filter contained in the camera is deformed according to conditions.
The present invention has been made in consideration of the above situation, and has as its object to enable images to be restored according to the vignetting state of a luminous flux and thereby to improve focusing accuracy.
According to the present invention, the foregoing object is attained by providing a focus detection apparatus comprising: image sensing means having a first pixel group which receives a luminous flux passing a first pupil area of an imaging optical system which forms a subject image, and a second pixel group which receives a luminous flux passing a second pupil area different from the first pupil area; storage means storing a first distribution function corresponding to the first pupil area, and a second distribution function corresponding to the second pupil area; calculation means generating a first image signal by performing calculations on a first subject image, obtained from the first pixel group, using the second distribution function, and generating a second image signal by performing calculations on a second subject image, obtained from the second pixel group, using the first distribution function; and focus state detection means detecting a focus state of the imaging optical system based on the first image signal and the second image signal generated by the calculation means.
According to the present invention, the foregoing object is also attained by providing a focus detection method comprising: an image sensing step of reading a first subject image from a first pixel group which receives a luminous flux passing a first pupil area of an imaging optical system which forms a subject image, and reading a second subject image from a second pixel group which receives a luminous flux passing a second pupil area different from the first pupil area, the first pixel group and the second pixel group being included in image sensing means; an acquisition step of acquiring a first distribution function corresponding to the first pupil area, and a second distribution function corresponding to the second pupil area; a calculation step of generating a first image signal by performing calculations on the first subject image using the second distribution function and generating a second image signal by performing calculations on the second subject image using the first distribution function; and a focus state detection step of detecting a focus state of the imaging optical system based on the first image signal and the second image signal generated in the calculation step.
Further features of the present invention will become apparent from the following description of exemplary embodiments (with reference to the attached drawings).
Preferred embodiments of the present invention will be described in detail in accordance with the accompanying drawings. The dimensions, shapes and relative positions of the constituent parts shown in the embodiments should be changed as convenient depending on various conditions and on the structure of the apparatus adapted to the invention, and the invention is not limited to the embodiments described herein.
Reference numeral 105 denotes a third lens group which performs focus adjustment by moving forward and backward along the optical axis. Reference numeral 106 denotes an optical low pass filter which is an optical element used to reduce false colors and moire in shot images. Reference numeral 107 denotes an image sensor which includes a CMOS image sensor and peripheral circuits of the CMOS image sensor. The image sensor 107 uses a two-dimensional single-plate color sensor which has multiple light-receiving pixels, with m pixels arranged in a horizontal direction and n pixels arranged in a vertical direction, over which a Bayer array of primary-color mosaic filters is formed on chip.
Reference numeral 111 denotes a zoom actuator which turns a cam barrel (not shown) and thereby drives the first lens group 101 and second lens group 103 forward and backward along the optical axis, to perform a scaling operation. Reference numeral 112 denotes an aperture-shutter actuator which adjusts an amount of photographic light by controlling the aperture diameter of the aperture-shutter 102 and controls the exposure time during still photography. Reference numeral 114 denotes a focus actuator which performs focus adjustment by moving the third lens group 105 forward and backward along the optical axis.
Reference numeral 115 denotes an electronic flash used to illuminate a subject at the time of photography. A flash lighting system which uses a xenon tube is used preferably, but a lighting system equipped with an LED which emits light continuously may be used alternatively. Reference numeral 116 denotes an AF fill flash unit which projects an image of a mask provided with a predetermined open pattern onto a subject field via a projection lens to improve focus detection capability with respect to a dark subject or low-contrast subject.
Reference numeral 121 denotes a CPU which performs various types of control over the camera body in the image sensing apparatus. The CPU 121 includes, for example, a calculation unit, ROM, RAM, A/D converter, D/A converter, and communications interface circuit. Based on a predetermined program stored in the ROM, the CPU 121 performs a series of operations including AF, shooting, image processing, and recording operations by driving various circuits of the image sensing apparatus.
Reference numeral 122 denotes an electronic flash control circuit which performs lighting control of the electronic flash 115 in synchronization with shooting operation. Reference numeral 123 denotes a fill flash driving circuit which performs lighting control of the AF fill flash unit 116 in synchronization with focus detection operation. Reference numeral 124 denotes an image sensor driving circuit which controls image sensing operation of the image sensor 107 as well as performs A/D conversion of an acquired image signal and transmits the resulting image signal to the CPU 121. Reference numeral 125 denotes an image processing circuit which performs γ conversion, color interpolation, JPEG compression, and other processes on an image acquired by the image sensor 107.
Reference numeral 126 denotes a focus driving circuit which controls driving of the focus actuator 114 based on results of focus detection, thereby moves the third lens group 105 forward and backward along the optical axis, and thereby performs focus adjustment. Reference numeral 128 denotes an aperture-shutter driving circuit which controls driving of the aperture-shutter actuator 112 and thereby controls opening of the aperture-shutter 102. Reference numeral 129 denotes a zoom driving circuit which drives the zoom actuator 111 in response to a zoom operation performed by a photographer.
Reference numeral 131 denotes a display such as an LCD which displays information about shooting mode of the image sensing apparatus, a preview image before shooting and a confirmation image after shooting, an in-focus state display image brought up when focus is detected, and the like. Reference numeral 132 denotes an operation switch group which includes a power switch, release (shutter trigger) switch, zoom operation switch, shooting mode selector switch, and the like. Reference numeral 133 denotes a detachable flash memory used to record shot images.
In
Next, operation of independent outputs from all pixels in the image sensor 107 shown in
First, in response to a timing output from the vertical scanning circuit 16, a control pulse φL is set High to reset a vertical output line. Also, control pulses φR0, φPG00, and φPGe0 are set High to turn on the reset MOS transistor 4 and set the first polysilicon layer 19 of the photogate 2 to High. At time T0, a control pulse φS0 is set High to turn on the horizontal selector switch MOS transistor 6 and thereby select the pixels on the first and second lines. Next, the control pulse φR0 is set Low to release the FD portion 21 from the reset state and put the FD portion 21 into a floating state, thereby causing the source follower amplifier MOS transistor 5 to conduct between gate and source. Subsequently, at time T1, a control pulse φTN is set High to cause the FD portion 21 to output a dark voltage to the storage capacitor CTN 10 through source follower operation.
Next, in order to obtain photoelectric conversion outputs from the pixels of the first line, a control pulse φTX00 for the first line is set High to bring the transfer switch MOS transistor 3 into conduction. After that, at time T2, the control pulse φPG00 is set Low. In so doing, the voltage relationship is preferably such as to make the potential wells spread below the photogate 2 shallower and completely transfer the light-generated carriers to the FD portion 21. Thus, as long as complete transfer is possible, a fixed potential may be used instead of the control pulse φTX.
At time T2, as charge is transferred from the photodiodes of the pixels of the first line to the FD portion 21, the potential of the FD portion 21 changes according to the light. Since the source follower amplifier MOS transistor 5 is in a floating state, a control pulse φTS is set High at time T3 to output the potential of the FD portion 21 to the storage capacitor CTS 11. At this point, a dark signal and an image signal of the pixels of the first line are stored in the storage capacitors CTN 10 and CTS 11, respectively. At time T4, a control pulse φHC is temporarily set High to bring the horizontal output line reset MOS transistor 13 into conduction. Consequently, the horizontal output line is reset, and the dark signal and image signal of the pixels are output to the horizontal output line in a horizontal transfer period in response to a scan timing signal for the horizontal scanning circuit 15. In so doing, by determining the differential output VOUT using the differential amplifier 14 for the storage capacitors CTN 10 and CTS 11, it is possible to obtain a signal free of random pixel noise and fixed-pattern noise and with a good signal-to-noise ratio.
The dark signal and image signal of the pixels of the first line are stored, respectively, in the storage capacitors CTN 10 and CTS 11 connected to respective vertical output lines. Thus, when the horizontal transfer MOS transistors 12 are turned on in sequence, the charges stored in the respective storage capacitors CTN 10 and CTS 11 are read out to the horizontal output line in sequence and output from the differential amplifier 14.
The present embodiment is configured to produce the differential output VOUT in the chip. However, similar effects can be obtained using a conventional external CDS (Correlated Double Sampling) circuit outside the chip.
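Conceptually, whether the differential output is produced by the on-chip differential amplifier 14 or by an external CDS circuit, the noise cancellation amounts to a per-column subtraction of the stored dark level (CTN) from the stored signal level (CTS). The following is a minimal numerical sketch of that idea only, not a model of the actual circuit.

```python
import numpy as np

def cds_readout(ctn_levels, cts_levels):
    """Correlated double sampling: subtract the dark level sampled into CTN
    from the signal level sampled into CTS for each column, cancelling the
    reset offset and fixed-pattern noise common to both samples."""
    return np.asarray(cts_levels, dtype=float) - np.asarray(ctn_levels, dtype=float)

# Example: a column-dependent offset present in both samples disappears in VOUT.
offset = np.array([0.10, 0.12, 0.08, 0.11])   # per-column dark/offset level (arbitrary units)
signal = np.array([0.50, 0.00, 0.25, 0.75])   # photo-generated signal
vout = cds_readout(ctn_levels=offset, cts_levels=offset + signal)
print(vout)                                   # [0.5  0.   0.25 0.75]
```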
On the other hand, after the image signal is output from the pixels of the first line to the storage capacitors CTS 11, the control pulse φR0 is set High to bring the reset MOS transistor 4 into conduction and reset the FD portion 21 to a power supply voltage VDD. When horizontal transfer of charges from the first line is finished, charges are read out from pixels on the second line. To read the second line, a control pulse φTXe0 and control pulse φPGe0 are driven first, as in the case of the first line described above. Next, the control pulses φTN and φTS are sequentially set High, and the dark signal and image signal are stored in the respective storage capacitors CTN 10 and CTS 11.
The above process allows the first line and second line to be read independently of each other. Subsequently, if the (2n+1)-th and (2n+2)-th lines (n=1, 2, . . . ) are read similarly by operating the vertical scanning circuit 16, independent outputs can be produced from all pixels. Specifically, when n=1, first a control pulse φS1 is set High, then φR1 is set Low, and subsequently control pulses φTN and φTX01 are set High. Then, a control pulse φPG01 is set Low, the control pulse φTS is set High, and the control pulse φHC is temporarily set High to read the dark signal and image signal from each pixel of the third line. Next, control pulses φTXe1 and φPGe1 are applied in addition to the control pulses described above to read the dark signal and image signal from each pixel of the fourth line.
A sectional view taken along A-A in
The on-chip microlens ML of the image sensing pixel and the photoelectric conversion device PD are configured to capture luminous fluxes passing the photographing lens TL as effectively as possible. In other words, the exit pupil EP of the photographing lens TL and the photoelectric conversion device PD are conjugated via the microlens ML. Also, the effective area of the photoelectric conversion device PD is designed to be large. This can be seen from a luminous flux 30 which shows how the entire area of the exit pupil EP is taken into the photoelectric conversion device PD. Although a luminous flux incident upon the R pixel has been described in
A sectional view taken along B-B in
According to the present embodiment, since signals from the focus detection pixels are not used for image formation, a transparent film CFW (white) is placed instead of a color separation color filter. Also, since the image sensor 107 performs pupil division, the opening of the wiring layer CL is deviated in the x direction with respect to the centerline of the microlens ML. Specifically, an opening OPHA of the pixel SHA is deviated by 41HA in the −x direction, and thus receives a luminous flux 40HA passing an exit pupil area EPHA on the +x side of the photographing lens TL. Similarly, an opening OPHB of the pixel SHB is deviated by 41HB in the +x direction, and thus receives a luminous flux 40HB passing an exit pupil area EPHB on the −x side of the photographing lens TL. As can be seen from
The pixels SHA configured as described above are arranged regularly in the x direction and a subject image obtained from this pixel group is designated as image A. Also, the pixels SHB are arranged regularly in the x direction and a subject image obtained from this pixel group is designated as image B. Then, by detecting relative position of images A and B thus acquired, it is possible to detect a defocus amount of a subject image which has a luminance distribution in the x direction.
Incidentally, the pixels SHA and SHB described above are useful in detecting focus with respect to a subject which has a luminance distribution in the x direction of a photographic screen, such as a line in a y direction (vertical line), for example, but are not useful in detecting focus with respect to a line in the x direction (horizontal line) which has a luminance distribution in the y direction. Thus, to enable focus detection with respect to a line in the x direction as well, the present embodiment is also provided with pixels used for pupil division in the y direction of the photographing lens.
A sectional view taken along C-C in
The pixels SVC configured as described above are arranged regularly in the y direction and a subject image obtained from this pixel group is designated as image C. Also, the pixels SVD are arranged regularly in the y direction and a subject image obtained from this pixel group is designated as image D. Then, by detecting relative position of images C and D thus acquired, it is possible to detect a defocus amount of a subject image which has a luminance distribution in the y direction.
As described with reference to
In
Regarding the focus detection area AFARv (x3, y3), a defocus amount of the photographing lens is determined similarly by calculating an amount of relative y-direction deviation between a signal of image C AFSIGv (C1) for phase difference detection and a signal of image D AFSIGv (D1) for phase difference detection using a known correlation calculation. Then, the two defocus amounts detected in the x-direction and y-direction focus detection areas are compared, and the value with the higher reliability is adopted.
On the other hand, the trunk of the tree on the left side of the screen mainly contains a y-direction component, that is, it has a luminance distribution in the x direction. Therefore, it is determined that the subject is suitable for detecting x-direction deviation, and a focus detection area AFARh (x2, y2) for detection of x-direction deviation is set. Also, the ridges of the mountains on the right side of the screen mainly contain an x-direction component, that is, they have a luminance distribution in the y direction. Therefore, it is determined that the subject is suitable for detecting y-direction deviation, and a focus detection area AFARv (x4, y4) for detection of y-direction deviation is set.
Thus, the present embodiment, which can set focus detection areas for detection of x-direction deviation and y-direction deviation at any location, is capable of focus detection regardless of where the subject is projected on the screen and of how its luminance distribution is oriented. The principles of deviation detection are identical for the x direction and the y direction except for the difference in direction; thus, only deviation detection in the x direction will be described below, and description of deviation detection in the y direction will be omitted.
Reference characters Iw1 and Iw2 denote windows of the members which restrict the luminous flux. The luminous flux passes the windows Iw1 and Iw2 of the members. Reference character Me denotes a pupil surface established according to the configuration of the microlens ML. First, vignetting of the luminous flux incident upon the pixel at the center of the image sensor 107 will be described with reference to
Reference characters L1rc and L1lc denote an outer circumference of an exit luminous flux from the window Iw1, where L1rc indicates the right end of the circumference in
Next, vignetting of the luminous flux incident upon a pixel located at an image height away from the center of the image sensor 107 will be described with reference to
As described above, the defocus amount of a subject image is detected by detecting the relative position of images A and B which are subject images acquired, respectively, from the pixel SHA group and pixel SHB group arranged regularly in the x direction.
If the light distribution of a subject is f(x, y) and the light distribution of a subject image is g(x, y), the following relationship holds:

g(x, y) = ∫∫ f(x − a, y − b)·h(a, b) da db  (1)
Thus, a relationship given by the convolution holds, where h(x, y) is a transfer function called a point spread function which represents the degradation of the subject image in an image forming system. Thus, to find a pair of subject images used for focus detection, it is necessary to find the point spread function. In phase difference focus detection, the phase difference between a pair of subject images is detected by paying attention to a one-dimensional direction of the images. Therefore, the image system related to focus detection can be evaluated using a line spread function, which is a one-dimensional function, instead of the point spread function. Thus, by rewriting the light distribution of the subject as f(x) and the light distribution of the subject image as g(x), Equation (1) above can be rewritten as follows using a line spread function L(a):

g(x) = ∫ f(x − a)·L(a) da  (2)
Thus, under an arbitrary defocus condition, a pair of line spread functions generated by luminous fluxes which pass different pupil areas in a phase shift direction are determined using Equation (2) above. Consequently, a pair of subject images can be found. Once a pair of subject images are found, the base length can be determined from the distance between the centers of gravity of the subject images, and the defocus amount can be calculated based on the amount of image deviation between the pair of subject images and on the base length. The base length can be determined using Equations (3) to (5) below. If the line images corresponding to the two subject images are LA(x) and LB(x), the centers of gravity of the subject images are GA and GB, and the base length is G, then

GA = ∫ x·LA(x) dx / ∫ LA(x) dx  (3)

GB = ∫ x·LB(x) dx / ∫ LB(x) dx  (4)

G = |GA − GB|  (5)
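As a numerical illustration of Equations (2) to (5), the following Python sketch forms a pair of subject images by convolving an assumed one-dimensional subject with a pair of assumed line spread functions, and takes the base length as the distance between the centers of gravity of the line images. The subject, the line spread shapes, and the sampling grid are illustrative assumptions, not values used by the embodiment.

```python
import numpy as np

def subject_image(f, lsf):
    """Equation (2): subject image g(x) as the convolution of the subject's
    light distribution f(x) with a line spread function L(x)."""
    return np.convolve(f, lsf, mode="same")

def center_of_gravity(dist, x):
    """Center of gravity as in Equations (3) and (4)."""
    return np.sum(x * dist) / np.sum(dist)

# Assumed one-dimensional subject and a pair of line spread functions whose
# shapes differ because they belong to different pupil areas (illustrative only).
x = np.arange(-32.0, 33.0)
f = np.zeros_like(x); f[28:37] = 1.0                    # a simple bar subject
lsf_a = np.exp(-0.5 * ((x - 4.0) / 3.0) ** 2)           # hypothetical line image A
lsf_b = np.exp(-0.5 * ((x + 4.0) / 5.0) ** 2)           # hypothetical line image B
lsf_a /= lsf_a.sum(); lsf_b /= lsf_b.sum()

img_a = subject_image(f, lsf_a)                         # subject image A via Equation (2)
img_b = subject_image(f, lsf_b)                         # subject image B via Equation (2)
base_length = abs(center_of_gravity(lsf_a, x) - center_of_gravity(lsf_b, x))   # Equation (5)
print(round(base_length, 2))
```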
An intensity distribution of a point image formed on an image plane by light which is emitted from a point light source and passes through an exit pupil of an optical system, that is, a so-called point spread function, is considered to be a reduced projection of the exit pupil shape onto the image plane. Similarly, a line spread function can be considered to be the exit pupil shape in the one-dimensional direction, that is, the exit pupil shape integrated in one direction and projected in reduced form onto the image plane via the microlens ML.
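This correspondence can be sketched numerically: integrating a two-dimensional pupil intensity distribution, clipped by a vignetting window, along one pupil direction yields a one-dimensional line spread function. The pupil sensitivity shape and the circular window below are illustrative assumptions only, not the stored data of the embodiment.

```python
import numpy as np

def line_spread_from_pupil(pupil_intensity, axis=0):
    """Collapse a 2D pupil intensity distribution to a 1D line spread function
    by integrating along one pupil direction (here the y direction) and
    normalizing to unit area."""
    lsf = pupil_intensity.sum(axis=axis)
    return lsf / lsf.sum()

# Hypothetical pupil intensity of a pixel SHA: sensitive mainly on the +x side
# of the pupil and clipped by an assumed circular vignetting window.
n = 129
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
sensitivity = 1.0 / (1.0 + np.exp(-8.0 * x))      # assumed half-pupil sensitivity of SHA
window = (x**2 + y**2) <= 0.8**2                  # assumed vignetting circle
lsf_a = line_spread_from_pupil(sensitivity * window)
lsf_b = line_spread_from_pupil(sensitivity[:, ::-1] * window)   # mirrored sensitivity for SHB
print(lsf_a.argmax(), lsf_b.argmax())             # peaks lie on opposite sides of the pupil
```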
Thus, it can be considered that the line spread function corresponds to the pupil intensity distributions shown in
That is, as shown in
Next, a method for correcting asymmetry of subject images will be described.
As described above, the asymmetry between subject image A and subject image B occurs when pupil intensity distributions asymmetric between the pixels SHA and SHB are convoluted.
To begin with, subject image A (ImgA) is obtained using Equation (2) described above.
A corrected image ReImgA, denoted k(x), is determined by convolution of the resulting subject image A (ImgA) and the line image EsdBx as follows:

k(x) = ∫ ImgA(x − b)·EsdBx(b) db  (6)
The corrected image ReImgB is calculated similarly, using the line image corresponding to the pixel SHA (here written EsdAx):

ReImgB(x) = ∫ ImgB(x − b)·EsdAx(b) db  (7)
ReImgA and ReImgB obtained using Equations (6) and (7) above are equal.
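The effect of Equations (6) and (7) can be sketched in Python: each subject image is convolved with the line image of the other pixel group, so both corrected images carry the same combined blur (the subject convolved with both line images) and therefore agree in shape. The subject and line image shapes below are assumptions for illustration.

```python
import numpy as np

def correct_images(img_a, img_b, lsf_a, lsf_b):
    """Equations (6) and (7): convolve each subject image with the line image
    of the other pixel group so that both results carry the same combined blur."""
    re_img_a = np.convolve(img_a, lsf_b, mode="full")   # ReImgA = ImgA (*) line image B
    re_img_b = np.convolve(img_b, lsf_a, mode="full")   # ReImgB = ImgB (*) line image A
    return re_img_a, re_img_b

# Illustrative subject and line images (assumed shapes).
x = np.arange(-32.0, 33.0)
f = np.zeros_like(x); f[28:37] = 1.0
lsf_a = np.exp(-0.5 * ((x - 4.0) / 3.0) ** 2); lsf_a /= lsf_a.sum()
lsf_b = np.exp(-0.5 * ((x + 4.0) / 5.0) ** 2); lsf_b /= lsf_b.sum()
img_a = np.convolve(f, lsf_a, mode="same")              # subject image A
img_b = np.convolve(f, lsf_b, mode="same")              # subject image B
re_a, re_b = correct_images(img_a, img_b, lsf_a, lsf_b)
print(np.allclose(re_a, re_b, atol=1e-6))               # True: the asymmetry is removed
```

Note that in the embodiment the correction filters are first shifted to a common center of gravity (step S9 below), so the relative deviation between the two images, which carries the defocus information, is preserved while their blur shapes are matched.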
Next, a flow of a focus detection process according to the present embodiment will be described with reference to a flowchart in
In step S1, the CPU 121 reads out lens information to check vignetting state. In step S2, the CPU 121 reads out a focus detection area set by the user, and then goes to step S3.
In step S3, the CPU 121 reads the pupil intensity distribution of each focus detection pixel out of the ROM of the CPU 121 and calculates the line spread function using the acquired information in conjunction with the vignetting information obtained in step S1. In step S4, the CPU 121 calculates the center of gravity of the line spread function obtained in step S3 and determines the base length. Then, the CPU 121 goes to step S5.
Next, in step S5, the CPU 121 reads image signals from focus detection pixels in the focus detection area and forms subject image A and subject image B. In step S6, the CPU 121 performs shading correction by predicting shading of subject image A and subject image B formed in step S5 based on the line spread function obtained in step S3. After the shading correction, the CPU 121 goes to step S7.
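One plausible reading of the shading prediction in step S6 is sketched below: the relative sensitivity of each pixel group is taken from the area under its vignetted line spread function, and the two subject images are scaled so that their signal levels match. The exact prediction used by the embodiment may differ, so this is an assumption for illustration.

```python
import numpy as np

def shading_correct(img_a, img_b, lsf_a, lsf_b):
    """Scale subject images A and B so that their overall signal levels match.
    The relative sensitivity of each pixel group is estimated from the area
    under its (vignetted, unnormalized) line spread function."""
    img_a = np.asarray(img_a, dtype=float)
    img_b = np.asarray(img_b, dtype=float)
    gain_a = 1.0 / np.sum(lsf_a)          # assumed shading prediction for image A
    gain_b = 1.0 / np.sum(lsf_b)          # assumed shading prediction for image B
    return img_a * gain_a, img_b * gain_b
```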
In step S7, the CPU 121 determines an amount of image deviation by a known correlation calculation method using subject image A and subject image B subjected to the shading correction in step S6, and determines a tentative defocus amount based on the amount of image deviation in conjunction with the base length determined in step S4. Once the tentative defocus amount is calculated, the CPU 121 goes to step S8.
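A known correlation calculation of the kind referred to in step S7 can be sketched as follows: the mean absolute difference between the two images is evaluated over a range of trial shifts, the shift giving the smallest difference is taken as the image deviation, and the deviation is converted to a defocus amount with a factor that shrinks as the base length grows. The conversion constant below is a placeholder, not the embodiment's calibration.

```python
import numpy as np

def image_deviation(img_a, img_b, max_shift=16):
    """Estimate the relative shift between two images by minimizing the mean
    absolute difference over a range of trial shifts (a known correlation method)."""
    img_a = np.asarray(img_a, dtype=float)
    img_b = np.asarray(img_b, dtype=float)
    best_shift, best_score = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        a = img_a[max(0, s): len(img_a) + min(0, s)]
        b = img_b[max(0, -s): len(img_b) + min(0, -s)]
        score = np.mean(np.abs(a - b))
        if score < best_score:
            best_shift, best_score = s, score
    return best_shift

def defocus_from_deviation(deviation_px, pixel_pitch, base_length, k=1.0):
    """Convert an image deviation to a defocus amount. The deviation grows with
    both the defocus and the base length, so the conversion uses 1/base_length;
    k stands in for the remaining optics-dependent constants (an assumption)."""
    return k * deviation_px * pixel_pitch / base_length

# Example with two identical images shifted by 3 pixels (hypothetical values).
img_b = np.zeros(64); img_b[20:28] = 1.0
img_a = np.roll(img_b, 3)
shift = image_deviation(img_a, img_b)
print(shift, defocus_from_deviation(shift, pixel_pitch=0.005, base_length=2.0))
```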
In step S8, the CPU 121 determines whether or not the tentative defocus amount calculated in step S7 falls within a range defined by thresholds A and B. If it is determined that the tentative defocus amount falls within the defined range, the CPU 121 goes to step S9 to perform an image correction process. On the other hand, if it is determined that the tentative defocus amount falls outside the defined range, the CPU 121 goes to step S13 without performing an image correction process. Reasons for this will be described below.
If an image correction process is performed when the amount of defocus is too large, correlation calculation becomes difficult to perform because the convolution further blurs the images. On the other hand, when the amount of defocus is small, the symmetry of the two images is not impaired much, so there is no need to correct the images. For these two reasons, it is convenient to perform image correction only when the tentative defocus amount falls within a certain defocus range.
In step S9, the CPU 121 creates image correction filters. The line spread functions obtained in step S3 are adjusted to the width of the image correction filters, which is determined based on the tentative defocus amount calculated in step S7.
A method for determining the width of the image correction filters will be described with reference to
Similarly, a rear-focus situation results in a relationship illustrated in
Next, the CPU 121 increases a gain of the shorter filter so that the image correction filters will have the same height. This is because shading correction has been applied to subject image A and subject image B in the first correlation calculation in step S6.
Next, the CPU 121 moves the waveforms to align the centers of gravity of the image correction filters of subject image A and subject image B. This is intended to ensure that any change in the base length caused by the filtering process in the next step S10 is limited to the change caused by deformation of subject image A and subject image B in that process.
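The three operations of step S9 can be sketched as follows; the resampling scheme used for the width adjustment and the interpolation used to shift the filters are assumptions for illustration, not the embodiment's exact procedure.

```python
import numpy as np

def make_correction_filters(lsf_a, lsf_b, width_px):
    """Sketch of step S9: (1) resample each line image to the filter width
    implied by the tentative defocus amount, (2) raise the gain of the shorter
    filter so both filters have the same height, and (3) shift both filters so
    that their centers of gravity coincide, so that the filtering itself does
    not move the two images relative to each other."""
    x_new = np.linspace(0.0, 1.0, width_px)
    fa = np.interp(x_new, np.linspace(0.0, 1.0, len(lsf_a)), lsf_a)   # (1) width adjustment
    fb = np.interp(x_new, np.linspace(0.0, 1.0, len(lsf_b)), lsf_b)
    peak = max(fa.max(), fb.max())                                    # (2) equalize heights
    fa *= peak / fa.max()
    fb *= peak / fb.max()
    idx = np.arange(width_px, dtype=float)                            # (3) align centers of gravity
    ga = np.sum(idx * fa) / np.sum(fa)
    gb = np.sum(idx * fb) / np.sum(fb)
    shift = (gb - ga) / 2.0
    fa = np.interp(idx, idx + shift, fa, left=0.0, right=0.0)         # move filter A by +shift
    fb = np.interp(idx, idx - shift, fb, left=0.0, right=0.0)         # move filter B by -shift
    return fa, fb
```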
In step S10, the CPU 121 performs the convolution of the image correction filters obtained in step S9 and the subject images, and thereby calculates corrected subject images. Then, the CPU 121 goes to step S11. In step S11, the CPU 121 calculates the base length anew using the line spread functions obtained in step S3. First, the CPU 121 moves a line image (hereinafter referred to as line image A) corresponding to subject image A and a line image (hereinafter referred to as line image B) corresponding to subject image B in such a way as to bring their centers of gravity into coincidence. If the moved line image A and line image B are designated as line image A0 and line image B0, a corrected line image A is obtained by convoluting the line image A and the line image B0, while a corrected line image B is obtained by convoluting the line image B and the line image A0. The CPU 121 calculates the corrected base length from the distance between the centers of gravity of the corrected line image A and the corrected line image B. This is given by the following equations.
If the corrected line image A is MA(x), the line image A is LA(x), and the line image B0 is LB′(x), an equation used to determine the corrected line image A is given by:

MA(x) = ∫ LA(x − b)·LB′(b) db  (9)
Thus, if the center of gravity of the corrected line image A is denoted by GA′,

GA′ = ∫ x·MA(x) dx / ∫ MA(x) dx  (10)
Similarly, if the corrected line image B is MB(x), the line image B is LB(x), and the line image A0 is LA′(x), an equation used to determine the corrected line image B is given by:

MB(x) = ∫ LB(x − b)·LA′(b) db  (11)
Thus, if the center of gravity of the corrected line image B is denoted by GB′,

GB′ = ∫ x·MB(x) dx / ∫ MB(x) dx  (12)
Thus, if the base length to be determined is G′,
G′ = |GA′ − GB′|  (13)
Once the base length described above is calculated, the CPU 121 goes to step S12.
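The recalculation of the base length in step S11 (Equations (9) to (13)) can be sketched in Python as follows; the sampling grid, the interpolation used to recenter the line images, and the example line image shapes are assumptions for illustration.

```python
import numpy as np

def corrected_base_length(lsf_a, lsf_b, x):
    """Step S11 / Equations (9)-(13): shift both line images to a common center
    of gravity (line images A0 and B0), convolve each original line image with
    the other recentered one to obtain the corrected line images MA and MB, and
    return the distance between their centers of gravity as G'."""
    def cog(f, pos):
        return np.sum(pos * f) / np.sum(f)

    common = 0.5 * (cog(lsf_a, x) + cog(lsf_b, x))
    lsf_a0 = np.interp(x, x + (common - cog(lsf_a, x)), lsf_a, left=0.0, right=0.0)
    lsf_b0 = np.interp(x, x + (common - cog(lsf_b, x)), lsf_b, left=0.0, right=0.0)
    ma = np.convolve(lsf_a, lsf_b0, mode="same")      # corrected line image A
    mb = np.convolve(lsf_b, lsf_a0, mode="same")      # corrected line image B
    return abs(cog(ma, x) - cog(mb, x))               # G' = |GA' - GB'|

# Illustrative line images (assumed shapes); the result stays close to the
# uncorrected base length of 8 because the recentered filters add no extra shift.
x = np.arange(-32.0, 33.0)
la = np.exp(-0.5 * ((x - 4.0) / 3.0) ** 2)
lb = np.exp(-0.5 * ((x + 4.0) / 5.0) ** 2)
print(round(corrected_base_length(la, lb, x), 2))
```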
In step S12, the CPU 121 determines the amount of image deviation between the two images by a known correlation calculation method using the corrected subject images formed in step S10, detects focus state, and determines the defocus amount based on the amount of image deviation in conjunction with the corrected base length determined in step S11. Once the amount of defocus is determined, the CPU 121 goes to step S13.
In step S13, based on the calculated amount of defocus, the CPU 121 determines whether the subject is in focus. If it is not determined that the subject is in focus, the CPU 121 goes to step S14 to move the third lens group 105 forward or backward based on results of the defocus calculation. Then, the CPU 121 returns to step S5.
On the other hand, if it is determined that the subject is in focus, the CPU 121 finishes the series of focus detection process steps.
The above configuration enables images to be restored according to the vignetting state of a luminous flux, thereby improving focusing accuracy.
Incidentally, although a known correlation calculation method based on image deviation is used in the present embodiment, similar results can be obtained using another method. Also, in the present embodiment, the image correction process is performed using correction filters whose heights are adjusted to the line images corresponding to two subject images subjected to shading correction. However, image correction may be performed by convoluting the subject images before shading correction using correction filters whose heights are not adjusted. Furthermore, although in the present embodiment, the necessity for the image correction process is determined depending on a defocus range, focusing accuracy is expected to be improved even when an image correction process is performed throughout the range of defocus.
Aspects of the present invention can also be realized by a computer of a system or apparatus (or devices such as a CPU or MPU) that reads out and executes a program recorded on a memory device to perform the functions of the above-described embodiment(s), and by a method, the steps of which are performed by a computer of a system or apparatus by, for example, reading out and executing a program recorded on a memory device to perform the functions of the above-described embodiment(s). For this purpose, the program is provided to the computer for example via a network or from a recording medium of various types serving as the memory device (e.g., computer-readable medium).
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2008-292609, filed on Nov. 14, 2008 which is hereby incorporated by reference herein in its entirety.