This application is based on application No. 2004-154781 filed in Japan, the contents of which are hereby incorporated by reference.
1. Field of the Invention
The present invention relates to a technique of correcting shading in an image captured by an image sensor.
2. Description of the Background Art
In an image captured by an image capturing apparatus such as a digital camera, a phenomenon called shading, in which the light amount decreases in a peripheral portion of the image, occurs. A part of the shading occurs due to the characteristics of the image sensor.
In an image sensor, which is a collection of fine light sensing pixels, a microlens serving as a condenser lens is disposed for each of the light sensing pixels. In recent image capturing apparatuses, which are strongly required to be compact, image-side telecentricity is generally low, and the incident angle of light increases toward the periphery of the image sensor. When the incident angle increases, the position at which a microlens condenses a light beam deviates from the center of the photosensitive face of the light sensing pixel, and the light reception amount of the light sensing pixel decreases. As a result, shading occurs in the peripheral portion of an image.
Hitherto, a technique has been known in which the microlenses are disposed closer to the optical axis of the image capturing optical system than the positions directly above the light sensing pixels in order to suppress sensor system shading, which occurs due to the characteristics of the image sensor.
Such sensor system shading has various characteristics. For example, the light amount decrease ratio of the sensor system shading is asymmetrical with respect to the center of an image (the position corresponding to the optical axis) because of manufacturing errors and the like of the image sensor. Further, since dispersion occurs in the microlenses, the light amount decrease ratio of the sensor system shading varies from color to color.
As described above, the sensor system shading has various characteristics. However, no shading correction technique that takes such characteristics into consideration has conventionally been proposed, so shading in an image captured by an image sensor cannot be corrected properly.
The present invention is directed to an image capturing apparatus.
According to the present invention, the image capturing apparatus comprises: an image capturing optical system; an image sensor having a plurality of light sensing pixels for photoelectrically converting a light image formed by the image capturing optical system; and a corrector for correcting shading in an image made of a plurality of pixels in a two-dimensional array captured by the image sensor by using a plurality of correction factors corresponding to the plurality of pixels. Values of the plurality of correction factors are asymmetrical with respect to a position corresponding to an optical axis of the image capturing optical system.
Since shading is corrected by using correction factors whose values are asymmetrical with respect to the position corresponding to the optical axis of the image capturing optical system, shading in an image can be properly corrected.
According to an aspect of the present invention, the corrector makes the shading correction by using first correction data including a correction factor for correcting shading which occurs due to characteristics of the image sensor.
Thus, shading which occurs due to the characteristics of the image sensor can be properly corrected.
According to another aspect of the present invention, the corrector makes the shading correction by also using second correction data including a correction factor for correcting shading which occurs due to characteristics of the image capturing optical system.
Consequently, shading which occurs due to the characteristics of the image capturing optical system can be properly corrected.
The present invention is also directed to a method of correcting shading in an image capturing apparatus.
The present invention is also directed to a computer-readable computer program product.
Therefore, an object of the present invention is to provide a technique capable of properly correcting shading in an image captured by an image sensor.
These and other objects, features, aspects and advantages of the present invention will become more apparent from the following detailed description of the present invention when taken in conjunction with the accompanying drawings.
FIGS. 2 to 5 are cross-sectional views of a portion around a light sensing pixel in the image sensor;
In the specification, pixels as basic elements constructing an image sensor will be referred to as “light sensing pixels” and pixels as basic elements constructing an image will be simply referred to as “pixels”.
1. Shading
Prior to description of concrete configurations and operations of preferred embodiments of the present invention, shading which occurs in an image captured by an image capturing apparatus using an image sensor such as a digital camera will be described.
Shading is a phenomenon in which a pixel value (light amount) in a peripheral portion of an image decreases. Generally, shading does not occur at the center of an image (the position corresponding to the optical axis of the image capturing optical system), and the light amount decrease ratio increases toward the periphery of the image. When the light amount decrease ratio is denoted by R, an ideal pixel value at which no shading occurs by V0, and an actual pixel value at which shading occurs by V1, the light amount decrease ratio R in this specification is expressed by the following equation (1). The light amount decrease ratio is a value peculiar to each pixel in an image.
R=(V0−V1)/V0 (1)
Shading is roughly divided into lens system shading and sensor system shading. The lens system shading results from the characteristics of the image capturing optical system (taking lens) and occurs also in a film camera which does not use an image sensor. On the other hand, the sensor system shading results from the characteristics of the image sensor and is a phenomenon peculiar to image capturing apparatuses using an image sensor.
1-1. Lens System Shading
Representative causes of the lens system shading are “vignetting” and “cosine fourth law”.
The “vignetting” is a phenomenon in which a part of an incident light beam is shielded by a frame holding the image capturing optical system or the like. That is, it corresponds to the way the field of view is partially blocked by the frame of the image capturing optical system or the like when the user views an object through the image capturing optical system obliquely with respect to the optical axis.
The “cosine fourth law” is a law according to which the light amount of a light beam incident on the image capturing optical system at an inclination of an angle “a” from the optical axis is smaller than that of a light beam incident in parallel with the optical axis by a factor of the fourth power of the cosine of “a”. For example, a light beam incident at a = 30° retains only cos⁴ 30° ≈ 0.56 of the on-axis light amount. The light amount decreases in accordance with this law.
The lens system shading is thus a phenomenon in which the light amount decreases because of the characteristics of the image capturing optical system before a light beam reaches the image sensor; it is not related to the characteristics of the image sensor.
1-2. Sensor System Shading
On the other hand, the sensor system shading corresponds to a phenomenon that the light amount decreases due to the characteristics of the image sensor after the light beam reaches the image sensor.
It can be regarded that a light beam is incident on each of the light sensing pixels 2 in the image sensor 20 from the position of the exit pupil Ep. Therefore, light is incident on a light sensing pixel 2a in the center of the image sensor 20 along the optical axis “ax” whereas light is incident on a light sensing pixel 2b in a peripheral portion of the image sensor 20 with an inclination from the optical axis “ax”. The incident angle θ of light increases toward the periphery of the image sensor 20 (that is, as the image height increases). When the distance from the image sensor 20 to the exit pupil Ep is set as “exit pupil distance” Ed, the incident angle θ of light depends on the exit pupil distance Ed and increases as the exit pupil distance Ed is shortened. The sensor system shading occurs due to the fact that light is obliquely incident on the light sensing pixel 2.
Specifically, the light sensing pixel 2 has a photodiode 21 for generating and storing a signal charge according to the light reception amount. A channel 23 is provided next to the photodiode 21, and a vertical transfer part 22 for transferring signal charges is disposed next to the channel 23. Above the vertical transfer part 22 in the figure, a transfer electrode 24 for applying a voltage for transferring signal charges to the vertical transfer part 22 is provided. Above the transfer electrode 24, a light shielding film 25 made of aluminum or the like for shielding incoming light to the portion other than the photodiode 21 is disposed.
The foregoing configuration is formed for each light sensing pixel 2. Therefore, in the photosensitive face of the image sensor 20, configurations each identical to the foregoing configuration are disposed continuously with one another. As shown in the figure, the photodiode 21 receives light passed through a window formed between the neighboring two light shielding films 25.
For each of the light sensing pixels 2, a microlens 27 as a condenser lens for condensing light is disposed. In the examples of
A color filter 26 for passing only light having a predetermined wavelength band is disposed between the microlens 27 and the photodiode 21. Color filters 26 for a plurality of colors are prepared and the color filter 26 of any one of the colors is disposed for each light sensing pixel 2.
As described above, light L is incident on the light sensing pixel 2a in the center of the image sensor 20 in parallel with the optical axis. As shown in
The sensor system shading occurs mainly on the above-described principle. The rate of occurrence of a deviation of the light condensing position Lp and of shielding of light by the light shielding film 25 increases as the incident angle θ of the light L increases. Therefore, the light amount decrease ratio due to the sensor system shading increases toward the periphery of the image sensor 20 and as the exit pupil distance Ed becomes shorter.
In recent years, to suppress the sensor system shading, the technique described above of disposing the microlenses 27 closer to the optical axis side than the positions directly above the light sensing pixels 2 has been employed.
However, even when such a technique is applied, the incident angle θ changes according to the exit pupil distance Ed as described above. Therefore, depending on the exit pupil distance Ed, the sensor system shading still occurs in an image.
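The dependence of the incident angle θ on the image height and the exit pupil distance Ed can be pictured with a short sketch. This is a minimal illustration, not part of the specification: it assumes a simple model in which the chief ray for a pixel at image height h arrives from the exit pupil at distance Ed, so that tan θ = h/Ed; the function name and the numbers are hypothetical.

```python
import math

def incident_angle_deg(image_height_mm: float, exit_pupil_distance_mm: float) -> float:
    """Chief-ray incident angle at a pixel under a simple pinhole-style
    model: tan(theta) = image height / exit pupil distance."""
    return math.degrees(math.atan2(image_height_mm, exit_pupil_distance_mm))

# The angle grows toward the periphery (larger image height) and as the
# exit pupil distance Ed becomes shorter, as described in the text.
print(incident_angle_deg(4.0, 20.0))  # ~11.3 degrees: periphery, short Ed
print(incident_angle_deg(4.0, 80.0))  # ~2.9 degrees: periphery, long Ed
```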
Since the sensor system shading occurs on the above-described principle, the light amount decrease ratio is directly influenced by structural conditions such as the layout of the components of the image sensor 20. Therefore, because of manufacturing errors and the like of the image sensor 20, the light amount decrease ratio becomes asymmetric with respect to the center of an image (the position corresponding to the optical axis of the image capturing optical system). In recent years, the number of light sensing pixels provided in an image sensor has been increasing dramatically and, with the increase, the size of each light sensing pixel has been shrinking. Consequently, the influence of a manufacturing error of the image sensor on the light amount decrease ratio of the sensor system shading is becoming greater.
The light amount decrease ratio of the sensor system shading varies from color to color. As shown in
1-3. Summary of Shading
In short, the sensor system shading has the following characteristics: the light amount decrease ratio is asymmetrical with respect to the center of an image, varies according to the color component, and changes according to the exit pupil distance.
On the other hand, the lens system shading has the following characteristics: the light amount decrease ratio is point symmetrical with respect to the center of an image, does not vary according to the color component, and changes according to the focal length, aperture value, and focus lens position of the image capturing optical system.
In an image capturing apparatus described below, proper shading correction is made in consideration of the characteristics of both the sensor system shading and the lens system shading. In the following, a digital camera as an example of the image capturing apparatus using the image sensor will be described.
2. First Preferred Embodiment
2-1. Configuration
As shown in
In the photosensitive face of the image sensor 20, a plurality of light sensing pixels 2 for photoelectrically converting a light image formed by the taking lens 3 are arranged two-dimensionally. Each of the light sensing pixels 2 of the image sensor 20 has the same configuration as that shown in
On the top face side of the digital camera 1, a shutter start button 44 for accepting an image capture instruction from the user and a main switch 43 for switching on/off of the power are disposed.
In a side face of the digital camera 1, a card slot 45 into which a memory card 9 as a recording medium can be inserted is formed. An image captured by the digital camera 1 is recorded on the memory card 9. The recorded image can also be transferred to an external computer via the memory card 9.
As shown in
The digital camera 1 has two operation modes of an “image capturing mode” for capturing an image and a “playback mode” for playing back the image. The operation modes can be switched by sliding the mode switching lever 46.
The liquid crystal monitor 47 performs various displays such as display of a setting menu and display of an image in the “playback mode”. In an image capturing standby state of the “image capturing mode”, a live view indicative of an almost real-time state of the subject is displayed on the liquid crystal monitor 47. The liquid crystal monitor 47 is used also as a viewfinder for performing framing.
Functions are dynamically assigned to the cross key 48 and the function button group 49 in accordance with the operation state of the digital camera 1. For example, when the cross key 48 is operated in the image capturing standby state of the “image capturing mode”, the magnification of the taking lens 3 is changed.
As shown in the diagram, a microcomputer for controlling the whole apparatus in a centralized manner is provided in the digital camera 1. Concretely, the digital camera 1 has a CPU 51 for performing various computing processes, a RAM 52 used as a work area for computation, and a ROM 53 for storing a program 65 and various data. The components of the digital camera 1 are electrically connected to the CPU 51 and operate under control of the CPU 51.
The taking lens 3, the image sensor 20, an A/D converter 54, an image processor 55, the RAM 52, and the CPU 51 in the configuration shown in
The image processor 55 performs various imaging processes such as a γ correction process and a color interpolation process on an image output from the A/D converter 54. Through the processing of the image processor 55, a color image in which each pixel has three pixel values corresponding to three color components is generated. Such a color image can be regarded as being formed of three color component images: an R-component image, a G-component image, and a B-component image.
A lens driver 56 drives the lens group 31 included in the taking lens 3 and the iris 32 on the basis of a signal from the CPU 51, thereby changing the layout of the lens group 31 and the numerical aperture of the iris 32. The lens group 31 includes a zoom lens which determines the focal length of the taking lens 3 and a focus lens for changing the focus state of a light image. These lenses are also driven by the lens driver 56.
The liquid crystal monitor 47 is electrically connected to the CPU 51 and performs various displays on the basis of a signal from the CPU 51. An operation input part 57 is expressed as a function block of operation members including the shutter start button 44, mode switching lever 46, cross key 48, and function button group 49. When the operation input part 57 is operated, a signal indicative of an instruction related to the operation is generated and supplied to the CPU 51.
Various functions of the CPU 51 are realized by software in accordance with the program 65 stored in the ROM 53. More concretely, the CPU 51 performs computing processes in accordance with the program 65 while using the RAM 52, thereby realizing the various functions. The program 65 is pre-stored in the ROM 53. A new program can also be obtained later by reading it from a memory card 9 on which it is recorded and storing it into the ROM 53. The zoom controller 61, the exposure controller 62, the focus controller 63, and the shading corrector 64 described below are among the functions realized by the program 65.
The zoom controller 61 is a function for adjusting the focal length (magnification) of the taking lens 3 by changing the position of the zoom lens. The zoom controller 61 determines the position to which the zoom lens is to be moved on the basis of the user's operation on the cross key 48, transmits a signal to the lens driver 56, and moves the zoom lens to that position.
The exposure controller 62 is a function for adjusting the brightness of a captured image. The exposure controller 62 sets exposure values (exposure time, aperture value, and the like) with reference to a predetermined program chart on the basis of the brightness of the image captured in the image capturing standby state. The exposure controller 62 then sends a signal to the image sensor 20 and the lens driver 56 so as to achieve these exposure values. By this operation, the numerical aperture of the iris 32 is adjusted in accordance with the set aperture value, and the image sensor 20 performs exposure for the set exposure time.
The focus controller 63 is an auto focus control function for adjusting the focus state of a light image by changing the position of the focus lens. The focus controller 63 derives the position of the focus lens at which the best focus is achieved on the basis of evaluation values of images sequentially captured with time, and transmits a signal to the lens driver 56 to move the focus lens to that position.
The shading corrector 64 is a function of correcting shading in a color image stored in the RAM 52 after process of the image processor 55. The shading corrector 64 makes shading correction by using correction data stored in the ROM 53.
2-2. Correction Data
Correction data used for shading correction will now be described. In the preferred embodiment, as correction data used for shading correction, first correction data 66 and second correction data 67 exist. The first correction data 66 is correction data for correcting the sensor system shading. The second correction data 67 is correction data for correcting the lens system shading.
Since shading is a phenomenon in which a pixel value in an image decreases, it can be corrected by multiplying the pixel value of each of the pixels in the image 7 by a correction factor K corresponding to the light amount decrease ratio R of that pixel. The correction factor K is expressed by the following equation (2):
K=1/(1−R) (2)
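As a minimal illustration of equations (1) and (2), not from the specification itself: a pixel whose light amount decrease ratio is R = 0.2 has lost 20% of its ideal value, and multiplying the observed value by K = 1/(1 − 0.2) = 1.25 restores it. In the sketch below, all names and numbers are hypothetical.

```python
def correction_factor(r: float) -> float:
    """Correction factor K for a light amount decrease ratio R (equation (2))."""
    return 1.0 / (1.0 - r)

v0 = 100.0                          # ideal pixel value with no shading
v1 = 80.0                           # actual pixel value with shading
r = (v0 - v1) / v0                  # equation (1): R = 0.2
print(v1 * correction_factor(r))    # 100.0 -- the ideal value is recovered
```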
Such correction factors are preliminarily obtained by measurement or the like and included in the first and second correction data 66 and 67. However, correction factors corresponding to all of the pixels of the image 7 are not included; only correction factors corresponding to some of the pixels are included. At the time of shading correction, the correction factors corresponding to the other pixels, which are not included in the first and second correction data 66 and 67, are derived by computation (the details will be described later).
Concretely, the first and second correction data 66 and 67 include correction factors corresponding only to pixels located on the coordinate axes of a coordinate system which is set for an image to be corrected. In the digital camera 1, a rectangular coordinate system whose origin O is at the center of the image is set for the image, and the correction factors corresponding to the pixels on its X and Y axes (hereinafter, “axial factors”) are included in the correction data.
Shading in an image does not occur in the origin O (=the center of the image=the position corresponding to the optical axis of the image capturing optical system), and the light amount decrease ratio increases toward the periphery of an image. Consequently, as shown in FIGS. 10 to 12, in both of the first and second correction data 66 and 67, the value of the axial factor corresponding to the origin O of the image is “1” and increases toward the periphery of the image.
As described above, the light amount decrease ratio of the sensor system shading is characterized by being “asymmetric with respect to the origin O”. Therefore, the values of the axial factors for correcting the sensor system shading are set independently on the positive and negative sides of the origin O for each of the X and Y axes.
The light amount decrease ratio of the sensor system shading is also characterized by being “varied according to a color component”: the three pixel values indicated by one pixel decrease at different light amount decrease ratios. Consequently, axial factor groups for correcting the sensor system shading are prepared separately for the three color components of R, G and B, so that the first correction data 66 includes six kinds of axial factor groups in total (two axes for each of the three color components).
The light amount decrease ratio of the lens system shading is characterized by being “point symmetrical with respect to the origin O”. Therefore, the same axial factors for correcting the lens system shading can be used for the X and Y axes. Further, since the light amount decrease ratio of the lens system shading “does not vary according to a color component”, a common axial factor can be used for the three color components of R, G and B. Therefore, the second correction data 67 includes only one kind of axial factor group, which is used commonly for the two coordinate axes and the three color components.
The light amount decrease ratio of the sensor system shading is characterized by “changing according to the exit pupil distance”. Consequently, a plurality of pieces of the first correction data 66 according to the exit pupil distance of the taking lens 3 are stored in the ROM 53 in the digital camera 1. For example, when the digital camera 1 recognizes the exit pupil distance in 10 levels, ten kinds of first correction data 66 which are different from each other are stored in the ROM 53. Each of the ten kinds of the first correction data 66 includes six kinds of axial factor groups.
On the other hand, the light amount decrease ratio of the lens system shading is characterized by “changing according to the focal length, aperture value, and focus lens position which determine the characteristics of the taking lens”. Therefore, a plurality of pieces of the second correction data 67 according to the focal length, aperture value, and focus lens position are stored in the ROM 53 of the digital camera 1. For example, when the digital camera 1 recognizes each of the focal length, aperture value, and focus lens position in five levels, 125 kinds (= 5 × 5 × 5) of second correction data 67 are stored in the ROM 53. Each of the 125 kinds of second correction data 67 includes one kind of axial factor group.
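The storage and selection scheme described in this section can be sketched as follows. This is an illustrative sketch only, under assumptions not in the specification: how the camera quantizes an actual parameter into one of the recognized levels is not described, so a simple nearest-level lookup is shown, and all names and numbers are hypothetical.

```python
import numpy as np

# Hypothetical layout of the stored correction data:
# first_correction_data[i]        -> the six axial factor groups (X and Y
#                                    axes for each of R, G and B) for exit
#                                    pupil distance level i (10 levels).
# second_correction_data[(f,a,p)] -> the single shared axial factor group
#                                    for focal length level f, aperture
#                                    value level a and focus position
#                                    level p (5 x 5 x 5 = 125 pieces).

PUPIL_LEVELS_MM = np.linspace(10.0, 100.0, 10)   # assumed distances

def nearest_level(value: float, levels: np.ndarray) -> int:
    """Index of the stored level closest to the actual parameter value."""
    return int(np.argmin(np.abs(levels - value)))

def select_first_correction_data(first_correction_data, exit_pupil_distance_mm):
    """Pick the piece of first correction data that matches the actual
    exit pupil distance (the selection made in step S12 below)."""
    return first_correction_data[nearest_level(exit_pupil_distance_mm,
                                               PUPIL_LEVELS_MM)]
```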
2-3. Basic Operation
The operation in the image capturing mode of the digital camera 1 will now be described.
When the operation mode is set to the image capturing mode, first, the digital camera 1 enters an image capturing standby state in which the digital camera 1 waits for an operation on the shutter start button 44, and a live view is displayed on the liquid crystal monitor 47 (step S1). When the cross key 48 is operated by the user in the image capturing standby state, the position of the zoom lens is moved by control of the zoom controller 61 and the focal length of the taking lens 3 is changed.
When the shutter start button 44 is half-pressed (“half-press” in step S1), in response to this, exposure values (exposure time and an aperture value) are set by the exposure controller 62. The numerical aperture of the iris 32 is adjusted according to the set aperture value (step S2). Subsequently, auto-focus control is executed by the focus controller 63 and the focus lens is moved to the position where focus is achieved most (step S3).
After the auto-focus control, the digital camera 1 waits for depression of the shutter start button 44 (step S4). This state is maintained while the shutter start button 44 is half-depressed. In the case where the operation of the shutter start button 44 is cancelled in this state (“OFF” in step S4), the process returns to step S1.
When the shutter start button 44 is depressed (“depress” in step S4), in response to this, exposure is made by the image sensor 20 in accordance with the set exposure time, and an image is captured. The captured image is subjected to predetermined processes in the A/D converter 54 and the image processor 55, thereby obtaining a color image in which each pixel has three pixel values corresponding to three color components. The color image is stored in the RAM 52 (step S5).
Subsequently, shading correction is made on the color image stored in the RAM 52 by the shading corrector 64 (step S6). After the shading correcting process, the image is converted into an image file in the Exif (Exchangeable Image File Format) format under control of the CPU 51, and the image file is recorded on the memory card 9. The image file includes tag information, in which identification information of the digital camera 1 and optical characteristic values such as the focal length, aperture value, and focus lens position, as image capturing parameters, are written (step S7). After the image is recorded, the process returns to step S1.
2-4. Shading Correction
The shading correcting process (step S6) performed by the shading corrector 64 will now be described in detail.
First, the exit pupil distance of the taking lens 3 at the time point when the un-corrected image 71 was captured is calculated by the pupil distance calculator 85. The exit pupil distance can be calculated on the basis of the focal length, aperture value, and focus lens position, which are input to the pupil distance calculator 85 from the zoom controller 61, the exposure controller 62, and the focus controller 63, respectively. By substituting these values into a predetermined arithmetic expression, the exit pupil distance is calculated (step S11).
On the basis of the calculated exit pupil distance, the first correction data 66 is selected by the first data selector 81. As described above, the plurality of pieces of first correction data 66 are stored in the ROM 53. One piece according to the actual exit pupil distance of the taking lens 3 is selected from the plurality of pieces of first correction data 66 (step S12).
Next, correction tables 66r, 66g and 66b each in a table form are generated by the first table generator 82 from the selected first correction data 66. Specifically, correction factors corresponding to all of pixels of the un-corrected image 71 are derived from the axial factors included in the first correction data 66, and the correction tables 66r, 66g and 66b including the derived correction factors are generated.
In the correction tables 66r, 66g and 66b, the correction factors corresponding to all of the pixels of the un-corrected image 71 are included in a two-dimensional orthogonal array which is the same as that of the pixels of the un-corrected image 71. The position of each of the correction factors of the correction tables 66r, 66g and 66b is also expressed by a coordinate position in an XY coordinate system (see
From the first correction data 66, three correction tables corresponding to the three color components of R, G and B, specifically the R-component correction table 66r, the G-component correction table 66g, and the B-component correction table 66b, are generated. More concretely, the R-component correction table 66r is generated from the two axial factor groups of the X and Y axes related to the R components out of the six kinds of axial factor groups included in one piece of the first correction data 66. Similarly, the G-component correction table 66g is generated from the two axial factor groups of the X and Y axes related to the G components, and the B-component correction table 66b is generated from the two axial factor groups of the X and Y axes related to the B components.
The value of each of the correction factors in the correction table is derived by referring to the values of the axial factors in the two axial factor groups of the X and Y axes on the basis of the coordinate position. For example, when the coordinate position in the XY coordinate system is expressed as (X, Y), the value of the correction factor of (X, Y)=(a, b) is derived by multiplication of the value of the axial factor of X=a in the axial factor group related to the X axis and the value of the axial factor of Y=b in the axial factor group related to the Y axis.
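The derivation just described amounts to an outer product of the two axial factor groups. A minimal sketch, assuming each axial factor group is stored as a one-dimensional array indexed along its axis (the array layout and names are illustrative, not from the specification):

```python
import numpy as np

def build_correction_table(x_axial: np.ndarray, y_axial: np.ndarray) -> np.ndarray:
    """Derive correction factors for every pixel from the two axial factor
    groups: the factor at (X, Y) = (a, b) is the product of the X-axis
    factor at a and the Y-axis factor at b, so the whole table is an
    outer product of the two groups."""
    return np.outer(y_axial, x_axial)   # table[row, col] = y_axial[row] * x_axial[col]

# Tiny 5 x 5 example: factors are 1 at the origin O (the center entries)
# and grow toward the periphery; each axis may be asymmetrical.
x_axial = np.array([1.20, 1.08, 1.00, 1.10, 1.25])   # X = -2 .. 2
y_axial = np.array([1.15, 1.05, 1.00, 1.06, 1.18])   # Y = -2 .. 2
table = build_correction_table(x_axial, y_axial)
print(table[2, 2])   # 1.0 at the origin O
print(table[0, 4])   # 1.4375 (= 1.15 * 1.25) at a corner
```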
The generated R-component correction table 66r includes the correction factor for correcting the sensor system shading in an R-component image in the un-corrected image 71. Similarly, the G-component correction table 66g includes a correction factor for correcting the sensor system shading in a G-component image. The B-component correction table 66b includes a correction factor for correcting the sensor system shading in a B-component image. The values of the correction factors of the correction tables 66r, 66g, and 66b are asymmetrical with respect to the origin O. The generated correction tables 66r, 66g, and 66b are stored in the RAM 52 (step S13).
The second correction data 67 is selected by the second data selector 83 on the basis of the optical characteristic values at the time point when the un-corrected image 71 was captured. As described above, the plurality of pieces of second correction data 67 are stored in the ROM 53. One piece according to the three optical characteristic values of the focal length, aperture value, and focus lens position is selected from the plurality of pieces of second correction data 67. The focal length, aperture value, and focus lens position are input to the second data selector 83 from the zoom controller 61, the exposure controller 62, and the focus controller 63, respectively, and the second correction data 67 is selected on the basis of these values (step S14).
From the selected second correction data 67, a lens system correction table 67t is generated by the second table generator 84. Specifically, correction factors related to all of the pixels of the un-corrected image 71 are derived from the axial factors included in the second correction data 67, and the lens system correction table 67t including the derived correction factors is generated. The lens system correction table 67t is in the same data format as that of the correction tables 66r, 66g and 66b, and the position of each of the correction factors of the lens system correction table 67t is expressed by the coordinate position in the XY coordinate system.
The value of each of the correction factors of the lens system correction table 67t is also derived on the basis of the coordinate position. The one kind of axial factor group included in the second correction data 67 is referred to commonly for both the X and Y axes, and each correction factor value is derived in the same manner as for the correction tables 66r, 66g and 66b (step S15).
After the four correction tables 66r, 66g, 66b and 67t are generated, shading in the un-corrected image 71 is corrected by using them. At the time of the shading correction, different correction tables are used for the three color component images forming the un-corrected image 71.
First, shading correction is made on the R-component image by the R-component corrector 86 by using the R-component correction table 66r and the lens system correction table 67t. Concretely, each of the pixel values of the R-component image is multiplied by the corresponding correction factor in the R-component correction table 66r, thereby correcting the sensor system shading in the R-component image. Further, each of the pixel values of the R-component image is multiplied by the corresponding correction factor in the lens system correction table 67t, thereby correcting the lens system shading in the R-component image. It is also possible to multiply each of the pixel values of the R-component image by the product of the correction factor in the R-component correction table 66r and the correction factor in the lens system correction table 67t (step S16).
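A sketch of this per-component correction, assuming the component image and the two correction tables are arrays of the same shape (names illustrative, not from the specification):

```python
import numpy as np

def correct_component(image: np.ndarray,
                      sensor_table: np.ndarray,
                      lens_table: np.ndarray) -> np.ndarray:
    """Multiply each pixel value by its sensor system correction factor
    and by its lens system correction factor (steps S16 to S18). As the
    text notes, multiplying by the product of the two tables gives the
    same result."""
    return image * sensor_table * lens_table   # elementwise products

# e.g. corrected_r = correct_component(r_image, table_66r, table_67t)
```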
Similarly, shading in the G-component image is corrected by the G-component corrector 87 by using the G-component correction table 66g and the lens system correction table 67t (step S17). Further, shading in the B-component image is corrected by the B-component corrector 88 by using the B-component correction table 66b and the lens system correction table 67t (step S18). From the individually corrected R-component, G-component and B-component images, a corrected image 72 is formed as the result of the shading correction performed on the un-corrected image 71.
Since the lens system shading does not differ among color components, shading correction is made by using the same lens system correction table 67t for all of the color component images, thereby properly correcting the lens system shading in the un-corrected image 71. On the other hand, since the sensor system shading varies according to the color component, shading correction is made by using the correction tables 66r, 66g and 66b dedicated to the R-component, G-component and B-component images, respectively. Consequently, the sensor system shading in the un-corrected image 71 is also properly corrected. That is, both the lens system shading and the sensor system shading in the un-corrected image 71 can be properly corrected, and the influence of all shading, including color shading, is properly eliminated in the corrected image 72.
As described above in the first preferred embodiment, in the digital camera 1, shading correction is made by using a correction factor in consideration of the characteristics of both the lens system shading and sensor system shading.
Concretely, since the light amount decrease ratio of the sensor system shading is asymmetrical with respect to the origin O, shading correction is made by using correction tables whose correction factors are asymmetrical with respect to the origin O. Since the light amount decrease ratio of the sensor system shading varies according to the color component, a correction table is prepared for each color component image, and shading correction is made by using the table corresponding to each image. On the other hand, the light amount decrease ratio of the lens system shading is point symmetrical with respect to the origin O and does not vary according to the color component. Consequently, shading correction is made by using, commonly for the three color component images, a correction table whose correction factors are point symmetrical with respect to the origin O. In such a manner, shading in an image, including color shading, can be properly corrected.
Since the light amount decrease ratio of the sensor system shading changes according to the exit pupil distance, the first correction data 66 including the correction factor according to the actual exit pupil distance is selectively used from a plurality of candidates. On the other hand, the light amount decrease ratio of the lens system shading changes according to the optical characteristic values (focal length, aperture value, and focus lens position), so that the second correction data 67 including the correction factor according to the actual optical characteristic value is selectively used from a plurality of candidates. Thus, shading in an image can be corrected more properly.
In the digital camera 1, correction factors for all of pixels are not stored but axial factors related to only the positions of the coordinate axes in the coordinate system which is set for an image are stored. From the axial factors, correction factors corresponding to a plurality of pixels are derived. Therefore, as compared with the case where all of correction factors corresponding to the plurality of pixels are stored as the first correction data 66 in the ROM 53, the amount of data to be stored can be made smaller.
3. Second Preferred Embodiment
A second preferred embodiment of the present invention will now be described. Since the configuration and operation of the digital camera 1 of the second preferred embodiment are similar to those of the first preferred embodiment, the points different from the first preferred embodiment will be described.
As described above, the light amount decrease ratio of the sensor system shading is asymmetrical with respect to the origin O of an image. However, the asymmetry of the light amount decrease ratio in the vertical direction (Y axis direction) of an image is smaller than that in the horizontal direction (X axis direction), for the following reason: since the photosensitive face of the photodiode 21 of the light sensing pixel 2 is longer in the vertical direction than in the horizontal direction, the allowable manufacturing tolerance of the image sensor 20 in the vertical direction is wide.
Therefore, when the light amount decrease ratio of the sensor system shading is regarded as symmetrical with respect to the origin O in the vertical direction of an image and shading correction is made by using a correction table of which correction factor values are asymmetrical in the horizontal direction and symmetrical in the vertical direction, sensor system shading can be corrected almost properly.
In the digital camera 1 of the second preferred embodiment, to correct the sensor system shading by using this principle, a correction table whose correction factor values are asymmetrical in the X axis direction and symmetrical in the Y axis direction is used.
At the time of using the first correction data 66 for shading correction, as the values of the axial factors on the negative side in the Y axis direction, the values of the axial factors on the positive side at the coordinate positions obtained by inverting the sign of the Y coordinate are used. For example, as the value of the axial factor at Y = −b, the value of the axial factor at Y = b is used. In such a manner, a correction table whose correction factor values are asymmetrical in the X axis direction and symmetrical in the Y axis direction is generated.
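A minimal sketch of this mirroring, assuming the stored Y-axis axial factors cover only Y = 0 and the positive side (names illustrative, not from the specification):

```python
import numpy as np

def mirror_y_axial(y_axial_nonneg: np.ndarray) -> np.ndarray:
    """Expand axial factors stored only for Y = 0 .. n into the full range
    Y = -n .. n by reusing the factor at Y = b as the factor at Y = -b."""
    return np.concatenate([y_axial_nonneg[:0:-1], y_axial_nonneg])

y_pos = np.array([1.00, 1.05, 1.15])   # stored factors for Y = 0, 1, 2
print(mirror_y_axial(y_pos))           # [1.15 1.05 1.   1.05 1.15]
```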
As described above, in the digital camera 1 of the second preferred embodiment, the first correction data 66 includes, as the axial factors related to the Y axis, values on only one side of the origin, so that the data amount of the first correction data 66 is reduced. Therefore, the amount of data to be stored in the ROM 53 as the first correction data 66 can be reduced. Although only the axial factors corresponding to the pixels on the positive side in the Y axis direction from the origin O are included in this example, conversely, only the axial factors on the negative side may be included.
4. Third Preferred Embodiment
A third preferred embodiment of the present invention will now be described. Since the configuration and operation of the digital camera 1 of the third preferred embodiment are similar to those of the first preferred embodiment, the points different from the first preferred embodiment will be described.
Although a rectangular coordinate system is employed as the coordinate system set for an image to be shading-corrected in the foregoing preferred embodiments, an oblique coordinate system is employed in the third preferred embodiment. Concretely, an oblique coordinate system having its origin O at the center of the image and having U and V axes which extend in the diagonal directions of the image is set.
Also in the case of employing such an oblique coordinate system, in a manner similar to the first preferred embodiment, shading in an image can be properly corrected. To be specific, axial factors as correction factors related only to the positions of the U and V axes are included in the first correction data 66 and the second correction data 67. By expressing the position of a correction factor in a correction table as the coordinate position in a similar oblique coordinate system, the values of correction factors can be derived by referring to the values of the two axial factors of the U and V axes on the basis of the coordinate position.
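The oblique lookup can be sketched as follows, under assumptions the text leaves open: the U axis is taken along the (1, 1) diagonal and the V axis along the (−1, 1) diagonal, so a pixel at rectangular position (x, y) has oblique coordinates u = (x + y)/2 and v = (y − x)/2, and axial factors between stored positions are linearly interpolated. All names are illustrative, not from the specification.

```python
import numpy as np

def axial_lookup(factors: np.ndarray, coord: float) -> float:
    """Axial factor at a signed coordinate, by linear interpolation;
    factors[i] holds the value at coordinate i - n for a 2n+1 array."""
    n = (len(factors) - 1) // 2
    return float(np.interp(coord + n, np.arange(len(factors)), factors))

def oblique_factor(x: float, y: float,
                   u_factors: np.ndarray, v_factors: np.ndarray) -> float:
    """Correction factor at pixel (x, y), derived from factors stored only
    along the oblique U and V axes.

    With basis vectors e_u = (1, 1) and e_v = (-1, 1), solving
    (x, y) = u * e_u + v * e_v gives u = (x + y) / 2, v = (y - x) / 2.
    """
    u, v = (x + y) / 2.0, (y - x) / 2.0
    return axial_lookup(u_factors, u) * axial_lookup(v_factors, v)
```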
In the case where the oblique coordinate system is employed and the light amount decrease ratio of the sensor system shading is regarded as symmetrical with respect to the origin O in the vertical direction of an image in a manner similar to the second preferred embodiment, the axial factors of the first correction data 66 can be commonly used for the U and V axes.
In this coordinate system, the U axis extends in the diagonal direction from the upper left to the lower right of the image, and the V axis extends in the diagonal direction from the lower left to the upper right.
In this case, the light amount decrease ratio of the sensor system shading is regarded as symmetrical with respect to the origin O in the vertical direction of an image. Consequently, in shading correction, a correction table whose correction factor values are asymmetrical in the horizontal direction and symmetrical in the vertical direction is used. Therefore, a change in the value of the correction factor from the upper left to the lower right and a change in the value of the correction factor from the lower left to the upper right are the same. Thus, the axial factors of the first correction data 66 can be commonly used for the U and V axes.
The light amount decrease ratio of the lens system shading is point symmetrical with respect to the origin O. Therefore, also in the second correction data 67, the axial factors can be commonly used for the U and V axes.
As described above, since the digital camera 1 of the third preferred embodiment employs the oblique coordinate system, when the light amount decrease ratio of the sensor system shading is regarded as symmetrical with respect to the origin O in the vertical direction of an image, the axial factors can be shared by two coordinate axes. Thus, the amount of data to be stored in the ROM 53 as first correction data 66 can be reduced.
5. Fourth Preferred Embodiment
A fourth preferred embodiment of the present invention will now be described. Although shading in an image is corrected in the digital camera 1 in the foregoing preferred embodiments, in the fourth preferred embodiment, shading is corrected in a general computer.
The digital camera 101 can have a configuration similar to that of the digital camera 1 of the foregoing preferred embodiments, and captures a color image of the subject in a similar manner. The captured image is not subjected to shading correction but is recorded as it is on the memory card 9 as an Exif image file. The image recorded on the memory card 9 is transferred to the computer 102 via the memory card 9, a dedicated communication cable, or an electric communication line.
The computer 102 is a general computer including a CPU, a ROM, a RAM, a hard disk, a display and a communication part. The CPU, ROM, RAM and the like in the computer 102 realize a function of correcting shading similar to that in the foregoing preferred embodiments. Specifically, the CPU, ROM, RAM and the like function like the shading correcting part shown in
A program is installed into the computer 102 via a recording medium 91 such as a CD-ROM. The CPU, ROM, RAM and the like function according to the program, thereby realizing the function of correcting shading. That is, the general computer 102 functions as an image processing apparatus for correcting shading.
An image transferred from the digital camera 101 is stored on the hard disk of the computer 102. At the time of correcting shading, the image is read from the hard disk into the RAM and prepared so that shading can be corrected. Processes similar to those of the foregoing preferred embodiments are then performed on the prepared image.
The optical characteristic values (focal length, aperture value, and focus lens position) necessary to calculate the exit pupil distance (step S11) and to select the second correction data 67 for the lens system shading (step S14) are obtained from the tag information of the image file. The first correction data 66, the second correction data 67, and the data of the arithmetic expressions and the like necessary to calculate the exit pupil distance are pre-stored on the hard disk of the computer 102. A plurality of kinds of such data may be stored in accordance with the kind of digital camera. By using the data, the shading correction can be properly made on the image also in the general computer 102.
6. Modifications
The preferred embodiments of the present invention have been described above. The present invention is not limited to the foregoing preferred embodiments but may be variously modified.
The first correction data 66 for correcting the sensor system shading may include correction factors that take into consideration a false signal generated due to stray light in the image sensor. The principle of generation of a false signal by stray light will be briefly described below.
As described above, in a light sensing pixel 2 in the peripheral part of the image sensor 20, the light L is incident with an inclination from the optical axis. Consequently, a part of the light may deviate from the photosensitive face of the photodiode 21, be reflected by a neighboring member or the like, and become stray light L1. The stray light L1 is reflected again by the light shielding film 25 and enters the vertical transfer part 22, thereby generating a false signal. Due to the false signal, the pixel value in an image fluctuates.
Since the stray light L1 is generated when the light L enters with inclination from the optical axis, the fluctuation value of the pixel value due to the false signal increases toward the periphery of the image. The stray light L1 enters the vertical transfer part 22 for transferring signal charges of the light sensing pixel 2R on the right side as shown in
That is, the fluctuation of the pixel value due to the false signal increases toward the periphery of an image and is asymmetrical in the horizontal direction of the image. The fluctuations of the pixel value caused by the false signal therefore have characteristics similar to those of the sensor system shading, so that they can be corrected in a manner similar to the sensor system shading. By including, in the first correction data 66, correction factors that take the fluctuations of the pixel value caused by the false signal into consideration, these fluctuations can also be corrected properly.
Although the second correction data 67 has axial factors on both sides of the origin O in the first preferred embodiment, since the light amount decrease ratio of the lens system shading is point symmetrical, the second correction data 67 may include axial factors on only one side of the origin O. It is sufficient to calculate the axial factors on the other side of the origin O in a manner similar to the second preferred embodiment.
Although the second correction data 67 is selected on the basis of three optical characteristic values of the focal length, aperture value, and focus lens position in the foregoing preferred embodiments, the second correction data 67 may be selected on the basis of two of the optical characteristic values or one optical characteristic value.
Although it has been described in the foregoing preferred embodiments that the various functions are realized by the CPU performing computing processes in accordance with a program, all or part of these functions may also be realized by dedicated electric circuits. In particular, by constructing a part that repeats computations as a logic circuit, high-speed computation is realized. Conversely, all or part of the functions described as being realized by electric circuits may be realized by the CPU performing computing processes in accordance with a program.
Although the digital camera 1 has been described as an example in the foregoing preferred embodiments, the technique according to the present invention can be applied to any image capturing apparatus as long as the apparatus captures an image by using the image sensor.
While the invention has been shown and described in detail, the foregoing description is in all aspects illustrative and not restrictive. It is therefore understood that numerous modifications and variations can be devised without departing from the scope of the invention.