The present invention relates to an image processing device, an image processing method, and the like.
An image sensor that includes RGB color filters has been widely used for an imaging device. The amount of light that passes through the color filter increases (i.e., sensitivity increases) as the band of the color filter becomes wider. Therefore, a normal image sensor is designed so that the transmittance characteristics of the RGB color filters overlap each other.
JP-A-2001-174696 discloses a method that implements a phase difference detection process using a pupil division technique. According to the method disclosed in JP-A-2001-174696, RGB images are captured in a state in which the right pupil allows R and G to pass through, and the left pupil allows G and B to pass through, and the phase difference between the R image and the B image that produce parallax is detected.
According to one aspect of the invention, there is provided an image processing device comprising:
a processor comprising hardware,
the processor being configured to implement:
an image acquisition process that acquires an image captured by an image sensor, the image sensor including a first-color filter that has first transmittance characteristics, a second-color filter that has second transmittance characteristics, and a third-color filter that has third transmittance characteristics; and
a correction process that estimates a component value that corresponds to an overlapping region of the first transmittance characteristics and the third transmittance characteristics based on a pixel value that corresponds to a second color among a pixel value that corresponds to a first color, the pixel value that corresponds to the second color, and a pixel value that corresponds to a third color that form the image, and corrects the pixel value that corresponds to the first color, the pixel value that corresponds to the second color, and the pixel value that corresponds to the third color based on the component value that corresponds to the overlapping region,
the second color being a color that is longer in wavelength than the first color and is shorter in wavelength than the third color, and
the processor being configured to implement the correction process that multiplies the pixel value that corresponds to the second color by a first coefficient based on the first transmittance characteristics and the third transmittance characteristics to calculate the component value that corresponds to the overlapping region.
According to another aspect of the invention, there is provided an image processing method comprising:
acquiring an image captured by an image sensor, the image sensor including a first-color filter that has first transmittance characteristics, a second-color filter that has second transmittance characteristics, and a third-color filter that has third transmittance characteristics; and
estimating a component value that corresponds to an overlapping region of the first transmittance characteristics and the third transmittance characteristics by multiplying a pixel value that corresponds to a second color (among a pixel value that corresponds to a first color, the pixel value that corresponds to the second color, and a pixel value that corresponds to a third color that form the image) by a first coefficient based on the first transmittance characteristics and the third transmittance characteristics, and correcting the pixel value that corresponds to the first color, the pixel value that corresponds to the second color, and the pixel value that corresponds to the third color based on the component value that corresponds to the overlapping region, the second color being a color that is longer in wavelength than the first color and is shorter in wavelength than the third color.
Several aspects of the invention may provide an image processing device, an image processing method, and the like that can acquire a high-color-purity image from an image captured using color filters for which the transmittance characteristics overlap each other.
According to one aspect of the invention, an image processing device includes:
a processor comprising hardware,
the processor being configured to implement:
an image acquisition process that acquires an image captured by an image sensor, the image sensor including a first-color filter that has first transmittance characteristics, a second-color filter that has second transmittance characteristics, and a third-color filter that has third transmittance characteristics; and
a correction process that estimates a component value that corresponds to an overlapping region of the first transmittance characteristics and the third transmittance characteristics based on a pixel value that corresponds to a second color among a pixel value that corresponds to a first color, the pixel value that corresponds to the second color, and a pixel value that corresponds to a third color that form the image, and corrects the pixel value that corresponds to the first color, the pixel value that corresponds to the second color, and the pixel value that corresponds to the third color based on the component value that corresponds to the overlapping region.
According to another aspect of the invention, an image processing method includes:
acquiring an image captured by an image sensor, the image sensor including a first-color filter that has first transmittance characteristics, a second-color filter that has second transmittance characteristics, and a third-color filter that has third transmittance characteristics; and
estimating a component value that corresponds to an overlapping region of the first transmittance characteristics and the third transmittance characteristics based on a pixel value that corresponds to a second color among a pixel value that corresponds to a first color, the pixel value that corresponds to the second color, and a pixel value that corresponds to a third color that form the image, and correcting the pixel value that corresponds to the first color, the pixel value that corresponds to the second color, and the pixel value that corresponds to the third color based on the component value that corresponds to the overlapping region.
According to these aspects of the invention, the component value that corresponds to the overlapping region of the first transmittance characteristics and the third transmittance characteristics is estimated based on the pixel value that corresponds to the second color, and the pixel value that corresponds to the first color, the pixel value that corresponds to the second color, and the pixel value that corresponds to the third color are corrected based on the estimated component value that corresponds to the overlapping region. This makes it possible to acquire a high-color-purity image from an image captured using color filters for which the transmittance characteristics overlap each other.
Exemplary embodiments of the invention are described in detail below. Note that the following exemplary embodiments do not in any way limit the scope of the invention defined by the claims laid out herein. Note also that all of the elements described below in connection with the exemplary embodiments should not necessarily be taken as essential elements of the invention.
An outline of several embodiments of the invention is described below. Although an example that utilizes an image sensor that includes an RGB Bayer color filter array is described below, the embodiments of the invention are not limited thereto. It suffices that the image sensor include color filters that overlap each other with respect to the wavelength band.
It is possible to improve color reproducibility by narrowing the band of the transmittance characteristics FB, FG, and FR so that the transmittance characteristics FB, FG, and FR overlap each other to only a small extent. In this case, however, the amount of light that passes through the color filter decreases since the color filter has a narrow band, and it is difficult to achieve high sensitivity. An image sensor is normally designed so that the transmittance characteristics FB, FG, and FR overlap each other in order to achieve high sensitivity. However, since color reproducibility significantly deteriorates if the transmittance characteristics FB, FG, and FR overlap each other to a large extent, the transmittance characteristics are determined taking account of the balance between color reproducibility and sensitivity. Specifically, it has been difficult to capture an image while achieving high sensitivity and high color reproducibility in combination.
As illustrated in
This makes it possible to implement a correction process that reduces (or cancels) the overlapping component (e.g., I·φRB) of the pixel value b that corresponds to the first color, the pixel value g that corresponds to the second color, and the pixel value r that corresponds to the third color. Therefore, it is possible to obtain an image having high color purity (that is obtained using color filters for which the transmittance characteristics overlap each other to only a small extent) from the high-sensitivity captured image captured using the color filters for which the transmittance characteristics FB, FG, and FR overlap each other.
The above configuration is described in detail below (see the following embodiments). As illustrated in
Specifically, the pixel value b that corresponds to the first color, the pixel value g that corresponds to the second color, and the pixel value r that corresponds to the third color are corrected based on the component value I·φRB that corresponds to the overlapping region to calculate the corrected pixel values {b′, g′, r′}={I·φB, I·φG, I·φR}. The corrected pixel values I·φB, I·φG, and I·φR are pixel values from which the component that corresponds to the overlapping region of the transmittance characteristics FB, FG, and FR of the color filters has been removed. This makes it possible to generate a high-color-reproduction image from a high-sensitivity captured image.
The image processing device 100 may be configured as described below. Specifically, the image processing device 100 may include a memory that stores information (e.g., a program and various types of data), and a processor (i.e., a processor including hardware) that operates based on the information stored in the memory. The processor is configured to implement an image acquisition process that acquires an image captured by the image sensor 20 that includes the first-color filter that has the first transmittance characteristics FB, the second-color filter that has the second transmittance characteristics FG, and the third-color filter that has the third transmittance characteristics FR, and a correction process that estimates the component value I·φRB that corresponds to the overlapping region of the first transmittance characteristics FB and the third transmittance characteristics FR based on the pixel value g that corresponds to the second color among the pixel value b that corresponds to the first color, the pixel value g that corresponds to the second color, and the pixel value r that corresponds to the third color that form the image, and corrects the pixel value b that corresponds to the first color, the pixel value g that corresponds to the second color, and the pixel value r that corresponds to the third color based on the component value I·φRB that corresponds to the overlapping region. The second color is a color that is longer in wavelength than the first color and is shorter in wavelength than the third color. The processor is configured to implement the correction process that multiplies the pixel value g that corresponds to the second color by the first coefficient based on the first transmittance characteristics FB and the third transmittance characteristics FR to calculate the component value I·φRB that corresponds to the overlapping region.
The processor may implement the function of each section by individual hardware, or may implement the function of each section by integrated hardware, for example. The processor may be a central processing unit (CPU), for example. Note that the processor is not limited to a CPU. Various other processors such as a graphics processing unit (GPU) or a digital signal processor (DSP) may also be used. The processor may be a hardware circuit that includes an ASIC. The memory may be a semiconductor memory (e.g., SRAM or DRAM), a register, a magnetic storage device (e.g., hard disk drive), or an optical storage device (e.g., optical disk device). For example, the memory stores a computer-readable instruction. Each section of the imaging device (i.e., the image processing device (e.g., the image processing device 100 illustrated in
The operation according to the embodiments of the invention is implemented as described below, for example. An image captured by the image sensor 20 is stored in the storage section. The processor reads (acquires) the image from the storage section, and acquires the pixel value b that corresponds to the first color, the pixel value g that corresponds to the second color, and the pixel value r that corresponds to the third color (of each pixel) from the image. The first coefficient based on the first transmittance characteristics FB and the third transmittance characteristics FR is stored in the storage section. The processor reads the first coefficient from the storage section, multiplies the pixel value g that corresponds to the second color by the first coefficient to calculate the component value I·φRB that corresponds to the overlapping region, and stores the component value I·φRB that corresponds to the overlapping region in the storage section.
Each section of the image processing device 100 is implemented as a module of a program that operates on the processor. For example, the image acquisition section 100 is implemented as an image acquisition module that acquires an image captured by the image sensor 20 that includes the first-color filter that has the first transmittance characteristics FB, the second-color filter that has the second transmittance characteristics FG, and the third-color filter that has the third transmittance characteristics FR. Likewise, the correction processing section 120 is implemented as a correction processing module that multiplies the pixel value g that corresponds to the second color among the pixel value b that corresponds to the first color, the pixel value g that corresponds to the second color, and the pixel value r that corresponds to the third color that form the image by the first coefficient based on the first transmittance characteristics FB and the third transmittance characteristics FR to calculate the component value I·φRB that corresponds to the overlapping region of the first transmittance characteristics FB and the third transmittance characteristics FR.
A first embodiment of the invention is described in detail below. Note that the image sensor 20 is hereinafter appropriately referred to as “image sensor”. The transmittance characteristics (spectral characteristics) {FB, FG, FR, FG1, FG2} and the regions {φB, φG, φR, φRB} of the transmittance characteristics are functions of the wavelength λ, but the transmittance characteristics FB(λ) and the like are referred to as the transmittance characteristics FB and the like for convenience of explanation. The values {I·φB, I·φG, I·φR, I·φRB} obtained by multiplying the values {φB, φG, φR, φRB} by the light intensity I (followed by integration using the wavelength λ) are pixel values or component values.
A method that simultaneously acquires a high-sensitivity image and a high-color-reproduction image is described below. As illustrated in
When the regions φR, φB, and φG are considered to be a light intensity component of light that has passed through filters having high-purity primary-color (RGB) spectral characteristics, a high-color-reproduction image can be obtained if the characteristics of the regions φR, φB, and φG can be calculated in advance.
A method that extracts red and blue high-purity pixel values {r′, b′}={I·φR, I·φB} is described below with reference to
φRB ≈ FG1 = α·FG, 0 < α < 1.0 (1)
The regions φR and φB are calculated from the region φRB calculated using the expression (1). As illustrated in
FR = φR + φRB,
FB = φB + φRB (2)
Since the following expression (3) is satisfied in view of the expressions (1) and (2), the regions φR and φB can be calculated.
φR = FR − φRB ≈ FR − α·FG,
φB = FB − φRB ≈ FB − α·FG (3)
When the intensity of light incident on each pixel of the image sensor is referred to as I, the values represented by the following expression (4) are obtained as the RGB pixel values of each pixel.
r = I·FR,
g = I·FG,
b = I·FB (4)
Specifically, the high-purity pixel values r′ and b′ are calculated using the following expression (5) in view of the expressions (3) and (4).
r′ = I·φR ≈ r − α·g,
b′ = I·φB ≈ b − α·g (5)
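For concreteness, the subtraction of the expression (5) can be sketched as follows. This is a minimal sketch; the pixel values and the coefficient α below are illustrative assumptions, not values from any actual sensor.

```python
def correct_rb(r, g, b, alpha):
    """Expression (5): remove the estimated overlapping component
    (alpha * g) from the red and blue pixel values to obtain the
    high-purity values r' and b'."""
    r_prime = r - alpha * g  # r' = I*phi_R ~ r - alpha*g
    b_prime = b - alpha * g  # b' = I*phi_B ~ b - alpha*g
    return r_prime, b_prime

# Illustrative pixel values and coefficient (assumed, not measured).
r_prime, b_prime = correct_rb(r=120.0, g=80.0, b=60.0, alpha=0.25)
# r' = 120 - 0.25*80 = 100.0, b' = 60 - 0.25*80 = 40.0
```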
A method that extracts a green high-purity pixel value g′ (=I·φG) is described below with reference to
The gain of the green spectral characteristics FG is increased using the gain β to obtain the spectral characteristics FG2 (see the following expression (6)).
FG2 = β·FG,
1.0 < β (6)
The region φG is calculated by the following expression (7) from the spectral characteristics FG2 calculated using the expression (6) and the regions φR, φRB, and φB calculated using the expressions (1) and (3).
φG = β·FG − (φR + φRB + φB) (7)
The high-purity pixel value g′ is calculated using the following expression (8) in view of the expressions (1), (4), and (7).
g′ = I·φG = β·g − (r′ + α·g + b′) (8)
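The green correction can be sketched in the same way, with the corrected values r′ and b′ of the expression (5) and the component α·g subtracted from β·g. All values below (pixel values, α, β) are illustrative assumptions.

```python
def correct_g(r_prime, g, b_prime, alpha, beta):
    """Expression (8): g' = beta*g - (r' + alpha*g + b'), where r' and b'
    are the corrected red/blue values of expression (5)."""
    return beta * g - (r_prime + alpha * g + b_prime)

# Illustrative values: r' = 100, b' = 40 (as expression (5) would produce
# from r = 120, g = 80, b = 60 with alpha = 0.25); beta = 3.0 is assumed.
g_prime = correct_g(r_prime=100.0, g=80.0, b_prime=40.0, alpha=0.25, beta=3.0)
# 3*80 - (100 + 0.25*80 + 40) = 240 - 160 = 80.0
```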
The high-purity primary-color pixel values {r′, g′, b′} = {I·φR, I·φG, I·φB} are thus obtained.
Note that an image acquired by an image sensor designed so that the RGB characteristics overlap each other to a certain extent can be used directly as the high-sensitivity image. Therefore, the pixel values r, g, and b can be used for the high-sensitivity image.
The coefficients α and β are calculated (estimated) as described below. The coefficients α and β calculated as described below may be stored in the image processing device 100, or the image processing device 100 may acquire the spectral characteristics FB, FG, and FR, and calculate the coefficients α and β based on the spectral characteristics FB, FG, and FR.
As illustrated in
The coefficient α is calculated as described below. The overlapping region φRB of the spectral characteristics FB and the spectral characteristics FR is biased with respect to the spectral characteristics FG. The wavelength that corresponds to the maximum value of the spectral characteristics FG is referred to as λC, and the wavelength that corresponds to the cross point of the spectral characteristics FB and the spectral characteristics FR is referred to as λ′C. The spectral characteristics of the overlapping region φRB and the spectral characteristics FG are respectively considered to be a vector VRB and a vector VG that include transmittance components that respectively correspond to a wavelength λ0, a wavelength λ1, a wavelength λ2, . . . , and a wavelength λN (see the following expression (9)).
VRB = [φRB(λ0), φRB(λ1), φRB(λ2), . . . , φRB(λN)],
VG = [FG(λ0), FG(λ1), FG(λ2), . . . , FG(λN)] (9)
The coefficient α that minimizes the Euclidean distance between the vector VRB and the vector VG is used (see the following expression (10)). This makes it possible to improve the similarity between the overlapping region φRB and the spectral characteristics αFG (=FG1), and optimize the effect of reducing a leakage light component.
α = min{√((VRB − α·VG)²)} (10)
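Minimizing the Euclidean distance of the expression (10) over α is an ordinary least-squares problem with the closed form α = (VG·VRB)/(VG·VG). A minimal sketch follows, assuming illustrative Gaussian filter spectra and modeling the overlapping region φRB as the elementwise minimum of FB and FR (both assumptions, not actual sensor characteristics).

```python
import numpy as np

# Illustrative Gaussian filter spectra sampled at wavelengths
# lambda_0 .. lambda_N (assumed shapes, not measured data).
lam = np.linspace(400.0, 700.0, 61)          # nm
FB = np.exp(-((lam - 460.0) / 40.0) ** 2)    # blue filter
FG = np.exp(-((lam - 540.0) / 45.0) ** 2)    # green filter
FR = np.exp(-((lam - 610.0) / 45.0) ** 2)    # red filter

# Vector V_RB of expression (9): the overlapping region phi_RB,
# modeled here as the elementwise minimum of FB and FR (assumption).
V_RB = np.minimum(FB, FR)
V_G = FG

# Expression (10): alpha = argmin ||V_RB - alpha * V_G||, which has the
# least-squares closed form below.
alpha = float(np.dot(V_G, V_RB) / np.dot(V_G, V_G))
# alpha lies in (0, 1) for these spectra, consistent with expression (1)
```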
Note that the coefficient α may be calculated using another method. For example, the regions φR and φB may be calculated using the expressions (1) and (3) while changing the coefficient α as a variable, the pixel values r′ and b′ may be calculated using the expression (5) on the assumption that the light intensity I is the intensity of white light having flat spectral characteristics, and the coefficient α may be calculated (searched) so that the white balance is optimized.
The coefficient β is calculated as described below. As illustrated in
VR = [FR(λR), FR(λR+1), . . . , FR(λN)],
VGR2 = [FG2(λR), FG2(λR+1), . . . , FG2(λN)],
VB = [FB(λ0), FB(λ1), . . . , FB(λB)],
VGB2 = [FG2(λ0), FG2(λ1), . . . , FG2(λB)] (11)
The coefficient β that minimizes the Euclidean distance between the vector VR and the vector VGR2 and the Euclidean distance between the vector VB and the vector VGB2 is used (see the following expression (12)). This makes it possible to improve the similarity between the regions φR and φB and the spectral characteristics βFG (=FG2), and optimize the effect of reducing a leakage light component.
β = min{√((VB − VGB2)²) + √((VR − VGR2)²)} (12)
Note that the coefficient β may be calculated using another method. For example, the region φG may be calculated using the expression (7) while changing the coefficient β as a variable, the pixel values r′, g′, and b′ may be calculated using the expressions (5) and (8) on the assumption that the light intensity I is the intensity of white light having flat spectral characteristics, and the coefficient β may be calculated (searched) so that the white balance is optimized.
Although an example in which the spectral characteristics FB, FG, and FR are the spectral characteristics of the color filters of the image sensor has been described above, the spectral characteristics FB, FG, and FR may include the spectral characteristics of the imaging optical system, the sensitivity characteristics of the pixels of the image sensor, the spectral characteristics of a light source, and the like. In this case, the spectral characteristics FB, FG, and FR are determined corresponding to illumination and the imaging conditions using the image sensor, and the coefficients α and β may be calculated in advance. When an image is captured using external light or the like, the spectral characteristics FB, FG, and FR may be detected each time an image is captured, and the coefficients α and β may be calculated from the detected spectral characteristics.
The imaging device includes an imaging lens 14 (optical system), an imaging section 40, a monitor display section 50, a spectral characteristic detection section 60, and the image processing device 100. The image processing device 100 includes a demosaicing section 130, a quality enhancement processing section 140, a monitor image generation section 150, a high-purity spectral separation processing section 160, a quality enhancement processing section 180, an imaging mode selection section 190, and a spectral characteristic storage section 195.
The imaging section 40 includes the image sensor 20 and an imaging processing section. The image sensor 20 is a color image sensor that includes a Bayer array, for example. The imaging processing section performs an imaging operation control process, an analog pixel signal A/D conversion process, and the like, and outputs a Bayer-array image {r, gr, gb, b}.
The demosaicing section 130 performs a demosaicing process on the Bayer-array image to generate an image (RGB image) {r, g, b} that has RGB pixel values on a pixel basis.
Note that the image acquisition section 110 illustrated in
Since the color image sensor is designed so that the RGB characteristics FB, FG, and FR overlap each other to a certain extent, the image {r, g, b} can be used directly as the high-sensitivity image. Note that an image sensor having normal overlapping characteristics may be used as the color image sensor, or an image sensor having higher overlapping characteristics may be used as the color image sensor. In the latter case, each color band can be widened, and an image can be captured with higher sensitivity. For example, a bright image can be captured in a dark place.
The quality enhancement processing section 140 performs a quality enhancement process (e.g., noise reduction process and grayscale correction process) on the high-sensitivity image {r, g, b}, and outputs the resulting image to the monitor image generation section 150. The monitor image generation section 150 displays the image on the monitor display section 50.
The high-purity spectral separation processing section 160 includes a coefficient acquisition section 170 that acquires the coefficients α and β, and a correction processing section 120 that extracts high-purity primary-color components from the RGB image {r, g, b} generated by the demosaicing process.
The spectral characteristic storage section 195 stores the spectral characteristics {FB, FG, FR} of the color filters. The spectral characteristics may be acquired from the imaging section 40, or may be stored in the spectral characteristic storage section 195 in advance, for example. The spectral characteristic detection section 60 that acquires the spectral characteristics of external light or illumination light may be further provided, and the spectral characteristics {FB, FG, FR} may be acquired together with the spectral characteristics of external light or illumination light.
The coefficient acquisition section 170 reads the spectral characteristics {FB, FG, FR} from the spectral characteristic storage section 195, and calculates the coefficients α and β as described above based on the spectral characteristics {FB, FG, FR}. Alternatively, the coefficients α and β calculated in advance from the spectral characteristics {FB, FG, FR} may be stored in the spectral characteristic storage section 195. In this case, the coefficient acquisition section 170 acquires the coefficients α and β by reading them from the spectral characteristic storage section 195.
The correction processing section 120 calculates the high-color-reproduction image {r′, g′, b′} based on the coefficients α and β by performing a process based on the above method, and outputs the high-color-reproduction image.
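The whole per-pixel correction applied by the correction processing section 120 (the expressions (5) and (8)) can be sketched over a full RGB image as follows; the coefficients and pixel values are illustrative assumptions.

```python
import numpy as np

def high_purity_image(rgb, alpha, beta):
    """Apply the corrections of expressions (5) and (8) to an
    H x W x 3 RGB image (a sketch; alpha and beta are given)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    r_p = r - alpha * g                       # expression (5)
    b_p = b - alpha * g                       # expression (5)
    g_p = beta * g - (r_p + alpha * g + b_p)  # expression (8)
    return np.stack([r_p, g_p, b_p], axis=-1)

# Illustrative 2 x 2 image with constant pixel values (not real data).
img = np.tile(np.array([120.0, 80.0, 60.0]), (2, 2, 1))
out = high_purity_image(img, alpha=0.25, beta=3.0)
# Each pixel becomes (r', g', b') = (100.0, 80.0, 40.0)
```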
The quality enhancement processing section 180 performs an adjustment process (e.g., color balance adjustment process) on the high-color-reproduction image {r′, g′, b′} in order to improve color reproducibility. Specifically, the quality enhancement processing section 180 performs an appropriate quality enhancement process (e.g., white balance process) on the high-color-reproduction image {r′, g′, b′}. The quality enhancement processing section 180 outputs the resulting image to the monitor image generation section 150, and the monitor image generation section 150 displays the image on the monitor display section 50.
The imaging mode selection section 190 selects the monitor display image.
Specifically, the imaging mode selection section 190 instructs the monitor image generation section 150 to display the high-sensitivity image or the high-color-reproduction image (that has been selected) on the monitor. The imaging mode selection section 190 may instruct the monitor image generation section 150 to display both the high-sensitivity image and the high-color-reproduction image on the monitor. The image may be selected based on an instruction that has been input by the user through an operation section (not illustrated in the drawings), for example.
Alternatively, an external light sensor may be provided, and the image may be selected based on the brightness detected by the external light sensor. For example, the high-color-reproduction image may be selected when the brightness of external light is higher than a threshold value, and the high-sensitivity image may be selected when the brightness of external light is lower than the threshold value.
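The brightness-based selection described above can be sketched as follows; the function and variable names and the threshold value are illustrative, not part of the described device.

```python
def select_monitor_image(brightness, threshold,
                         high_sensitivity_img, high_color_img):
    """Pick the monitor image from the external-light brightness (sketch).

    A bright scene favors color reproducibility; a dark scene favors
    sensitivity."""
    return high_color_img if brightness > threshold else high_sensitivity_img

# Illustrative usage with an assumed normalized brightness and threshold.
image = select_monitor_image(brightness=0.9, threshold=0.5,
                             high_sensitivity_img="high_sensitivity",
                             high_color_img="high_color_reproduction")
# -> "high_color_reproduction"
```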
Although an example in which the image processing device 100 is included in the imaging device has been described above, the image processing device 100 may be provided separately from the imaging device. In this case, the imaging device (that is provided separately from the image processing device 100) records captured image data and data that represents the spectral characteristics {FB, FG, FR} in a recording device (not illustrated in the drawings). The image processing device 100 acquires the recorded data, and calculates the high-color-reproduction image {r′, g′, b′} from the acquired data. Specifically, the image processing device 100 may calculate the high-color-reproduction image by performing a post-capture process. When the image processing device 100 is provided separately from the imaging device, the image processing device 100 may be an information processing device such as a PC, for example.
According to the above imaging system, it is possible to select the high-sensitivity image when it is desired to give priority to sensitivity in a dark environment, and select the high-color-reproduction image when it is desired to give priority to color reproducibility in a bright environment. It is possible to capture the two images in real time, reduce the amount of data, and flexibly adapt the imaging process to the objective, as compared with a known method that captures images a plurality of times while changing the conditions. According to the first embodiment, since the process basically depends only on the characteristics of the color filters of the image sensor, it is unnecessary to provide an optical filter that extracts high-purity primary-color pixel values, or a mechanism that mechanically inserts and removes such an optical filter. Therefore, the imaging system can be implemented easily and practically.
According to the first embodiment, the second color (green) is a color that is longer in wavelength than the first color (blue) and is shorter in wavelength than the third color (red) (see
A normal image sensor is designed so that the overlapping region φRB of the spectral characteristics FB of the color filter that corresponds to the first color (blue) and the spectral characteristics FR of the color filter that corresponds to the third color (red) is similar to the spectral characteristics FG of the color filter that corresponds to the second color (green) (see above). According to the first embodiment, since it is possible to separate the high-color-purity image, spectral characteristics differing from those of a normal image sensor may also be employed. In this case, the similarity between the overlapping region φRB and the spectral characteristics FG may be increased. It is possible to estimate the component value I·φRB that corresponds to the overlapping region to be α·g based on the similarity between the overlapping region φRB and the spectral characteristics FG, and perform the correction process that increases the color purity of the pixel value b that corresponds to the first color, the pixel value g that corresponds to the second color, and the pixel value r that corresponds to the third color based on the estimated value.
According to the first embodiment, the coefficient acquisition section 170 acquires the first coefficient α that maximizes the similarity between the second transmittance characteristics FG multiplied by the first coefficient α and the overlapping region φRB (see the expressions (9) and (10)).
This makes it possible to determine the coefficient α that maximizes the similarity between the second transmittance characteristics FG and the overlapping region φRB based on the similarity between the second transmittance characteristics FG and the overlapping region φRB. Therefore, it is possible to improve the accuracy of the correction process that increases the color purity (i.e., the correction process that subtracts the component α·g that corresponds to the overlapping region from the pixel values r and b).
According to the first embodiment, the first coefficient α that maximizes the similarity is the first coefficient α that minimizes the Euclidean distance between the vector VRB and the vector VG, the vector VRB corresponding to the overlapping region φRB, and including the transmittance components that respectively correspond to a plurality of wavelengths (λ0, λ1, . . . , and λN), and the vector VG corresponding to the second transmittance characteristics FG multiplied by the first coefficient α, and including the transmittance components that respectively correspond to a plurality of wavelengths (λ0, λ1, . . . , and λN) (see the expressions (9) and (10)).
This makes it possible to calculate the similarity using the Euclidean distance between the vector VRB and the vector VG as an index. The first coefficient α is determined on the assumption that the similarity becomes a maximum when the index becomes a minimum.
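As an illustrative sketch (not part of the claimed configuration), minimizing the Euclidean distance between the vector VRB and α times the vector VG has the closed-form least-squares solution α = (VG·VRB)/(VG·VG); the function name and vector inputs below are assumptions:

```python
def first_coefficient(v_rb, v_g):
    """Least-squares alpha minimizing ||v_rb - alpha * v_g||^2.

    v_rb : transmittance samples of the overlapping region phi_RB
           at wavelengths (lambda_0, ..., lambda_N)
    v_g  : transmittance samples of the second (green) characteristics FG
           at the same wavelengths
    (Illustrative sketch; names are hypothetical.)
    """
    # Setting d/d(alpha) of the squared distance to zero gives the
    # orthogonal projection coefficient:
    num = sum(g * t for g, t in zip(v_g, v_rb))
    den = sum(g * g for g in v_g)
    return num / den
```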
According to the first embodiment, the correction processing section 120 corrects the pixel value b that corresponds to the first color by subtracting the component value α·g that corresponds to the overlapping region φRB from the pixel value b that corresponds to the first color (pixel value b′), and corrects the pixel value r that corresponds to the third color by subtracting the component value α·g that corresponds to the overlapping region φRB from the pixel value r that corresponds to the third color (pixel value r′) (see the expressions (1) to (5)).
This makes it possible to reduce the component value I·φRB that corresponds to the overlapping region φRB from the pixel value b that corresponds to the first color and the pixel value r that corresponds to the third color using the component value α·g that corresponds to the overlapping region φRB estimated using the first coefficient. Therefore, it is possible to calculate the pixel values b and r having high color purity from the captured image for which the spectral characteristics of the color filters overlap each other.
According to the first embodiment, the correction processing section 120 corrects the pixel value g that corresponds to the second color by subtracting the component value α·g that corresponds to the overlapping region, the corrected pixel value b′ that corresponds to the first color, and the corrected pixel value r′ that corresponds to the third color from the pixel value g that corresponds to the second color that is multiplied by the second coefficient β based on the first transmittance characteristics FB and the third transmittance characteristics FR (pixel value g′) (see
This makes it possible to estimate the component value β·g of the spectral characteristics obtained by increasing the gain of the second transmittance characteristics FG using the second coefficient β, and calculate the pixel value g′ that corresponds to the second color and has high color purity based on the estimated component value β·g. Specifically, it is possible to perform the correction process that reduces (or cancels) the overlapping component of the pixel value b that corresponds to the first color and the pixel value r that corresponds to the third color from the component value β·g.
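Taken together, the corrections described above amount to r′ = r − α·g, b′ = b − α·g, and g′ = β·g − α·g − r′ − b′. A minimal sketch, assuming the coefficients α and β have already been acquired (the function name is hypothetical):

```python
def correct_pixels(r, g, b, alpha, beta):
    """Color-purity correction sketch for one pixel.

    r' = r - alpha*g, b' = b - alpha*g   (cf. expressions (1) to (5))
    g' = beta*g - alpha*g - r' - b'      (second-color correction)
    alpha, beta are assumed to be precomputed from the spectral
    characteristics; this is an illustration, not the claimed method.
    """
    overlap = alpha * g          # component value corresponding to phi_RB
    r1 = r - overlap             # high-purity third-color value r'
    b1 = b - overlap             # high-purity first-color value b'
    g1 = beta * g - overlap - r1 - b1  # high-purity second-color value g'
    return r1, g1, b1
```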
According to the first embodiment, the coefficient acquisition section 170 acquires the second coefficient β that maximizes the similarity of a short-wavelength-side part (i.e., a part that corresponds to a wavelength shorter than the wavelength λB) of the first transmittance characteristics FB and a long-wavelength-side part (i.e., a part that corresponds to a wavelength longer than the wavelength λR) of the third transmittance characteristics FR, with the second transmittance characteristics FG multiplied by the second coefficient β (see
This makes it possible to determine the coefficient β that maximizes the similarity of the second transmittance characteristics FG with part of the first transmittance characteristics FB and part of the third transmittance characteristics FR based on the similarity of the second transmittance characteristics FG with part of the first transmittance characteristics FB and part of the third transmittance characteristics FR. Therefore, it is possible to improve the accuracy of the correction process that increases the color purity (i.e., the correction process that subtracts the components r′, b′, and α·g that correspond to the overlapping region from the component value β·g).
According to the first embodiment, a given wavelength that is shorter than the wavelength that corresponds to the maximum transmittance represented by the first transmittance characteristics FB is referred to as a first wavelength λB, and a given wavelength that is longer than the wavelength that corresponds to the maximum transmittance represented by the third transmittance characteristics FR is referred to as a second wavelength λR (see
This makes it possible to calculate the similarity using the Euclidean distance between the vector VB and the vector VGB2 and the Euclidean distance between the vector VR and the vector VGR2 as indices. The second coefficient β is determined on the assumption that the similarity becomes a maximum when the indices become a minimum.
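Under the same least-squares reading, the second coefficient β can be obtained by jointly projecting the band-edge parts of FB and FR onto the corresponding parts of FG; a sketch under that assumption (function and parameter names are hypothetical):

```python
def second_coefficient(v_b_short, v_g_short, v_r_long, v_g_long):
    """Least-squares beta fitting beta*FG to the band edges of FB and FR.

    v_b_short / v_g_short : samples of FB / FG below the wavelength lambda_B
    v_r_long  / v_g_long  : samples of FR / FG above the wavelength lambda_R
    Minimizes the sum of the two squared Euclidean distances jointly.
    (Illustrative sketch only.)
    """
    target = list(v_b_short) + list(v_r_long)
    basis = list(v_g_short) + list(v_g_long)
    num = sum(g * t for g, t in zip(basis, target))
    den = sum(g * g for g in basis)
    return num / den
```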
According to the first embodiment, a display control section (monitor image generation section 150) performs a control process that displays at least one of the high-sensitivity image (pixel values b, g, and r) and the high-color-reproduction image on a display section (monitor display section 50), the high-sensitivity image being an image acquired by the image acquisition section 110 (e.g., demosaicing section 130), and the high-color-reproduction image being an image based on the pixel value b that corresponds to the first color, the pixel value g that corresponds to the second color, and the pixel value r that corresponds to the third color that have been corrected by the correction processing section 120.
According to the first embodiment, it is possible to generate the high-color-reproduction image from the high-sensitivity image. Specifically, the high-sensitivity image and the high-color-reproduction image can be simultaneously acquired from a single captured image. The display control section can select one of the high-sensitivity image and the high-color-reproduction image that have been simultaneously acquired, and display the selected image on the display section, or simultaneously display both the high-sensitivity image and the high-color-reproduction image on the display section.
According to the first embodiment, a selection section (imaging mode selection section 190) acquires external brightness information, and selects one of the high-sensitivity image and the high-color-reproduction image based on the external brightness information. The display control section (monitor image generation section 150) performs the control process that displays the image selected by the selection section.
This makes it possible to selectively display the high-sensitivity image and the high-color-reproduction image corresponding to the external brightness. Specifically, it is possible to display the high-sensitivity image (for which the spectral characteristics overlap each other) in a dark environment, and display the high-color-reproduction image (for which the overlapping component is reduced or deleted) in a bright environment.
A second embodiment of the invention is described below. An imaging device according to the second embodiment may be configured in the same manner as the imaging device illustrated in
In the second embodiment, the spectral characteristics of the regions {φR, φRB, φB} are estimated based on a relational expression. Specifically, the pixel values {I·φR, I·φRB, I·φB}={r′, gx, b′} that correspond to the regions {φR, φRB, φB} and the primary-color pixel values {r, g, b} obtained by the demosaicing process satisfy the relationship represented by the following expression (13). Note that the case where I·φRB=gx is described below for convenience of explanation.
r = I·FR = I·(φR+φRB) = r′+gx,
b = I·FB = I·(φB+φRB) = b′+gx (13)
The relational expression represented by the following expression (14) is obtained (see the expression (13)) provided that I·φRB=gx is an unknown (unknown variable).
gx=(unknown),
r′=r−gx,
b′=b−gx (14)
The expression (14) represents that the pixel values {r′, b′} are uniquely determined when the unknown gx has been determined. When the value gx is correct, the pixel values {r′, b′} can be calculated as correct values.
However, there are a number of solutions at this stage with respect to the candidate values {r′, gx, b′}. In order to determine the maximum likelihood solution from a number of solutions, reference values {IR, IRB, IB} that exist in the vicinity of the candidate values {r′, gx, b′} are calculated. The occupancy ratio of the region φR with respect to the spectral characteristics FR is referred to as γR, the occupancy ratio of the region φB with respect to the spectral characteristics FB is referred to as γB, and the reference values are represented by the following expression (15). Note that α is a coefficient that is calculated as described above in connection with the first embodiment.
IR = I·φR = I·(γR·FR) = γR·r,
IRB = I·φRB = I·(α·FG) = α·g,
IB = I·φB = I·(γB·FB) = γB·b (15)
In order to determine the maximum likelihood solution with respect to the reference values {IR, IRB, IB} from the candidate values {r′, gx, b′}, the candidate values {r′, gx, b′} that minimize an error therebetween are calculated.
The spectral characteristics {FR, FG, FB} are determined by the imaging conditions, and the coefficient α, the occupancy ratio γR, and the occupancy ratio γB are known information. Therefore, the spectral characteristics {FR, FG, FB}, the coefficient α, the occupancy ratio γR, and the occupancy ratio γB are substituted into the expression (15) to calculate the reference values {IR, IRB, IB}. The reference values and the expression (14) are substituted into the following expression (16) (evaluation function E(gx)) to calculate the unknown gx that minimizes the evaluation function E(gx). Specifically, the unknown gx that minimizes an error between the candidate values {r′, gx, b′} (calculated using the unknown gx) and the reference values {IR, IRB, IB} is determined (see
E(gx) = (r′−IR)² + (gx−IRB)² + (b′−IB)² (16)
The unknown gx that minimizes the evaluation function E(gx) may be calculated (searched) using the expression (16) while changing the unknown gx, or the expression (16) may be expanded as a quadratic function of the unknown gx, and analytically solved to determine the unknown gx.
Note that the ranges of the candidate values {r′, gx, b′} are limited as represented by the following expression (17). Therefore, the unknown gx is determined so that the conditions represented by the following expression (17) are satisfied.
0≦r′<(maximum preset pixel value)
0≦gx<(maximum preset pixel value)
0≦b′<(maximum preset pixel value) (17)
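Substituting r′ = r − gx and b′ = b − gx together with the reference values of expression (15) into expression (16) and setting dE/d(gx) = 0 gives the analytic minimizer gx = ((1−γR)·r + α·g + (1−γB)·b)/3, which is then clipped to the range of expression (17). A minimal sketch of this analytic route (function name assumed):

```python
def estimate_gx(r, g, b, alpha, gamma_r, gamma_b, max_pixel=255):
    """Analytic minimizer of E(gx) = (r'-IR)^2 + (gx-IRB)^2 + (b'-IB)^2.

    With r' = r - gx, b' = b - gx, IR = gamma_r*r, IRB = alpha*g,
    IB = gamma_b*b (expression (15)), dE/d(gx) = 0 yields:
        gx = ((1-gamma_r)*r + alpha*g + (1-gamma_b)*b) / 3
    Illustrative sketch; the embodiment may instead search gx iteratively.
    """
    gx = ((1.0 - gamma_r) * r + alpha * g + (1.0 - gamma_b) * b) / 3.0
    # Enforce the range conditions of expression (17):
    return min(max(gx, 0.0), float(max_pixel))
```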
According to the second embodiment, the correction processing section 120 calculates the evaluation value E(gx) that represents the similarity of the candidate value for the component value gx=I·φRB that corresponds to the overlapping region φRB, the value b′=b−gx obtained by correcting the pixel value b that corresponds to the first color (blue) using the candidate value gx, and the value r′=r−gx obtained by correcting the pixel value r that corresponds to the third color (red) using the candidate value gx, with the pixel value b that corresponds to the first color, the pixel value g that corresponds to the second color, and the pixel value r that corresponds to the third color (see the expressions (13) to (16)). The correction processing section 120 calculates the component value gx=I·φRB that corresponds to the overlapping region φRB by determining the candidate value gx that maximizes the similarity based on the evaluation value E(gx) (see
This makes it possible to calculate the corrected pixel values {r′, g′, b′} based on the similarity of the candidate values {r′, gx, b′} for the corrected pixel values with the pixel value b that corresponds to the first color, the pixel value g that corresponds to the second color, and the pixel value r that corresponds to the third color. Specifically, the evaluation value E(gx) is calculated as a similarity index, and the component value gx that corresponds to the overlapping region φRB is determined on the assumption that the similarity becomes a maximum when the evaluation value E(gx) becomes a minimum, for example. When the component value gx that corresponds to the overlapping region φRB has been determined (calculated), the high-color-purity pixel values {r′, g′, b′} can be calculated in the same manner as described above in connection with the first embodiment.
A third embodiment of the invention is described below. In the third embodiment, a phase difference detection process is performed using the right pupil and the left pupil to reduce leakage light between the right-pupil image and the left-pupil image.
Although an example that divides the pupil of a monocular optical system is described below, the configuration is not limited thereto. For example, a twin-lens optical system may be used to provide two pupils. Although an example in which the first pupil is the right pupil and the second pupil is the left pupil is described below, the configuration is not limited thereto. Specifically, the pupil need not necessarily be divided into the right pupil and the left pupil. It suffices that the pupil be divided into the first pupil and the second pupil along an arbitrary direction that is perpendicular to the optical axis of the imaging optical system.
The optical filter 12 includes a right-pupil filter FL1 (first filter) that has transmittance characteristics fR, and a left-pupil filter FL2 (second filter) that has transmittance characteristics fL. The optical filter 12 is provided at the pupil position (e.g., a position where the aperture is provided) of the imaging optical system 10. The filter FL1 corresponds to the right pupil, and the filter FL2 corresponds to the left pupil. When the light intensity of light incident on the image sensor is referred to as I, IR(x)=I(x)·fR, IL(x)=I(x)·fL, and I(x)=IR(x)+IL(x). Note that x is the position (coordinates) in the horizontal direction (pupil division direction).
The transmittance characteristics {fR, fL} are obtained by dividing the imaging wavelength band into two spectral (band) components. As illustrated in
The image captured by the image sensor is obtained as the component values obtained by multiplying the incident light intensity I by the RGB spectral characteristics {FR, FG, FB} (see
The spectral characteristics {FR, FG, FB} are divided into four regions. The overlapping region of the spectral characteristics FR and the transmittance characteristics fR is referred to as φGR, and the overlapping region of the spectral characteristics FB and the transmittance characteristics fL is referred to as φGB. A region obtained by subtracting the overlapping component φRB (=φGR+φGB) from the spectral characteristics FR is referred to as φR, and a region obtained by subtracting the overlapping component φRB (=φGR+φGB) from the spectral characteristics FB is referred to as φB.
It is necessary to accurately separate the right-pupil image and the left-pupil image in order to accurately detect phase difference information. Since separate bands {fR, fL} are assigned to the right pupil and the left pupil, it is possible to separate the right-pupil image and the left-pupil image when the right-pupil image is formed by the pixel value r=I·FR, and the left-pupil image is formed by the pixel value b=I·FB, for example.
However, since the filter characteristics of the image sensor are not completely separated into the bands {fR, fL} assigned to the right pupil and the left pupil, the pupil image component is mixed as leakage light, and acquired. Specifically, the pixel value r=I·FR of the right-pupil image includes the component φGB that overlaps the spectral characteristics fL of the left pupil, and the pixel value b=I·FB of the left-pupil image includes the component φGR that overlaps the spectral characteristics fR of the right pupil. The right-pupil image and the left-pupil image are mixed in this manner, and the degree of separation decreases.
The phase difference detection accuracy deteriorates due to a decrease in the degree of separation. For example, when the image pattern is a pattern in which the color of the object changes from white to black (see
Specifically, the component that has passed through the right pupil and the component that has passed through the left pupil are a convolution of a point spread function PSFL or PSFR and the profile of the object. Therefore, the component that has passed through the right pupil and the component that has passed through the left pupil produce parallax (phase difference). Since the pixel value r of the right-pupil image includes the component IR·(φR+φGR) that has passed through the right pupil and the component IL·φGB that has passed through the left pupil, the profile of the pixel value r obtained by adding up the components that produce parallax is not a profile that corresponds to only the right pupil, and is distorted. Since the pixel value b includes the component IL·(φB+φGB) that has passed through the left pupil and the component IR·φGR that has passed through the right pupil, the profile of the pixel value b is distorted. When correlation calculations are performed on these profiles, it is impossible to implement accurate matching since the similarity has decreased due to distortion.
According to the third embodiment, the component IL·φGB (unnecessary component) that has passed through the left pupil is removed from the right-pupil image, and the component IR·φGR (unnecessary component) that has passed through the right pupil is removed from the left-pupil image. Since φRB=φGR+φGB, the pixel values {r′, b′}={IR·φR, IL·φB} that correspond to the spectral characteristics {φR, φB} can be calculated by reducing or removing the component value that corresponds to the region φRB using the method described above in connection with the first embodiment and the like. Since the pixel values {r′, b′} are considered to be a pure component (light) that has passed through the right pupil and a pure component (light) that has passed through the left pupil, it is possible to obtain an undistorted profile. When the right-pupil image and the left-pupil image are respectively formed by the pixel values {r′, b′}, it is possible to maintain the similarity between the right-pupil image and the left-pupil image, and implement a highly accurate phase difference detection process.
The spectral characteristic storage section 195 stores the spectral characteristics {FB, FG, FR} determined by the spectral characteristics of the optical filter 12, the spectral characteristics of illumination light (or external light), and the spectral characteristics of the color filters of the image sensor. Alternatively, the spectral characteristic storage section 195 may store the coefficients α and β calculated in advance from the spectral characteristics {FB, FG, FR}.
The phase difference detection section 185 detects the phase difference δ(x, y) between the high-color-purity right-pupil image and the high-color-purity left-pupil image {r′, b′} output from the high-purity spectral separation processing section 160. The phase difference δ(x, y) is calculated on a pixel basis. Note that (x, y) represents the position (coordinates) within the image. For example, x corresponds to the horizontal scan direction, and y corresponds to the vertical scan direction.
The range calculation section 175 performs a three-dimensional measurement process based on the detected phase difference δ(x, y). Specifically, the range calculation section 175 calculates the distance to the object at each pixel position (x, y) from the phase difference δ(x, y) to acquire three-dimensional shape information about the object.
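The conversion from phase difference to distance depends on the optical geometry, which is not detailed here; assuming a standard triangulation model with an effective pupil baseline B, focal length f, and pixel pitch p (all parameter names and the geometry itself are assumptions, not the claimed configuration), Z = f·B/δ:

```python
def distance_from_phase_difference(delta_px, focal_len_mm,
                                   baseline_mm, pixel_pitch_mm):
    """Triangulation sketch: Z = f * B / d, with d the phase difference
    converted from pixels to millimetres.

    Assumed geometry (hypothetical): baseline_mm is the effective
    separation of the two pupil centroids.
    """
    d_mm = delta_px * pixel_pitch_mm
    if d_mm == 0:
        return float("inf")  # zero disparity -> object at infinity
    return focal_len_mm * baseline_mm / d_mm
```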
Examples of the application of the third embodiment include a high-speed phase detection AF process that utilizes the phase difference δ(x, y), a three-dimensional measurement process that utilizes ranging information, a three-dimensional display process, and the like.
Although only some embodiments of the invention and the modifications thereof have been described in detail above, those skilled in the art will readily appreciate that many modifications are possible in the embodiments and the modifications thereof without materially departing from the novel teachings and advantages of the invention. A plurality of elements described in connection with the embodiments and the modifications thereof may be appropriately combined to implement various configurations. For example, some elements may be omitted from the elements described in connection with the embodiments and the modifications thereof. Some of the elements described above in connection with different embodiments or modifications thereof may be appropriately combined. The configuration and the operation of the imaging device and the image processing device, and the methods (imaging method and image processing method) for operating the imaging device and the image processing device are not limited to those described in connection with the embodiments. Various modifications and variations may be made. Specifically, various modifications and applications are possible without materially departing from the novel teachings and advantages of the invention. Any term cited with a different term having a broader meaning or the same meaning at least once in the specification and the drawings can be replaced by the different term in any place in the specification and the drawings.
Number | Date | Country | Kind |
---|---|---|---|
2013-133919 | Jun 2013 | JP | national |
This application is a continuation of International Patent Application No. PCT/JP2014/061514, having an international filing date of Apr. 24, 2014, which designated the United States, the entirety of which is incorporated herein by reference. Japanese Patent Application No. 2013-133919 filed on Jun. 26, 2013 is also incorporated herein by reference in its entirety.
Number | Date | Country | |
---|---|---|---|
Parent | PCT/JP2014/061514 | Apr 2014 | US |
Child | 14964834 | US |