The present invention relates to an imaging apparatus such as a camera.
The refractive index of the material composing a lens differs depending on the wavelength of light. Therefore, when light of various wavelengths enters the optical system of an imaging apparatus, axial chromatic aberration occurs, so that images of differing sharpness may be obtained depending on the color. When a color of low sharpness is contained in the image, that color becomes a cause of image quality deterioration.
In an imaging apparatus such as a camera, if the position of a subject is contained within the depth of field, focusing is attained, whereby a clear image can be captured. In order to enable imaging of subjects located at various positions, the imaging apparatus needs to separately include a means for detecting a focusing state and a means for making a focus adjustment.
In order to solve the aforementioned problems, a technique has been proposed which, by utilizing an axial chromatic aberration of the optical system, allows the sharpness of a first color component to be reflected on a second color component which is different from the first color component, thus achieving expansion of the depth of field and correction of the axial chromatic aberration (Patent Document 1). According to the method of Patent Document 1, by allowing the sharpness of the first color component to be reflected on the second color component, the sharpness of the second color component can be enhanced. As a result, the depth of field can be increased, whereby subjects at a greater variety of distances can be relatively clearly imaged, without making focus adjustments.
In the construction of Patent Document 1, in order to allow the sharpness of the first color component to be reflected on the second color component, information of the sharpnesses of both of the first color component and the second color component is needed. Therefore, the depth of focus is confined to a range where the information of sharpnesses of all colors is available. Thus, with the construction of Patent Document 1, there are limits to the expansion of depth of focus, and it has been difficult to attain a sufficiently large depth of field.
Moreover, in the case where a monochromatic (e.g., blue) subject is to be imaged against a black background, for example, the image will contain no other color components (green and red) than the color of the subject. Therefore, if the subject image is blurred due to axial chromatic aberration, it would be impossible to detect the sharpness of any other color on the image and allow it to be reflected on the sharpness of the subject.
The present invention has been made in order to solve the aforementioned problems, and a main objective thereof is to provide an imaging apparatus for obtaining an image which has a large depth of focus and depth of field, and a high sharpness. Another objective of the present invention is to provide an imaging apparatus which can capture a high-sharpness image of a monochromatic (e.g., blue) subject against a black background.
An imaging apparatus according to the present invention comprises: a lens optical system having a first region through which a first color, a second color, and a third color of light pass, and a second region through which the first color, second color, and third color of light pass, the second region having an optical power for causing at least two or more colors of light to be converged at different positions from the respective converged positions of the first color, second color, and third color of light passing through the first region; an imaging device having a plurality of first pixels and a plurality of second pixels on which light from the lens optical system is incident; an array optical device disposed between the lens optical system and the imaging device, the array optical device causing light passing through the first region to enter the plurality of first pixels, and causing light passing through the second region to enter the plurality of second pixels; and a calculation processing section for generating an output image, wherein the calculation processing section generates a first image of at least one color component among the first color, second color, and third color by using pixel values obtained at the plurality of first pixels, generates a second image containing the same color component as the at least one color component by using pixel values obtained at the plurality of second pixels, and generates the output image by using, for each color, an image component of a higher sharpness or contrast value between a predetermined region of the first image and a predetermined region of the second image.
An imaging system according to the present invention comprises: an imaging apparatus including: a lens optical system having a first region through which a first color, a second color, and a third color of light pass, and a second region through which the first color, second color, and third color of light pass, the second region having an optical power for causing at least two or more colors of light to be converged at different positions from the respective converged positions of the first color, second color, and third color of light passing through the first region; an imaging device having a plurality of first pixels and a plurality of second pixels on which light from the lens optical system is incident; and an array optical device disposed between the lens optical system and the imaging device, the array optical device causing light passing through the first region to enter the plurality of first pixels, and causing light passing through the second region to enter the plurality of second pixels; and a calculation processing section for generating a first image of at least one color component among the first color, second color, and third color by using pixel values obtained at the plurality of first pixels, generating a second image containing the same color component as the at least one color component by using pixel values obtained at the plurality of second pixels, and generating an output image by using, for each color, an image component of a higher sharpness between a predetermined region of the first image and a predetermined region of the second image.
According to the present invention, between predetermined regions of two or more images, an output image is generated based on the image component of the higher sharpness for each color, thus enhancing the sharpness of the output image through a simple technique. Moreover, the depth of focus can be made greater than with conventional techniques, whereby a sufficiently large depth of field can be obtained.
Furthermore, according to the present invention, when imaging a monochromatic subject of red, green, or blue against a black background, the sharpness of the subject color is greater than a predetermined value in either one of the two or more imaging regions. As a result, an image with a high sharpness can be generated.
Hereinafter, embodiments of the imaging apparatus according to the present invention will be described with reference to the drawings.
The lens optical system L has a first optical region D1 and a second optical region D2 having mutually different optical powers, and is composed of a stop (diaphragm) S through which light from a subject (not shown) enters, an optical device L1 through which the light through the stop S passes, and a lens L2 which is struck by the light having passed through the optical device L1. Although the lens L2 is illustrated as being a single lens, it may be composed of a plurality of lenses.
The array optical device K is located near a focal point of the lens optical system L, and is located at a position which is a predetermined distance away from the imaging plane Ni.
When the ray A1W enters the lens L2 via a portion of the optical device L1 that is located in the first optical region D1, due to axial chromatic aberration, rays gather in the order of blue (A1B), green (A1G), and red (A1R) toward an image surface on the optical axis of the lens L2.
Similarly, when the ray A2W enters the lens L2 via a portion of the optical device L1 that is located in the second optical region D2, due to axial chromatic aberration, rays gather in the order of blue (A2B), green (A2G), and red (A2R) toward the image surface on the optical axis of the lens L2. However, since the second optical region D2 has a different optical power from that of the first optical region D1, these gather at positions respectively shifted from the rays passing through the first optical region D1.
a) is a diagram showing, enlarged, the array optical device K and imaging device N shown in
The array optical device K is disposed so that one of its optical elements M1 would correspond to two rows of pixels, i.e., one row of pixels P1 and one row of pixels P2, on the imaging plane Ni. On the imaging plane Ni, microlenses Ms are provided so as to cover the surface of the pixels P1 and P2.
The array optical device K is designed so that a large part of the light beam (light beam A1 indicated by a solid line in
First image information which is obtained with the plurality of pixels P1 in
As shown in
As shown in
Furthermore, in each of the first and second image information, the luminance information of the image along the vertical direction (column direction) is missing in every other row. The missing luminance information of a pixel P(x, y) may be generated through complementation based on luminance values that adjoin along the vertical direction (column direction). For example, in the first image shown in
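As an illustrative sketch only (not part of the claimed construction), this row-wise complementation may be written as follows; the function name and the convention that the missing rows begin at a given row index are assumptions:

```python
import numpy as np

def fill_missing_rows(img, missing_start):
    """Complement every other (missing) row of luminance values by
    averaging the vertically adjacent rows; an edge row with only one
    neighbor is copied from that neighbor."""
    out = img.astype(float).copy()
    h = img.shape[0]
    for y in range(missing_start, h, 2):
        above = out[y - 1] if y - 1 >= 0 else None
        below = out[y + 1] if y + 1 < h else None
        if above is not None and below is not None:
            out[y] = (above + below) / 2.0  # average of neighbors
        elif above is not None:
            out[y] = above
        else:
            out[y] = below
    return out
```

Applying the same procedure to the second image (with the opposite starting row) yields two full-resolution luminance arrays for the subsequent sharpness comparison.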
Through the above complementation process, a first color image as shown in
According to the present embodiment, the converged positions of the blue (B), green (G), and red (R) rays having passed through the first optical region D1 of the optical device L1 are shifted from the converged positions of the blue (B), green (G), and red (R) rays having passed through the second optical region D2, and vice versa. Therefore, the respective sharpnesses of blue, green, and red in the image which is obtained with the pixels P1 differ from the respective sharpnesses of blue, green, and red in the image obtained with the pixels P2.
These differences are utilized so that, between the first color image which is obtained with the pixels P1 and the second color image which is obtained with the pixels P2, an image component of the higher sharpness is used for each of blue, green, and red, thereby generating an output image which has a high sharpness (or resolution) for each color. In the case where the first color image and the second color image do not contain all of blue, green, and red, an image component that happens to have the higher sharpness for each of the colors which are contained in these images may be used, whereby an output image having a high sharpness with respect to the colors contained in the images can be obtained. Such a process can be performed by the calculation processing section C.
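A minimal sketch of this per-color selection, assuming three-channel images stored as NumPy arrays and a simple adjacent-pixel-difference sharpness measure (the function names are hypothetical):

```python
import numpy as np

def sharpness(plane):
    """Sharpness measure for one color plane: sum of absolute
    luminance differences between adjoining pixels."""
    return (np.abs(np.diff(plane, axis=0)).sum()
            + np.abs(np.diff(plane, axis=1)).sum())

def select_output(img1, img2):
    """For each color plane (last axis), keep whichever of the first
    and second color images has the higher sharpness."""
    out = np.empty_like(img1)
    for c in range(img1.shape[-1]):
        if sharpness(img1[..., c]) >= sharpness(img2[..., c]):
            out[..., c] = img1[..., c]
        else:
            out[..., c] = img2[..., c]
    return out
```

In practice the comparison would be made per predetermined region rather than over the whole frame, but the per-color logic is the same.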
As the sharpness increases, the blur of an image will decrease, and therefore the difference in luminance value (difference in gray scale level) between adjoining pixels is usually considered to increase. Therefore, in the present embodiment, sharpness is determined based on a difference in luminance value between adjoining pixels within a predetermined microregion of an acquired image. The microregion may be a single pixel P(x, y) shown in
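Under that assumption, a sharpness measure for a calculation block could be sketched as follows (the block-addressing convention is an assumption, not taken from the text):

```python
import numpy as np

def block_sharpness(img, x, y, size):
    """Sharpness of a size-by-size calculation block at (x, y):
    sum of absolute luminance differences between horizontally and
    vertically adjoining pixels inside the block."""
    block = img[y:y + size, x:x + size].astype(float)
    dx = np.abs(np.diff(block, axis=1)).sum()  # horizontal neighbors
    dy = np.abs(np.diff(block, axis=0)).sum()  # vertical neighbors
    return dx + dy
```

A flat block yields zero; the sharper a block's edges, the larger the returned value, which is all the comparison between the two images requires.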
Alternatively, sharpness may be determined based on a frequency spectrum which is obtained by applying a Fourier transform to the luminance values of the first color image and the second color image. In this case, a response at a predetermined spatial frequency may be determined as a sharpness. In other words, through a comparison between responses at a predetermined spatial frequency, an image sharpness can be evaluated to be high or low. Since an image is two-dimensional, a method which determines sharpness by using a two-dimensional Fourier transform is desirable.
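A sketch of this Fourier-based alternative, assuming the response is read off at a single two-dimensional frequency bin of the FFT (the indexing convention is an assumption):

```python
import numpy as np

def spectral_response(plane, fx, fy):
    """Magnitude of the 2-D Fourier component of a color plane at the
    spatial-frequency bin (fx, fy); a larger response at a chosen
    frequency indicates a sharper (less blurred) image."""
    spectrum = np.fft.fft2(plane.astype(float))
    return np.abs(spectrum[fy, fx])
```

Comparing the two images' responses at the same predetermined bin then evaluates which sharpness is higher, exactly as with the luminance-difference measure.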
The stop S is a region through which the light beam passes at every angle of view. Therefore, by inserting near the stop S a surface having optical characteristics that control optical power, it becomes possible to control the convergence characteristics of the light beam at all angles of view alike. In other words, in the present embodiment, the optical device L1 may be provided near the stop S. By providing near the stop S the optical regions D1 and D2 having optical powers which ensure that the converged positions of at least two or more colors of light are mutually different, it becomes possible to confer on the light beam convergence characteristics adapted to the number of divided regions.
In
Next, a specific method of deepening the depth of field will be described.
Table 1 and Table 2 show design data for the optical system of the imaging apparatus A shown in
In this design example, within the subject-side face of the optical device L1, the face located in the first optical region D1 is a plane, whereas the face located in the second optical region D2 is an optical surface constituting a spherical lens with a radius of curvature of 1600 mm. With such a construction, the rays having passed through each optical region as described above gather at mutually shifted positions, from color to color.
In a generic optical system, the sagittal direction and the tangential direction of an MTF value on the optical axis are identical. In the present embodiment, when an MTF value is calculated for each ray passing through the respective optical region, since each immediately preceding stop face has a semicircular shape, the sagittal direction and the tangential direction of the MTF value on the optical axis become separated as shown in
In the graph showing the characteristics associated with rays passing through the first optical region D1, MBa, MGa, and MRa respectively represent the through-focus MTF characteristics of blue, green, and red. Moreover, PBa, PGa, and PRa represent the respective peak positions. Similarly, in the graph showing the characteristics associated with rays passing through the second optical region D2, MRb, MGb, and MBb respectively represent the through-focus MTF characteristics of red, green, and blue. Moreover, PRb, PGb, and PBb represent the respective peak positions. Herein, S and T in the parentheses of each alphanumeric expression respectively represent the sagittal direction and the tangential direction.
In a generic imaging optical system, as the subject comes closer to the lens, light passing through the lens gathers in a region farther from the lens (a region farther from the subject). Therefore, when the subject distance is classified into a short distance, an intermediate distance, and a long distance, as shown in
In the case where the subject distance is a short distance, as shown in
By a similar method, in the case of an intermediate distance, a blue 1Bm MTF value associated with rays passing through the first optical region D1 and a red 2Rm MTF value and green 2Gm MTF value associated with rays passing through the second optical region D2 are selected. In the case of a long distance, a green 1Gf MTF value and a blue 1Bf MTF value associated with rays passing through the first optical region D1 and a red 2Rf MTF value associated with rays passing through the second optical region D2 are selected.
When designing the lens optical system L, the design is to be made so that the through-focus MTF characteristics shown in
Since an MTF is an indication of how faithfully the contrast of a subject is reproduced on an image surface, calculation of an MTF value requires a spatial frequency of the subject. This makes it impossible to directly detect an MTF value from a given arbitrary image in actual imaging. Therefore, in actual imaging, a difference in luminance value is used for evaluating the sharpness to be high or low. The higher the sharpness is, the smaller the image blur is; therefore, usually, an image having a higher sharpness contains greater differences in luminance value between adjoining pixels.
Specifically, first, in an image separation section C0 of the calculation processing section C shown in
When the first optical region D1 and the second optical region D2 are designed by the above method, if the subject exists within the depth of field, the greater sharpness between the sharpness calculated from the pixels P1 and the sharpness calculated from the pixels P2 will be equal to or greater than the desired value. Therefore, without having to measure the distance to the subject, a high sharpness image can be selected for each color through comparison of the absolute values of differences in luminance value.
As for means of color image synthesis, a technique of synthesizing one output image by selecting each high-sharpness color as described above, or a technique of merging two color images through color-by-color additions may be used. With these methods, an output image which retains high sharpness despite changes in the subject distance can be generated.
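The second technique, merging through color-by-color additions, could for instance be realized as a per-color weighted addition; the weighting scheme below (e.g., weights proportional to the measured sharpnesses of each image) is an assumption for illustration:

```python
import numpy as np

def merge_by_color(img1, img2, w1, w2):
    """Merge two color images plane by plane, weighting each plane by
    per-color weights (e.g., the normalized sharpness of each image
    for that color) and renormalizing."""
    out = np.empty(img1.shape, dtype=float)
    for c in range(img1.shape[-1]):
        total = w1[c] + w2[c]
        out[..., c] = (w1[c] * img1[..., c] + w2[c] * img2[..., c]) / total
    return out
```

Selecting the sharper image outright corresponds to the limiting case where one weight of each pair is zero.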
Next, the range of axial chromatic aberration of rays passing through the first optical region D1 and the second optical region D2 will be discussed. The upper graph of
In
Moreover, Wa in the upper graph of
On the other hand, in a range Ws shown in
Since a sharpness which is calculated from the image generated from rays passing through the first optical region D1 and a sharpness which is calculated from the image generated from rays passing through the second optical region D2 are both derived, and the image with the higher sharpness is selected for each color to generate an output image, the range Ws in
According to the present embodiment, between a microregion of the first color image generated from rays passing through the first optical region D1 and a microregion of the second color image generated from rays passing through the second optical region D2, an image component of the higher sharpness for each color is used to generate an output image, thus enhancing the sharpness of the output image through a simple technique. Moreover, as shown in
In the case where an image contains a plurality of subjects at different subject distances, an image having the higher sharpness for each color may be selected for each respective image region to generate an output image.
In the present embodiment, when imaging a monochromatic subject of red, green, or blue against a black background, the sharpness of the subject color is greater than the predetermined value Md in either one of the images generated from rays passing through the first and second optical regions D1 and D2. As a result of this, an image with a high sharpness can be generated.
The description of the present embodiment only illustrates merging of regions of a color image that are on the optical axis. As for non-axial regions, a correction of chromatic aberration of magnification or a correction of distortion may be performed before generating a color image.
In the present embodiment, an image sharpness is evaluated to be high or low by comparing the absolute values of differences in luminance value, i.e., sharpness itself; alternatively, the evaluation may be conducted through comparison of contrast values, for example. Usually, an image with a higher contrast value has a higher sharpness. A contrast value can be determined from a ratio (Lmax/Lmin) between a highest luminance value Lmax and a lowest luminance value Lmin within a predetermined calculation block, for example. Sharpness is a difference between luminance values, whereas a contrast value is a ratio between luminance values. A contrast value may be determined from a ratio between the point of the highest luminance value and the point of the lowest luminance value; alternatively, a contrast value may be determined from a ratio between an average value of several points of the greatest luminance values and an average value of several points of the smallest luminance values, for example. In this case, instead of the first and second sharpness detection sections C1 and C2 shown in
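Both contrast variants described here can be sketched as follows, assuming the calculation block is a NumPy array of luminance values (the function names are hypothetical):

```python
import numpy as np

def contrast_value(block):
    """Contrast of a calculation block: ratio Lmax/Lmin between the
    highest and lowest luminance values in the block."""
    block = block.astype(float)
    lmin = block.min()
    return block.max() / lmin if lmin > 0 else float("inf")

def robust_contrast(block, k=3):
    """Variant: ratio between the mean of the k greatest and the mean
    of the k smallest luminance values, less sensitive to noise."""
    v = np.sort(block.astype(float).ravel())
    return v[-k:].mean() / v[:k].mean()
```

Either value can stand in for the sharpness measure in the per-color comparison, since only the relative ordering between the two images matters.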
Moreover, the optical system of the imaging apparatus as indicated in Table 1 and Table 2 is an image-side telecentric optical system. As a result, even if the angle of view changes, the principal ray enters the array optical device K at an incident angle close to 0 degrees, so that crosstalk between the light beam reaching the pixel P1 and the light beam reaching the pixel P2 can be reduced across the entire imaging region.
In the present embodiment, the subject-side surface of the optical device L1 is planar and spherical, respectively, in the first and second optical regions D1 and D2. However, these may be spherical surfaces with mutually different optical powers, or non-spherical surfaces with mutually different optical powers.
When the ray A1W enters the lens L2 via the first optical region D1 of the optical device L1, due to axial chromatic aberration, rays gather in the order of blue (A1B), green (A1G), and red (A1R) toward an image surface on the optical axis of the lens L2.
On the other hand, when the ray A2W enters the lens L2 via the second optical region D2 of the optical device L1, due to axial chromatic aberration, rays gather in the order of red (A2R), green (A2G), and blue (A2B) toward the image surface on the optical axis of the lens L2. Due to the action of the diffractive lens, the second optical region D2 has an optical power resulting in an axial chromatic aberration which is inverted from that ascribable to the first optical region D1. Therefore, in the light passing through the second optical region D2, red and blue are converged in reversed order from the rays passing through the first optical region.
a) is a diagram showing, enlarged, the array optical device K and imaging device N shown in
In the present Embodiment 2, a first color image and a second color image are generated in a similar manner to Embodiment 1, and an image having the higher sharpness (or contrast) for each color is selected for each respective image region to generate an output image.
Next, a specific method of deepening the depth of field will be described. A cross-sectional view of the imaging apparatus A of the present Embodiment 2 is similar to
Table 3, Table 4, and Table 5 show design data for the optical system of the imaging apparatus A. In Table 3 and Table 4, the respective symbols are identical to those in Table 1 and Table 2.
In Table 5, the phase difference function φ(h) on the diffraction plane (L1-R1 face) is expressed by (math. 2) in units of radians, where h is the height from the optical axis and Bn (n = 2, 4, 6) is the coefficient of the nth-order phase term.
φ(h) = B2·h^2 + B4·h^4 + B6·h^6  [math. 2]
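For illustration, (math. 2) can be evaluated directly; the coefficient values used below are arbitrary examples, not the design values of Table 5:

```python
def phase(h, b2, b4, b6):
    """Phase-difference function of (math. 2), in radians, for a
    height h from the optical axis and coefficients B2, B4, B6."""
    return b2 * h**2 + b4 * h**4 + b6 * h**6
```

For example, with B2 = 1.0, B4 = 0.5, B6 = 0.25 and h = 2, the phase is 4 + 8 + 16 = 28 radians.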
In this design example, a portion of the subject-side face of the optical device L1 that is located in the first optical region D1 is a plane, and a portion that is located in the second optical region D2 is an optical surface obtained by adding a diffractive shape onto a spherical lens with a radius of curvature of −132 mm. With this construction, rays passing through each optical region converge in an inverted order with respect to colors, as described above.
As in Embodiment 1, depending on the subject distance, a difference in the MTF value of each color may occur between an image into which rays passing through the first optical region D1 are converged and an image into which rays passing through the second optical region D2 are converged.
In the case where the subject distance is a short distance, as shown in
By a similar method, in the case of an intermediate distance, a blue 1Bm MTF value associated with rays passing through the first optical region D1 and a red 2Rm MTF value and green 2Gm MTF value associated with rays passing through the second optical region D2 are selected. In the case of a long distance, a green 1Gf MTF value and a red 1Rf MTF value associated with rays passing through the first optical region D1 and a blue 2Bf MTF value associated with rays passing through the second optical region D2 are selected.
Thereafter, in a similar manner to Embodiment 1, the sharpnesses of the pixels P1 and the pixels P2 are actually calculated by the first and second sharpness detection sections C1 and C2 in the calculation processing section C.
Next, the range of axial chromatic aberration of rays passing through the first optical region D1 and the second optical region D2 will be discussed. The upper graph of
In
As in Embodiment 1, in a range Ws shown in
Since a sharpness which is calculated from the image generated from rays passing through the first optical region D1 and a sharpness which is calculated from the image generated from rays passing through the second optical region D2 are both derived, and the image with the higher sharpness is selected for each color to generate an output image, the range Ws in
According to the present embodiment, between a microregion of the first color image generated from rays passing through the first optical region D1 and a microregion of the second color image generated from rays passing through the second optical region D2, an image component of the higher sharpness for each color is used to generate an output image, thus enhancing the sharpness of the output image through a simple technique, similarly to Embodiment 1. Moreover, as shown in
In the present embodiment, when imaging a monochromatic subject of red, green, or blue against a black background, the sharpness of the subject color is greater than the predetermined value Md in either one of the images generated from rays passing through the first and second optical regions D1 and D2. As a result of this, an image with a high sharpness can be generated.
The present Embodiment 3 differs from Embodiment 2 in that an optical adjustment layer O is provided in the second optical region D2 of the optical device L1, the second optical region D2 having the shape of a diffraction grating.
a) is a cross-sectional view showing the optical adjustment layer provided on a diffraction plane of the second optical region D2. In the diffraction plane shape shown in
d = λ/(n2 − n1)  [math. 3]
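As an arithmetic illustration of (math. 3), the step depth can be computed for a hypothetical design wavelength and refractive-index pair (the values in the example are assumptions, not taken from the embodiment):

```python
def step_depth(wavelength, n2, n1):
    """Step depth d = λ/(n2 − n1) of (math. 3): the depth at which the
    optical path difference across one step of the diffraction grating
    equals one wavelength (result is in the same length unit as λ)."""
    return wavelength / (n2 - n1)
```

For instance, with λ = 0.55 µm and a refractive-index difference of 0.1, the step depth comes out to about 5.5 µm.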
With such a construction, a high diffraction efficiency can be maintained across a broad wavelength band for rays passing through the second optical region D2, so that the image quality of the image generated by the pixels P2 can be enhanced over Embodiment 1. As a result, the image quality of the image generated by the calculation processing section can be improved.
For the optical adjustment layer O, a material having a higher refractive index and a greater Abbe number than those of the substrate of the optical device L1, or a material having a lower refractive index and a smaller Abbe number than those of the substrate of the optical device L1 can be used. By using such materials, it is possible to reduce the wavelength dependence of first-order diffraction efficiency. When the substrate of the optical device L1 is polycarbonate, a composite material obtained by dispersing zirconium oxide in resin may be used for the optical adjustment layer O, for example.
Although
The present Embodiment 4 differs from Embodiment 1 in that the optical device L1 is divided into four regions, and that the array optical device is changed from lenticular elements to microlenses.
The third optical region D3 has a different optical power from those of the first and second optical regions D1 and D2. Specifically, the third optical region D3 is characterized so as to induce different converged positions of red, green, and blue light from the converged positions of the red, green, and blue light passing through the respective first and second optical regions D1 and D2.
Similarly, the fourth optical region D4 has a different optical power from those of the first, second, and third optical regions D1, D2, and D3. Specifically, the fourth optical region D4 is characterized so as to induce different converged positions of red, green, and blue light from the converged positions of the red, green, and blue light passing through the respective first, second, and third optical regions D1, D2, and D3.
a) is a diagram showing, enlarged, the array optical device K and the imaging device N; and
Moreover, the array optical device K is disposed so that its face having optical elements M2 formed thereon faces toward the imaging plane Ni. The array optical device K is disposed so that one of its optical elements M2 would correspond to four pixels, i.e., two rows by two columns of pixels P1c to P4c (where c means R, G, or B), on the imaging plane Ni.
With such a construction, a large part of the light beam passing through the first to fourth optical regions D1 to D4 of the optical device L1 shown in
As mentioned above, the first to fourth optical regions D1 to D4 of the optical device L1 are spherical lenses with mutually different radii of curvature. Therefore, the focal points are shifted into four positions for incidence upon the pixels P1c to P4c (where c means R, G, or B) of respective colors.
With the plurality of pixels P1c, P2c, P3c, and P4c (where c means R, G, or B), respectively, first, second, third, and fourth image information are obtained. The calculation processing section C (shown in
With such a construction, the depth of focus can be deepened relative to Embodiment 1 and Embodiment 2, and the depth of field can be further expanded.
Although the entire subject-side face of the optical device L1 is supposed to be spherical surfaces in the present Embodiment 4, a portion of the subject-side face of the optical device L1 that is located in at least one optical region may be a plane, or portions that are located in some optical regions may be non-spherical surfaces. Alternatively, the entire subject-side face of the optical device L1 may be a non-spherical surface. Moreover, as shown in
The present Embodiment 5 differs from Embodiments 1 and 4 in that a lenticular lens or a microlens array is formed on the imaging plane. In the present embodiment, any detailed description directed to similar substance to that of Embodiment 1 will be omitted.
a) and (b) are diagrams showing, enlarged, array optical devices K and imaging devices N. In the present embodiment, an array optical device K which is a lenticular lens (or a microlens array) is formed on an imaging plane Ni of an imaging device N. On the imaging plane Ni, pixels P are disposed in a matrix shape, as in Embodiment 1 and the like. One optical element of a lenticular lens, or a microlens corresponds to such plural pixels P. As in Embodiments 1 and 4, light beams passing through different regions of the optical device L1 can be led to different pixels according to the present embodiment.
In the case where the array optical device K is separated from the imaging device N as in Embodiment 1, it is difficult to establish alignment between the array optical device K and the imaging device N. On the other hand, forming the array optical device K on the imaging device N as in the present Embodiment 5 permits alignment through a wafer process. This facilitates alignment, whereby the accuracy of alignment can be improved.
The present Embodiment 6 differs from Embodiment 1 in that the first and second optical regions D1 and D2 are a plurality of regions separated so as to sandwich the optical axis, and that the array optical device K is changed from lenticular elements to microlenses. Herein, any detailed description directed to similar substance to that of Embodiment 1 will be omitted.
a) is a front view showing the optical device L1 from the subject side. In
b) is a diagram showing relative positioning of the array optical device K and pixels on the imaging device N. In the present Embodiment 6, rays passing through the first optical region D1 reach pixels of odd rows and odd columns and pixels of even rows and even columns P1c (where c means R, G, or B). Therefore, luminance values which are obtained with pixels of odd rows and odd columns and luminance values obtained with pixels of even rows and even columns are used for generating a first color image. On the other hand, rays passing through the second optical region D2 reach pixels of even rows and odd columns and pixels of odd rows and even columns P2c (where c means R, G, or B), and therefore the luminance values of pixels of even rows and odd columns and the luminance values of pixels of odd rows and even columns are used for generating a second color image.
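The parity-based separation described here can be sketched as follows, assuming the raw frame is stored as a NumPy array and NaN marks the positions to be filled later by complementation (both assumptions):

```python
import numpy as np

def split_checkerboard(raw):
    """Separate a raw frame into two half-images by pixel parity:
    pixels whose row and column parities match (odd/odd, even/even)
    form the first image; the remaining pixels form the second.
    Missing positions are NaN, to be complemented afterwards."""
    rows, cols = np.indices(raw.shape)
    same = (rows % 2) == (cols % 2)
    img1 = np.where(same, raw.astype(float), np.nan)
    img2 = np.where(~same, raw.astype(float), np.nan)
    return img1, img2
```

Each half-image then undergoes the same complementation and per-color sharpness comparison as in Embodiment 1.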
Next, effects obtained in the present embodiment will be discussed in comparison with the effects obtained in Embodiment 1.
In Embodiment 1, as shown in
Each image is schematically shown as a twofold expansion along the Y direction, obtained through a complementation (interpolation) process, of an image ((a2), (b2), (c2)) extracted for every odd row of pixels or an image ((a3), (b3), (c3)) extracted for every even row of pixels.
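The extraction-and-expansion step described above can be sketched as follows. This assumes nearest-neighbour row repetition as the complementation process (the actual interpolation used may differ), with NumPy and the hypothetical function name `expand_rows_twofold`:

```python
import numpy as np

def expand_rows_twofold(img, parity=0):
    """Extract every other row of `img` (parity 0: even-indexed rows,
    i.e. odd rows in 1-indexed terms) and expand the result back to the
    original height by repeating each extracted row once along Y."""
    half = img[parity::2]
    return np.repeat(half, 2, axis=0)
```

For a frame with an even number of rows, the output has the same shape as the input, with each extracted row duplicated.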
As shown in
On the other hand, according to the present Embodiment 6, the first and second optical regions D1 and D2 are disposed so as to be point-symmetric around the optical axis as a center, and therefore the distance d between the barycenters of the point images does not vary even if the subject distance changes.
In
Thus, in the present Embodiment 6, by disposing the first and second optical regions D1 and D2 so as to be separated with the optical axis sandwiched therebetween, it is ensured that no parallax occurs in the acquired image even if the subject distance changes. As a result, shifts in the extracted position of the image due to parallax can be suppressed, whereby deteriorations in sharpness (or contrast) can be reduced.
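The barycenter separation d discussed above can be quantified as the distance between the intensity-weighted centroids of the two point images. A minimal sketch, assuming the point images are given as NumPy intensity arrays (the function names are hypothetical, not part of the disclosure):

```python
import numpy as np

def barycenter(psf):
    """Intensity-weighted centroid (y, x) of a point image."""
    ys, xs = np.indices(psf.shape)
    total = psf.sum()
    return (ys * psf).sum() / total, (xs * psf).sum() / total

def barycenter_distance(psf1, psf2):
    """Euclidean distance d between the barycenters of two point images."""
    y1, x1 = barycenter(psf1)
    y2, x2 = barycenter(psf2)
    return float(np.hypot(y1 - y2, x1 - x2))
```

In the point-symmetric arrangement of this embodiment, d computed this way would remain constant as the subject distance changes, whereas in the Embodiment 1 arrangement it would vary.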
Note that it suffices if the converged position of light passing through the first optical region D1 and the converged position of light passing through the second optical region D2 are different between at least two colors of light, without being limited to what is described in the aforementioned Embodiments. The difference(s) between the converged positions of two or more colors of light may be even smaller, or even greater.
Although the lens L2 is illustrated as being a single lens, it may be composed of a plurality of groups or a plurality of lenses.
Although the optical device L1 is disposed on the image surface side of the position of the stop, it may be on the subject side of the position of the stop.
Although the lens optical system L is illustrated as an image-side telecentric optical system in Embodiments 1 to 6 described above, it may be an image-side nontelecentric optical system.
Moreover, in Embodiment 1 described above, pixels of the three colors of R (red), G (green), and B (blue) are in iterative arrays within a single optical element M1 of the lenticular lens. Alternatively, a construction as shown in
In Embodiment 1 described above, in each optical element M1 of the array optical device K, pixels of different colors constitute iterative arrays. However, different optical elements M1 may be associated with pixels of different colors, such that each optical element M1 corresponds to one color of pixels. In the case where the optical elements are lenticular elements, as shown in
Each optical element (microlens) of the microlens array according to the present Embodiments 2 to 6 may have a rotation-symmetric shape with respect to the optical axis. This will be discussed below in comparison with microlenses of a shape which is rotation-asymmetric with respect to the optical axis.
(a1) is a perspective view showing a microlens array having a shape which is rotation-asymmetric with respect to the optical axis. Such a microlens array is formed by forming quadrangular prisms of resist on the array, rounding the corner portions of the resist through a heat treatment, and performing patterning with this resist. The contours of a microlens shown in
(a3) is a diagram showing ray tracing simulation results in the case where the microlenses shown in
(b1) is a perspective view showing a microlens array having a shape which is rotation-symmetric with respect to the optical axis. Microlenses of such a rotation-symmetric shape can be formed on a glass plate or the like by a thermal imprinting or UV imprinting manufacturing method.
(b2) shows contours of a microlens having a rotation-symmetric shape. In a microlens having a rotation-symmetric shape, the radius of curvature is identical between the vertical and lateral directions and the oblique directions.
(b3) is a diagram showing ray tracing simulation results in the case where the microlenses shown in
The imaging apparatus according to the present invention is useful for imaging apparatuses such as digital still cameras and digital camcorders. It is also applicable to security cameras, imaging apparatuses for monitoring the surroundings of an automobile or the people riding in it, and imaging apparatuses for medical uses.
Number | Date | Country | Kind |
---|---|---|---|
2011-139502 | Jun 2011 | JP | national |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/JP12/00621 | 1/31/2012 | WO | 00 | 11/27/2012 |