The present disclosure relates to an image reading device.
There has been known an image reading device that acquires two-dimensional image information by optically reading an object as an image capture target (hereinafter referred to also as a “subject”). See Patent Reference 1, for example.
The image reading device of the Patent Reference 1 includes a plurality of light receiving parts arrayed regularly, a light blocking member having a plurality of openings arrayed corresponding respectively to the plurality of light receiving parts, and a plurality of microlenses arrayed corresponding respectively to the plurality of openings.
In the Patent Reference 1, each light receiving part includes a plurality of light receiving pixels arrayed in an X-axis direction and a Y-axis direction on an XY plane. With this configuration, the resolution increases. In the Patent Reference 1, an imaging optical system is formed by one light receiving pixel, an opening corresponding to the light receiving pixel, and a microlens corresponding to the opening.
Patent Reference 1: Japanese Patent Application Publication No. 2009-524263
However, in the image reading device of the Patent Reference 1, the optical axis of each microlens is inclined with respect to a direction orthogonal to the XY plane. Thus, there is a problem in that an overlap of visual fields between adjoining imaging optical systems changes and precision of image processing deteriorates when the distance between the light receiving part and the subject changes. In this case, the depth of field decreases. A technology that increases the depth of field while improving the resolution is being requested.
An object of the present disclosure is to increase the depth of field while improving the resolution.
An image reading device according to an aspect of the present disclosure is an image reading device that optically reads an object as an image capture target, including a plurality of light receiving parts arrayed regularly, a first light blocking member including a plurality of first openings arrayed corresponding respectively to the plurality of light receiving parts, and a plurality of microlenses arrayed corresponding respectively to the plurality of first openings. Each light receiving part included in the plurality of light receiving parts includes a plurality of light receiving pixels arrayed in a first direction as a main scanning direction. Each microlens included in the plurality of microlenses is object side telecentric. The plurality of microlenses, the first light blocking member and the plurality of light receiving parts are arranged so that light reflected by the object and passing through the microlens and the first opening corresponding to the microlens enters the plurality of light receiving pixels included in the light receiving part corresponding to the first opening.
According to the present disclosure, the depth of field can be increased while improving the resolution.
Image reading devices according to embodiments of the present disclosure will be described below with reference to the drawings. The following embodiments are just examples and a variety of modifications are possible within the scope of the present disclosure.
In the first embodiment, in order for the imaging optical unit 1 to acquire two-dimensional image information on the document 6, the document 6 is conveyed by a conveyance unit (not shown) along the glass top plate 3 in an auxiliary scanning direction as a second direction orthogonal to a main scanning direction as a first direction. This operation makes it possible to scan the whole of the document 6. In the first embodiment, the main scanning direction is an X-axis direction, and the auxiliary scanning direction is a Y-axis direction. It is also possible to execute the scan of the whole of the document 6 by moving the imaging optical unit 1 in the Y-axis direction while leaving the document 6 still.
The document 6 is an example of an image capture target that undergoes image capturing by the imaging optical unit 1. The document 6 is, for example, a print that has been printed with characters, an image or the like. The document 6 is arranged on a predetermined reference surface S. The reference surface S is a plane on which the document 6 is set, specifically, a surface on the glass top plate 3. The glass top plate 3 is situated between the document 6 and the imaging optical unit 1. The thickness of the glass top plate 3 is 1.0 mm, for example. The structure for setting the document 6 on the reference surface S is not limited to the glass top plate 3.
The imaging optical unit 1 includes an imaging element unit 10 as an imaging section, a first light blocking member 11 having a plurality of openings 31, a second light blocking member 12 having a plurality of openings 32, a third light blocking member 13 having a plurality of openings 33, and a plurality of microlenses 14.
The sensor chips 7 are formed from silicon material, for example. The sensor chips 7 are provided on the sensor substrate 8. The sensor chips 7 are electrically connected to the sensor substrate 8 by means of wire bonding, for example. The sensor substrate 8 is a mounting substrate, and is formed from glass epoxy resin, for example.
The image processing device 9 executes image processing based on an image signal outputted from the sensor chips 7. The image processing device 9 is, for example, an ASIC (Application Specific Integrated Circuit) mounted on the sensor substrate 8. The image processing device 9 can also be implemented by an arithmetic processing device not mounted on the sensor substrate 8. Details of the image processing executed by the image processing device 9 will be described later.
On each sensor chip 7, a plurality of light receiving pixel units 70 as a plurality of light receiving parts regularly arrayed are arranged. The plurality of light receiving pixel units 70 are arrayed in the X-axis direction. Each sensor chip 7 includes 64 light receiving pixel units 70, for example. Each light receiving pixel unit 70 receives reflected light reflected by the document 6. Each sensor chip 7 is not limited to the configuration described in the first embodiment but can be implemented by a set of an arbitrary number of light receiving pixel units 70.
As shown in
In the first embodiment, each light receiving pixel unit 72 in the second line 70n is situated midway between two light receiving pixel units 71 in the first line 70m. Specifically, the light receiving pixel units 72 are offset in the X-axis direction, relative to the light receiving pixel units 71 belonging to the other line, by a distance P/2 (hereinafter referred to also as the "pitch P0"), i.e., ½ of the pitch P. By this arrangement, in the first embodiment, the plurality of light receiving pixel units 70 are arrayed in a zigzag pattern. Therefore, in the first embodiment, the pitch P can be increased compared to a configuration in which a plurality of light receiving pixel units are arrayed in one line, and thus the image reading device 100 is capable of acquiring an image not affected by stray light. Further, since the openings can be enlarged, the luminance of the image increases.
The plurality of light receiving pixel units 70 included in one sensor chip 7 include light receiving pixel units 70z and 70a as first light receiving parts and light receiving pixel units 70x as second light receiving parts, i.e., light receiving parts other than the light receiving pixel units 70z and 70a. The light receiving pixel unit 70z is a light receiving pixel unit arranged at a position closest to a +X-axis direction end 7e of the sensor chip 7. The light receiving pixel unit 70a is a light receiving pixel unit arranged at a position closest to a −X-axis direction end of the sensor chip 7.
As above, in the first embodiment, the number of light receiving pixels 80 included in the light receiving pixel unit 70z, 70a is greater than the number of light receiving pixels 80 included in the light receiving pixel unit 70x. Effect of this feature will be described later. The number of light receiving pixels 80 included in the light receiving pixel unit 70z, 70a may also be the same as the number of light receiving pixels 80 included in the light receiving pixel unit 70x. Namely, it is permissible if the number of the plurality of light receiving pixels 80 included in the light receiving pixel unit 70z, 70a is greater than or equal to the number of the plurality of light receiving pixels 80 included in the light receiving pixel unit 70x.
Further, each point C1, C2, C3 respectively shown in
Each light receiving pixel 80 included in the plurality of light receiving pixels 80 includes a color filter (not shown). Specifically, the light receiving pixel unit 70 includes first light receiving pixels 80R each including a red color filter that allows light of red color to pass through, second light receiving pixels 80G each including a green color filter that allows light of green color to pass through, and third light receiving pixels 80B each including a blue color filter that allows light of blue color to pass through. With this configuration, when the illuminating light (e.g., the illuminating light 25 shown in
Next, the rest of the configuration of the imaging optical unit 1 will be described below. As shown in
The plurality of openings 31 are arranged at positions corresponding respectively to the plurality of light receiving pixel units 70. As viewed in the Z-axis direction, the plurality of openings 31 respectively overlap with the plurality of light receiving pixel units 70. Specifically, on an XY plane, the central position of each opening 31 included in the plurality of openings 31 is the same as the central position of a light receiving pixel unit 70.
The plurality of openings 31 are arrayed in two lines. The openings 31 in each line are arrayed in the X-axis direction. In the first embodiment, the plurality of openings 31 are arrayed in a zigzag pattern. Each opening 31 is in a square shape of 40 μm×40 μm, for example. Reflected light reflected by the document 6 passes through the openings 31. In the first light blocking member 11, a part excluding the openings 31 is a first light blocking part 41 that blocks the reflected light.
The second light blocking member 12 is arranged on the document 6's side relative to the first light blocking member 11. The second light blocking member 12 is arranged between the first light blocking member 11 and the plurality of microlenses 14. The second light blocking member 12 includes the plurality of openings 32 as a plurality of second openings.
The plurality of openings 32 are arranged at positions corresponding respectively to the plurality of microlenses 14. Specifically, on an XY plane, the central position of each opening 32 included in the plurality of openings 32 is the same as the central position of a microlens 14. As viewed in the Z-axis direction, the plurality of openings 32 respectively overlap with the plurality of light receiving pixel units 70.
The plurality of openings 32 are arrayed in two lines. The openings 32 in each line are arrayed in the X-axis direction. Then, the plurality of openings 32 are arrayed in a zigzag pattern. Further, the plurality of openings 32 respectively overlap with the plurality of openings 31, and respectively overlap with the plurality of openings 33 which will be described later.
The opening 32 is in a circular shape, for example. The opening area of the opening 32 is larger than the opening area of the opening 31 and the opening area of the opening 33. Namely, the diameter of the opening 32 (diameter Φ shown in
The imaging optical unit 1 further includes a glass member 51 as a first light-permeable member arranged between the first light blocking member 11 and the second light blocking member 12. The first light blocking member 11 is formed on a surface 51a of the glass member 51 on the −Z-axis side (i.e., the light receiving pixel units 70's side), while the second light blocking member 12 is formed on a surface 51b of the glass member 51 on the +Z-axis side (i.e., the document 6's side).
The first light blocking member 11 and the second light blocking member 12 are light blocking layers as thin films formed by chrome oxide films vapor-deposited on the glass member 51. The openings 31 and 32 are formed by etching the chrome oxide films by using mask patterns. By this, positional accuracy and shape accuracy of the openings 31 and 32 can be made excellent. For example, a Y-axis direction position error among the plurality of openings 31 (or among the plurality of openings 32) is approximately 1 μm.
The third light blocking member 13 is arranged on the light receiving pixel units 70's side relative to the first light blocking member 11. The third light blocking member 13 has the plurality of openings 33 as a plurality of third openings. The plurality of openings 33 are arranged at positions corresponding respectively to the plurality of light receiving pixel units 70. As viewed in the Z-axis direction, the plurality of openings 33 respectively overlap with the plurality of light receiving pixel units 70. Specifically, on an XY plane, the central position of each opening 33 included in the plurality of openings 33 is the same as the central position of a light receiving pixel unit 70.
The plurality of openings 33 are arrayed in two lines. The openings 33 in each line are arrayed in the X-axis direction. In the first embodiment, the plurality of openings 33 are arrayed in a zigzag pattern. Each opening 33 is in a square shape of 60 μm×60 μm, for example. Reflected light reflected by the document 6 passes through the openings 33. In the third light blocking member 13, a part excluding the openings 33 is a third light blocking part 43 that blocks the reflected light.
The imaging optical unit 1 further includes a glass member 52 as a second light-permeable member arranged between the first light blocking member 11 and the third light blocking member 13. In other words, the glass member 52 is arranged on the light receiving pixel units 70's side relative to the glass member 51. The third light blocking member 13 is formed on a surface 52a of the glass member 52 on the −Z-axis side (i.e., the light receiving pixel units 70's side). The method of forming the openings 33 is similar to the aforementioned formation method of the openings 31 and 32; the openings 33 are formed by etching a chrome oxide film vapor-deposited on the glass member 52, for example. As shown in
As shown in
The glass members 51 and 52 are members capable of allowing light to pass through, such as glass substrates, for example. In the first embodiment, the refractive index of the glass member 51 is equal to the refractive index of the glass member 52. The refractive indices n of the glass members 51 and 52 are 1.52, for example. The thickness t1 (see
Here, in the case where the aforementioned wire bonding is employed as the method of electrically connecting the sensor chips 7 to the sensor substrate 8, wires can stick out from a +Z-axis side surface of the sensor chip 7 in the +Z-axis direction by approximately 100 to 200 μm. In the first embodiment, spacing t0 (see
The plurality of microlenses 14 are arranged on the +Z-axis side relative to the plurality of openings 32. The optical axis of the microlens 14 is indicated by the reference character 40 (see
The plurality of microlenses 14 are arranged at positions corresponding respectively to the plurality of light receiving pixel units 70. In the first embodiment, as viewed in the Z-axis direction, the plurality of microlenses 14 respectively overlap with the plurality of light receiving pixel units 70. The plurality of microlenses 14 are arrayed in two lines. The microlenses 14 in each line are arrayed in the X-axis direction. In the first embodiment, the plurality of microlenses 14 are arrayed in a zigzag pattern. The microlenses 14 arrayed in the zigzag pattern constitute a microlens array 60.
The microlens array 60 is produced by a method such as nanoimprinting or injection molding, for example. In such cases, a mold used for manufacturing the microlens array 60 has concave parts corresponding to the shape of the microlens array 60. By manufacturing the microlens array 60 by means of nanoimprinting as above, the shape accuracy of the microlens array 60 can be increased. Further, by nanoimprinting, the microlens array 60 can be formed directly on the second light blocking member 12.
In the first embodiment, the diameter of the microlens 14 is set at a predetermined size in a range of some micrometers to some millimeters. The curvature radius of the surface of the microlens 14 is approximately 1.0 mm, for example. Further, the plurality of microlenses 14 are arranged at positions corresponding respectively to the plurality of openings 31. Specifically, on an XY plane, the central position of each microlens 14 overlaps with the central position of an opening 31. Accordingly, the optical axis 40 of each microlens 14 extends in the Z-axis direction orthogonal to the XY plane.
As shown in
The illuminating light 25 applied to the document 6 shown in
Next, image formation by the microlens 14 will be described below by using
As shown in
In the first embodiment, an imaging unit 110 as a unit optical system is formed by one microlens 14, one opening 32, one opening 31, one opening 33 and one light receiving pixel unit 70. Here, as mentioned earlier, a plurality of light receiving pixel units 70 are formed on one sensor chip 7. Therefore, the positional accuracy among the plurality of light receiving pixel units 70 can be increased. Thus, variation in the position of the optical axis 40 of the microlens 14 is small between imaging units 110 adjoining in the X-axis direction.
Further, in the example shown in
In the first embodiment, the microlens 14 is object side telecentric. By this, the depth of field can be increased. In order to realize the object side telecentricity, the opening 33 as an aperture surface for the microlens 14 is arranged at the position of a rear-side focal point of the microlens 14.
Next, the image formation by the microlens 14 will be described below by using specific numerical values. In
Furthermore, letting f represent the focal distance of the microlens 14 and R represent the curvature radius of the microlens 14, f=1.78 mm and R=0.95 mm. In order to realize the object side telecentricity, the focal distance f needs to satisfy the following expression (1):
f = (t1 + t2)/n (1)
In an example in the first embodiment, t1=2400 μm, t2=300 μm and n=1.52, and thus the value of the focal distance f obtained by substituting these values into the expression (1) is 1.78 mm. Therefore, the microlens 14 is object side telecentric. Since the depth of field is increased by this feature, the distance between the Z-axis direction position of the document 6 as an image formation position on the object side and the Z-axis direction position of the microlens 14 can be increased. In other words, the plurality of microlenses 14, the first light blocking member 11 and the plurality of light receiving pixel units 70 are arranged so that reflected light reflected by the document 6 and passing through a microlens 14 and the opening 31 corresponding to the microlens 14 enters the plurality of light receiving pixels 80 included in the light receiving pixel unit 70 corresponding to the opening 31.
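As an illustrative numerical check of expression (1) (a sketch, not part of the disclosure; the variable names below are chosen for convenience), the focal distance can be computed from the combined glass thickness and the refractive index:

```python
# Illustrative check of expression (1): f = (t1 + t2) / n.
# In the example, the two glass member thicknesses sum to 2700 um,
# and n is the refractive index shared by glass members 51 and 52.
t1_plus_t2 = 300e-6 + 2400e-6   # combined thickness of the glass members [m]
n = 1.52                        # refractive index of the glass members

f = t1_plus_t2 / n              # focal distance for object side telecentricity
print(f"f = {f * 1e3:.2f} mm")  # f = 1.78 mm
```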
In the first embodiment, the Z-axis direction position of the document 6 is separated from the Z-axis direction position of the microlens 14 toward the +Z-axis side by approximately 8 mm (see
Here, the depth of field is determined by the numerical aperture of the microlens 14 on the object side. Further, the numerical aperture on the object side is determined by the opening width of the opening 31 and the spacing t0. Namely, a desired depth of field can be obtained by changing the opening width of the opening 31. While the definition of the depth of field changes depending on a permissible range of contrast of the image, the depth of field is approximately 8 mm in the first embodiment, and thus a sufficiently great depth of field can be obtained for the image reading device 100 employed for a copy machine. While the opening 33 is the aperture surface for the microlens 14 in the first embodiment, the opening 31 may also be the aperture surface. In this case, the object side telecentricity can be realized if the opening 31 is arranged at the position of the rear-side focal point of the microlens 14.
In order to prevent loss of image information between imaging units 110 adjoining in the X-axis direction, the number N of light receiving pixels 80 arrayed in the X-axis direction and included in one light receiving pixel unit 70 needs to be set greater than or equal to a predetermined number. In the plurality of light receiving pixel units 70 arrayed in the 2-line zigzag pattern as shown in
Here, since the plurality of microlenses 14 are arranged at positions corresponding respectively to the plurality of light receiving pixel units 70, the interval between the central positions of two microlenses 14 adjoining in the X-axis direction (namely, the interval of the optical axes 40) is also P/2. When the following expression (2) is satisfied, the pitch P0 is equal to an X-axis direction width of the microlens 14 as a visual field range of one microlens 14. Thus, the loss of image information can be prevented between the two microlenses 14. Further, when the expression (2) is satisfied, overlap of the visual fields of adjoining microlenses 14 can also be prevented.
P0 = P/2 = N·r (2).
In an example in the first embodiment, P=320 μm, N=4 and r=40 μm, and thus the above expression (2) is satisfied.
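The check of expression (2) can be written as a short sketch (values in micrometers, taken from the first embodiment; illustrative only):

```python
# Illustrative check of expression (2): P0 = P/2 = N * r (micrometers).
P = 320   # pitch of the light receiving pixel units
N = 4     # light receiving pixels arrayed in the X-axis direction per unit
r = 40    # visual field width of one light receiving pixel on the object side

P0 = P // 2
assert P0 == N * r   # visual fields tile with neither loss nor overlap
print(P0)            # 160
```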
An X-axis direction distance between the microlens 141 and the microlens 142 is 160 μm. Therefore, in a space on the object side, the four principal rays L21 to L24 are arranged at 40 μm intervals in the X-axis direction. Further, the four principal rays L31 to L34 are also arranged at 40 μm intervals in the X-axis direction. Namely, the spatial resolution in the X-axis direction on the object side is 40 μm, and the spatial resolution does not change between imaging units 110 adjoining in the X-axis direction. As above, in the first embodiment, all the principal rays L21 to L24 and L31 to L34 are parallel to the Z-axis direction. Thus, the reduction ratio does not change irrespective of the position of the document 6 in the Z-axis direction. Accordingly, between imaging units 110 belonging to different lines, the loss of image information can be prevented and the overlap of the visual fields can also be prevented.
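The tiling of principal rays described above can be sketched as follows (positions in micrometers; the list construction is illustrative, not part of the disclosure):

```python
# Illustrative sketch: object-side X positions of the principal rays of two
# imaging units adjoining in the X-axis direction (micrometers).
r = 40               # interval of principal rays on the object side
N = 4                # principal rays (light receiving pixels) per unit
lens_interval = 160  # X-axis interval between adjacent optical axes (P/2)

rays_unit1 = [i * r for i in range(N)]                  # [0, 40, 80, 120]
rays_unit2 = [lens_interval + i * r for i in range(N)]  # [160, 200, 240, 280]

all_rays = rays_unit1 + rays_unit2
gaps = [b - a for a, b in zip(all_rays, all_rays[1:])]
print(gaps)  # every gap is 40: no loss and no overlap between the units
```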
Next, conditions for the image reading device 100 for acquiring an image not affected by stray light will be described below by using
In the first embodiment, the following two conditions are considered:
Condition 1: Among rays passing through an opening 31 and an opening 33 having optical axes different from each other, there exists no ray that enters a light receiving pixel unit 70.
Condition 2: A ray that has passed through an opening 31 and an opening 33 having the same optical axis does not arrive at a light receiving pixel unit 70 other than the light receiving pixel unit 70 on the same optical axis.
For the condition 1, a sufficient condition is that the smallest incidence angle θ1 of a ray at an opening 31, among the incidence angles of the rays passing through an opening 31 and an opening 33 having optical axes different from each other, satisfies the following expression (3):
n2·sin θ1 > 1 (3).
Next, the condition of the expression (3) will be expressed below by using the thickness of the glass member 52 and the opening widths as parameters. Opening half widths as ½ widths of the opening widths of the opening 31 and the opening 33 are respectively represented as X1 and X2. Further, a ½ width of the width of the light receiving pixel unit 70 in the X-axis direction is represented as X0. A distance D1 in the X-axis direction between a −X-axis direction end of an opening 33w and a +X-axis direction end of an opening 33v is obtained by the following expression (4):
D1 = (P/2) − X1 − X2 (4).
As is clear from
tan θ1 = D1/t2 = ((P/2) − X1 − X2)/t2 (5)
From the expression (3) and the expression (5), the following expression (6) is derived in regard to the thickness t2 of the glass member 52 satisfying the aforementioned condition 1:
t2 < √(n2² − 1)·((P/2) − X1 − X2) (6).
Namely, the ray L4 satisfies an internal total reflection condition if the thickness t2 of the glass member 52 is less than the value on the right side of the expression (6). In this case, the aforementioned condition 1 is satisfied.
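The right side of expression (6) can be evaluated with the embodiment's values (micrometers; an illustrative sketch only):

```python
import math

# Illustrative evaluation of the right side of expression (6):
#   sqrt(n2**2 - 1) * ((P/2) - X1 - X2)
n2 = 1.52          # refractive index of glass member 52
half_pitch = 160   # P/2
X1, X2 = 20, 30    # opening half widths of the opening 31 and the opening 33

rhs6 = math.sqrt(n2 ** 2 - 1) * (half_pitch - X1 - X2)
print(round(rhs6))  # 126, matching the value cited in the text
```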
Next, the aforementioned condition 2 will be explained below by using
The ray L6 shown in
Here, letting α1 represent the emission angle of the ray L6 and α2 represent the incidence angle of the ray L6, the incidence angle α2 is obtained by using the following expression (7):
tan α2 = (X1 + X2)/t2 (7).
Further, according to Snell's law, the relationship between the emission angle α1 and the incidence angle α2 is represented by the following expression (8):
n2·sin α2 = sin α1 (8).
Furthermore, the distance D2 from the optical axis 40b to the point Q0 is obtained by using the following expression (9):
D2 = X1 + t0·tan α1 (9).
Here, the condition that the point Q0 is situated on the light receiving pixel unit 70b's side relative to the end of the light receiving pixel unit 70a in the +X-axis direction is represented by the following expression (10):
P/2 − X0 > X1 + t0·tan α1 (10).
From the expressions (7) to (10), the following expression (11) is derived in regard to the thickness t2 of the glass member 52 satisfying the aforementioned condition 2:
t2 > (X1 + X2)·√(n2²·t0²/((P/2) − X0 − X1)² + n2² − 1) (11).
Namely, the aforementioned condition 2 is satisfied when the thickness t2 of the glass member 52 is greater than the value on the right side of the expression (11).
In an example in the first embodiment, X1=20 μm, X2=30 μm, X0=20 μm, t0=500 μm, P/2=160 μm, and n2=1.52. With these values substituted into the right sides of the expression (6) and the expression (11), the right sides of the expressions respectively take on values of 126 μm and 322 μm. Thus, t2=300 μm satisfies both of the expression (6) and the expression (11).
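The condition-2 bound can likewise be evaluated numerically. The closed form below is a reconstruction obtained by eliminating the angles α1 and α2 from expressions (7) to (10) (an illustrative sketch; the grouping of terms is an assumption, though it reproduces the 322 μm value cited above):

```python
import math

# Illustrative evaluation of the condition-2 bound on t2 (micrometers):
#   t2 > (X1 + X2) * sqrt(n2**2 * t0**2 / ((P/2) - X0 - X1)**2 + n2**2 - 1)
n2 = 1.52
X0, X1, X2 = 20, 20, 30    # half widths of unit 70, opening 31, opening 33
t0, half_pitch = 500, 160  # spacing t0 and P/2

tan_a1_max = (half_pitch - X0 - X1) / t0   # upper bound on tan(alpha1)
bound = (X1 + X2) * math.sqrt(n2 ** 2 / tan_a1_max ** 2 + n2 ** 2 - 1)
print(round(bound))  # 322, matching the value cited in the text
```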
Next, a relationship between the array of the plurality of light receiving pixel units 70 and the aforementioned conditions 1 and 2 will be explained below by using
In contrast, in the image reading device in which the plurality of light receiving pixel units are arrayed in one line, it is difficult to obtain the thickness t2 satisfying both of the aforementioned expressions (6) and (11) while maintaining the opening half widths of the openings at great values. Even in the case where the plurality of light receiving pixel units are arrayed in one line, there exists the thickness t2 as a parameter satisfying both of the expression (6) and the expression (11). Therefore, the above explanations (e.g., the explanations regarding the aforementioned conditions 1 and 2), excluding the explanations regarding the configuration in which the plurality of light receiving pixel units 70 are arrayed in two lines, are applied also to the case where the plurality of light receiving pixel units are arrayed in one line.
Next, a description will be given of conditions for a ray after passing through an opening 31 and an opening 33 arranged at positions overlapping with a light receiving pixel unit 70 belonging to one line included in the two lines of light receiving pixel units 70 for not entering a light receiving pixel unit 70 belonging to the other line in the image reading device 100.
Further, in
The following description will be given of the conditions for a ray after passing through the opening 312 and the opening 332 for not entering the light receiving pixel unit 71. If the inverse ray L8 arrives at the first light blocking part 41 or a third light blocking part 43, the ray after passing through the opening 312 and the opening 332 does not enter the light receiving pixel unit 71. Referring to
If the distance D3 is greater than the length X20, the inverse ray L8 arrives at the first light blocking part 41. Accordingly, the ray after passing through the opening 312 and the opening 332 does not enter the light receiving pixel unit 71. Even supposing that the distance D3 is less than the length X20 and the inverse ray L8 passes through the opening 312, if the interval q shown in
Next, a different configuration of the second light blocking member 12 will be described below by using
The inverse rays 61b, 62b, 63b and 66b are inverse rays heading in the +Z-axis direction from an object surface defined as a light receiving surface of the light receiving pixel unit 70b. The inverse ray 61b is an inverse ray heading in the +Z-axis direction from a point on the object surface where an object height h=0. The inverse ray 62b is an inverse ray heading in the +Z-axis direction from a point on the object surface where the object height h=X0/2. The inverse ray 63b is an inverse ray heading in the +Z-axis direction from a point on the object surface where the object height h=X0. The inverse ray 66b, which is an inverse ray heading in the +Z-axis direction from the point where the object height h=X0 similarly to the inverse ray 63b, is blocked by the second light blocking member 12.
The opening width (diameter Φ in
Next, a relationship between a scan width (hereinafter referred to also as a "scan length") of the image reading device 100 and the number of sensor chips 7 will be described below. When manufacturing an image reading device 100 whose scan length is 200 mm, a configuration in which the imaging element unit is provided with one sensor chip that is 200 mm long in the X-axis direction can be considered to be unrealistic. Therefore, the imaging element unit 10 implements the image reading device 100 whose scan length is 200 mm by including a plurality of sensor chips 7 arrayed in the X-axis direction as shown in
The aforementioned pitch P of the light receiving pixel units 70 is represented by the following expression (12) by using the number N of light receiving pixels 80 arrayed in the X-axis direction in one light receiving pixel unit 70 and the resolution r:
P = 2·N·r (12).
Since N=4 and r=40 μm in an example of the first embodiment, substituting these values into the expression (12) results in P=320 μm.
Further, in the first embodiment, one sensor chip 7 includes 64 light receiving pixel units 70. Thus, letting M represent the number of light receiving pixels 80 arrayed in the X-axis direction in one sensor chip 7, the number M is 64×4=256.
Assuming that an imaging range in the X-axis direction that can be captured by one sensor chip 7 is A, the imaging range A is represented by the following expression (13) by using the aforementioned number M and the resolution r:
A = M·r (13).
By substituting M=256 and r=40 μm into the expression (13), A=10.24 mm is derived. Thus, in order to implement the image reading device 100 whose scan length is 200 mm, it is sufficient if the imaging element unit 10 includes 20 sensor chips 7.
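Expressions (12) and (13) and the resulting chip count can be checked together (an illustrative sketch using the embodiment's values; not part of the disclosure):

```python
import math

# Illustrative check of expressions (12) and (13) and the chip count
# for a 200 mm scan length (lengths in millimeters).
N = 4                 # light receiving pixels per unit (X-axis direction)
r = 0.04              # resolution: 40 um
units_per_chip = 64

P = 2 * N * r                 # expression (12): pitch of the units
M = units_per_chip * N        # light receiving pixels per chip, X direction
A = M * r                     # expression (13): imaging range of one chip
chips = math.ceil(200 / A)    # sensor chips needed for a 200 mm scan length

print(P, A, chips)  # 0.32 10.24 20
```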
Here, for arraying a plurality of sensor chips 7 on the sensor substrate 8, it is necessary to prevent a defect of a pixel in a boundary region of adjoining sensor chips 7. In order to prevent the defect, it is necessary, for example, to set the interval between the light receiving pixel unit 70z situated at the +X-axis direction end of the sensor chip 7a and the light receiving pixel unit 70a situated at the −X-axis direction end of the sensor chip 7b shown in
As shown in
Xg = P/2 − 2·P2 (14).
Since P/2=160 μm and P2=20 μm in the first embodiment, substituting these values into the expression (14) results in Xg=120 μm.
One sensor chip 7 is formed by being cut out from a silicon wafer by a dicing apparatus. Therefore, in consideration of a cutting margin, a cutting error or the like, a margin is necessary when cutting the silicon wafer. In the first embodiment, the distance between the end 70e of the light receiving pixel unit 70z in the +X-axis direction and the end 7e of the sensor chip 7a in the +X-axis direction and the distance between the end 70f of the light receiving pixel unit 70a in the −X-axis direction and an end of the sensor chip 7a in the −X-axis direction are set at values smaller than 60 μm as ½ of the aforementioned spacing Xg. Accordingly, the defect of a pixel can be prevented in the boundary region of adjoining sensor chips 7. Since the image transfer magnification ratio of the microlens 14 is ¼ and is smaller than 1, the margin necessary when cutting the silicon wafer can be provided.
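The spacing Xg of expression (14) and the per-end cutting margin discussed above can be summarized as a sketch (micrometers; P2 is the quantity used in expression (14), whose definition precedes this excerpt, with the 20 μm value taken from the text):

```python
# Illustrative check of expression (14) and the dicing margin (micrometers).
half_pitch = 160   # P/2
P2 = 20            # value used in expression (14) in the first embodiment

Xg = half_pitch - 2 * P2   # expression (14): spacing between end units
margin = Xg // 2           # margin available at each chip end when dicing
print(Xg, margin)          # 120 60
```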
As mentioned earlier, in the first embodiment, a plurality of openings are formed in the same glass member and a plurality of microlenses 14 are also formed on the same glass member 51. With this feature, the positional accuracy among a plurality of openings or a plurality of microlenses 14 situated on the same plane can be made excellent. In the first embodiment, the first light blocking member 11 including the plurality of openings 31, the second light blocking member 12 including the plurality of openings 32, and the microlens array 60 including the plurality of microlenses 14 are formed on the glass member 51. Further, the third light blocking member 13 including the plurality of openings 33 is formed on the glass member 52.
Here, it is conceivable to increase the accuracy of aligning the central positions of the microlenses 14 with the central positions of the openings 31, 32 and 33, and of aligning the central positions of the openings 31, 32 and 33 that differ in Z-axis direction position, by forming alignment marks on the glass members 51 and 52. However, the alignment of members differing in Z-axis direction position tends to have a greater error than the alignment of members situated on the same plane. Further, when a process of sticking the glass member 51 and the glass member 52 together by using alignment marks is necessary, it is difficult to reduce the error to zero.
Influence of displacement (misalignment) on image processing when the displacement occurs in the process of sticking the glass member 51 and the glass member 52 together will be described below by using
In
However, in the first embodiment, the optical axes 45 adjoining in the X-axis direction are parallel to each other even though the glass members 51 and 52 are deviated in the X-axis direction with respect to the imaging element unit 10. With this feature, the overlap or separation of the visual fields of the microlenses 14 between adjoining imaging units 110 (see
Suppose that the plurality of microlenses 14 (or the plurality of openings) are not formed integrally on the same plane. In that case, variation occurs in the central positions of the microlens 14 and the openings 31, 32 and 33 in one imaging unit 110, and the adjoining optical axes 45 do not become parallel to each other as in the first embodiment. Therefore, the overlap or separation of the visual fields of the microlenses 14 occurs between adjoining imaging units 110, and thus the overlap or loss of image information occurs. Accordingly, in the image reading device 100 including a plurality of imaging units 110, the plurality of microlenses 14 formed integrally on the same plane and the plurality of openings formed integrally on the same plane are a configuration necessary for reading out an image having excellent image quality.
Next, a method for mitigating variation in the mounting of the sensor chips 7 will be described below in contrast with a comparative example by using
In
In the example shown in
In the first embodiment, as shown in
Next, a method of restoring the image of the document 6 by image processing executed by the image processing device 9 will be described below. The image processing device 9 converts an analog image signal outputted from the sensor chips 7 into digital image data and executes image processing described below. In the first embodiment, the plurality of light receiving pixel units 70 are arrayed in a zigzag pattern as shown in
In
Here, a method for the image processing device 9 to correct the overlap or deviation of visual fields 91 caused by the variation in the mounting of the sensor chips 7 will be described below. As described earlier, in each of the light receiving pixel units 70z and 70a respectively arranged at positions closest to the X-axis direction ends of the sensor chip 7, one light receiving pixel 80 is additionally arranged in the X-axis direction. Therefore, when the mounting of a sensor chip 7 varies in the X-axis direction within the range of the one light receiving pixel 80, light receiving pixel loss on the document 6 does not occur.
When the sensor chip 7a has deviated in the −X-axis direction and the sensor chip 7b has deviated in the +X-axis direction in
Furthermore, when the mounting of a sensor chip 7 varies in the Y-axis direction, the interval between reading positions on the document 6 between sensor chips 7 adjoining in the X-axis direction deviates from the distance q in the Y-axis direction. However, this deviation is corrected based on a shift amount as the distance for which the image processing device 9 shifts the image information in the Y-axis direction. Moreover, also for a subpixel deviation in the Y-axis direction, it is desirable to execute the image complementing process at the subpixel position.
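The text does not specify how the subpixel complementing process is implemented. A minimal sketch of one common approach, linear interpolation along the sub scanning direction, is shown below; the function name and the choice of interpolation are assumptions for illustration, not the embodiment's actual algorithm:

```python
def shift_subpixel(samples, shift):
    """Shift a 1-D line of pixel values by a possibly fractional number of
    samples using linear interpolation; edge values are repeated outside
    the original range."""
    n = len(samples)
    out = []
    for i in range(n):
        pos = i - shift                  # source position for output index i
        lo = int(pos // 1)               # floor of the source position
        frac = pos - lo                  # fractional part in [0, 1)
        lo_c = min(max(lo, 0), n - 1)    # clamp to valid indices
        hi_c = min(max(lo + 1, 0), n - 1)
        out.append((1.0 - frac) * samples[lo_c] + frac * samples[hi_c])
    return out

# Shifting by half a pixel blends each pair of neighboring samples.
print(shift_subpixel([0.0, 10.0, 20.0, 30.0], 0.5))
```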
According to the first embodiment described above, the image reading device 100 includes a plurality of light receiving pixel units 70 arrayed regularly. Further, the light receiving pixel unit 70 includes a plurality of light receiving pixels 80 arrayed in the main scanning direction. With this configuration, the resolution can be improved in the image reading device 100. Further, the occurrence of variation in the direction of the optical axis between adjoining imaging units 110 can be prevented.
According to the first embodiment, the microlens 14 is object side telecentric. Further, the plurality of microlenses 14, the first light blocking member 11 and the plurality of light receiving pixel units 70 are arranged so that reflected light reflected by the document 6 and passing through a microlens 14 and the opening 31 corresponding to the microlens 14 enters the plurality of light receiving pixels 80 included in the light receiving pixel unit 70 corresponding to the opening 31. Therefore, the visual field of the microlens 14 does not overlap with the visual field of an adjoining microlens 14, and no gap occurs. Since the depth of field is increased by this feature, the distance between the Z-axis direction position of the document 6 as the image formation position on the object side and the Z-axis direction position of the microlens 14 can be increased.
According to the first embodiment, the plurality of light receiving pixel units 70 are arrayed in a zigzag pattern and the plurality of openings 31 are arrayed in a zigzag pattern so as to correspond respectively to the plurality of light receiving pixel units 70. Further, the plurality of microlenses 14 are arrayed in a zigzag pattern so as to correspond respectively to the plurality of openings 31. The pitch P0 between the light receiving pixel unit 71 and the light receiving pixel unit 72 belonging to different lines among the plurality of light receiving pixel units 70 satisfies the aforementioned expression (2). Accordingly, irrespective of the distance from the light receiving pixel unit 70 to the document 6, the visual field of the microlens 14 does not overlap with the visual field of an adjoining microlens 14, and no loss occurs. Thus, the depth of field can be increased in the image reading device 100.
According to the first embodiment, the image reading device 100 further includes the glass member 51 having the surface 51a, on which the first light blocking member 11 is provided, including the plurality of openings 31 and the surface 51c, on which the microlens array 60 is provided (i.e., the plurality of microlenses 14). With this configuration, the first light blocking member 11 and the microlens array 60 can be formed integrally on the same member. Further, the positional accuracy of the plurality of openings 31 and the plurality of microlenses 14 can be increased. Thus, the occurrence of the variation in the direction of the optical axis between adjoining imaging units 110 can be prevented. In other words, the occurrence of the visual field overlap or loss can be prevented between adjoining microlenses 14.
According to the first embodiment, the plurality of light receiving pixel units 70 included in one sensor chip 7 include the light receiving pixel units 70z and 70a respectively arranged at positions closest to the +X-axis direction end 7e and the −X-axis direction end 7f of the sensor chip 7 and the light receiving pixel units 70x other than the light receiving pixel units 70z and 70a. The number of light receiving pixels 80 arrayed in the X-axis direction in the light receiving pixel unit 70z, 70a is greater than the number of light receiving pixels 80 in the light receiving pixel unit 70x. With this feature, the occurrence of the visual field loss can be prevented even when the variation in the mounting of the sensor chips 7 occurs.
According to the first embodiment, since the thickness t2 of the glass member 52 satisfies the aforementioned expression (6), reflected light after passing through an opening 32 and an opening 31 situated on the same optical axis as a light receiving pixel unit 70 enters the light receiving pixel unit 70, and thus an image not affected by stray light can be acquired.
According to the first embodiment, since the thickness t2 of the glass member 52 satisfies the aforementioned expression (11), the depth of field can be increased.
According to the first embodiment, the image reading device 100 includes the second light blocking member 12 provided on the surface of the glass member 51 on the +Z-axis side and including a plurality of openings 32 corresponding respectively to the plurality of microlenses 14. The opening width Φ of each opening 32 included in the plurality of openings 32 is smaller than the external diameter of the microlens 14. With this feature, reflected light not passing through the microlenses 14, included in the reflected light reflected by the document 6, is blocked by the second light blocking member 12. Thus, the image reading device 100 is capable of reading out an image having excellent image quality.
According to the first embodiment, the plurality of light receiving pixel units 70 include a plurality of light receiving pixel units 71 in the first line 70m and a plurality of light receiving pixel units 72 in the second line 70n arrayed at different positions in the auxiliary scanning direction. Each light receiving pixel unit 72 included in the plurality of light receiving pixel units 72 is arranged between two light receiving pixel units 71 adjoining in the main scanning direction. Namely, the plurality of light receiving pixel units 70 are arrayed in a zigzag pattern. Accordingly, the pitch P can be increased compared to a configuration in which a plurality of light receiving pixel units are arrayed in one line, and thus the image reading device 100 is capable of acquiring an image not affected by stray light. Further, the luminance of the image can be increased.
As shown in
According to the second embodiment described above, the image reading device 200 does not include the second light blocking member 12 (see
Further, according to the second embodiment, the image reading device 200 does not include the glass member 52 (see
As shown in
According to the modification of the second embodiment described above, the image reading device 200a includes the second light blocking member 12 arranged between the surface 251d of the glass member 251a on the +Z-axis side and the microlens array 60. With this configuration, when scattered and reflected light from the document 6 arrives at a position outside the microlens 14 in the X-axis direction, the scattered and reflected light is blocked by the second light blocking part 42 of the second light blocking member 12 and thus does not arrive at the light receiving pixel unit 70. Accordingly, the image reading device 200a is capable of reading out an image not affected by stray light.
The imaging element unit 310 includes a plurality of sensor chips 307, the sensor substrate 8, and the image processing device 9. Each sensor chip 307 includes a plurality of light receiving pixel units 370. In the third embodiment, the plurality of light receiving pixel units 370 are arrayed in the X-axis direction in one line. Accordingly, the image processing in the image processing device 9 can be simplified in comparison with the configuration in which the plurality of light receiving pixel units 70 (see
Although not illustrated, the plurality of microlenses 14, the plurality of openings 32, the plurality of openings 31 and the plurality of openings 33 are arranged at positions corresponding respectively to the plurality of light receiving pixel units 370. Namely, the plurality of microlenses 14, the plurality of openings 32, the plurality of openings 31 and the plurality of openings 33 are respectively arrayed in the X-axis direction in one line. An imaging unit as a unit optical system is formed by one light receiving pixel unit 370 and one microlens 14, one opening 32, one opening 31 and one opening 33 situated on the same optical axis as the light receiving pixel unit 370. In the case where the plurality of light receiving pixel units 370 are arrayed in one line as above, there is a danger of occurrence of stray light since the interval between the imaging units adjoining in the X-axis direction becomes narrower. However, by reducing the opening width of the opening 31, the image reading device according to the third embodiment is capable of acquiring an image not affected by stray light.
According to the third embodiment described above, the plurality of light receiving pixel units 370 are arrayed in one line. Accordingly, in the third embodiment, the image combination process becomes unnecessary and thus the image processing in the image processing device 9 can be simplified in comparison with the configuration in which the plurality of light receiving pixel units 70 (see
As shown in
To the image processing device 9, an image signal outputted from a plurality of light receiving pixels 480, among the aforementioned plurality of light receiving pixels 480, overlapping with the regions of the light receiving pixel units 70 shown in
According to the fourth embodiment described above, the sensor chip 407 includes a plurality of light receiving pixels 480 arrayed in a two-dimensional matrix. With this configuration, in contrast with the sensor chip 7 shown in
As shown in
The first light blocking member 511 includes a plurality of openings 531 as the plurality of first openings corresponding respectively to the plurality of light receiving pixel units 70. The thickness of the first light blocking member 511 is greater than the thickness of the first light blocking member 11 shown in
The second light blocking member 512 includes a plurality of openings 532 as the plurality of second openings corresponding respectively to the plurality of microlenses 14. The thickness of the second light blocking member 512 is greater than the thickness of the second light blocking member 12 shown in
Each of the first light blocking member 511 and the second light blocking member 512 is formed from a metal plate, for example. The first light blocking member 511 and the second light blocking member 512 are formed by, for example, electroforming with which high processing accuracy can be obtained.
The microlens array 60 is formed on a surface of the second light blocking member 512 on the +Z-axis side. As mentioned above, in the fifth embodiment, the thickness of the second light blocking member 512 is greater than the thickness of the second light blocking member 12 in the first embodiment. Therefore, the plurality of openings 32 and the microlens array 60 can be formed even if the second light blocking member 512 is not supported by a glass member (e.g., the glass member 51 shown in
The spacer member 515 connects the first light blocking member 511 and the second light blocking member 512 together. By this, spacing between the first light blocking member 511 and the second light blocking member 512 in the Z-axis direction is set at a predetermined size. As above, the image reading device 500 in the fifth embodiment does not include the glass member 51 shown in
According to the fifth embodiment described above, each of the first light blocking member 511 and the second light blocking member 512 is formed from a metal plate. Therefore, each of the first light blocking member 511 and the second light blocking member 512 has a thickness in the Z-axis direction greater than or equal to a predetermined size. Thus, there exists no ray passing through an opening 532 and an opening 531 having optical axes different from each other. Accordingly, the image reading device 500 is capable of acquiring an image not affected by stray light.
Further, according to the fifth embodiment, the image reading device 500 includes the spacer members 515 connecting the first light blocking member 511 and the second light blocking member 512 together and the spacing between the first light blocking member 511 and the second light blocking member 512 is set at a predetermined size. With this configuration, the occurrence of the visual field overlap or loss can be prevented between adjoining imaging units, and thus the depth of field can be increased.
As shown in
The glass member 551 is arranged between the first light blocking member 511 and the second light blocking member 512. In this case, each of the first light blocking member 511 and the second light blocking member 512 can be manufactured by electroforming with which high processing accuracy can be obtained. The glass member 551 is stuck to the first light blocking member 511 and the second light blocking member 512.
According to the first modification of the fifth embodiment described above, the image reading device 500a includes the glass member 551 arranged between the first light blocking member 511 and the second light blocking member 512 and the spacing between the first light blocking member 511 and the second light blocking member 512 is set at a predetermined size. With this configuration, the occurrence of the visual field overlap or loss can be prevented between adjoining imaging units, and thus the depth of field can be increased.
As shown in
In the first light blocking member 511, a part excluding the plurality of openings 531 is a first light blocking part 541 that blocks the reflected light. In the second light blocking member 512, a part excluding the plurality of openings 532 is a second light blocking part 542 that blocks the reflected light.
Each light blocking wall 516 included in the plurality of light blocking walls 516 extends along the optical axis 40 of the microlens 14. The light blocking walls 516 connect the first light blocking part 541 and the second light blocking part 542 together. With this configuration, there can exist no ray passing through an opening 532 and an opening 531 having optical axes different from each other. Accordingly, the image reading device 500b is capable of acquiring an image not affected by stray light.
According to the second modification of the fifth embodiment described above, the image reading device 500b includes the light blocking walls 516 connecting the first light blocking part 541 and the second light blocking part 542 together. With this configuration, there can exist no ray passing through an opening 532 and an opening 531 having optical axes different from each other. Accordingly, the image reading device 500b is capable of acquiring an image not affected by stray light.
As shown in
The sensor auxiliary substrate 605 is bonded to surfaces 7h of the sensor chips 7 on the side opposite to surfaces 7g on which the plurality of light receiving pixel units 70 are provided. The sensor auxiliary substrate 605 is formed from glass material similarly to the glass members 51 and 52. The linear expansion coefficient of the sensor auxiliary substrate 605 is the same as the linear expansion coefficient of the glass members 51 and 52.
An electric circuit (not shown) is printed on the sensor auxiliary substrate 605. The electric circuit is electrically connected to the sensor chips 7 by means of wire bonding, for example. Further, the sensor auxiliary substrate 605 is electrically connected to an electric circuit provided on the sensor substrate 8 by means of wire bonding, for example. The image processing device 9 converts the analog image signal outputted from the sensor chips 7 into the digital image data.
In general, the linear expansion coefficient of glass epoxy resin as the material of the sensor substrate 8 is 3×10−5/° C., for example. On the other hand, the linear expansion coefficient of the glass members 51 and 52 is 7.0×10−6/° C., for example. Thus, there is a great difference between the linear expansion coefficient of the sensor substrate 8 and the linear expansion coefficient of the glass members 51 and 52. Therefore, in the image reading device 100 according to the first embodiment, displacement of the light receiving pixel unit 70 with respect to the optical axis 40 of the microlens 14 can occur upon the occurrence of a temperature change. Since this displacement causes a decrease in the light reception amount of each light receiving pixel unit 70, there is a danger of occurrence of a problem such as darkening of the image acquired by the image reading device 100 or impossibility of acquiring the image.
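The magnitude of this thermal mismatch can be illustrated with a rough calculation (the two coefficients are the ones quoted above; the 100 mm span and 20 °C temperature change are hypothetical values chosen only to show the order of magnitude):

```python
# Linear expansion coefficients quoted in the text (per degree Celsius).
alpha_pcb = 3.0e-5      # glass epoxy resin of the sensor substrate 8
alpha_glass = 7.0e-6    # glass members 51 and 52

# Hypothetical conditions, assumed for illustration only.
span_mm = 100.0         # assumed length over which the parts are constrained
delta_T = 20.0          # assumed temperature change in degrees Celsius

# Differential expansion between the substrate and the glass members, in um.
mismatch_um = (alpha_pcb - alpha_glass) * span_mm * delta_T * 1000.0
print(mismatch_um)      # tens of micrometers, comparable to a pixel pitch
```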
In the sixth embodiment, the sensor auxiliary substrate 605 as a glass member is bonded to the surfaces 7h of the sensor chips 7 on the side opposite to the surfaces 7g on which the plurality of light receiving pixel units 70 are provided. With this configuration, even when a temperature change occurs, the light receiving pixel unit 70 is situated on the optical axis of the microlens 14, and thus the decrease in the light reception amount of each light receiving pixel unit 70 can be prevented. Accordingly, the image reading device 600 is capable of acquiring an image having excellent image quality irrespective of the temperature change.
According to the sixth embodiment described above, the image reading device 600 further includes the sensor auxiliary substrate 605 bonded to the surfaces 7h of the sensor chips 7 on the side opposite to the surfaces 7g on which the plurality of light receiving pixel units 70 are provided. With this configuration, even when a temperature change occurs, the light receiving pixel unit 70 is situated on the optical axis of the microlens 14, and thus the decrease in the light reception amount of each light receiving pixel unit 70 can be prevented. Accordingly, the image reading device 600 is capable of acquiring an image having excellent image quality irrespective of the temperature change.
6: document, 7, 307, 407: sensor chip, 7e, 7f: end, 7g, 7h, 51a, 51b, 52a, 551a, 551b: surface, 11, 511: first light blocking member, 12, 512: second light blocking member, 13: third light blocking member, 14: microlens, 31, 32, 33, 531, 532: opening, 70, 70a, 70x, 70z, 71, 72: light receiving pixel unit, 70m: first line, 70n: second line, 80, 80R, 80G, 80B: light receiving pixel, 51, 251, 251a, 551: first glass member, 52: second glass member, 100, 200, 200a, 500, 500a, 500b, 600: image reading device, 515: spacer member, 516: light blocking wall, 605: sensor auxiliary substrate, N: number of pieces, P0: distance, r: resolution, Φ: diameter.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2021/007003 | 2/25/2021 | WO |