Embodiments of the present invention relate to optical image stabilization.
An optical image stabilizer (OIS) is used in a still camera or video camera to stabilize a recorded image. It varies the optical path to the image sensor, stabilizing the projected image on the image sensor before it is captured and recorded.
There are currently two solutions. One solution uses fixed or replaceable lens units that have in-built optical image stabilization; the other solution moves the image sensor.
Lens units with in-built stabilization are complex and occupy a large volume. Moving the image sensor to compensate for camera movement can introduce a parallax error.
When a camera tilts towards/away from an object, the object image is compressed where the camera sensor moves away (greater field of view) and is expanded where the sensor moves towards (smaller field of view). The error caused in the image by expansion at one side and contraction at the other side is the parallax error.
The parallax error becomes more noticeable for cameras with larger fields of view such as ‘point and shoot’ cameras which are common in hand portable apparatus and the error becomes less noticeable for cameras with smaller fields of view such as telephoto lens cameras.
The parallax error may be resolved into an error formed by lateral movement and an error formed by a transverse pinch. Where the tilt is about a y-axis the compression may be resolved into a lateral movement in an x-direction and a transverse pinch in a y-direction.
The expansion error may be resolved into an error formed by lateral movement and an error formed by a transverse stretch. Where the tilt is about a y-axis the expansion may be resolved into a lateral movement in an x-direction and a transverse stretch in a y-direction.
A lateral shift of the sensor (e.g. in the x-direction) removes those parts of the errors formed by lateral movement, but does not resolve the pinch and stretch errors at opposite ends of the image.
However, movement of a lens that comprises a central region and first and second outer regions on either side of the central region in the first direction, where the first and second outer regions optically distort more than the central region, introduces a stretch distortion that compensates for the pinch error and a pinch distortion that compensates for the stretch error. This resolves or ameliorates the pinch error and the stretch error at opposite ends of the image.
According to various, but not necessarily all, embodiments of the invention there is provided an apparatus comprising: an image sensor; a lens for focusing an optical image onto the image sensor; a driver configured to move the lens at least in a first direction, wherein the lens comprises a central region and first and second outer regions on either side of the central region in the first direction, wherein the first and second outer regions optically distort more than the central region.
According to various, but not necessarily all, embodiments of the invention there is provided a method comprising: shifting an optical image focused on an image sensor towards a first region of the image sensor and away from a second region of the image sensor by moving a lens; expanding, orthogonally to the shift of the optical image, the optical image focused on the first region of the image sensor using a change in distortion provided by the lens as a consequence of the movement of the lens; and compressing, orthogonally to the shift of the optical image, the optical image focused on the second region of the image sensor using a change in distortion provided by the lens as a consequence of the movement of the lens.
According to various, but not necessarily all, embodiments of the invention there is provided a method comprising shifting an optical image towards a first region of the optical image and away from a second region of the optical image; expanding, at least orthogonally to the shift, the first region of the optical image; and compressing, at least orthogonally to the shift, the second region of optical image.
For a better understanding of various examples of embodiments of the present invention, reference will now be made, by way of example only, to the accompanying drawings.
In this document, where the term ‘lens’ is used, it means a lens (an optical element that focuses light) or a system comprising one or more lenses.
The image sensor 10 has an image plane 14 on which the image 4 is focused by the lens 20. The image sensor 10 may, for example, be a high quality image sensor having, for example, in excess of 6M pixels, 12M pixels or 18M pixels.
The lens 20 may have a wide field of view e.g. an angle of view greater than 30 degrees or greater than 60 degrees across both the horizontal and the vertical.
The lens 20 is mounted for movement substantially parallel to the image plane 14. It may, for example, be moved in the first direction d1, either in a positive sense (+x) or a negative sense (−x). It may, for example, also be moved in a second direction d2.
A lens movement driver 6 is configured to move the lens 20. The driver 6 may, for example, use mechanical linkages to move the lens 20 or may, for example, use electromagnetism to control the position of the lens 20.
The apparatus 2 may also comprise one or more motion sensors 40 such as gyroscopes, accelerometers or other sensors that can detect a change in orientation.
If the motion sensor 40 detects a yaw about the y axis, then the lens driver 6 may move the lens in the first direction d1 either in the +x sense or the −x sense depending upon the direction of yaw about the y-axis.
The optical sensor 10 has a first region 11 associated with the first region 21 of the lens 20, a second region 12 associated with the second region 22 of the lens 20, and a central region 13 associated with the central region 23 of the lens 20. If the yaw about the y axis causes the first region 11 of the sensor 10 to lead the second region 12 of the sensor 10, the lens 20 is moved in the first direction (parallel to the image sensor 10) in a sense from the leading first region 21 towards the lagging second region 22 (the +x direction).
If the yaw about the y axis causes the first region 11 of the sensor 10 to lag the second region 12 of the sensor 10, the lens 20 is moved in the first direction (parallel to the image plane 14) in a sense from the leading second region 22 towards the lagging first region 21 (the −x direction).
Referring again to the drawings, if the motion sensor 40 detects a pitch about the x axis, then the lens driver 6 may move the lens 20 in the second direction d2, either in the +y sense or the −y sense, depending upon the direction of pitch about the x-axis.
If the optical sensor has a third region associated with the third region 24 of the lens 20 and a fourth region associated with the fourth region 25 of the lens 20, then, if the pitch about the x axis causes the third region of the sensor 10 to lead the fourth region of the sensor 10, the lens 20 is moved in the second direction (parallel to the image plane 14) in a sense from the leading third region 24 of the lens 20 towards the lagging fourth region 25 of the lens 20 (the +y direction).
If the pitch about the x axis causes the third region of the sensor 10 to lag the fourth region of the sensor 10, the lens 20 is moved in the second direction (parallel to the image plane 14) in a sense from the leading fourth region 25 of the lens 20 towards the lagging third region 24 of the lens 20 (the −y direction).
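The drive-direction rules of the preceding paragraphs can be summarised in a short sketch. This is a hypothetical illustration, not the actual driver implementation: the function name, the region labels and the sign convention (+x running from region 21 towards region 22, +y from region 24 towards region 25) are assumptions consistent with the text.

```python
# Hypothetical sketch of the drive-direction logic described above.
# Sign convention (assumed): +x runs from the first region 21 towards
# the second region 22; +y runs from the third region 24 towards the
# fourth region 25.

def lens_drive(yaw_leading_region=None, pitch_leading_region=None):
    """Return the (dx, dy) sense in which the driver 6 moves the lens 20.

    yaw_leading_region: 'first' or 'second' -- sensor region leading the yaw.
    pitch_leading_region: 'third' or 'fourth' -- sensor region leading the pitch.
    """
    dx = dy = 0
    if yaw_leading_region == 'first':       # move from leading 21 towards lagging 22
        dx = +1
    elif yaw_leading_region == 'second':    # move from leading 22 towards lagging 21
        dx = -1
    if pitch_leading_region == 'third':     # move from leading 24 towards lagging 25
        dy = +1
    elif pitch_leading_region == 'fourth':  # move from leading 25 towards lagging 24
        dy = -1
    return dx, dy
```

In each case the lens is driven from the leading lens region towards the lagging lens region, so that the image is shifted back towards its original position on the sensor.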
The apparatus 2 may have a housing 30 and the lens 20 may be moved relative to housing 30. The optical sensor 10 may be fixed relative to the housing 30.
The apparatus 2 may be a hand portable electronic apparatus or a mobile personal apparatus, such as, for example a mobile cellular telephone, a personal media recorder/player etc.
When the image sensor 10 tilts away, the image 4 is expanded (greater field of view) so that it extends beyond the edges of a lagging region of the image plane 14. When the image sensor 10 tilts towards, the image 4 is compressed (smaller field of view) so that it lies within a leading region of the image plane 14. The error caused in the image by expansion at the lagging side and contraction at the leading side is a parallax error.
The compression error at the leading edges may be resolved into an error formed by lateral movement and an error formed by a transverse pinch. Where the tilt is about a y-axis the compression may be resolved into a lateral movement in an x-direction and a transverse pinch in the y-direction.
The expansion error at the lagging edges may be resolved into an error formed by lateral movement and an error formed by a transverse stretch. Where the tilt is about a y-axis the expansion may be resolved into a lateral movement in an x-direction and a transverse stretch in a y-direction.
A change in distortion provided by the first outer region 21, as a consequence of the movement in the first direction (towards but parallel to the sensor), expands the optical image 4 focused on the first region 11 of the image sensor 10.
The lens 20 may have negative distortion (image magnification decreases with distance away from the central region 23). The absolute value of the distortion increases (becomes more negative i.e. more compressive) in at least the second outer region 22 with distance away from the central region 23.
The lens 20 is configured to provide an absolute value of distortion D that increases monotonically with absolute distance x from central region 23 of the lens 20.
The absolute value of distortion D is symmetric about the axis x=0. Consequently, the first outer region and the second outer region, have symmetric distortion when measured from a center of the lens 20.
In this example, the absolute value of distortion D is quadratic in the absolute distance x from the central region 23 of the lens 20.
Consequently, the change in distortion provided by the second outer region 22, as a consequence of the movement in the first direction x, is proportional to the movement, as is the change in distortion provided by the first outer region 21. The two changes in distortion have the same absolute value but opposite sense.
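With the quadratic profile D(x) = kx², a shift of the lens by δ changes the distortion experienced at sensor position x by k(x − δ)² − kx² = −2kxδ + kδ². To first order in δ this change is proportional to the movement and has equal magnitude but opposite sign at x = ±W/2. A minimal numerical check follows; the values of k, W and δ are illustrative assumptions, not taken from the text.

```python
# Numerical check that a quadratic distortion profile D(x) = k*x**2 yields
# distortion changes at the two outer regions that are proportional to the
# lens shift (to first order) and of equal magnitude but opposite sense.
# The values of k, W and delta are illustrative assumptions.

k = 0.05        # distortion coefficient (arbitrary units)
W = 4.0         # image width in the x-direction (arbitrary units)
delta = 0.01    # small lens shift in the +x sense

def D(x):
    return k * x**2

def change(x, delta):
    # distortion now experienced at sensor position x after the lens
    # has shifted by delta
    return D(x - delta) - D(x)

c_first = change(-W / 2, delta)   # at the first outer region: expands
c_second = change(+W / 2, delta)  # at the second outer region: compresses

# c_first + c_second = 2*k*delta**2, i.e. the linear terms cancel exactly
```

The residual sum 2kδ² is second order in the shift, which is why, for small stabilization movements, the two changes are equal and opposite to a good approximation.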
The peripheral edge region 70, which comprises the first and second outer regions and the third and fourth outer regions, optically distorts more than the central region 23 it circumscribes.
The peripheral region 70 may, for example, provide barrel distortion. In barrel distortion, the distortion is negative and image magnification decreases with distance from the optical axis 71. The absolute value of the distortion increases (becomes more negative, i.e. more compressive) with distance from the optical axis. The effect is that of an image mapped onto a barrel or sphere.
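The barrel-distortion behaviour described here can be sketched with a minimal radial model, r_distorted = r·(1 + c·r²) with c < 0; the coefficient value is an illustrative assumption.

```python
# Minimal radial model of barrel distortion: with c < 0, the local
# magnification r_distorted / r decreases with distance r from the
# optical axis. The coefficient value is an illustrative assumption.

c = -0.05  # negative coefficient => barrel distortion

def magnification(r):
    # ratio of distorted radius to ideal radius at distance r from the axis
    return 1 + c * r**2
```

Because the magnification falls off monotonically with r, points imaged through the peripheral region 70 are compressed relative to points imaged through the central region 23, as the text describes.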
A change in distortion provided by the peripheral region 70, as a consequence of the movement of the lens in the first direction and/or second direction, compresses the optical image focused on the portion of the image sensor 10 towards which the lens 20 moves (in a plane parallel to the image sensor) and expands the optical image focused on the portion of the image sensor 10 away from which the lens 20 moves (in a plane parallel to the image sensor).
At block 81, the method comprises shifting an optical image focused on an image sensor 10 towards a first region 11 of the image sensor and away from a second region 12 of the image sensor 10 by moving a lens 20.
At block 82, the method comprises expanding, orthogonally to the shift of the optical image 4, the optical image 4 focused on the first region 11 of the image sensor 10 using a change in distortion provided by the lens 20 as a consequence of the movement of the lens 20.
At block 83, the method comprises compressing, orthogonally to the shift of the optical image 4, the optical image 4 focused on the second region 12 of the image sensor 10 using a change in distortion provided by the lens 20 as a consequence of the movement of the lens 20.
The method 80 is performed in response to a yaw of the image sensor 10 in which the first region 11 of the image sensor 10 leads the second region 12 of the image sensor 10.
In response to a yaw of the image sensor in which the second region 12 of the image sensor 10 leads the first region 11 of the image sensor 10, the method 80 may comprise: shifting 81 an optical image focused on the image sensor towards the second region of the image sensor and away from the first region of the image sensor by moving the lens; compressing 82, orthogonally to the shift of the optical image, the optical image focused on the first region of the image sensor using a change in distortion provided by the lens as a consequence of the movement of the lens; and expanding 83, orthogonally to the shift of the optical image, the optical image focused on the second region of the image sensor using a change in distortion provided by the lens as a consequence of the movement of the lens.
In response to a pitch of the image sensor in which a third region of the image sensor leads a fourth region of the image sensor, the method 80 may comprise: shifting 81 an optical image focused on the image sensor towards the third region of the image sensor and away from the fourth region of the image sensor by moving the lens; compressing 82, orthogonally to the shift of the optical image, the optical image focused on the fourth region of the image sensor using a change in distortion provided by the lens as a consequence of the movement of the lens; and expanding 83, orthogonally to the shift of the optical image, the optical image focused on the third region of the image sensor using a change in distortion provided by the lens as a consequence of the movement of the lens.
In response to a pitch of the image sensor in which a third region of the image sensor lags the fourth region of the image sensor, the method 80 comprises: shifting 81 an optical image focused on an image sensor towards the fourth region of the image sensor and away from the third region of the image sensor by moving the lens; compressing 82, orthogonally to the shift of the optical image, the optical image focused on the third region of the image sensor using a change in distortion provided by the lens as a consequence of the movement of the lens; and expanding 83, orthogonally to the shift of the optical image, the optical image focused on the fourth region of the image sensor using a change in distortion provided by the lens as a consequence of the movement of the lens.
A suitable lens 20 may be designed and manufactured, for example, as described below:
Initially, the maximum correction (tilt) angles αx, αy for image stabilization are defined. αx is the maximum yaw angle about the y-axis and αy is the maximum pitch angle about the x-axis. Typically these angles will be in the range 0.3-0.6 degrees.
The error in the x direction is given by:
Δx = f·tan(βx/2 + αx) − W/2
The lens should therefore be moved −Δx to correct this error.
βx is the angular field of view in the x-direction, f is the focal length of the lens and W is the width of the image in the x-direction, so that W/2 = f·tan(βx/2) and Δx vanishes when αx = 0.
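As a worked example, the expression for Δx can be evaluated numerically. The rectilinear-projection relation W/2 = f·tan(βx/2) is assumed; the focal length and angles below are illustrative values, not taken from the text.

```python
import math

# Illustrative evaluation of the lateral error Δx for a tilt αx, using
# Δx = f*tan(βx/2 + αx) - W/2 with W/2 = f*tan(βx/2) (rectilinear
# projection assumed). The numeric values are illustrative assumptions.

f = 4.0                          # focal length, mm
beta_x = math.radians(60.0)      # angular field of view in the x-direction
alpha_x = math.radians(0.5)      # maximum yaw correction angle

W = 2 * f * math.tan(beta_x / 2)                       # image width in x
delta_x = f * math.tan(beta_x / 2 + alpha_x) - W / 2   # lateral error

# the lens should be moved by -delta_x to correct this error
```

For these values Δx is a few hundredths of a millimetre, i.e. a small movement well within the range of a typical lens actuator.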
The error in the y direction at the pinched edge is given by:
e1 = f·(tan(βy/2) − tan(βy/2 − αy))
The error in the y direction at the stretched edge is given by:
e2 = f·(tan(βy/2 + αy) − tan(βy/2))
where βy is the angular field of view in the y-direction and f is the focal length of the lens.
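A worked numerical example of the edge errors follows. The half-angle βy/2 is used at the image edge, matching the βx/2 used for Δx above; the focal length and angles are illustrative assumptions.

```python
import math

# Illustrative evaluation of the pinch and stretch errors e1, e2 at the
# image edges, using the half field of view βy/2 at the edge. The numeric
# values are illustrative assumptions.

f = 4.0                          # focal length, mm
beta_y = math.radians(45.0)      # angular field of view in the y-direction
alpha_y = math.radians(0.5)      # maximum pitch correction angle

e1 = f * (math.tan(beta_y / 2) - math.tan(beta_y / 2 - alpha_y))  # pinched edge
e2 = f * (math.tan(beta_y / 2 + alpha_y) - math.tan(beta_y / 2))  # stretched edge

# tan is convex on (0, pi/2), so the stretch error e2 exceeds the pinch
# error e1 for the same tilt angle
```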
The distortion of the lens is designed so that the change in distortion caused by Δx at x=−W/2 compensates for the error e1 and the change in distortion caused by Δx at x=W/2 compensates for the error e2.
If the distortion is modeled as a quadratic, D = kx², then solving along the x axis:

Dmax = k(W/2)²

and

Dmax − e1 = k(W/2 − Δx)²
However, maximum distortion will occur along the diagonal, so solving along the diagonal of a 4:3 sensor geometry (where the half-diagonal is (5/4)·(W/2)) provides:

Dmax = k(5/4)²(W/2)²

and

Dmax − (5/4)·e1 = k(5/4)²(W/2 − Δx)²

Solving the equations gives k.
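Subtracting the two diagonal equations eliminates Dmax and gives (5/4)·e1 = k·(5/4)²·[(W/2)² − (W/2 − Δx)²], which can be solved directly for k. A sketch follows; the values of W, Δx and e1 are illustrative assumptions carried over from the worked examples above.

```python
# Solving the diagonal pair of equations for k (a sketch; the numeric
# values for W, delta_x and e1 are illustrative assumptions):
#
#   D_max              = k * (5/4)**2 * (W/2)**2
#   D_max - (5/4)*e1   = k * (5/4)**2 * (W/2 - delta_x)**2
#
# Subtracting the second equation from the first eliminates D_max.

W = 4.6          # image width, mm
delta_x = 0.047  # lateral correction from the Δx formula, mm
e1 = 0.041       # pinch error from the e1 formula, mm

k = (5 / 4) * e1 / ((5 / 4) ** 2 * ((W / 2) ** 2 - (W / 2 - delta_x) ** 2))

# both original equations are satisfied by construction
D_max = k * (5 / 4) ** 2 * (W / 2) ** 2
```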
The blocks described above represent steps in the method 80.
Although embodiments of the present invention have been described in the preceding paragraphs with reference to various examples, it should be appreciated that modifications to the examples given can be made without departing from the scope of the invention as claimed.
Features described in the preceding description may be used in combinations other than the combinations explicitly described.
Although functions have been described with reference to certain features, those functions may be performable by other features whether described or not.
Although features have been described with reference to certain embodiments, those features may also be present in other embodiments whether described or not.
Whilst endeavoring in the foregoing specification to draw attention to those features of the invention believed to be of particular importance it should be understood that the Applicant claims protection in respect of any patentable feature or combination of features hereinbefore referred to and/or shown in the drawings whether or not particular emphasis has been placed thereon.
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/IB2010/054757 | 10/20/2010 | WO | 00 | 6/12/2013 |