The present invention relates to optical and mechanical apparatus and methods for improved virtual interface projection and detection.
The following patent documents, and the references cited therein, are believed to represent the current state of the art:
PCT Application PCT/IL01/00480, published as International Publication No. WO 2001/093182,
PCT Application PCT/IL01/01082, published as International Publication No. WO 2002/054169, and
PCT Application PCT/IL03/00538, published as International Publication No. WO 2004/003656,
the disclosures of all of which are incorporated herein by reference, each in its entirety.
The present application seeks to provide optical and mechanical apparatus and methods for improved virtual interface projection and detection. There is thus provided in accordance with a preferred embodiment of the present invention, an electronic camera comprising an electronic imaging sensor providing outputs representing imaged fields, a first imaging functionality employing the electronic imaging sensor for data entry responsive to user hand activity in a first imaged field, at least a second imaging functionality employing the electronic imaging sensor for taking at least a second picture of a scene in a second imaged field, optics associating the first and the at least second imaging functionalities with the electronic imaging sensor, and a user-operated imaging functionality selection switch operative to enable a user to select operation in one of the first and the at least second imaging functionalities. The above described electronic camera also preferably comprises a projected virtual keyboard on which the user hand activity is operative.
The optics associating the first and the at least second imaging functionalities with the electronic imaging sensor preferably includes at least one optical element which is selectably positioned upstream of the sensor only for use of the at least second imaging functionality. Alternatively and preferably, this optics does not include an optical element having optical power which is selectably positioned upstream of the sensor for use of the first imaging functionality.
In accordance with another preferred embodiment of the present invention, in the above described electronic camera, the optics associating the first and second imaging functionalities with the electronic imaging sensor includes a beam splitter which defines separate optical paths for the first and the second imaging functionalities. In any of the above-described embodiments, the user-operated imaging functionality selection switch is preferably operative to select operation in one of the first and the at least second imaging functionalities by suitable positioning of at least one shutter to block at least one of the imaging functionalities. Furthermore, the first and second imaging functionalities preferably define separate optical paths, which can extend in different directions, or can have different fields of view.
In accordance with yet another preferred embodiment of the present invention, in those above-described embodiments utilizing a wavelength dependent splitter, the splitter is operative to separate visible and IR spectra for use by the first and second imaging functionalities respectively.
Furthermore, any of the above-described electronic cameras may preferably also comprise a liquid crystal display on which the output representing an imaged field is displayed. Additionally, the optics associating the first imaging functionality with the electronic imaging sensor may preferably comprise a field expander lens.
There is further provided in accordance with yet another preferred embodiment of the present invention, an electronic camera comprising an electronic imaging sensor providing outputs representing imaged fields, a first imaging functionality employing the electronic imaging sensor for taking a picture of a scene in a first imaged field, at least a second imaging functionality employing the electronic imaging sensor for taking a picture of a scene in at least a second imaged field, optics associating the first and the at least second imaging functionalities with the electronic imaging sensor, and a user-operated imaging functionality selection switch operative to enable a user to select operation in one of the first and the at least second imaging functionalities.
The optics associating the first and the at least second imaging functionalities with the electronic imaging sensor preferably includes at least one optical element which is selectably positioned upstream of the sensor only for use of the at least second imaging functionality. Alternatively and preferably, this optics does not include an optical element having optical power which is selectably positioned upstream of the sensor for use of the first imaging functionality.
In accordance with another preferred embodiment of the present invention, in the above described electronic camera, the optics associating the first and second imaging functionalities with the electronic imaging sensor includes a wavelength dependent splitter which defines separate optical paths for the first and the second imaging functionalities. In any of the above-described embodiments, the user-operated imaging functionality selection switch is preferably operative to select operation in one of the first and the at least second imaging functionalities by suitable positioning of at least one shutter to block at least one of the imaging functionalities. Furthermore, the first and second imaging functionalities preferably define separate optical paths, which can extend in different directions, or can have different fields of view.
Furthermore, any of the above-described electronic cameras may preferably also comprise a liquid crystal display on which the output representing an imaged field is displayed. Additionally, the optics associating the first imaging functionality with the electronic imaging sensor may preferably comprise a field expander lens.
In accordance with still more preferred embodiments of the present invention, the above mentioned optics associating the first and the at least second imaging functionalities with the electronic imaging sensor may preferably be fixed. Additionally and preferably, the first and the second imaged fields may each undergo a single reflection before being imaged on the electronic imaging sensor. In such a case, the reflection of the second imaged field may preferably be executed by means of a pivoted stowable mirror. Alternatively and preferably, the first imaged field may be imaged directly on the electronic imaging sensor, and the second imaged field may undergo two reflections before being imaged on the electronic imaging sensor. In such a case, the second of the two reflections may preferably be executed by means of a pivoted stowable mirror. Furthermore, the second imaged field may be imaged directly on the electronic imaging sensor, and the first imaged field may undergo two reflections before being imaged on the electronic imaging sensor.
There is further provided in accordance with still another preferred embodiment of the present invention, an electronic camera as described above, and wherein the first imaging functionality is performed over a spectral band in the infra red region, and the second imaging functionality is performed over a spectral band in the visible region, the camera also comprising filter sets, one filter set for each of the first and second imaging functionalities. In such a case, the filter sets preferably comprise a filter set for the first imaging functionality comprising at least one filter transmissive in the visible region and in the spectral band in the infra red region, and at least one filter transmissive in the infra red region to below the spectral band in the infra red region and not transmissive in the visible region, and a filter set for the second imaging functionality comprising at least one filter transmissive in the visible region up to below the spectral band in the infra red region. In the latter case, the first and the second imaging functionalities are preferably directed along a common optical path, and the first and the second filter sets are interchanged in accordance with the imaging functionality selected.
In accordance with a further preferred embodiment of the present invention, there is also provided an electronic camera as described above, and wherein the user-operated imaging functionality selection is preferably performed either by rotating the electronic imaging sensor in front of the optics associating the first and the at least second imaging functionalities with the electronic imaging sensor, or alternatively by rotating a mirror in front of the electronic imaging sensor in order to associate the first and the at least second imaging functionalities with the electronic imaging sensor.
There is also provided in accordance with yet a further preferred embodiment of the present invention, an electronic camera as described above, and also comprising a partially transmitting beam splitter to combine the first and the second imaging fields, and wherein both of the imaging fields are reflected once by the partially transmitting beam splitter, and one of the imaging fields is also transmitted after reflection from a full reflector through the partially transmitting beam splitter. The partially transmitting beam splitter may also preferably be dichroic. In either of these two cases, the full reflector may preferably also have optical power.
There is even further provided in accordance with another preferred embodiment of the present invention, a portable telephone comprising telephone functionality, an electronic imaging sensor providing outputs representing imaged fields, a first imaging functionality employing the electronic imaging sensor for data entry responsive to user hand activity in a first imaged field, at least a second imaging functionality employing the electronic imaging sensor for taking at least a second picture of a scene in a second imaged field, optics associating the first and the at least second imaging functionalities with the electronic imaging sensor, and a user-operated imaging functionality selection switch operative to enable a user to select operation in one of the first and the at least second imaging functionalities.
Furthermore, in accordance with yet another preferred embodiment of the present invention, there is also provided a personal digital assistant comprising at least one personal digital assistant functionality, an electronic imaging sensor providing outputs representing imaged fields, a first imaging functionality employing the electronic imaging sensor for data entry responsive to user hand activity in a first imaged field, at least a second imaging functionality employing the electronic imaging sensor for taking at least a second picture of a scene in a second imaged field, optics associating the first and the at least second imaging functionalities with the electronic imaging sensor, and a user-operated imaging functionality selection switch operative to enable a user to select operation in one of the first and the at least second imaging functionalities.
In accordance with still another preferred embodiment of the present invention, there is provided a remote control device comprising remote control functionality, an electronic imaging sensor providing outputs representing imaged fields, a first imaging functionality employing the electronic imaging sensor for data entry responsive to user hand activity in a first imaged field, at least a second imaging functionality employing the electronic imaging sensor for taking at least a second picture of a scene in a second imaged field, optics associating the first and the at least second imaging functionalities with the electronic imaging sensor, and a user-operated imaging functionality selection switch operative to enable a user to select operation in one of the first and the at least second imaging functionalities.
There is also provided in accordance with yet a further preferred embodiment of the present invention optical apparatus for producing an image including portions located at a large diffraction angle comprising a diode laser light source providing an output light beam, a collimator operative to collimate the output light beam and to define a collimated light beam directed parallel to a collimator axis, a diffractive optical element constructed to define an image and being impinged upon by the collimated light beam from the collimator and producing a multiplicity of diffracted beams which define the image and which are directed within a range of angles relative to the collimator axis, and a focusing lens downstream of the diffractive optical element and being operative to focus the multiplicity of light beams to points at locations remote from the diffractive optical element. In such apparatus, the large diffraction angle is defined as being generally such that the image has unacceptable aberrations when the focusing lens downstream of the diffractive optical element is absent. Preferably, it is defined as being at least 30 degrees from the collimator axis.
There is even further provided in accordance with a preferred embodiment of the present invention optical apparatus for producing an image including portions located at a large diffraction angle from an axis comprising a diode laser light source providing an output light beam, a beam modifying element receiving the output light beam and providing a modified output light beam, a collimator operative to define a collimated light beam, and a diffractive optical element constructed to define an image and being impinged upon by the collimated light beam from the collimator, and producing a multiplicity of diffracted beams which define the image and which are directed within a range of angles relative to the axis. The large diffraction angle is generally defined to be such that the image has unacceptable aberrations when the focusing lens downstream of the diffractive optical element is absent. Preferably, it is defined as being at least 30 degrees from the collimator axis. Any of the optical apparatus described in this paragraph may preferably also comprise a focusing lens downstream of the diffractive optical element, operative to focus the multiplicity of light beams to points at locations remote from the diffractive optical element.
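As a hedged illustration of why 30 degrees is treated as a large diffraction angle (the grating-equation picture below is a simplification, and the 785 nm wavelength is taken from the virtual keyboard embodiments described later rather than from this paragraph): treating the local structure of the diffractive optical element as a grating of period \Lambda, the first-order diffracted direction obeys

\sin\theta = \frac{\lambda}{\Lambda},

so steering 785 nm light to \theta = 30^{\circ} already requires local features of about \Lambda \approx 785\,\mathrm{nm}/0.5 \approx 1.6\,\mu\mathrm{m}; both the required feature size and the off-axis aberrations of the projected points worsen rapidly beyond that, which is where a focusing lens downstream of the diffractive optical element becomes beneficial.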
Furthermore, in accordance with yet another preferred embodiment of the present invention, there is provided optical apparatus comprising a diode laser light source providing an output light beam, and a non-periodic diffractive optical element constructed to define an image template and being impinged upon by the output light beam and producing a multiplicity of diffracted beams which define the image template. The image template is preferably such as to enable data entry into a data entry device.
There is also provided in accordance with a further preferred embodiment of the present invention, optical apparatus for projecting an image comprising a diode laser light source providing an illuminating light beam, a lenslet array defining a plurality of focussing elements, each defining an output light beam, and a diffractive optical element comprising a plurality of diffractive optical sub-elements, each sub-element being associated with one of the plurality of output light beams, and constructed to define part of an image and being impinged upon by the output light beam from one of the focussing elements to produce a multiplicity of diffracted beams which taken together define the image. The image preferably comprises a template to enable data entry into a data entry device.
In accordance with yet another preferred embodiment of the present invention, there is provided optical apparatus for projecting an image, comprising an array of diode laser light sources providing a plurality of illuminating light beams, a lenslet array defining a plurality of focussing elements, each focussing one of the plurality of illuminating light beams, and a diffractive optical element comprising a plurality of diffractive optical sub-elements, each sub-element being associated with one of the plurality of output light beams, and constructed to define part of an image and being impinged upon by the output light beam from one of the focussing elements to produce a multiplicity of diffracted beams which taken together define the image. The image preferably comprises a template to enable data entry into a data entry device. In any of the optical apparatus described in this paragraph, the array of diode laser light sources may preferably be a vertical cavity surface emitting laser (VCSEL) array.
Furthermore, in any of the above-mentioned optical apparatus, the diffractive optical element may preferably define the output window of the optical apparatus.
There is further provided in accordance with yet another preferred embodiment of the present invention an integrated laser diode package comprising a laser diode chip emitting a light beam, a beam modifying element for modifying the light beam, a focussing element for focussing the modified light beam, and a diffractive optical element to generate an image from the beam. The image preferably comprises a template to enable data entry into a data entry device.
Alternatively and preferably, there is also provided an integrated laser diode package comprising a laser diode chip emitting a light beam, and a non-periodic diffractive optical element to generate an image from the beam. In such an embodiment also, the image preferably comprises a template to enable data entry into a data entry device.
In accordance with still another preferred embodiment of the present invention, there is provided optical apparatus comprising an input illuminating beam, a non-periodic diffractive optical element onto which the illuminating beam is impinged, and a translation mechanism to vary the position of impingement of the input beam on the diffractive optical element, wherein the diffractive optical element preferably deflects the input beam onto a projection plane at an angle which varies according to a predefined function of the position of impingement. In this embodiment, the translation mechanism preferably translates the DOE. In either of the apparatus described in this paragraph, the position of the impingement may be such as to vary in a sinusoidal manner, and the predefined function may preferably be such as to provide a linear scan. In such cases, the predefined function is preferably such as to provide a scan generating an image having a uniform intensity.
In any of these described embodiments, the input beam may either be a collimated beam or a focussed beam. In the latter situation, the apparatus also preferably comprises a focussing lens to focus the diffracted beams onto the projection plane.
Preferably, in the above-described optical apparatus, the predefined function of the position of impingement is such as to deflect the beam in two dimensions. In such a case, the translation mechanism may translate the DOE in one dimension, or in two dimensions.
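One hedged way to make the sinusoidal-impingement/linear-scan relationship above concrete (the functional form below is an illustrative choice, not one specified in the source): if the translation mechanism moves the point of impingement as x(t) = A\sin(\omega t), then a non-periodic diffractive optical element whose local deflection follows

\theta(x) = \theta_{\max}\,\frac{2}{\pi}\,\arcsin\!\left(\frac{x}{A}\right)

yields \theta(t) = \theta_{\max}\,\frac{2}{\pi}\arcsin(\sin\omega t), a triangular waveform that sweeps linearly in time between -\theta_{\max} and +\theta_{\max}; the constant angular rate of such a sweep is also what gives the scanned image the uniform intensity referred to above.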
There is further provided in accordance with still another preferred embodiment of the present invention, an on-axis two dimensional optical scanning apparatus, comprising a diffractive optical element, operative to deflect a beam in two dimensions as a function of the position of impingement of the beam on the diffractive optical element, a low mass support structure, on which the diffractive optical element is mounted, a first frame external to the low mass support structure, to which the low mass support is attached by first support members such that the low mass support structure can perform oscillations at a first frequency in a first direction, a second frame external to the first frame, to which the first frame is attached by second support members such that the second frame can perform oscillations at a second frequency in a second direction, and at least one drive mechanism for exciting at least one of the oscillations at the first frequency and the oscillations at the second frequency. In this apparatus, the first frequency is preferably higher than the second frequency, in which case, the scan is a raster-type scan.
In accordance with still another preferred embodiment of the present invention, there is provided optical apparatus comprising a diode laser source for emitting an illuminating beam, a lens for focussing the illumination beam onto a projection plane, a non-periodic diffractive optical element onto which the illuminating beam is impinged, and a translation mechanism to vary the position of impingement of the input beam on the diffractive optical element, wherein the diffractive optical element preferably deflects the input beam onto a projection plane at an angle which varies according to a predefined function of the position of impingement. The optical apparatus may also preferably comprise, in addition to the first lens for focussing the illumination beam onto the diffractive optical element, a second lens for focussing the deflected illumination beam onto the projection plane.
Any of the above described optical apparatus involving scanning applications may preferably be operative to project a data entry template onto the projection plane, or alternatively and preferably, may be operative to project a video image onto the projection plane.
The present invention will be understood and appreciated more fully from the description which follows, taken in conjunction with the drawings in which:
Reference is now made to
As described in the PCT Application published as International Publication No. WO 2004/003656, the disclosure of which is hereby incorporated by reference in its entirety, an imaging lens for imaging in a virtual interface mode is required to be positioned with very high mechanical accuracy and reproducibility in order to obtain precise image calibration.
In the embodiment of
When CMOS module 10 is employed in a virtual interface mode, as shown at the top of
When the CMOS camera module 10 is used for general-purpose color imaging, as is shown in phantom lines at the bottom of
Although in the preferred embodiment shown in
Furthermore, although in
The embodiment shown in
Referring now to
A normally reflective visible light mirror 132 and an infra-red blocking filter 134 are positioned along a visible light path, thus providing color imaging capability over a medium field of view 140.
The embodiment of
Reference is now made to
As seen in
Reference is now made to
Reference is now made to
Reference is now made to
Turning specifically to
Turning specifically to
Reference is now made to
Turning specifically to
Referring specifically to
In the devices described in the embodiments of
Reference is now made to
Reference is now made to
Reference is now made to
Turning to
The embodiments of
In the embodiment of
Reference is now made to
Table 1 sets forth essential characteristics of each of the seven embodiments, which are described in detail hereinbelow:
Turning to
The VSSR field of view 556 is preferably captured through an optional field lens 560, which expands the field of view by a factor of approximately 1.5, and a combiner 562. The VSSR field of view employs a fixed IR cut-off window 564 that is covered by an opaque slide shutter 566 for enabling/disabling passage of light from the VSSR field of view. Preferably, the optics for this field of view have low distortion (<2.5%) and support the resolution of the camera 550, preferably a Modulation Transfer Function (MTF) of approximately 50% at 50 cy/mm for a VGA camera, and an MTF of approximately 60% at 70 cy/mm for a 1.3M camera.
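The MTF targets quoted above can be put in context with the sensor sampling limit; the short sketch below (the pixel pitches are assumed typical values for VGA and 1.3M modules of that period, not figures from the source) shows that 50 cy/mm and 70 cy/mm both sit near half of the respective Nyquist frequencies.

```python
# Hedged sanity check: compare the quoted MTF target frequencies with the
# sensor Nyquist frequency implied by an assumed pixel pitch.
def nyquist_cy_per_mm(pixel_pitch_um: float) -> float:
    """Nyquist frequency in cycles/mm for a sensor with the given pixel pitch (um)."""
    return 1000.0 / (2.0 * pixel_pitch_um)

for name, pitch_um, target in [("VGA, ~5.6 um pitch (assumed)", 5.6, 50.0),
                               ("1.3M, ~3.6 um pitch (assumed)", 3.6, 70.0)]:
    nyq = nyquist_cy_per_mm(pitch_um)
    print(f"{name}: Nyquist ~{nyq:.0f} cy/mm, target {target:.0f} cy/mm "
          f"(~{100.0 * target / nyq:.0f}% of Nyquist)")
```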
The VKB field of view 576 and the VC field of view 586 are preferably captured via a large angle field lens 590 that may expand the field of view of the common optics by a factor of up to 4.5, depending upon the geometry. The center section of the field of view of lens 590, i.e. the VC field of view, is preferably designed for obtaining images in the visible part of the spectrum, and has a distortion level of less than 4% and an MTF of approximately 60% at 70 cy/mm. The remainder of the field of view of lens 590, i.e. the VKB field of view, may have a higher level of distortion, up to 25%, and a lower MTF, typically less than 20% at 20 cy/mm at 785 nm.
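A hedged first-order picture of what a field-expansion factor of this kind means (the paraxial afocal-expander model and the 20 degree native field below are assumptions for illustration, not figures from the source): for an afocal wide-angle attachment of angular magnification M placed in front of common optics with native half-field \theta, the expanded half-field \theta' approximately satisfies

\tan\theta' \approx M\,\tan\theta,

so with an assumed native full field of 20 degrees and M = 4.5, \tan\theta' \approx 4.5\tan 10^{\circ} \approx 0.79 and \theta' \approx 38^{\circ}, i.e. a full field of roughly 77 degrees; at such wide angles the field grows more slowly than M itself, which is one reason the achievable expansion depends upon the geometry.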
In front of lens 590 there is preferably provided a triple position slider or rotation shutter 594 having three operative regions, an opaque region 596, an IR cut-off region 598 for providing true color video and an IR cut-on filter region 600 for sensing IR from a virtual keyboard. Suitable positioning of shutter 594 at region 600 for the VC field of view enables low resolution IR imaging to be realized when a suitable IR source, such as an IR LED is employed.
The light from field lens 590 is reflected by means of a flat reflective element 580 down towards the camera optics 554 and camera 550. In the simplest triple field of view embodiment, this flat reflective element 580 is a full mirror. When an additional optional fourth field of view is utilized, as described below, this flat reflective element 580 is a dichroic beam combiner.
An optional additional field of view 582 can be provided when the flat reflective element 580 is a dichroic mirror or beam combiner. Since both combiners 562 and 580 are flat windows, they cause minimal degradation of the image quality. In front of this field 582, there should be an enabling/disabling shutter. A pivoted mirror 584 enables this additional field of view to be that above the camera, in the sense of
The CUP field of view may be provided internally by employing a variable field lens in the VSSR path 556 or externally by employing an add-on macro lens in front of the VSSR field 556 or the optional field 582, as is done in the Nokia 3650 and Nokia 3660 products. In the latter case the upper mirror 580 should be a dichroic combiner transmissive for visible light and highly reflective to 785 nm light. This optional field should also have a disable/enable shutter (sliding or flipping) in front of an IR cut-off window, also not shown in
Reference is now made to
A top swivel head 660 comprises a tilted mirror 662 mounted on a rotating base 664, shown in
Although the swivel head can rotate on its base 664 and capture an image in any direction, it is believed to be more useful to define discrete imaging stations. Movement between stations may require rotation of the image on the screen. The image obtained is a mirror image, which can be corrected electronically if needed. An entrance aperture 640 is shown in the swivel head, pointed out of the plane of the drawing.
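The electronic correction of the mirror image mentioned above can be as simple as reversing the captured frame along one axis; a minimal sketch follows (the NumPy representation and the choice of flip axis are assumptions, since the axis that is reversed depends on how the mirror folds the optical path):

```python
import numpy as np

def unmirror(frame: np.ndarray) -> np.ndarray:
    """Undo the left-right reversal introduced by the single reflection
    in the swivel head; frame is an H x W (or H x W x 3) sensor image."""
    return frame[:, ::-1]
```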
An IR cut-off filter 670 is positioned just under the swivel head 660 to enable a true color picture to be captured. The light from the swivel head 660 passes via a dichroic combiner 672 to a CMOS camera 650. Additional optics (not shown in
Preferred optical arrangements for these four fields of view are now described.
VKB mode—A field lens 680 for the VKB mode captures a large field of view 694 of up to about 90°, depending upon the geometry. An IR cut-on filter plastic window 682 is positioned in front of the field lens. The captured IR light is steered by means of a dichroic mirror 672 to the common optics. The IR image obtained on the CMOS sensor may be of low quality, with barrel distortion of up to 25% and an MTF of about 20% at 20 cy/mm at 785 nm. To turn on the VKB mode an opaque shutter 684 has to be opened, and the top swivel head rotated to a disabling position.
A VSSR mode is obtained by enabling the top swivel head 660 for VSSR imaging, and rotating it to the VSSR station position that is at the rear part of the handset, such that, through the VSSR field lens 696, which expands the field of view by a factor of approximately 1.5, the VSSR field of view 688 is imaged.
A VC mode is obtained by enabling the top swivel head 660 and rotating it to the VC station position that is at the front side of the handset, where the LCD is located, such that the VC field of view 692 is imaged by use of the optional optical element 690. Using this option, only part of the CMOS imaging plane is utilized, this being known as the windowing option. When the optic 690 is not present, the original FOV of the lens 654 captures the image upon the entire camera sensing area but is down-sampled to give the lower resolution VC image, this being known as the down-sampling option.
A CUP mode could be realized by one of the methods described above in relation to the embodiment of
Reference is now made to
The VSSR field 708 is captured through an additional field lens 710, which expands the field of view by a factor of approximately 1.5, and a dichroic combiner 712. The VSSR field preferably has a fixed/sliding IR cut-off window 714 and an opaque slide shutter 716 for enabling/disabling the imaging path. The optics for the VSSR field should have a low distortion of <2.5%, and should support the camera resolution, which for the VGA camera should provide an MTF of at least approximately 50% at 50 cy/mm, and for a 1.3M camera, an MTF of at least approximately 60% at 70 cy/mm.
The VKB field of view 720 is captured via a large angle field lens 722 that preferably expands the common optics field of view by a factor of up to 4.5, depending upon the geometry chosen, and is steered to the common optics by means of a mirror 724 and via the dichroic combiner 712. The image in the VKB field of view may be of low quality, having a level of distortion of up to 25% and a low resolution, typically an MTF of less than 20% at 20 cy/mm at 785 nm. When the VKB mode is active, the mode selection slider 726 is positioned to the IR cut-on filter position 728, which can preferably be a suitable black plastic window.
An additional optional field 730 can also be provided, using additional components exactly like those shown in the embodiment of
The VC field mode 732 is obtained when the triple mode selection slider 726 is positioned with the field shrinking element 734 in front of the large angle field lens 722, this being the position shown in
A CUP mode could be realized by one of the methods described above in relation to the embodiment of
Reference is now made to
The VSSR field 740 is achieved using a focussing lens 742 and a conventional camera 744 having either a VGA or a 1.3M pixel resolution. This same camera can also be preferably used for CUP mode imaging, either externally by use of an add-on macro module, as is done in the Nokia 3650/Nokia 3660 product, or internally by using modules such as the FDK and Macnica's FMZ10 or the Sharp LZOP3726 module.
A CUP mode could be realized by one of the methods described above in relation to the embodiment of
The VC field 750 and the VKB field 752 modes preferably use a high-resolution camera 754, such as a VGA or 1.3M pixel resolution camera, with large field of view optics 756, having a field of view of up to 90°, depending on the VKB geometry used. A filter, preferably an interference filter 764, such as is shown in
In the VC mode, the camera is operative in a windowing mode, where only the center of the field is used. For this mode, a field of view of 30° is used. This field of view should preferably have a distortion level of less than 4% and an MTF of at least approximately 60% at 70 cy/mm in the visible.
In the VKB mode, a large field of view of up to 90° is required, but a higher level of distortion of up to 25% can be tolerated, and the resolution can be lower, typically less than 20% at 20 cy/mm at 785 nm. In this mode the camera is preferably operated in a windowing mode vertically, and also preferably in a down-sampling mode horizontally.
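A minimal sketch of the windowed and down-sampled readout described for the VKB mode (the frame size, window bounds and 2:1 horizontal decimation below are illustrative assumptions, not values from the source):

```python
import numpy as np

def vkb_readout(frame: np.ndarray, row_start: int, row_stop: int,
                h_step: int = 2) -> np.ndarray:
    """Keep only the band of rows that views the projected keyboard
    (vertical windowing) and skip columns (horizontal down-sampling);
    both reduce the pixel count to be processed for IR touch detection."""
    window = frame[row_start:row_stop, :]   # vertical windowing
    return window[:, ::h_step]              # horizontal down-sampling

# Example: a full 1024 x 1280 frame reduced to a 360-row band at half horizontal resolution.
full_frame = np.zeros((1024, 1280), dtype=np.uint16)
print(vkb_readout(full_frame, row_start=600, row_stop=960).shape)  # (360, 640)
```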
Reference is now made to
The VSSR field 770 is achieved using a focussing lens 772 and a conventional camera 774 having either a VGA or a 1.3M pixel resolution. This same camera can also be preferably used for CUP mode imaging, either externally by use of an add-on macro module, as is done in the Nokia 3650/Nokia 3660 product, or internally by using modules such as the FDK and Macnica's FMZ10 or the Sharp LZOP3726 module. A CUP mode could be realized by one of the methods described above in relation to the embodiment of
The VC field of view 776 mode and the VKB field of view 778 mode both preferably use a low-resolution camera 780, or a high resolution camera in a down-sampling mode. A filter, preferably an interference filter 784, such as is shown in
In the VC mode, the mode selection slider 786 positions a field shrinking lens with an IR cut-off filter that narrows the effective camera field of view to about 30°. This field of view should preferably have a distortion level of less than 4% and an MTF of at least approximately 60% at 30 cy/mm in the visible.
In the VKB mode, the mode selection slider 786 positions an IR cut-on filter window 788 in front of the field lens 782. It is sufficient for this field of view to have a high level of distortion of up to 25%, and a low MTF, typically less than 20% at 20 cy/mm at 785 nm.
Reference is now made to
The VKB field of view 790 mode may preferably be imaged on a low-resolution camera (CIF) 792 with a lens 794 having a large field of view, of up to 90°, depending on the geometry used. A filter, preferably an interference filter 816, such as is shown in
A top swivel head 800 comprises a tilted mirror 802 mounted on a rotating base 804, shown in
Although the swivel head can rotate on its base 804 and capture an image in any direction, it is believed to be more useful to define discrete imaging stations. Movement between stations may require rotation of the image on the screen. The image obtained is a mirror image, which can be corrected electronically if needed. An IR cut-off filter 806 is positioned just under the swivel head 800 to enable a true color picture to be captured.
The light from the swivel head 800 passes via a focussing lens 808 with a field of view of the order of 30° or less to the CMOS camera 810. Additional optics (not shown in
A VSSR mode is obtained by enabling the top swivel head 800 for VSSR imaging and rotating it to the VSSR station position that is at the rear part of the handset, such that the VSSR field of view 812 is imaged.
A VC mode is obtained by enabling the top swivel head 800 for VC imaging, and rotating it to the VC station position at the front side of the handset, where the LCD is located, such that the VC field of view 814 is imaged. Using this option, only part of the CMOS imaging plane is utilized, this being known as the windowing option. Otherwise, the image is down-sampled to give the lower resolution VC image, this being known as the down-sampling option.
A CUP mode could be realized by one of the methods described above in relation to the embodiment of
Reference is now made to
The common optics generally comprises a high-resolution CMOS camera 820, either VGA or 1.3M pixel, and a 20°-30° field of view lens 822. A filter, not shown in
In the VSSR mode, the camera is stationed in front of an IR cut-off filter window 824 at the rear side of the handset, facing the entrance aperture from the VSSR field of view 828. The optics for this field should have a low distortion, preferably of less than 2.5%, and should support a camera resolution having an MTF of approximately 50% at 50 cy/mm for the VGA camera, and 60% at 70 cy/mm for a 1.3M camera.
In the VC mode, the camera, now shown in position 830, is stationed in front of an IR cut-off filter window 832 at the front side of the handset, facing the entrance aperture from the VC field of view 834. At this position the image is down-sampled. The optical resolution is preferably better than approximately 60% at 35 cy/mm for visible light, and the distortion should be less than 4%.
In the CUP mode, the camera, shown in position 840, is pointed upwards towards a macro lens assembly 842 with an IR cut-off filter 844. The optics for this field should have a low distortion, preferably of less than 2.5%, and should support the camera resolution, preferably having an MTF of at least 50% at 50 cy/mm for the VGA camera and at least 60% at 70 cy/mm for a 1.3M camera.
Finally, in the VKB mode, the camera, shown in position 846, is stationed pointing downwards towards the location of the keyboard projection. In this station, the optics in front of the lens preferably includes an expander lens 848 and an IR cut-on filter window 850. In this mode the camera is typically operated in a windowed, down sampled mode. The field of view 852 of the overall optics is wide, typically up to 90°, depending on the geometry used. This large field of view can tolerate a high level of distortion, typically of up to 25%, and need have only a low MTF, typically less than 20% at 20 cy/mm at 785 nm.
Reference is now made to
As shown in the calculated, diffractive ray tracing illustrations in
Reference is now made to
From geometrical optics considerations it is known that the depth of field of a focussed spot varies inversely with the focussing power used. Thus, it is clear that, for a given DOE focussing power, the larger the illuminating spot on the DOE, the smaller the depth of field will be. Therefore, to maintain a good depth of focus at the image plane, it is advantageous to use a collimating lens with a focal length sufficiently short such that a minimum area of the DOE is illuminated, commensurate with illuminating sufficient area in order to obtain a satisfactory diffracted image.
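A hedged Gaussian-beam estimate makes the inverse relation above explicit (an idealized model, not a design value from the source): if the beam illuminating the DOE has radius w_{DOE} and is brought to a focus at distance f, the focused spot radius and depth of focus scale roughly as

w_f \approx \frac{\lambda f}{\pi w_{DOE}}, \qquad \mathrm{DOF} \approx \frac{2\pi w_f^2}{\lambda} \approx \frac{2\lambda f^2}{\pi w_{DOE}^2},

so reducing the illuminated area on the DOE by half roughly doubles the depth of focus, at the cost of a larger focused spot; this is the compromise behind illuminating the minimum DOE area that still yields a satisfactory diffracted image.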
A typical laser diode source, as used in prior art DOE imaging systems, generally produces an astigmatic beam with an elliptical shape 1020, as shown in an insert in
Reference is now made to
One of the advantages of this configuration is that no focusing lens is required, potentially reducing the manufacturing cost. Another advantage is that there is no bright zero order spot from undiffracted light, but rather a diffuse zero order region 1054 whose size is dependent on the laser divergence angle. This type of zero order hot spot does not present a safety hazard. Furthermore, because of its low intensity and diffuseness, provided it does not negatively impact the apparent image contrast, it does not have to be separated from the main image 1056 and blocked, as was required in the embodiment of
Reference is now made to
All the separate sections 1070 are preferably calculated together and mastered in a single pass, so that they are all precisely aligned. Each DOE section 1070 can be provided with its own illumination beam by forming a beam splitting structure such as a microlens array 1074 on the back side of the substrate of the DOE 1072. Alternative beam splitting and focusing techniques can also be employed.
The size of the beam splitting and focusing regions can be adjusted to collect the appropriate amount of light for each diffractive region of the DOE to ensure uniform illumination over the entire field.
This technique also has the added advantage that the focal length of each segment 1070 can be adjusted individually, thus achieving a much more uniform focus over the entire field even at strongly oblique projection angles. Since this geometry has low opening angles θ for each of the diffractive segments 1070, and a correspondingly larger minimum feature size, the design can use an on-axis geometry, since the zero order and ghost image can be effectively rejected using standard fabrication techniques. Thus no masking is required.
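A hedged estimate of the feature-size benefit (again treating each diffractive segment locally as a grating; the example angles are illustrative): the finest local period needed to reach a maximum diffraction half-angle \theta_{\max} is roughly

\Lambda_{\min} \approx \frac{\lambda}{\sin\theta_{\max}},

so at 785 nm a single element covering \theta_{\max} = 30^{\circ} needs features down to about 1.6 µm, whereas a segment that only has to cover about 10 degrees can stop near 4.5 µm, a scale at which standard mastering keeps the zero order and ghost images weak enough that no blocking mask is needed.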
One drawback of this geometry is the fact that the entire element acts as a non-periodic DOE requiring precise alignment with the optical source. The divergence angle and energy distribution of the diode laser source, as well as the distance to the optical element, must also be accurately controlled in order to illuminate each DOE section and its corresponding region of the projected interface with the appropriate amount of energy.
Reference is now made to
The array 1080 still needs to be positioned accurately behind the element in order not to result in a distorted projected image, but there is no need to control the divergence angle of the individual emissions other than to make sure that all the light from each emitting point enters its appropriate collimating/focusing element 1086 and sufficiently fills the aperture of the corresponding DOE segment 1088 to obtain good diffraction results.
This structure of
Reference is now made to
Optical elements 1106 and 1108 need to be precisely positioned in front of the laser beam by means of an active alignment procedure in order to align the direction of the emitted beam accurately. A diffractive optical element DOE 1110 containing the image template is inserted at the end of the package, aligned and fixed in place. This element can also serve as the package window, with the DOE 1110 being either on the inside or the outside of the window 1114. If a non-periodic DOE is employed, the beam modifying optics and/or the collimating optics can be selectively dispensed with, resulting in a smaller and cheaper package.
Reference is now made to
Even though there may be significant overlap between the various incidence positions of the beam, the DOE is constructed in a non-periodic fashion to diffract all the light to a point whose position is determined by the total incident area of illumination on the DOE. The focal position can also be varied as a function of the diffraction angle to keep the spot in sharp focus across a planar field. The focusing can also be done by a separate diffractive or refractive element, not shown in
A second element with a similar functionality may be provided along an orthogonal axis and positioned behind the first DOE to diffract the emitted spot along the orthogonal axis, thus enabling two dimensional scanning.
Rather than actually scanning the input beam, which would mean vibrating the laser diode sources, the input beam can be held stationary, and DOE elements can preferably be oscillated back and forth to generate a scanned beam pattern. Scanning the first element at a higher frequency and the second element at a lower frequency can generate a two dimensional raster scan, while synchronizing and modulating the laser intensity with the scanning pattern generates a complete two dimensional projected image.
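A minimal sketch of the fast/slow scan with synchronized laser modulation described above (the drive frequencies, sample rate and nearest-pixel lookup are illustrative assumptions; in a real device the DOE oscillations and the laser drive would be generated in hardware):

```python
import numpy as np

def raster_drive(image: np.ndarray, f_fast: float = 18_000.0, f_slow: float = 60.0,
                 sample_rate: float = 5_000_000.0, duration: float = 1.0 / 60.0):
    """Return (x, y, intensity) time series for painting `image` with a
    fast horizontal scan, a slow vertical scan and a synchronized laser."""
    t = np.arange(0.0, duration, 1.0 / sample_rate)
    x = 0.5 * (1.0 + np.sin(2.0 * np.pi * f_fast * t))   # fast axis position, 0..1
    y = 0.5 * (1.0 + np.sin(2.0 * np.pi * f_slow * t))   # slow axis position, 0..1
    rows = (y * (image.shape[0] - 1)).astype(int)        # map scan position to image pixel
    cols = (x * (image.shape[1] - 1)).astype(int)
    intensity = image[rows, cols]                        # laser modulation synchronized to the scan
    return x, y, intensity
```

Driving both axes sinusoidally, as in this sketch, still leaves the familiar non-uniform dwell near the turning points; the non-periodic deflection function of the DOE (or an equivalent pre-distortion of the drive) is what linearizes the sweep, as discussed above.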
Reference is now made to
These functionalities can be further combined into a single DOE where the horizontal position determines the horizontal angle of diffraction and the vertical position determines the vertical angle of diffraction. This is illustrated schematically in
Orthogonal X and Y scanning can be integrated into a single element as is illustrated in
By driving the entire device with one or more piezoelectric elements 1278 with a drive signal containing both resonant frequencies, a two axis, resonant raster scan can be generated. By tuning the mass of the DOE and support 1272 and the internal oscillation frame 1274, along with the stiffness of the lateral motion oscillation supports 1280 and the vertical motion oscillation supports 1282, it is possible to tune the X and Y scanning frequencies accordingly. This design can provide a compact, on-axis two dimensional scanning element.
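In a hedged lumped-element picture (the usual single-degree-of-freedom approximation, not a figure taken from the source), each scan axis behaves like a mass on a spring, so its resonant frequency is approximately

f = \frac{1}{2\pi}\sqrt{\frac{k}{m}},

where m is the mass moving on that axis (the DOE and support 1272 for the inner axis, the internal oscillation frame 1274 together with its contents for the outer axis) and k is the combined stiffness of the corresponding supports 1280 or 1282; keeping the inner stage light and its supports stiff, and the outer stage heavier with softer supports, places one scan frequency well above the other, as the raster scan requires.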
Reference is now made to
Reference is now made to
It is appreciated by persons skilled in the art that the present invention is not limited by what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and subcombinations of various features described hereinabove, as well as variations and modifications thereto which would occur to a person of skill in the art upon reading the above description and which are not in the prior art.
The present application is related to and claims priority from the following U.S. Provisional Patent Applications, the disclosures of which are hereby incorporated by reference: Application Nos. 60/515,647, 60/532,581, 60/575,702, 60/591,606 and 60/598,486.
Filing Document | Filing Date | Country | Kind | 371(c) Date
---|---|---|---|---
PCT/IL04/00995 | 10/31/2004 | WO | 00 | 5/30/2007
Number | Date | Country
---|---|---
60/515,647 | Oct. 2003 | US
60/532,581 | Dec. 2003 | US
60/575,702 | Jun. 2004 | US
60/591,606 | Jul. 2004 | US
60/598,486 | Aug. 2004 | US