This invention relates in general to methods and systems for rapid focusing and zooming, for applications in the projection of volumetric 3D (V3D) images and in the imaging of 3D objects.
One category of V3D display generates V3D images by rapidly moving a screen to repeatedly sweep a volume while projecting 2D images onto the screen. V3D images thus form in the swept volume by the after-image effect. One typical mode of motion, described in Tsao U.S. Pat. No. 6,765,566, is to place the screen on a slider-crank mechanism so that it moves in reciprocating motion.
Another category of V3D display applies a stack of electrically switchable screens (usually of liquid crystal material) as the display means. By quickly and sequentially switching the different screens in the stack, a moving screen can be emulated. 2D image frames are projected onto the liquid crystal screens to create V3D images. For example, Sullivan (U.S. Pat. No. 6,100,862, which is incorporated herein by reference) describes such a system.
In these approaches, one major issue is projecting images onto the screen (or the liquid crystal screen) while keeping the images in focus.
Paek (“A 3D Projection Display using PDLCs”, presented at the Conference of the International Society for Optical Engineering, January 29-February 2, San Jose, Calif.) uses a simple lens based on piezoelectric material for rapid focusing. However, a practical projection lens requires multiple lens elements of a certain size in order to provide a bright, high-quality image. Achieving this with piezoelectric lenses can be difficult and costly. By similar reasoning, a vari-focal liquid lens, which changes its focal length in response to the voltage applied across its “electro-wetting” water-oil interface, could also be used. Vari-focal liquid lenses are now commercially available (Varioptic of France; see www.varioptic.com). However, for the purpose of projecting a bright, high-quality image, the liquid lens appears to be too small (only a 2-3 mm aperture) to pass enough light. And because its principle is based on surface tension between two liquids, it is doubtful that larger lenses can be made, as the weight of the liquid becomes more significant at larger sizes.
Sullivan (U.S. Pat. No. 6,100,862) uses a varifocal mirror before a projection lens to adjust the focus of projection.
Sullivan (US Pat. Pub. No. 2003/0067421, which is incorporated herein by reference) describes a vari-focusing projection system. A rotating transparent disk with an azimuthally varying thickness is placed between the projection lens and the image source to change the effective object distance in a fast, periodic fashion. This is based on the principle that a transparent material (of refractive index > 1) between the lens and the object changes (shortens) the effective object distance due to refraction. When the effective object distance is shortened, the image distance is increased.
However, it is difficult to apply Sullivan's disks to systems with a reciprocating screen or a “Rotary Reciprocating” screen.
In the field of imaging, Fantone et al. (U.S. Pat. No. 6,066,857) describe a focusing arrangement for a barcode reader that uses a rotating transparent disk with a helical surface, backed by a stationary wedge prism, to vary the optical path length.
Tsao (U.S. Pat. No. 5,954,414) describes several image delivery systems that maintain not only focus but also constant magnification of the projected image frames. One system is a moving reflector-pair placed between the projection lens and the moving screen. The moving reflectors compensate for the change of optical path length caused by the motion of the screen. Tsao (U.S. Pat. No. 6,302,542) describes another image delivery system comprising a single moving flat reflector (see column 5, lines 1-5, 28-31, 37-39 of the referred patent). The reflector moves by a “Rotary Reciprocating mechanism”.
Tsao U.S. Pat. No. 5,954,414 (column 7, line 47 to column 8, line 7) and U.S. Pat. No. 6,302,542 (column 5, line 55 to column 6, line 19) also describe a moving zoom lens system, which keeps the projected image in focus and maintains constant magnification. In general, a zoom lens can be separated into two lens groups. Zooming is achieved by moving the two lens groups separately but simultaneously. One method is to use linear stages, driven by a microcomputer-controlled servomotor or by cams, to adjust the positions of the lens groups. Another way to drive the stages is to use piezoelectric actuators. Another method is to use a lens with adjustable power. U.S. Pat. No. 6,302,542 (column 8, lines 24-55) also describes a “synchronized-focusing projector”, which achieves rapid focusing by adjusting lens position or power rapidly. The method also includes changing the optical path length by moving a reflector, instead of moving the lens (U.S. Pat. No. 6,302,542).
A rapid focusing system can also be useful in the field of 3D shape imaging and recovery (sometimes called volumetric measurement). In this field, one category of approach is based on the focus or defocus of multiple 2D images (pictures) of a 3D shape or a 3D scene. In the method of Shape from Focus (or Depth from Focus) (SFF or DFF), multiple 2D images of a 3D surface are taken at different focal depths. Image processing of the 2D images obtains a set of “focus measures” at each image point. The depth of a surface point is then obtained by finding the peak of the focus measure function by Gaussian interpolation of the focus measures. In the method of Depth from Defocus (DFD), depth information is computed by a “defocus function” from the blurred images of areas that are out of focus. The DFD method requires far fewer 2D images. Details of the SFF methods can be found in the papers of Nayar and Nakagawa, Yun and Choi, and Watanabe et al. discussed below.
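The Gaussian-interpolation peak-finding step described above can be sketched in a few lines. This is a minimal illustration of three-point interpolation on a synthetic focus-measure profile; the function name and sample values are invented for the example and are not taken from the cited papers.

```python
import math

def gaussian_peak(d, fm_minus, fm, fm_plus, step):
    """Interpolate the depth of the focus-measure peak from three
    samples taken at depths d-step, d, d+step, assuming the focus
    measure is locally Gaussian (so its logarithm is a parabola)."""
    ym, y0, yp = math.log(fm_minus), math.log(fm), math.log(fm_plus)
    return d + 0.5 * step * (ym - yp) / (ym - 2.0 * y0 + yp)

# Synthetic focus-measure profile: a Gaussian peaked at depth 2.37
true_depth, sigma = 2.37, 1.0
F = lambda d: math.exp(-((d - true_depth) ** 2) / (2 * sigma ** 2))

# For an exactly Gaussian profile the interpolation recovers the peak
est = gaussian_peak(3.0, F(2.0), F(3.0), F(4.0), 1.0)
```

With real focus measures the profile is only approximately Gaussian near the peak, so the estimate is approximate rather than exact.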
In order to image a 3D object at different focal depths, one can displace the image sensor with respect to the image plane, move the lens, or move the object with respect to the object plane. In the paper of Nayar and Nakagawa, the 3D object is placed on and moved by a movable stage. The depth map is computed from 10-15 images. In the paper of Yun and Choi, the camera is moved by a motorized or piezoelectric-actuated stage. In the paper of Watanabe et al., two cameras positioned at different depths are used, and the depth map is computed from only 2 image frames. When the range of focal depth change is small, a rotating sector wheel carrying glass plates of different indices of refraction can be placed before the 3D object to change the effective focal distance, as described in Wallack U.S. Pat. No. 6,483,950. In general, these methods of depth scanning are either slow or unable to cover a large depth.
Accordingly, one purpose of this invention is to develop a rapid focusing system that has a simple structure and occupies a small space, especially for V3D displays based on a reciprocating or Rotary Reciprocating screen. Another purpose is to develop a focusing system that does not have the shortcoming of the aforementioned rotating disks, that is, their discontinuous or non-smooth shape. A further purpose is a focusing system that can be driven by simple mechanisms and easily synchronized with the motion of the moving screen. Finally, the system should be of reasonable cost, using parts that are easy to manufacture and simple mechanical elements of low cost and high reliability.
A rapid focusing system can be used in a camera system to rapidly scan the image plane or to scan the object plane without moving the camera body. The scanning range can cover large depth and the scanning speed can allow real-time 3D motion capture.
In this invention, rapid variable focusing is achieved by rapid and repeated change of the object distance, or the spacing between lens groups of the projection lens, or both. When the object distance and the spacing between the lens groups are changed simultaneously, the rapid focusing system becomes a rapid zooming system. Rapid zooming not only keeps the projected images in focus but also maintains a constant magnification.
The preferred approaches of changing optical path length include a refractive displacement means and a reflective displacement means. The refractive displacement means is based on movement of one or more thin wedge prisms. By inserting the thin wedge prisms into the optical path, changing their positions relative to the optical path changes the thickness traveled through by the optical path. This results in effective change of optical path length. The thin wedge prisms can move in linear reciprocation motion, in Rotary Reciprocating motion or in rotation.
The reflective displacement means is based on a moving reflector system. Folding an optical path by the reflector system and moving the reflector system can effectively change the optical path length. The preferred reflector system includes a single flat mirror and a pair of reflectors arranged at right angle relative to each other.
For focusing purpose, the amount of change of optical path length can be very small yet has to be precise. This can be achieved by using a wedge-shaped optical device and moving it obliquely relative to the optical path. In terms of kinematics, a larger movement along the plane of one surface of the wedge creates a smaller displacement of the other surface of the wedge. The wedge-shaped optical device can be a thin wedge prism or a mirror on a wedge-shaped base.
In general, the refractive displacement means occupy smaller space as compared to the reflective displacement means. But their range of displacement is smaller than that of the reflective displacement means. The actual amount of change of optical path length required by a rapid focusing system or a rapid zooming system determines which displacement means to use. In general, a rapid zooming system has at least one path requiring large distance change that is more suitable for the reflective displacement means.
Optical layout analysis shows that the changes of the object distance, the spacing between two lens groups and the image distance are almost in proportion and can be correlated by linear relations. Therefore, the same type of motion function can be used to change these three optical path lengths to achieve focusing and constant magnification.
Especially for camera applications, a “discrete” approach can be used. Discrete reflective and refractive means can make rapid discrete change of the optical path length, so that a limited number of 2D image frames at discrete focusing positions can be captured in a short time.
Further, in camera applications, the rapid focusing system can allow an integrated system of structured illumination that uses the same imaging lens to project the illumination pattern. The rapid focusing system can also include a “Divisional Sensor Area” configuration to allow the capture of structurally illuminated image and naturally illuminated image at the same time in the same frame.
FIGS. 2a-2b illustrate transparent disks of varying thickness for variable focusing systems in the prior art.
FIGS. 3a-3b illustrate a cross-slider-crank mechanism and a cross-slider-eccentric mechanism for generating sinusoidal motion in the prior art.
FIG. 3c illustrates a reflector system driven by the Rotary Reciprocating mechanism in the prior art.
a-c illustrates means for rapid change of optical path length by “refractive displacement” using a moving thin wedge prism in an example of rapid focusing system according to this invention.
a-b illustrates means of moving a thin wedge prism by Rotary Reciprocating motion according to this invention.
a illustrates means for rapid change of optical path length by “refractive displacement” using two thin wedge prisms according to this invention.
b illustrates preferred means for rapid change of optical path length by “refractive displacement” using two transparent disks with helical surface.
a illustrates means for moving two thin wedge prisms with respect to each other according to this invention.
a-c illustrates means for rapid change of optical path length by “refractive displacement” using two rotating thin wedge prisms according to this invention.
a-b further explain the working principle of the rotating thin wedge prisms.
a-b illustrates means of path shift correction in the method of rotating thin wedge prisms according to this invention.
a-b illustrates means for rapid change of optical path length by “reflective displacement” using a moving reflector system in examples of rapid zooming lens according to this invention.
a illustrates means for rapid change of optical path length by “reflective displacement” using an “obliquely moving reflector” according to this invention.
a-b illustrates means of “compensation of object path length” using moving reflectors for 3D imaging applications according to this invention.
FIGS. 15a-15d illustrate means for rapid change of optical path length by “reflective displacement” using rotating discrete reflector units according to this invention.
a illustrates a means for rapid change of optical path length by “reflective displacement” using a liquid crystal discrete reflector system according to this invention.
b illustrates a means for rapid change of optical path length by “refractive displacement” using a pair of rotating transparent stair-like disks according to this invention.
a-b illustrates means of integrating a structured illumination system in 3D imaging applications according to this invention.
a-c illustrates a “Divisional Sensor Area” configuration for simultaneous capture of structurally illuminated image and naturally illuminated image according to this invention.
a-c illustrate Appendix C's explanation of the focus shift caused by a parallel transparent material.
a-d illustrate Appendix D's optical design examples of a moving zoom lens system.
a-b illustrate Appendix D's optical design examples of a moving zoom lens system.
a-c illustrate Appendix E's geometric analysis of motions.
In a typical V3D display, a small SLM (spatial light modulator) is used as the image source. The moving screen is large relative to the SLM. That is, the magnification is large and the image distance is large relative to the object distance. Referring to lens formula equations (A1) and (A2) of Appendix A, when a small object with a small So is projected to a large distance (large Si), a slight change of So creates a large change of Si. Appendix B shows an example of projection design for a V3D display. An SLM of 8.75 mm height is projected to form a volume length of 5.625″ (142.88 mm). The required stroke (direct distance of screen motion from bottom to top) is 3″ (76.2 mm). A projection lens of f=36 mm is used. By lens formula equation (A1), a change of So of only 0.29 mm gives a change of Si of 3″ (76.2 mm). For cases of larger magnification, the required change of So is even smaller. In addition, Appendix B indicates another important observation: the So-to-Si curve can be approximated by a linear relation.
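The figures quoted above can be checked against the thin-lens formula. The sketch below is an independent verification, assuming the nominal image distance is derived from the stated magnification (142.88 mm / 8.75 mm) and that the 76.2 mm stroke is centered on it; these placements are assumptions, not taken from Appendix B itself.

```python
# Thin-lens check of the projection example: f = 36 mm, an 8.75 mm SLM
# projected to 142.88 mm, screen stroke 76.2 mm centered on the nominal
# image distance. All distances in millimeters.
f = 36.0
m = 142.88 / 8.75            # lateral magnification, ~16.33
Si_mid = f * (1.0 + m)       # nominal image distance, ~624 mm
stroke = 76.2

def object_distance(si, f=f):
    # 1/f = 1/So + 1/Si  ->  So = 1 / (1/f - 1/Si)
    return 1.0 / (1.0 / f - 1.0 / si)

So_near = object_distance(Si_mid - stroke / 2)   # screen nearest the lens
So_far = object_distance(Si_mid + stroke / 2)    # screen farthest away
delta_So = So_near - So_far                      # ~0.29 mm, as in the text
```

The computed delta_So comes out near 0.29 mm, matching the figure quoted from Appendix B.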
The position of a Rotary Reciprocating screen varies sinusoidally with time.
The first preferred approach to create a rapid and repeated change of So is by “refractive displacement”. It has been mentioned that placing a transparent material (of refractive index > 1) between the lens and the object and changing the thickness of the material changes the effective object distance. Referring to Appendix C, the following equation approximates the relation between the “focus shift” ds and the thickness W of the transparent material:
ds/W ≈ 1 − 1/n (3)
where n is the refractive index of the transparent material. This relation holds in an optical system having converging or diverging rays. This focus shift is the change of optical path length caused by refraction at the inserted transparent material, and it is proportional to the thickness of the material. The direction of the change is toward the direction of travel of the light (i.e., toward the downstream direction).
Applying the design example of Appendix B and assuming a transparent material of n=1.512 (e.g., BK7 glass), the change of thickness required to produce a change of So of 0.29 mm is:
ΔW ≈ ΔSo/(1 − 1/n) (4)
ΔW ≈ 0.29 mm/(1 − 1/1.512) = 0.856 mm
In general, ΔW is in millimeters to sub-millimeter range.
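Equation (4) can be evaluated directly; the snippet below simply reproduces the BK7 example above.

```python
# Focus shift through a transparent plate: ds/W ~ 1 - 1/n (Appendix C).
# Thickness change needed to shift the effective object distance by
# 0.29 mm using BK7 glass (n = 1.512), as in the text.
n = 1.512
delta_So = 0.29                       # mm, from the projection example
delta_W = delta_So / (1.0 - 1.0 / n)  # ~0.856 mm
```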
Placing a transparent material of varying thickness in the optical path and moving the transparent material with respect to the optical path changes the length of the optical path inside the transparent material. Thereby, “refractive displacement” can be achieved. The preferred transparent material is a thin wedge prism.
When the centerline of projection beam 303 strikes one surface (4011) at normal direction, the exit beam 410 is slightly deflected away from the original centerline 303 because of refraction at the exit surface (4012). However, this angle of deflection is fixed regardless of prism position, because the angle of the exit surface does not change. Therefore, this deflection error can be easily corrected by making the new centerline 410 the centerline of the projection. In addition, because of the thickness change, the exit point of the beam moves from 413 to 414 during one stroke. Accordingly, the exit beam 410 makes a small parallel shift to position 419. But because the thickness change is very small, this shift is even smaller. The resulting position shift on the screen is also small and can be pre-determined and then corrected by a small shift of the image content on the SLM.
c illustrates an example design of the thin wedge prism in side view. The motion stroke can be set to a number S that is roughly equal to or smaller than the diameter of the projection beam. The height of the prism is 2S. During the motion strokes, the centerline of projection moves between location 4013 and 4015. If the desired thickness difference between 4013 and 4015 is ΔW, then the wedge angle can be determined from ΔW and S.
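The wedge-angle calculation just described can be sketched as follows. The stroke S here is an assumed value (taken roughly equal to a 20 mm beam diameter, which the text does not specify); only the relation between ΔW, S and the wedge angle comes from the text.

```python
import math

# Wedge-angle estimate for the thin prism. The stroke S (assumed here
# to equal a 20 mm projection beam diameter) and the thickness
# difference dW over one stroke (from the earlier BK7 example) set the
# wedge angle via tan(angle) = dW / S.
S = 20.0        # mm, motion stroke ~ beam diameter (assumed value)
dW = 0.856      # mm, required thickness difference over one stroke
wedge_angle = math.degrees(math.atan(dW / S))   # a few degrees
```

For these numbers the wedge angle is only about 2.5 degrees, which illustrates the kinematic point made earlier: a large in-plane stroke produces a very small, precise thickness change.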
In order to generate the motion, one typical approach is to use a slider-crank mechanism 420, which includes a crank wheel 421, a sliding stage 423 carrying the prism, and a connecting rod 422 linking the crank and the sliding stage.
If the screen (Si) moves in a sinusoidal function, then the thickness of the transparent material should change in a sinusoidal function as well. This motion can be achieved by moving the prism in “rotary reciprocating” motion.
b illustrates an example of mechanism for the rotary reciprocating wedge prism. It has a pair of rotary arms 432A and 432B, which are also timing gears linked by a timing belt 435. A connecting rod 431 connects the two rotary arms. A third timing gear 433 is mounted to the same axis 437A of rotary arm 432A. Driving belt 436 drives the rotary arms. The arms rotate in unison and the connecting rod moves in “rotary reciprocating” motion. The wedge prism is mounted to the connecting rod such that the rotary reciprocating motion is on a plane parallel to one surface of the wedge prism. Counter weights (434A & 434B) are attached to the rotary arms to balance the centrifugal force caused by the rotation of the prism and the connecting rod. The motion is therefore smooth and vibration is minimized. Driving gear and belt (433 & 436) are linked to the driving mechanism of the screen so that the prism motion is synchronized to the screen motion. When the screen is at top (maximum Si), the prism is at bottom (maximum thickness). When the screen is at bottom, the prism is at top. One revolution of the screen matches one revolution of the wedge prism.
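The synchronization described above can be modeled numerically. The sketch below reuses the earlier example numbers (f = 36 mm, 76.2 mm screen stroke, BK7 prism with n = 1.512; the nominal image distance of 623.85 mm is an assumption carried over from that example) and checks that driving the prism thickness sinusoidally, in antiphase with the screen, keeps the residual focus error small, consistent with the near-linear So-to-Si relation noted earlier.

```python
import math

# Numerical check of screen/prism synchronization. Si and the prism
# thickness W are modeled as antiphase sinusoids (screen at top <->
# maximum thickness), which makes the effective object distance linear
# in sin(theta).
f, n = 36.0, 1.512
Si_mid, stroke = 623.85, 76.2   # mm; Si_mid assumed from the example

def required_So(si):
    """Object distance that focuses onto a screen at image distance si."""
    return 1.0 / (1.0 / f - 1.0 / si)

So_top = required_So(Si_mid + stroke / 2)   # screen at top: smallest So
So_bot = required_So(Si_mid - stroke / 2)   # screen at bottom: largest So
delta_W = (So_bot - So_top) / (1.0 - 1.0 / n)  # prism thickness swing, mm

max_err = 0.0
for k in range(360):                        # one revolution of the crank
    theta = math.radians(k)
    si = Si_mid + (stroke / 2) * math.sin(theta)
    # Sinusoidal W gives an effective So that is linear in sin(theta):
    so_eff = (So_bot + So_top) / 2 - ((So_bot - So_top) / 2) * math.sin(theta)
    max_err = max(max_err, abs(so_eff - required_So(si)))
```

For these numbers the required thickness swing is about 0.85 mm and the worst-case object-side focus error of the sinusoidal drive stays below 0.01 mm, supporting the claim that a sinusoidal prism motion can track a sinusoidal screen motion.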
a illustrates a “dual wedge prisms” approach that eliminates the deflection of the projection beam (410) described previously.
Further, both prisms can be made to move, instead of only one.
b illustrates an example embodiment of a dual moving wedge prism system. Basically, it combines two of the rotary reciprocating mechanisms described previously.
Another way to create thickness variation is to rotate the wedge prism(s).
In order to eliminate the deflection of projection beam, a second rotating wedge prism 402F is added to form a “dual wedge prisms” configuration.
a shows a close-up view around the projection beam 303. The two prisms rotate 180 degrees from the previously shown position.
If desired, a second pair of rotating wedge prisms can be added to eliminate the shift.
Another way to eliminate the shift is to use a mirror.
The “refractive displacement” methods described above do not maintain constant magnification. Therefore, the resulting display space has trapezoidal sides. Because the geometry of the display space is fixed and can be pre-determined, images can be pre-scaled before projection so that the resulting V3D image is displayed with minimum distortion.
In case a constant magnification is desired, a moving zoom lens is needed. In a zoom lens configuration, the two movable lens groups are called the variator and the compensator. The variator's main function is adjusting magnification; the compensator's main function is focusing. This follows the descriptions in the following reference: E. Betensky, “Zoom lens principle and types”, in Lens Design, ed. by W. J. Smith, SPIE Optical Engineering Press, Bellingham, Wash., 1992, p. 88. Appendix D outlines the optical design of a moving zoom lens and describes an example design for the purpose of V3D display. In general, as shown in Appendix D, when the image distance (Si) is not too large, the compensator is in the rear (close to the SLM) and the required displacement is smaller than a few millimeters. Therefore, the focusing systems described previously can be used. However, the required displacement of the variator is in the range of several centimeters. In the conventional approach, the variator can be moved directly in reciprocating motion by using a slider-crank mechanism or a cross-slider-crank mechanism.
Alternatively, the preferred approach is to keep the variator physically fixed and adjust the optical path length between the variator and the compensator.
If the projector system generates polarized images, then a single flat reflector can be used to adjust the spacing between the variator and the compensator.
In summary, the preferred moving zoom lens system comprises a moving reflector system between the compensator and the variator and a focusing system on the path of object distance. The moving reflector system adjusts the (optical) distance between the compensator and the variator. The focusing system adjusts object distance (So). The moving reflector system can be a moving reflector-pair (
Tab. 1 shows an example design of a moving zoom lens for a V3D display. The magnification and the screen motion stroke are set according to the example in Appendix B. The change of object distance (ΔSo) is 1.11 mm. The change of distance between the compensator and the variator (ΔD) is 28.21 mm.
If the object distance is large enough, the obliquely moving reflector system described previously can also be used.
Take note that the obliquely moving reflector, being a mirror on a wedge-shaped base, converts a larger in-plane movement into a small and precise displacement, as explained earlier.
If So is fixed and only D is allowed to change, then a moving zoom lens of two lens groups becomes a variable focusing system. Appendix D Tab. D2 shows an example.
For applications in imaging, the rapid focusing and zooming systems are used in similar ways except that light travels in the reversed direction. The object space of the projector becomes the image space of the camera. An image sensor replaces the SLM. The image space (display space) of the volumetric 3D display becomes the object space of the “volumetric 3D camera”. Instead of projecting multiple image frames from the SLM to the moving screen, the volumetric 3D camera captures multiple image frames from the object space onto the image sensor. Because of the principle of reversibility, object and image (and So and Si) are really interchangeable. For convenience, and in order to match the formulas of Appendix A, in this specification the image sensor of the camera is placed at the left side of the optical layout.
In general, maintaining constant magnification is not absolutely necessary in applications of volumetric 3D imaging. Variations of magnification may be corrected in image processing stage. However, it is preferred to have an optical system that provides constant magnification for the methods of SFF and DFD. In the paper of Nayar and Nakagawa (an SFF method), the image sensor and the lens are fixed. The object is moved by a stage such that the image plane “scans” the object. As a result, surface points of the object with best focus in every image frame have the same magnification. Surface points near those points with best focus also have about the same magnification. Accordingly, two types of scan can be used with this type of SFF method.
The first type is “compensation of object path by a moving reflector system”. This approach keeps all parameters (Si, D and So) unchanged. By using a moving reflector system placed after the imaging lens, the object plane can be moved without moving the camera body.
The second type is “scanning object plane by a moving zoom lens system”. The moving zoom lens system described previously can be used in the 3D camera system to maintain a constant magnification. The image plane can be kept always on the image sensor, while the object plane scans across the object. The effect is similar to the movable object stage of Nayar and Nakagawa. Further, by using the moving zoom lens, the camera can have a deep scan range at large distance.
In the paper by Watanabe et al. (a DFD method), the camera uses an aperture 107 placed at the front focal plane of the lens. This telecentric arrangement keeps the magnification constant as the focus changes.
In the applications of 3D camera, the scanning motion of the focusing and zooming system does not have to be back and forth. Therefore, the rotating disk with a helical surface of Fantone et al. (U.S. Pat. No. 6,066,857) described in the background section could be used as a “refractive displacement means”. However, it should be noted that any portion of a helical surface having a finite area is not really a “flat” surface, but a slightly twisted surface. This may not be a problem for a barcode reader because the image is of “bar” shape. But this can cause distortion in full frame 2D imaging. To eliminate the distortion, a matching helical shape should replace the stationary wedge prism placed at the back of the helical disk. The simplest solution is to use an identical helical disk (or a portion of it).
The optical path length changing means described previously create continuous changes. However, as already pointed out, in the method of Depth from Defocus it is possible to reconstruct the 3D image of an object from a limited number of picture frames taken at discrete focus positions. By reducing the number of picture frames, the rate of scan can be increased to capture high-speed 3D motion. For this purpose, means for fast but discrete focusing and zooming are preferred. There are two preferred approaches: the “discrete reflector system” and the “discrete refractive system”.
FIGS. 15a and 15b illustrate a preferred embodiment of the discrete reflector system. The discrete reflector 580 has stair-like reflective surfaces. Each stair step (581a, 581b . . . 581g) is parallel to the others but has a different elevation (thickness). The disk rotates around its centerline 590. The optical path 103 strikes the disk at a position off the rotating centerline 590. Therefore, the imaging beam 592 travels circularly 594 along the different reflective steps.
FIG. 15c illustrates a dual-unit system that has the advantages of creating more discrete positions and of increasing the sensor exposure time at each position. Using a single disk of stair-like reflectors, M steps are needed to create M different positions. In a dual-unit system, a unit of M reflector steps and a unit of N reflector steps are combined to create M×N different reflector positions.
Optically, a linear polarizer 571s converts the incoming beam into s-state. The polarization beam splitter reflects the s-polarized beam to the first disk. A quarter wave plate 393a turns the beam reflected from the first disk into p-state, which can pass the polarization beam splitter. The second quarter wave plate 393b turns the beam reflected from the second disk into s-state, which reflects at the PBS and reaches lens 305A.
Tab. 2 also shows that when the first disk rotates one turn (0-4-8-12), the second disk rotates four turns (0-1-2-3 repeated). In practice, gears or timing belts/gears can be used to maintain the speed and phase relation between the two disks. As a result, a total of 4×4=16 different reflector positions can be created. Using a single disk, 16 reflector steps would be needed to create 16 positions. Using two 4-step disks instead of a 16-step disk reduces disk size and increases the exposure time at every reflector position.
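The M×N position counting can be illustrated with a short enumeration. The step sizes, in arbitrary path-length units, are an assumption of this sketch rather than values from Tab. 2; what matters is that the coarse disk's step equals the fine disk's full range.

```python
# Dual-unit discrete reflector sketch: a disk of M steps combined with
# a disk of N steps yields M*N distinct path lengths. The first disk
# advances one coarse step (worth N fine units) while the second disk
# cycles through all N fine steps, as in the 0-4-8-12 / 0-1-2-3
# sequence of Tab. 2.
M, N = 4, 4
coarse, fine = N, 1            # step sizes in arbitrary path-length units

positions = []
for a in range(M):             # first disk: one revolution, M steps
    for b in range(N):         # second disk: N steps per first-disk step
        positions.append(a * coarse + b * fine)
```

The enumeration produces 16 distinct, evenly spaced positions, versus the 16-step single disk the text compares against.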
The second preferred embodiment of the discrete reflector system comprises a switchable multi-layer liquid crystal reflector unit. By applying a different voltage, one of the multiple reflective layers can be turned on. Buzak U.S. Pat. No. 4,670,744 and “A Field-sequential Discrete-Depth-Plane Three-Dimensional Display”, SID International Symposium v.16 1985, p. 345 (SID 85 Digest) describe this type of switchable multi-layer reflector unit. The two documents are incorporated herein for this current invention by reference.
When desired or convenient, a stair-like reflector and a switchable multi-layer reflector system can be combined.
The “discrete refractive system” changes the thickness of the transparent material discretely. The first preferred embodiment comprises a stair-like transparent disk. The disk is similar in shape to the stair-like reflector disk 580 described above.
For the same reason explained for the dual-unit reflector system, a pair of rotating stair-like transparent disks can be combined to multiply the number of discrete thickness positions.
In general, the discrete reflector systems have a range of centimeters and the discrete refractive systems have a range of sub-millimeter to millimeters.
The design examples below are mainly to illustrate approximate positions of optical components. Higher order image correction is not considered because it does not change component positions drastically. The design is based on first order layout formula of Appendix A.
This scenario is for imaging objects at short to mid range. Tab. 4 shows 3 cases of moving zoom lens design. In all cases, an image sensor size of about 8.8-10 mm is assumed. Case A is a mid-range case, roughly equivalent to imaging a person of 1.75 m height at a distance of 10 m with a scan depth of 2 m. Case B is a short-range case, roughly equivalent to imaging a 0.5 m high by 0.5 m deep object at 2.5 m distance. Case C is a very short-range case, roughly equivalent to imaging a 10 cm high by 10 cm deep object at a distance of 50 cm. The magnifications (m = size of object/image sensor height) and object distances (Si here) in these three cases are proportional. Under these conditions, a positive compensator as fA and a positive variator as fB (with significantly less power than fA) give roughly suitable So and D space for the required scanning mechanisms. The resulting ΔSo is in the range of millimeters and ΔD is in centimeters. For example, to construct a moving zoom lens system with a continuously scanning object plane, a system of Rotary Reciprocating wedge prisms can be used.
One of the most important parameters in the camera system of this invention is the longitudinal resolution. The longitudinal resolution can be related to the longitudinal magnification ml and the depth of focus δ by the following equation:
Longitudinal resolution (minimum object slice thickness) ts = δ·ml
The best (smallest) depth of focus of an imaging optics is limited by the Rayleigh criterion (λ/4 wavefront error) (Ref. Fischer and Tadic-Galeb, Optical System Design, p. 57):
δ = ±2λ(f/#)² ≈ ±(f/#)² (in micrometers)
f/# is the working f-number of the optical system. Tab. 5 shows the best longitudinal resolution achievable in cases A-C, assuming f/#=2 or 3 respectively.
If the focusing system does not maintain a constant magnification, then the longitudinal resolution will be much worse. For a fixed focal length lens, the longitudinal magnification is the square of the lateral magnification (Ref. Fischer and Tadic-Galeb, Optical System Design, p. 16):
Longitudinal magnification ml = ΔSi/ΔSo = m²
Tab. 6 shows examples using a single fixed focal length lens. Object distance Si and scanning depth S are set to match the values of Tab. 4. The resulting best longitudinal resolution is shown in Tab. 7. We can see that a lens of fixed focal length can have acceptable longitudinal resolution only in low-magnification, short-distance situations.
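The chain of relations above (Rayleigh depth of focus, longitudinal magnification, slice thickness) can be checked numerically. The following is a minimal illustrative sketch (the sample values are arbitrary, not those of Tabs. 4-7; function names are invented for illustration); m follows the text's convention, m = object size/image sensor height:

```python
# Illustrative sketch of the longitudinal-resolution relations above.
# delta = +/-2*lambda*(f/#)^2, roughly +/-(f/#)^2 micrometers for
# visible light (lambda ~ 0.5 um); for a fixed focal length lens the
# longitudinal magnification is m1 = m^2, and ts = delta * m1.

def depth_of_focus_um(f_number, wavelength_um=0.5):
    """One-sided Rayleigh-limited depth of focus, in micrometers."""
    return 2.0 * wavelength_um * f_number ** 2

def slice_thickness_um(f_number, m, wavelength_um=0.5):
    """Minimum object slice thickness ts = delta * m1, with m1 = m^2
    (m = object size / image sensor height, as defined in the text)."""
    return depth_of_focus_um(f_number, wavelength_um) * m ** 2

# At f/2, delta is about 4 um; at m = 10 the minimum slice thickness
# grows to about 400 um, illustrating the m^2 penalty.
```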
This scenario is imaging of large objects at long distance or imaging of a small volume at mid-range. Tab. 8 shows these two situations. It is still assumed that an image sensor 8.8-10 mm high is used. Case G is imaging of an object (e.g. part of a building) of 5 m height with a scan depth of 20 m at a distance of 100 m. Case H is imaging of an object of 14 cm height and 20 cm depth at a distance of 5 m. These cases require higher zoom power than cases A-C. Therefore, a telephoto lens design (Magnar) (negative fA with positive fB) is preferred. Under these conditions, ΔSo is generally larger than ΔD. In Case G, both ΔSo and ΔD are in the centimeter range. To construct a continuous scanning system, moving reflector systems are preferred for both the So changing unit and the D changing unit. To construct a discrete scanning system, discrete reflector systems can likewise be used for both parameters. In Case H, ΔD is in the millimeter range, so a refractive displacement unit can be used. When the lens groupings are moved directly, lens grouping 305B needs to be moved by an amount of (D+So), as shown in Tab. 4 and Tab. 8.
This scenario is a microscopic 3D camera. In
In order to capture moving 3D images, a high frame rate image sensor is needed. The preferred image sensors are high-speed CMOS sensors, with resolutions from VGA up to 10 Mpixels and frame rates up to 10,000 full frames per second. One typical example is a quantity-production product from Micron (www.micron.com): the MT9M413C36STC. This image sensor has 1280H×1024V pixels with a maximum full frame rate of 500 fps. In partial scan, the frame rate is inversely proportional to the number of vertical rows in the area of interest. For example, if the partial scan covers 1280×256 pixels, then the frame rate is 2000 fps. Applying this number to the continuous scanning system of this invention, if each volume scan has 100 frames, then the volume sweep rate is 20 volumes per second. In the discrete system, if 16 frames are taken in each volume scan, then the volume sweep rate is 125 volumes per second. These rates are sufficient to provide moving 3D images.
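The frame-rate arithmetic above can be sketched as follows, using the sensor figures quoted in the text (the function names are illustrative, not part of any sensor API):

```python
# Sketch of the frame-rate budget for the volumetric scan, using the
# figures quoted above for the Micron MT9M413C36STC (500 fps full frame).

FULL_FRAME_FPS = 500   # full-frame rate at 1280 x 1024
FULL_ROWS = 1024

def partial_scan_fps(rows_of_interest):
    # Frame rate scales inversely with the number of rows read out.
    return FULL_FRAME_FPS * FULL_ROWS // rows_of_interest

def volume_sweep_rate(fps, frames_per_volume):
    return fps / frames_per_volume

fps = partial_scan_fps(256)                # 2000 fps at 1280 x 256
continuous = volume_sweep_rate(fps, 100)   # 20 volumes/s (continuous scan)
discrete = volume_sweep_rate(fps, 16)      # 125 volumes/s (discrete scan)
```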
In methods of SFF or DFD, the computation of focus or defocus relies on texture information on the 3D surface. However, if the part to be imaged does not have enough texture, then a structured illumination is needed to add texture information to the 3D surface. For DFD methods, structured illumination is especially helpful in the computation of defocus function.
A separate projector (such as a laser pattern projector) can project a structured pattern to the object. Alternatively, a pattern projection system can be integrated into the 3D camera. In general, this is to use a beam splitter to guide the pattern projection beam into the optical path of the 3D camera and to use the imaging lens as a projection lens.
In order to obtain color images, it is preferred that structured illumination does not interfere with the capture of actual colors of the target. One approach is to separate the capture of actual color image and the capture of structurally illuminated image in the time domain. That is, at or near each focus position, the camera captures two successive exposures. One exposure is under structured illumination, but the other is not. In practice, LEDs (light emitting diodes) or a lamp with a high speed FLC (ferroelectric liquid crystal) shutter can be used as the light source, so that the light source can be modulated at high speed to match the frame rate of the image sensor. This effectively cuts the usable frame rate of the image sensor in half.
In order to capture 3D motion in color in real time, it is preferred that the structurally illuminated frame and the naturally illuminated frame are captured in the same frame at the same time. The basic concept is creating structured illumination using invisible light, separating the path of visible light and structured light, and then capturing the structurally illuminated frame in a separate area on the same image sensor.
a illustrates an example of optical layout in side view. The n-IR path is in dotted lines and the visible path is in dashed lines. The D changing unit 1700 uses a moving reflector-pair system similar to
c illustrates another example of optical layout for Divisional Sensor Area approach in side view. The D changing unit 1700 uses a single moving reflector system similar to the one of
The double-layer structure can also be applied to discrete reflector such as the one of
The optical layouts for the Divisional Sensor Area approach described above work even if the structured illumination is not an integrated part of the 3D camera. If an integrated illumination system is desired, the illumination beam can enter the n-IR path from 103b and then follow the n-IR imaging path in the reverse direction.
The rapid focusing and zooming systems described above are based on mechanical motion. Therefore, position errors due to manufacturing tolerance and alignment deviation are inevitable. However, the motion mechanisms described above, especially the Rotary Reciprocating mechanism, are periodic. As a result, the motion error is also periodic in nature. For example, if one of the object planes deviates from the ideal centerline during an outward scan, it will deviate in the same direction by the same amount during the next outward scan. Errors of such a periodic nature can therefore be corrected by pre-calibration. In general, the calibration and correction process includes the following steps:
This section includes the formulas that describe the relation between the locations and the powers of the lenses in an optical system. These formulas are from Hecht, E., Optics, 2nd ed., Addison-Wesley, Reading, Massachusetts, 1987, p. 138, and from Smith, W. J., "First-order Layout: from imagery to achromatism to athermalization to cost," in Lens Design, ed. W. J. Smith, SPIE Optical Engineering Press, Bellingham, Wash., 1992.
The Gaussian lens formula (or lensmaker's formula) describes image formation by a single lens:

1/So + 1/Si = 1/f (A1)

M = −Si/So (A2)
where f is the focal length of the lens, So is object distance (the distance between the lens and the object), Si is image distance (the distance between the lens and the image of the object formed by the lens), and M is magnification. (In equations (A1)-(A2), by Hecht's sign convention, So takes a positive value for a real object.)
In general, most optical systems are either limited to two components or can be separated into two-component segments. For a two-component system operating at finite conjugates as shown in
where fA and fB are the focal lengths of the components (positive for converging lens and negative for diverging lens), D is the distance between the components, So is the distance between the object and component A, Si is the distance between the image and component B.
If the component powers (each defined as the reciprocal of the focal length), the object-to-image distance T, and the magnification m are known, the component locations can be determined from the following equations:
(In equations (A3)-(A7), by Smith's sign convention, So takes a negative value for a real object.) There are six parameters: fA, fB, D, So, Si and m. Given any four of them, the remaining two can be determined by solving equations (A3)-(A4) or (A5)-(A6).
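As an illustrative alternative to solving the layout equations directly, the image distance and overall magnification of a two-component system can also be obtained by applying (A1)-(A2) to each component in turn. A minimal sketch under Hecht's sign convention (So positive for a real object; the sample values are arbitrary):

```python
# Illustrative two-component trace using (A1)-(A2) per component.
# A negative intermediate object distance denotes a virtual object for B.

def image_distance(f, s_o):
    """Single thin lens (A1): 1/So + 1/Si = 1/f, solved for Si."""
    return 1.0 / (1.0 / f - 1.0 / s_o)

def two_component(f_a, f_b, d, s_o):
    """Return (Si measured from component B, overall magnification m)."""
    si_a = image_distance(f_a, s_o)
    m_a = -si_a / s_o                 # (A2) for component A
    s_o_b = d - si_a                  # image of A is the object for B
    si_b = image_distance(f_b, s_o_b)
    m_b = -si_b / s_o_b               # (A2) for component B
    return si_b, m_a * m_b

# Symmetric sample case: fA = fB = 100, D = 400, So = 200
# gives Si = 200 and unit overall magnification.
si, m = two_component(100.0, 100.0, 400.0, 200.0)
```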
An SLM of 8.75 mm height is used to generate volumetric 3D images in a display space of depth 3″ (76.2 mm). At the middle of the display space (i.e. at depth 1.5″), the height of the space is 5.625″ (142.88 mm). A lens of f=36 mm is used. Therefore,
magnification at the middle position m(mid)=−142.88/8.75=−16.33
By Gaussian lens formula (Appendix A, eqn. (A1)-(A2)), the corresponding Si and So at the middle position of the image space can be obtained as follows:
Si (mid)=24.56″ (623.83 mm), So (mid)=1.504″ (38.21 mm)
Therefore, Si should scan the image space from about 23″ to about 26″.
Si (far point)=26.06″ (661.93 mm), So (far point)=1.499″ (38.07 mm), m=−17.39
Si (near point)=23.06″ (585.73 mm), So (near point)=1.510″ (38.36 mm), m=−15.27
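The numbers above can be reproduced directly from the Gaussian lens formula (A1)-(A2); a short check (Python is used here purely for the arithmetic):

```python
# Check of the Appendix B numbers with the Gaussian lens formula
# 1/So + 1/Si = 1/f, using f = 36 mm.

def object_distance(f, s_i):
    """Solve (A1) for So given the image distance Si."""
    return 1.0 / (1.0 / f - 1.0 / s_i)

F = 36.0
for label, s_i_mm in (("mid", 623.83), ("far", 661.93), ("near", 585.73)):
    s_o = object_distance(F, s_i_mm)
    m = -s_i_mm / s_o                 # (A2)
    print(f"{label}: Si = {s_i_mm:.2f} mm, So = {s_o:.2f} mm, m = {m:.2f}")
```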
a illustrates a geometric analysis in the situation of a parallel transparent material between a converging lens and an object. The focus shift ds can be determined as follows:
sin θ1/sin θ2=n (Snell's law)
ds/W = (W tan θ1 − W tan θ2)/(W tan θ1) = 1 − tan θ2/tan θ1 = 1 − (sin θ2/sin θ1)(cos θ1/cos θ2) = 1 − (cos θ1/cos θ2)/n (C1)
(cos θ1/cos θ2) ≈ 1 (see FIG. 22b)
ds/W ≈ 1 − 1/n (C2)
The equations also apply to the case of a diverging lens, as shown in
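A numerical illustration of approximation (C2); the 3 mm/n = 1.5 sample values are illustrative only:

```python
# Focus shift of a converging (or diverging) beam caused by a parallel
# transparent plate, per approximation (C2): ds ~= W * (1 - 1/n).

def focus_shift(plate_thickness, n):
    """Approximate longitudinal focus shift; same units as thickness."""
    return plate_thickness * (1.0 - 1.0 / n)

# e.g. a 3 mm glass plate with n = 1.5 shifts the focus by about 1 mm
```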
The design here is for illustrating design principles only. Therefore, the two-component system layout formulas of Appendix A are used. The example of Appendix B is used as a reference case (Tab. D1 row #1). It is assumed that component A is the compensator and fA=36 mm. Adding a negative lens (as variator) to the front can maintain the projected image at the same magnification over a range within the image distance of the reference case (i.e. for Si<624 mm). Tab. D1 Example 1 shows an example. The negative lens (component B) and the image plane move in opposite directions. The displacement of the negative lens is on the same order as that of the object plane.
Adding a positive lens (as variator) to the front can maintain the projected image at the same magnification over a range (slightly) beyond the image distance of the reference case (i.e. for Si>624 mm). Example 2 shows this case. The variator and the image plane move in the same direction. The displacement of the variator is smaller than that of the image plane.
In the two-component system, if So is fixed and only D is allowed to change, this constitutes another variable focusing system. Tab. D2 shows an example. This example is similar to Example 1 except that So is fixed. The change of D is under 1 centimeter. The change of m is slightly larger than that of Appendix B (variable focusing by changing So).
The configuration of Example 2 can be used at larger Si and m for camera applications, as shown in Example 3. The So and D changes are within a few millimeters and centimeters, respectively. To maintain constant m at even larger distances, a telephoto lens configuration is preferred. A telephoto lens has a negative fA and a positive fB (Ref. Hecht, E., Optics, 2nd ed., Addison-Wesley, Reading, Massachusetts, 1987, p. 202). Example 4 shows an example. Both the D and So changes are in the centimeter range.
Plane 2031 revolves about point O while keeping its surface always facing upward. R is the radius of rotation (i.e. R=ON=OP). The change of vertical position of the screen Δz as a function of revolving angle θ is as follows:
Δz=ON−OP cos θ=R−R cos θ (E1)
Δz is a sinusoidal function.
c shows a crank rotating around O. A connecting rod connects the crank at P to the slider at Q. The slider moves along line MN.
Δz = TQ + QU = R − R cos θ + L − L cos α = R − R cos θ + L − L·sqrt(1 − (sin α)²) = R − R cos θ + L(1 − sqrt(1 − (R sin θ/L)²)) (E2)
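Equations (E1) and (E2) can be evaluated as follows (a minimal sketch; for L much larger than R the second term of (E2) vanishes and the slider-crank motion approaches the sinusoidal motion of (E1)):

```python
import math

def dz_revolving(R, theta):
    """(E1): plane revolving about O, Dz = R(1 - cos theta)."""
    return R * (1.0 - math.cos(theta))

def dz_slider_crank(R, L, theta):
    """(E2): slider-crank with crank radius R and rod length L."""
    sin_a = R * math.sin(theta) / L   # sin(alpha) from the geometry
    return R * (1.0 - math.cos(theta)) + L * (1.0 - math.sqrt(1.0 - sin_a ** 2))
```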
This application claims the benefit of prior U.S. provisional application No. 60/995,295, filed Sep. 26, 2007, the contents of which are incorporated herein by reference. This invention relates to the following US patents by Tsao: U.S. Pat. No. 5,954,414, U.S. Pat. No. 6,302,542 B1, U.S. Pat. No. 6,765,566 B1, and U.S. Pat. No. 6,961,045 B2. This invention also relates to the following co-pending U.S. application by Tsao: Ser. No. 11/156,792 (claiming domestic priority of provisional application No. 60/581,422, filed Jun. 21, 2004). The above documents are therefore incorporated herein for this invention by reference.
Number | Date | Country
---|---|---
60995295 | Sep 2007 | US