Method for high resolution incremental imaging

Information

  • Patent Grant
  • Patent Number
    7,233,351
  • Date Filed
    Friday, February 23, 2001
  • Date Issued
    Tuesday, June 19, 2007
Abstract
A method and apparatus to capture a high resolution photograph of a target. A focal zone of a linear image sensing array is displaced across an area containing a target to be photographed. The displacement may be angular or linear with appropriate scaling to yield the end photograph. By changing the focal depth, relief of the target may be fully focused in one or more passes.
Description
BACKGROUND

1. Field of the Invention


The invention relates to high resolution photography. More specifically, the invention relates to capturing a photographic image using an angularly displaced image sensing array.


2. Background


Standard photography has existed for decades. A lens or series of lenses focuses light onto a light-sensitive emulsion when a shutter opens. The lens is focused on a plane at some distance from the camera and captures in acceptable focus those things in that plane and some distance in either direction from the plane. That area in which an acceptably focused image may be captured is the depth of field. The depth of field dictates the focus of more distant features of the object photographed as well as its surroundings. A standard photograph is a planar representation of a focal plane from the perspective of the camera.


Various techniques for capturing digital images have proliferated. Digital photography is becoming increasingly mainstream. Relatively high resolution pictures may be captured using existing megapixel cameras, which are widely commercially available. One advantage of digital images is the ability to manipulate them on a computer, in particular to zoom in and see fine detail in the image. The general depth of field of existing digital cameras, as well as their resolution, causes the image to break down relatively rapidly on successive zooms.





BRIEF DESCRIPTION OF THE DRAWINGS

The invention is illustrated by way of example and not by way of limitation in the figures of the accompanying drawings in which like references indicate similar elements. It should be noted that references to “an” or “one” embodiment in this disclosure are not necessarily to the same embodiment, and such references mean at least one.



FIG. 1 is a schematic diagram of a full focus capture system of one embodiment of the invention.



FIG. 2 is a flow diagram of capturing a full focus image in one embodiment of the invention.



FIG. 3 is a flow diagram of scanning an object of one embodiment of the invention.



FIG. 4 is a schematic diagram of a system of one embodiment of the invention.



FIG. 5 is a schematic diagram of an alternative embodiment of the invention.





DETAILED DESCRIPTION


FIG. 1 is a schematic diagram of a full focus capture system of one embodiment of the invention. A capture device 10 is angularly displaced about an axis. By sweeping some element in the optical path through an arc, a focal zone of an image sensor moves across a target and image data is captured. As used herein, a “focal zone” is deemed to be the view of the sensing element of a capture device during a capture period. In one embodiment, capture device 10 uses a linear image sensing array (LISA) to capture lines of data to be assembled into a photograph. Capture device 10 may be of the type described in copending application Ser. No. 09/660,809 entitled DIGITIZER USING INTENSITY GRADIENT TO IMAGE FEATURES OF THREE-DIMENSIONAL OBJECTS, now U.S. Pat. No. 6,639,684. Such a capture device enables the system to directly derive three-dimensional information about the object. Such three-dimensional data may be used to generate a full focus photograph as described below. It is, however, not essential to the instant invention that the capture device be able to derive three-dimensional information about the target object.


The optics of capture device 10 are assumed to have a depth of field “d” at a given distance. Depth of field tends to increase with distance from the capture device. As previously indicated, depth of field is the range of distance over which the acceptable focus can be achieved without varying the lens arrangement. As used herein, depth refers to distance from a reference to a point on the surface of the target (rather than, for example, thickness of the material of the target itself). For one embodiment, acceptable focus is defined to be where the defocusing blur is less than one pixel width. A target 12 which has a surface relief greater than d cannot be fully focused in a single pass. As used herein, a target may be one or more objects or an environment. Accordingly, the capture device 10 establishes a first focal distance r to, for example, bring the leading edge 34 of a target into focus. Given the depth of field d, all points within zone 14 are in focus during a first pass. However, portions of the target 12 outside of zone 14, such as surface 36 and surface 38, will be out of focus. On subsequent passes the focal distances r′ and r″ are established to bring zones 16 and 18 into focus and thereby achieve focus of surfaces 36 and 38. After three passes, three images of target object 12 have been captured. Those images may then be composited either within the capture device or on a host (not shown), such that only those points within the photograph that are in the best focus of the three images are selected for the composite picture. In some embodiments, the images may be used to create a composite texture map for a three-dimensional model of the target. Accordingly, as used herein, “image” may include all or a portion of a texture map. In one embodiment, the best focus may be determined by processing the image in the frequency domain. 
Edges are sharper, and focus correspondingly better, where the rate of change of data between adjacent pixels is greatest. This is reflected as peaks in the frequency domain. In alternate embodiments, three-dimensional data about the object may be used to select pixels from the various images captured. In either case, a full focus of surfaces 34, 36 and 38 can be achieved in the composite photograph. While three passes are described, the disclosed techniques may be generalized to N passes where N is an arbitrarily large number.
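The patent leaves the sharpness measure open beyond noting that rapid pixel-to-pixel change shows up as high-frequency peaks. The sketch below uses squared-Laplacian energy, a common spatial-domain proxy for high-frequency content, to pick the best-focused of N registered images per pixel; the function names and window size are illustrative, not from the patent.

```python
import numpy as np

def laplacian(img):
    # Discrete Laplacian: large magnitude where adjacent pixels change
    # fastest, i.e. at sharp (well-focused) edges.
    return (np.roll(img, 1, 0) + np.roll(img, -1, 0)
            + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)

def box_filter(a, r=1):
    # Average over a (2r+1) x (2r+1) window so per-pixel selection is
    # stable rather than driven by single-pixel noise.
    out = np.zeros_like(a)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(a, dy, 0), dx, 1)
    return out / (2 * r + 1) ** 2

def composite_best_focus(images):
    # For each pixel, keep the image whose local high-frequency energy is
    # greatest -- a stand-in for the frequency-domain test in the text.
    sharp = np.stack([box_filter(laplacian(im) ** 2) for im in images])
    best = np.argmax(sharp, axis=0)
    stack = np.stack(images)
    rows = np.arange(stack.shape[1])[:, None]
    cols = np.arange(stack.shape[2])[None, :]
    return stack[best, rows, cols]
```

A texture-rich region of one pass will out-score the defocused (flattened) rendering of the same region in the other passes, so the composite inherits each surface from the pass that had it in focus.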


Because the capture device operates based on angular displacement, a planar assumption must be imposed to yield a standard photograph. Those points off the perpendicular from the capture device 10 to the target 12 will need to be scaled to compensate for the greater distance. Additionally, the angular displacement will be shorter at the edges of the arc and longer closer to the perpendicular in view of the fact that it is desirable to have the same linear displacement S between respective captures regardless of where on the plane the capture is to occur. As shown, the angular displacement between the two captures defining area 20 is less than the angular displacement between the two captures defining area 22, while the linear displacement between the two captures on the photo plane remains S.
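The relationship above can be made concrete: if captures must land a constant linear distance S apart on the target plane, the capture angles follow an arctangent schedule, and the angular step between captures shrinks toward the edges of the arc. A minimal sketch, with r, s and n as assumed illustrative parameters:

```python
import math

def angular_steps(r, s, n):
    """Angular displacement between successive captures, chosen so that
    focal zones land a constant linear distance s apart on a target plane
    at perpendicular distance r from the axis of rotation."""
    # Angle of capture i, measured from the perpendicular to the plane.
    angles = [math.atan(i * s / r) for i in range(n)]
    # Step i is the rotation needed between captures i and i+1.
    return [b - a for a, b in zip(angles, angles[1:])]
```

Because arctangent flattens as its argument grows, the steps decrease monotonically away from the perpendicular, matching the smaller angular displacement between the captures defining area 20 versus area 22.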


It is within the scope and contemplation of the invention to adjust the angular velocity while maintaining a constant capture rate, or to adjust the capture rate while maintaining a constant angular velocity, to effect the consistent linear displacement between captures. It is also within the scope and contemplation of the invention to dynamically change the angular displacement between captures during scanning based on data captured or known characteristics of the target. For example, for target 12, closely spaced captures on surface 34 contribute little during the pass focused on surface 36, assuming surface 34 is homogeneous.


In another embodiment of the invention, the capture device 10 automatically changes the focal distance between displacements to compensate for distance from a reference position. For example, the focal distance for the captures defining area 20 would be longer than the focal distances defining area 22. In this manner, the capture device 10 may impose a focal plane on the image where, without this changing focal distance, the capture device 10 would typically have a focal cylinder resulting from the angular displacements. The plane need not be imposed perpendicular to the capture device 10, and other capture patterns, such as those chosen to more closely match a surface relief of the target 12, are within the scope and contemplation of the invention.


In one embodiment, before beginning actual image capture, the image capture device 10 performs a rough scan to discern the number of passes of capture required to achieve a full focus end photograph. In another embodiment, the capture device 10 begins capturing at a preestablished focal distance and iteratively captures subsequent depths of field until a prescribed number of passes have occurred. In still another embodiment, the system infers from data captured and dynamically determines what additional depths should be captured.


In one embodiment, the capture device 10 captures a texture map of a facing surface of the target 12 through one or more passes. As used herein, “facing surface” is deemed to mean the surface of the target object visible from the point of view of the capture device assuming an infinite field of view. In some embodiments, the target object may be repositioned relative to the capture device by, for example, a turntable. In one embodiment, the capture occurs while the object is illuminated by non-coherent broad spectrum illumination, such that no laser is required for the capture.



FIG. 2 is a flow diagram of capturing a full focus image. Generally speaking a target to be photographed has some relief, i.e., depth characteristics. Except for the limiting case where the object is arcuate, it is relatively likely that the relief will be greater than the depth of field of any static imaging device. Thus, an image captured with any particular focal point on the object in focus will necessarily result in other points being out of focus. Accordingly, in one embodiment of the invention, the relief of the object to be scanned is identified at functional block 100. Then at functional block 102 the number of passes to be used to create a full focus image is determined. By way of example, if the depth of field of the lens assembly in use is 1″, three passes would be required to achieve a full focus image of an object having a 2.5″ relief. At functional block 104 a capture device is set to have a first depth of field. At functional block 106 the object is scanned at the first depth of field. At functional block 108 the depth of field is incremented. At decision block 110 a determination is made if the number of passes for a full focus is complete. If it is not, the object is rescanned and further incrementation of the depth of field occurs. When the number of passes for full focus is complete, pixels are selected from a plurality of scans to form a full focus image at functional block 112. Selection of the pixels at functional block 112 may be accomplished as the result of knowledge about three-dimensional characteristics of the target object, or may be inferred by looking at the pixels from each respective image and comparing the relative focus of the pixels in the different images corresponding to the same region.
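The pass count at functional block 102 is just the relief divided by the available depth of field, rounded up so the stacked in-focus zones tile the whole relief. A one-line sketch (the function name is illustrative):

```python
import math

def passes_required(relief, depth_of_field):
    # Each pass brings one zone, one depth of field thick, into acceptable
    # focus; enough zones must be stacked to cover the target's relief.
    return math.ceil(relief / depth_of_field)
```

With the figures from the text, a 2.5″ relief at a 1″ depth of field gives three passes.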



FIG. 3 is a flow diagram of scanning an object of one embodiment of the invention. At functional block 200 an image capture device is angularly displaced about an axis. At functional block 202 a line image is captured corresponding to the current orientation of the capture device. At functional block 204 the displacement between captures is adjusted for a distance from the reference position. At functional block 206 the line image is scaled consistent with a target plane. At functional block 208 a determination is made if capture of the target plane is complete. If it is not, the capture device is again angularly displaced based on the adjusted displacement rate, and further line captures occur consistent with functional blocks 200–206. If capture of the target plane is complete the line images are aggregated into a photograph at functional block 210.



FIG. 4 is a schematic diagram of a system of one embodiment of the invention. The capture device 410 captures the image of a target 400 that resides within the field of view 414. The image is captured by successive displacements of a focal zone of a linear image sensor within capture device 410. In one embodiment, the linear image sensor is displaced linearly across an aperture to capture a full frame with successive linear captures. Because the field of view is insufficiently wide to capture a desired photograph of target 400, after the first image is captured the image capture device 410 is automatically repositioned so that its field of view becomes field of view 416. The subsequent image may be captured through the linear displacements of the focal zone of the capture device 410. As shown, example fields of view 414 and 416 overlap so that a portion of the target 400 is redundantly captured. It is within the scope and contemplation of the invention to reduce or increase the amount of such overlap, though some overlap is desirable to ensure data is not lost at the margin. The two images captured may be processed to append them together to form a single photograph of the entire target 400. It is also within the scope and contemplation of the invention that the repositioning may be linear rather than angular. For example, the capture device 410 could translate along a guide rod (not shown) to take successive pictures along a plane parallel to the guide rod.
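The append step can be sketched for the idealized case where the overlapping columns from the two fields of view match exactly; a real system would register the strips first, so treat this as a minimal illustration (the function name and exact-match check are assumptions, not the patent's method):

```python
import numpy as np

def append_strips(left, right, overlap):
    """Join two image strips from adjacent fields of view that share
    `overlap` redundant columns, keeping the shared region once."""
    # Idealized check that the redundant margin really was captured twice.
    assert np.array_equal(left[:, -overlap:], right[:, :overlap])
    return np.concatenate([left, right[:, overlap:]], axis=1)
```

The redundant margin is what guarantees no data is lost between the two captures; it is trimmed from one strip when the single photograph is assembled.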



FIG. 5 is a schematic diagram of an alternative embodiment of the invention. In this embodiment, capture device 510 adjusts the focus of its lens assembly as it moves through a series of angular displacements. This effectively creates a planar focal zone consistent with target object 512. If r is the perpendicular distance from the capture device to the desired focal plane, the focal distance for the other displacements is given by r/sin θ. By appropriately adjusting the focal distance, a high resolution image of a planar surface can be captured. In one embodiment, an analogous focus adjustment is used where three-dimensional depth data for the object, in conjunction with the appropriate trigonometric relationship, is used to establish the focal distance.
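The r/sin θ relationship can be stated directly, with θ taken as the angle between the line of sight and the target plane so that the perpendicular (θ = π/2) reduces to the distance r:

```python
import math

def focal_distance(r, theta):
    # theta (radians) is the angle between the line of sight and the target
    # plane; off-perpendicular lines of sight must focus farther away to
    # keep the planar target in focus.
    return r / math.sin(theta)
```

As θ shrinks toward the edge of the sweep, sin θ falls and the required focal distance grows, which is why the captures defining area 20 in FIG. 1 use a longer focal distance than those defining area 22.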


In the foregoing specification, the invention has been described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes can be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

Claims
  • 1. A method comprising: acquiring depth data about a target; determining from the depth data a number of images at an available depth of field required to achieve acceptable focus of the target; capturing images of the target at successive depths until the number of images has been captured; and combining data from the images to form a composite image which has a greater percentage of total area at an acceptable focus than any single image.
  • 2. The method of claim 1 wherein combining comprises: analyzing corresponding regions of pixels from the images captured at the number of depths; and selecting pixels from the corresponding region having a greatest clarity to be a pixel of the composite image.
  • 3. The method of claim 1 wherein combining comprises: identifying from the depth data regions likely to have acceptable focus in an image captured at a particular depth; and assembling pixels from the identified regions to form the composite image.
  • 4. The method of claim 1 wherein acquiring comprises: conducting an initial scan of the target to capture depth data.
  • 5. The method of claim 1 wherein capturing is performed using a linear image sensing array.
  • 6. A method comprising: acquiring depth data about a target including accessing a data file containing information about the target; determining from the depth data a number of images at an available depth of field required to achieve acceptable focus of the target; capturing images of the target at successive depths until the number of images has been captured; and combining data from the images to form a composite image which has a greater percentage of total area at an acceptable focus than any single image.
US Referenced Citations (81)
Number Name Date Kind
3636250 Haeff Jan 1972 A
4089608 Hoadley May 1978 A
4404594 Hannan Sep 1983 A
4564295 Halioua Jan 1986 A
4590608 Chen et al. May 1986 A
4641972 Halioua et al. Feb 1987 A
4657394 Halioua Apr 1987 A
4705401 Addleman et al. Nov 1987 A
4724525 Purcell et al. Feb 1988 A
4737032 Addleman et al. Apr 1988 A
4802759 Matsumoto et al. Feb 1989 A
4846577 Grindon Jul 1989 A
5067817 Glenn Nov 1991 A
5131844 Marinaccio et al. Jul 1992 A
5132839 Travis Jul 1992 A
5135309 Kuchel et al. Aug 1992 A
5148502 Tsujiuchi et al. Sep 1992 A
5175601 Fitts et al. Dec 1992 A
5216817 Misevich et al. Jun 1993 A
5218427 Koch Jun 1993 A
5231470 Koch Jul 1993 A
5282045 Mimura et al. Jan 1994 A
5285397 Heier et al. Feb 1994 A
5289264 Steinbichler Feb 1994 A
5307292 Brown et al. Apr 1994 A
5315512 Roth May 1994 A
5335317 Yamashita et al. Aug 1994 A
5337149 Kozah et al. Aug 1994 A
5377011 Koch Dec 1994 A
5414647 Ebenstein et al. May 1995 A
5432622 Johnson et al. Jul 1995 A
5453784 Krishnan et al. Sep 1995 A
5471303 Ai et al. Nov 1995 A
5531520 Grimson et al. Jul 1996 A
5559334 Gupta et al. Sep 1996 A
5592563 Zahavi Jan 1997 A
5611147 Raab Mar 1997 A
5617645 Wick et al. Apr 1997 A
5627771 Makino May 1997 A
5636025 Bieman et al. Jun 1997 A
5646733 Bieman Jul 1997 A
5659804 Keller Aug 1997 A
5661667 Rueb et al. Aug 1997 A
5678546 Truppe Oct 1997 A
5689446 Sundman et al. Nov 1997 A
5701173 Rioux Dec 1997 A
5704897 Truppe Jan 1998 A
5708498 Rioux et al. Jan 1998 A
5745175 Anderson Apr 1998 A
5747822 Sinclair et al. May 1998 A
5748194 Chen May 1998 A
5771310 Vannah Jun 1998 A
5794356 Raab Aug 1998 A
5805289 Corby, Jr. et al. Sep 1998 A
5864640 Miramonit et al. Jan 1999 A
5870220 Migdal et al. Feb 1999 A
5880846 Hasman et al. Mar 1999 A
5907359 Watanabe May 1999 A
5910845 Brown Jun 1999 A
5944598 Tong et al. Aug 1999 A
5946645 Rioux et al. Aug 1999 A
5978102 Matsuda Nov 1999 A
5988862 Kacyra et al. Nov 1999 A
5995650 Migdal et al. Nov 1999 A
5999641 Miller et al. Dec 1999 A
6016487 Rioux et al. Jan 2000 A
6037584 Johnson et al. Mar 2000 A
6057909 Yahav et al. May 2000 A
6078701 Hsu et al. Jun 2000 A
6091905 Yahav et al. Jul 2000 A
6100517 Yahav et al. Aug 2000 A
6111582 Jenkins Aug 2000 A
6115146 Suzuki et al. Sep 2000 A
6137896 Chang et al. Oct 2000 A
6157747 Szeliski et al. Dec 2000 A
6192393 Tarantino et al. Feb 2001 B1
6233014 Ochi et al. May 2001 B1
6535250 Okisu et al. Mar 2003 B1
6721465 Nakashima et al. Apr 2004 B1
7058213 Rubbert et al. Jun 2006 B2
7068836 Rubbert et al. Jun 2006 B1
Foreign Referenced Citations (2)
Number Date Country
4134546 Aug 1993 DE
WO 0109836 Feb 2001 WO