The present disclosure relates to a method and optical inspection apparatus for determining a three-dimensional structure of a surface. In another aspect, the present disclosure relates to material inspection systems, such as computerized systems for the inspection of moving webs of material.
Online measurement and inspection systems have been used to continuously monitor the quality of products as the products are manufactured on production lines. The inspection systems can provide real-time feedback to enable operators to quickly identify a defective product and evaluate the effects of changes in process variables. Imaging-based inspection systems have also been used to monitor the quality of a manufactured product as it proceeds through the manufacturing process.
The inspection systems capture digital images of a selected part of the product material using sensors such as, for example, CCD or CMOS cameras. Processors in the inspection systems apply algorithms to rapidly evaluate the captured digital images of the sample of material to determine if the sample, or a selected region thereof, is suitably defect-free for sale to a customer.
Online inspection systems can analyze two-dimensional (2D) image characteristics of a moving surface of a web material during the manufacturing process, and can detect, for example, relatively large-scale non-uniformities such as cosmetic point defects and streaks. Other techniques such as triangulation point sensors can achieve depth resolution of surface structure on the order of microns at production line speeds, but cover only a single point on the web surface, and as such provide a very limited amount of useful three-dimensional (3D) information on surface characteristics. Still other techniques such as laser line triangulation systems can achieve full 3D coverage of the web surface at production line speeds, but have a low spatial resolution, and as such are useful only for monitoring large-scale surface deviations such as web curl and flutter.
3D inspection technologies such as, for example, laser profilometry, interferometry, and 3D microscopy (based on Depth from Focus (DFF)) have been used for surface analysis. DFF surface analysis systems image an object with a camera and lens having a narrow depth of field. As the object is held stationary, the camera and lens are scanned depth-wise over various positions along the z-axis (i.e., parallel to the optical axis of the lens), capturing an image at each location. As the camera is scanned through multiple z-axis positions, points on the object's surface come into focus at different image slices depending on their height above the surface. Using this information, the 3D structure of the object surface can be estimated relatively accurately.
In one aspect, the present disclosure is directed to a method including imaging a surface with at least one imaging sensor, wherein the surface and the imaging sensor are in relative translational motion, and wherein the sensor includes a lens with a focal plane aligned at a non-zero viewing angle with respect to an x-y plane in a surface coordinate system; registering a sequence of images of the surface; stacking the registered images along a z direction in a camera coordinate system to form a volume; determining a sharpness of focus value for each (x,y) location in the volume, wherein the (x,y) locations lie in a plane normal to the z direction in the camera coordinate system; determining, using the sharpness of focus values, a depth of maximum focus zm along the z direction in the camera coordinate system for each (x,y) location in the volume; and determining, based on the depths of maximum focus zm, a three dimensional location of each point on the surface.
In another aspect, the present disclosure is directed to a method including capturing with an imaging sensor a sequence of images of a surface, wherein the surface and the imaging sensor are in relative translational motion, and wherein the imaging sensor includes a telecentric lens having a focal plane aligned at a non-zero viewing angle with respect to an x-y plane in a surface coordinate system; aligning a reference point on the surface in each image in the sequence to form a registered sequence of images; stacking the registered sequence of images along a z direction in a camera coordinate system to form a volume, wherein each image in the registered sequence of images comprises a layer in the volume; computing a sharpness of focus value for each pixel within the volume, wherein the pixels lie in a plane normal to the z direction in the camera coordinate system; computing, based on the sharpness of focus values, a depth of maximum focus value zm for each pixel within the volume; determining, based on the depths of maximum focus zm, a three dimensional location of each point on the surface; and constructing a three-dimensional model of the surface based on the three dimensional point locations.
In yet another aspect, the present disclosure is directed to an apparatus, including an imaging sensor with a telecentric lens, wherein the lens has a focal plane aligned at a non-zero viewing angle with respect to an x-y plane in a surface coordinate system, wherein the surface and the imaging sensor are in relative translational motion, and wherein the sensor images the surface to form a sequence of images thereof; a processor that: aligns in each image in the sequence a reference point on the surface to form a registered sequence of images; stacks the registered sequence of images along a z direction in a camera coordinate system to form a volume, wherein each image in the registered sequence of images comprises a layer in the volume; computes a sharpness of focus value for each pixel within the volume, wherein the pixels lie in a plane normal to the z direction in the camera coordinate system; computes, based on the sharpness of focus values, a depth of maximum focus value zm for each pixel within the volume; determines, based on the depths of maximum focus zm, a three dimensional location of each point on the surface; and constructs a three-dimensional model of the surface based on the three dimensional locations.
In yet another aspect, the present disclosure is directed to a method including positioning a stationary imaging sensor at a non-zero viewing angle with respect to a moving web of material, wherein the imaging sensor includes a telecentric lens to image a surface of the moving web and form a sequence of images thereof; processing the sequence of images to: register the images; stack the registered images along a z direction in a camera coordinate system to form a volume; determine a sharpness of focus value for each (x,y) location in the volume, wherein the (x,y) locations lie in a plane normal to the z direction in the camera coordinate system; determine a depth of maximum focus zm along the z direction in the camera coordinate system for each (x,y) location in the volume; and determine, based on the depths of maximum focus zm, a three dimensional location of each point on the surface of the moving web.
In yet another aspect, the present disclosure is directed to a method for inspecting a moving surface of a web material in real time and computing a three-dimensional model of the surface, the method including capturing with a stationary sensor a sequence of images of the surface, wherein the imaging sensor includes a camera and a telecentric lens having a focal plane aligned at a non-zero viewing angle with respect to an x-y plane of a surface coordinate system; aligning a reference point on the surface in each image in the sequence to form a registered sequence of images; stacking the registered sequence of images along a z direction in a camera coordinate system to form a volume, wherein each image in the registered sequence of images comprises a layer in the volume; computing a sharpness of focus value for each pixel within the volume, wherein the pixels lie in a plane normal to the z direction in the camera coordinate system; computing, based on the sharpness of focus values, a depth of maximum focus value zm for each pixel within the volume; determining, based on the depths of maximum focus zm, a three dimensional location of each point on the surface; and constructing the three-dimensional model of the surface based on the three dimensional locations.
In yet another aspect, the present disclosure is directed to an online computerized inspection system for inspecting web material in real time, the system including a stationary imaging sensor including a camera and a telecentric lens, wherein the lens has a focal plane aligned at a non-zero viewing angle with respect to a plane of a moving surface, and wherein the sensor images the surface to form a sequence of images thereof; a processor that: aligns in each image in the sequence a reference point on the surface to form a registered sequence of images; stacks the registered sequence of images along a z direction in a camera coordinate system to form a volume, wherein each image in the registered sequence of images comprises a layer in the volume; computes a sharpness of focus value for each pixel within the volume, wherein the pixels lie in a plane normal to the z direction in the camera coordinate system; computes, based on the sharpness of focus values, a depth of maximum focus value zm for each pixel within the volume; determines, based on the depths of maximum focus zm, a three dimensional location of each point on the surface; and constructs a three-dimensional model of the surface based on the three dimensional locations.
In yet another aspect, the present disclosure is directed to a non-transitory computer readable medium including software instructions to cause a computer processor to: receive, with an online computerized inspection system, a sequence of images of a moving surface of a web material, wherein the sequence of images is captured with a stationary imaging sensor including a camera and a telecentric lens having a focal plane aligned at a non-zero viewing angle with respect to an x-y plane of a surface coordinate system; align a reference point on the surface in each image in the sequence to form a registered sequence of images; stack the registered sequence of images along a z direction in a camera coordinate system to form a volume, wherein each image in the registered sequence of images comprises a layer in the volume; compute a sharpness of focus value for each pixel within the volume, wherein the pixels lie in a plane normal to the z direction in the camera coordinate system; compute, based on the sharpness of focus values, a depth of maximum focus value zm for each pixel within the volume; determine, based on the depths of maximum focus zm, a three dimensional location of each point on the surface; and construct a three-dimensional model of the surface based on the three dimensional locations.
In a further aspect, the present disclosure is directed to a method including translating an imaging sensor relative to a surface, wherein the sensor includes a lens with a focal plane aligned at a non-zero viewing angle with respect to an x-y plane of a surface coordinate system; imaging the surface with the imaging sensor to acquire a sequence of images; estimating the three dimensional locations of points on the surface to provide a set of three dimensional points representing the surface; and processing the set of three dimensional points to generate a range-map of the surface in a selected coordinate system.
In yet another aspect, the present disclosure is directed to a method, including: (a) imaging a surface with at least one imaging sensor to acquire a sequence of images, wherein the surface and the imaging sensor are in relative translational motion, and wherein the sensor includes a lens with a focal plane aligned at a non-zero viewing angle with respect to an x-y plane in a surface coordinate system; (b) determining a sharpness of focus value for every pixel in a last image in the sequence of images; (c) computing a y-coordinate in the surface coordinate system at which the focal plane intersects the y axis; (d) based on the apparent shift of the surface in the last image, determining transitional points on the surface, wherein the transitional points have exited a field of view of the lens in the last image, but were in the field of view of the lens in an image in the sequence previous to the last image; (e) determining the three dimensional location in a camera coordinate system of all the transitional points on the surface; (f) repeating steps (a) to (e) for each new image acquired by the imaging sensor; and (g) accumulating the three dimensional location in the camera coordinate system of the transitional points from the images in the sequence to form a point cloud representative of the translating surface.
In yet another embodiment, the present disclosure is directed to an apparatus, including an imaging sensor with a lens having a focal plane aligned at a non-zero viewing angle with respect to an x-y plane of a surface coordinate system, wherein the surface and the imaging sensor are in relative translational motion, and wherein the sensor images the surface to form a sequence of images thereof; a processor that: (a) determines a sharpness of focus value for every pixel in a last image in the sequence of images; (b) computes a y-coordinate in a surface coordinate system at which the focal plane intersects the y axis; (c) based on the apparent shift of the surface in the last image, determines transitional points on the surface, wherein the transitional points have exited a field of view of the lens in the last image, but were in the field of view of the lens in an image in the sequence previous to the last image; (d) determines the three dimensional location in a camera coordinate system of all the transitional points on the surface; (e) repeats steps (a) to (d) for each new image acquired by the imaging sensor; and (f) accumulates the three dimensional location in the camera coordinate system of the transitional points from the images in the sequence to form a point cloud representative of the translating surface.
In yet another aspect, the present disclosure is directed to an online computerized inspection system for inspecting web material in real time, the system including a stationary imaging sensor including a camera and a telecentric lens, wherein the lens has a focal plane aligned at a non-zero viewing angle with respect to an x-y plane of a moving surface, and wherein the sensor images the surface to form a sequence of images thereof; a processor that: (a) determines a sharpness of focus value for every pixel in a last image in the sequence of images; (b) computes a y-coordinate in a surface coordinate system at which the focal plane intersects the y axis; (c) based on the apparent shift of the surface in the last image, determines transitional points on the surface, wherein the transitional points have exited a field of view of the lens in the last image, but were in the field of view of the lens in an image in the sequence previous to the last image; (d) determines the three dimensional location in a camera coordinate system of all the transitional points on the surface; (e) repeats steps (a) to (d) for each new image acquired by the imaging sensor; and (f) accumulates the three dimensional location in the camera coordinate system of the transitional points from the images in the sequence to form a point cloud representative of the translating surface.
In yet another aspect, the present disclosure is directed to a non-transitory computer readable medium including software instructions to cause a computer processor to: (a) receive, with an online computerized inspection system, a sequence of images of a moving surface of a web material, wherein the sequence of images is captured with a stationary imaging sensor including a camera and a telecentric lens having a focal plane aligned at a non-zero viewing angle with respect to an x-y plane of a surface coordinate system; (b) determine a sharpness of focus value for every pixel in a last image in the sequence of images; (c) compute a y-coordinate in a surface coordinate system at which the focal plane intersects the y-axis; (d) based on the apparent shift of the surface in the last image, determine transitional points on the surface, wherein the transitional points have exited a field of view of the lens in the last image, but were in the field of view of the lens in an image in the sequence previous to the last image; (e) determine the three dimensional location in a camera coordinate system of all the transitional points on the surface; (f) repeat steps (a) to (e) for each new image acquired by the imaging sensor; and (g) accumulate the three dimensional location in the camera coordinate system of the transitional points from the images in the sequence to form a point cloud representative of the translating surface.
The details of one or more embodiments of the invention are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the invention will be apparent from the description and drawings, and from the claims.
Currently available surface inspection systems have been unable to provide useful online information about 3D surface structure of a surface due to constraints on their resolutions, speeds, or fields-of-view. The present disclosure is directed to an online inspection system including a stationary sensor, and unlike DFF systems does not require translation of the focal plane of the imaging lens of the sensor. Rather, the system described in the present disclosure utilizes the translational motion of the surface to automatically pass points on the surface through various focal planes to rapidly provide a 3D model of the surface, and as such is useful for online inspection applications in which a web of material is continuously monitored as it is processed on a production line.
In the embodiment shown in
The lens 20 has a focal plane 24 that is aligned at a non-zero angle θ with respect to an x-y plane of the surface coordinate system of the surface 14. The viewing angle θ between the lens focal plane and the x-y plane of the surface coordinate system may be selected depending on the characteristics of the surface 14 and the features 16 to be analyzed by the system 10. In some embodiments θ is an acute angle less than 90°, assuming an arrangement such as in
The lens system 20 may include a wide variety of lenses depending on the intended application of the apparatus 10, but telecentric lenses have been found to be particularly useful. In this application the term telecentric lens means any lens or system of lenses that approximates an orthographic projection. A telecentric lens provides no change in magnification with distance from the lens. An object that is too close or too far from the telecentric lens may be out of focus, but the resulting blurry image will be the same size as the correctly-focused image.
The sensor system 10 includes a processor 30, which may be internal, external or remote from the imaging sensor system 18. The processor 30 analyzes a series of images of the moving surface 14, which are obtained by the imaging sensor system 18.
The processor 30 initially registers the series of images obtained by the imaging sensor system 18 in a sequence. This image registration is calculated to align points in the series of images that correspond to the same physical point on the surface 14. If the lens 20 utilized by the system 10 is telecentric, the magnification of the images collected by the imaging sensor system 18 does not change with distance from the lens. As a result, the images obtained by the imaging sensor system 18 can be registered by translating one image with respect to another, and no scaling or other geometric deformation is required. While non-telecentric lenses 20 may be used in the imaging sensor system 18, such lenses may make image registration more difficult and complex, and require more processing capacity in the processor 30.
The amount that an image must be translated to register it with another image in the sequence depends on the translation of the surface 14 between images. If the translation speed of the surface 14 is known, the motion of the surface 14 sample from one image to the next as obtained by the imaging sensor system 18 is also known, and the processor 30 need only determine how much, and in which direction, the image should be translated per unit motion of the surface 14. This determination made by the processor 30 depends on, for example, the properties of the imaging sensor system 18, the focus of the lens 20, the viewing angle θ of the focal plane 24 with respect to the x-y plane of the surface coordinate system, and the rotation (if any) of the camera 22.
Assume two parameters Dx and Dy, which give the translation of an image in the x and y directions per unit motion of the physical surface 14. The quantities Dx and Dy are in the units of pixels/mm. If two images It1(x,y) and It2(x,y) are taken at times t1 and t2, respectively, and the processor 30 is provided with the distance d that the sample surface 14 moved from t1 to t2, then these images should be registered by translating It2(x,y) according to the following formula:
Ît2(x, y) = It2(x + d·Dx, y + d·Dy)
The scale factors Dx and Dy can also be estimated offline through a calibration procedure. In a sequence of images, the processor 30 automatically selects and tracks distinctive key points as they translate through the sequence of images obtained by the imaging sensor system 18. This information is then used by the processor to calculate the expected displacement (in pixels) of a feature point per unit translation of the physical sample of the surface 14. Tracking may be performed by the processor using a normalized template matching algorithm.
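Because a telecentric lens keeps magnification constant, the registration above reduces to a pure pixel translation. A minimal sketch in Python/NumPy, assuming the calibrated scale factors Dx and Dy (pixels/mm) and the inter-frame surface motion d (mm) are known; the function name and the NaN-fill convention for uncovered pixels are illustrative, not from the disclosure:

```python
import numpy as np

def register(image, d, Dx, Dy):
    """Register `image` against an earlier frame by a pure translation:
    I_hat(x, y) = I(x + d*Dx, y + d*Dy), where d is the surface motion
    in mm and Dx, Dy are the calibrated pixels-per-mm shift factors.
    Pixels with no source data are filled with NaN so later steps
    (e.g., the sharpness computation) can ignore them."""
    sx, sy = int(round(d * Dx)), int(round(d * Dy))
    h, w = image.shape
    out = np.full((h, w), np.nan)
    # Destination rows/cols that map to valid source rows/cols.
    y0, y1 = max(0, -sy), min(h, h - sy)
    x0, x1 = max(0, -sx), min(w, w - sx)
    out[y0:y1, x0:x1] = image[y0 + sy:y1 + sy, x0 + sx:x1 + sx]
    return out
```

A sub-pixel variant would interpolate rather than round the shift, at the cost of extra processing in the processor 30.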
Once all images of the surface 14 have been aligned, the processor 30 then stacks the registered sequence of images together along the direction zc normal to the focal plane of the lens 20 to form a volume. Each layer in this volume is an image in the sequence, shifted in the x and y directions as computed in the registration. Since the relative position of the surface 14 is known at the time each image in the sequence was acquired, each layer in the volume represents a snapshot of the surface 14 along the focal plane 24 as it slices through the sample 14 at angle θ (see
Once the image sequence has been aligned, the processor 30 then computes the sharpness of focus at each (x,y) location in the volume, wherein the plane of the (x,y) locations is normal to the zc direction in the volume. Locations in the volume that contain no image data are ignored, since they can be thought of as having zero sharpness. The processor 30 determines the sharpness of focus using a sharpness metric. Several suitable sharpness metrics are described in Nayar and Nakagawa, Shape from Focus, IEEE Transactions on Pattern Recognition and Machine Intelligence, vol. 16, no. 8, pages 824-831 (1994).
For example, a modified Laplacian sharpness metric may be applied to compute the quantity

ML(x, y) = |∂²I/∂x²| + |∂²I/∂y²|

at each pixel in all images in the sequence. Partial derivatives can be computed using finite differences. The intuition behind this metric is that it acts as an edge detector: regions of sharp focus will have more distinct edges than out-of-focus regions. After computing this metric, a median filter may be used to aggregate the results locally around each pixel in the sequence of images.
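As a sketch, the modified Laplacian of Nayar and Nakagawa can be computed with central finite differences as follows (a hypothetical NumPy implementation; the border handling and any local aggregation such as the median filter are left to the application):

```python
import numpy as np

def modified_laplacian(img):
    """Modified-Laplacian sharpness metric (Nayar & Nakagawa):
    ML(x, y) = |d2I/dx2| + |d2I/dy2|, with second derivatives taken
    as central finite differences. Unlike the plain Laplacian, the
    absolute values keep the x and y terms from cancelling each other."""
    img = np.asarray(img, dtype=float)
    d2x = np.abs(np.roll(img, -1, axis=1) - 2 * img + np.roll(img, 1, axis=1))
    d2y = np.abs(np.roll(img, -1, axis=0) - 2 * img + np.roll(img, 1, axis=0))
    ml = d2x + d2y
    # np.roll wraps around, so the one-pixel border is unreliable; zero it.
    ml[0, :] = ml[-1, :] = ml[:, 0] = ml[:, -1] = 0
    return ml
```

A flat image yields zero sharpness everywhere, while edges and texture in focus produce large values.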
Once the processor 30 has computed the sharpness of focus value for all the images in the sequence, the processor 30 computes a sharpness of focus volume, similar to the volume formed in earlier steps by stacking the registered images along the zc direction. To form the sharpness of focus volume, the processor replaces each (x,y) pixel value in the registered image volume by the corresponding sharpness of focus measurement for that pixel. Each layer (corresponding to an x-y plane in the plane xc-yc) in this registered stack is now a "sharpness of focus" image, with the layers registered as before, so that image locations corresponding to the same physical location on the surface 14 are aligned. As such, if one location (x,y) in the volume is selected and the sharpness of focus values are observed moving through different layers in the zc direction, the sharpness of focus reaches a maximum value when the point imaged at that location comes into focus (i.e., when it intersects the focal plane 24 of the camera 22), and the sharpness value decreases moving away from that layer in either direction along the zc axis.
Each layer (corresponding to an x-y plane) in the sharpness of focus volume corresponds to one slice through the surface 14 at the location of the focal plane 24, so that as the sample 14 moves along the direction A, various slices through the surface 14 are collected at different locations along the surface thereof. Since each image in the sharpness of focus volume corresponds to a physical slice through the surface 14 at a different relative location, ideally the slice in which a point (x,y) comes into sharpest focus determines the three dimensional (3D) position of the corresponding point on the sample. In practice, however, the sharpness of focus volume contains a discrete set of slices, which may not be densely or uniformly spaced along the surface 14, so the actual (theoretical) depth of maximum focus (the depth at which sharpness of focus is maximized) will most likely occur between slices.
The processor 30 then estimates the 3D location of each point on the surface 14 by approximating the theoretical location of the slice in the sharpness of focus volume with the sharpest focus through that point.
In one embodiment, the processor approximates this theoretical location of sharpest focus by fitting a Gaussian curve to the measured sharpness of focus values at each location (x,y) through slice depths zc in the sharpness of focus volume. The model for sharpness of focus values s as a function of slice depth zc is given by

s(zc) = smax exp(−(zc − zm)² / (2σ²))
where zm is the theoretical depth of maximum focus for the location (x,y) in the volume and σ is the standard deviation of the Gaussian that results at least in part from the depth of field of the imaging lens (see lens 20 in
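Under a Gaussian model of this form, the peak depth zm can be recovered in closed form, since the logarithm of a Gaussian is a parabola. The three-point interpolation below is one common way to realize such a fit and is shown as a sketch, not necessarily the exact fitting procedure of the disclosure:

```python
import numpy as np

def depth_of_max_focus(z, s):
    """Estimate the depth z_m of maximum focus from sharpness samples s
    measured at equally spaced depths z, assuming the Gaussian model
    s(z) = s_max * exp(-(z - z_m)**2 / (2 * sigma**2)).
    The log of the model is a parabola, so fitting a quadratic through
    the logs of the three samples bracketing the maximum gives z_m in
    closed form, generally between two discrete slices."""
    i = int(np.argmax(s))
    if i == 0 or i == len(s) - 1:
        return z[i]                     # peak at the edge: cannot interpolate
    lm, l0, lp = np.log(s[i - 1]), np.log(s[i]), np.log(s[i + 1])
    denom = lm - 2 * l0 + lp
    if denom >= 0:                      # degenerate: no log-parabola peak
        return z[i]
    delta = 0.5 * (lm - lp) / denom     # sub-slice offset in units of spacing
    return z[i] + delta * (z[i + 1] - z[i])
```

For noise-free samples of an exact Gaussian the interpolation recovers zm exactly; with noise it degrades gracefully toward the raw argmax.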
In another embodiment, if the Gaussian algorithm is prohibitively computationally expensive or time consuming for use in a particular application, an approximate algorithm can be used that executes more quickly without substantially sacrificing accuracy. A quadratic function can be fit to the sharpness profile samples at each location (x,y), using only the samples near the location with the maximum sharpness value. That is, for each point on the surface, the depth with the highest sharpness value is found first, and a few samples are selected on either side of this depth. A quadratic function is fit to these few samples using the standard least-squares formulation, which can be solved in closed form. In rare cases, when there is noise in the data, the parabola in the quadratic function may open upwards; in this case, the result of the fit is discarded, and the depth of the maximum sharpness sample is simply used instead. Otherwise, the depth is taken as the location of the theoretical maximum of the quadratic function, which may in general lie between two of the discrete samples.
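The quadratic alternative can be sketched as follows; the window size and the polyfit-based least-squares solve are illustrative choices, not prescribed by the disclosure:

```python
import numpy as np

def depth_quadratic_fit(z, s, half_window=2):
    """Faster approximation of the depth of maximum focus: fit a parabola
    by least squares to the sharpness samples within `half_window`
    samples of the maximum, and take its vertex as the depth. If noise
    makes the parabola open upwards (a >= 0), the fit is discarded and
    the depth of the raw maximum sample is used instead."""
    i = int(np.argmax(s))
    lo, hi = max(0, i - half_window), min(len(s), i + half_window + 1)
    zi = np.asarray(z[lo:hi], dtype=float)
    si = np.asarray(s[lo:hi], dtype=float)
    a, b, c = np.polyfit(zi, si, 2)     # closed-form least-squares fit
    if a >= 0:
        return z[i]                     # upward parabola: use raw maximum
    return -b / (2 * a)                 # vertex of the fitted parabola
```

Like the Gaussian fit, the vertex may lie between two of the discrete slice depths.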
Once the theoretical depth of maximum focus zm is approximated for each location (x,y) in the volume, the processor 30 estimates the 3D location of each point on the surface of the sample. This point cloud is then converted into a surface model of the surface 14 using standard triangular meshing algorithms.
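One standard way to realize the meshing step, assuming SciPy is available, is to Delaunay-triangulate the (x, y) projections of the point cloud and lift each 2D triangle back to a 3D facet of the surface (the helper name is hypothetical; the disclosure does not specify a particular meshing algorithm):

```python
import numpy as np
from scipy.spatial import Delaunay

def mesh_from_point_cloud(points_xyz):
    """Convert a height-field point cloud (N x 3 array of x, y, z) into a
    triangular surface mesh by Delaunay-triangulating the (x, y)
    projections; each 2D triangle lifts to a 3D facet of the surface.
    Returns the vertices and an M x 3 array of triangle vertex indices."""
    pts = np.asarray(points_xyz, dtype=float)
    tri = Delaunay(pts[:, :2])          # triangulate in the x-y plane
    return pts, tri.simplices
```

This 2.5D approach is appropriate here because the surface 14 is a height field over the x-y plane; a general 3D point cloud would instead call for a surface-reconstruction method.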
In the overall procedure described in
Referring to the process 500 in
Although the steps in
Referring to the process 550 of
For example, the surface analysis method and apparatus described herein are particularly well suited, but are not limited to, inspecting and characterizing the structured surfaces 14 of web-like rolls of sample materials 12 that include piece parts such as the feature 16 (
In the exemplary embodiment of an inspection system 300 shown in
The system 300 further includes one or more stationary sensor systems 318A-318N, which each include an optional light source 332 and a telecentric lens 320 having a focal plane aligned at an acute angle with respect to the surface 314 of the moving web 312. The sensor systems 318 are positioned in close proximity to a surface 314 of the continuously moving web 312 as the web is processed, and scan the surface 314 of the web 312 to obtain digital image data.
An image data acquisition computer 327 collects image data from each of the sensor systems 318 and transmits the image data to an analysis computer 329. The analysis computer 329 processes streams of image data from the image acquisition computers 327 and analyzes the digital images with one or more of the batch or incremental image processing algorithms described above. The analysis computer 329 may display the results on an appropriate user interface and/or may store the results in a database 331.
The inspection system 300 shown in
The analysis computer 329 may store the defect rating or other information regarding the surface characteristics of the sample region of the web 314, including roll identifying information for the web 314 and possibly position information for each measured feature, within the database 331. For example, the analysis computer 329 may utilize position data produced by fiducial mark controller 301 to determine the spatial position or image region of each measured area including defects within the coordinate system of the process line. That is, based on the position data from the fiducial mark controller 301, the analysis computer 329 determines the xs, ys, and possibly zs position or range for each area of non-uniformity within the coordinate system used by the current process line. For example, a coordinate system may be defined such that the x dimension (xs) represents a distance across web 312, a y dimension (ys) represents a distance along a length of the web, and the z dimension (zs) represents a height of the web, which may be based on the number of coatings, materials or other layers previously applied to the web. Moreover, an origin for the x, y, z coordinate system may be defined at a physical location within the process line, and is typically associated with an initial feed placement of the web 312.
The database 331 may be implemented in any of a number of different forms including a data storage file or one or more database management systems (DBMS) executing on one or more database servers. The database management systems may be, for example, a relational (RDBMS), hierarchical (HDBMS), multidimensional (MDBMS), object oriented (ODBMS or OODBMS) or object relational (ORDBMS) database management system. As one example, the database 331 is implemented as a relational database available under the trade designation SQL Server from Microsoft Corporation, Redmond, Wash.
Once the process has ended, the analysis computer 329 may transmit the data collected in the database 331 to a conversion control system 340 via a network 339. For example, the analysis computer 329 may communicate the roll information as well as the feature dimension and/or anomaly information and respective sub-images for each feature to the conversion control system 340 for subsequent, offline, detailed analysis. For example, the feature dimension information may be communicated by way of database synchronization between the database 331 and the conversion control system 340.
In some embodiments, the conversion control system 340, rather than the analysis computer 329, may determine those products for which each anomaly may cause a defect. Once data for the finished web roll has been collected in the database 331, the data may be communicated to converting sites and/or used to mark anomalies on the web roll, either directly on the surface of the web with a removable or washable mark, or on a cover sheet that may be applied to the web before or during marking of anomalies on the web.
The components of the analysis computer 329 may be implemented, at least in part, as software instructions executed by one or more processors of the analysis computer 329, including one or more hardware microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or any other equivalent integrated or discrete logic circuitry, as well as any combinations of such components. The software instructions may be stored within a non-transitory computer readable medium, such as random access memory (RAM), read only memory (ROM), programmable read only memory (PROM), erasable programmable read only memory (EPROM), electronically erasable programmable read only memory (EEPROM), flash memory, a hard disk, a CD-ROM, a floppy disk, a cassette, magnetic media, optical media, or other computer-readable storage media.
Although shown for purposes of example as positioned within a manufacturing plant, the analysis computer 329 may be located external to the manufacturing plant, e.g., at a central location or at a converting site. For example, the analysis computer 329 may operate within the conversion control system 340. In another example, the described components execute on a single computing platform and may be integrated into the same software system.
The subject matter of the present disclosure will now be described with reference to the following non-limiting examples.
An apparatus was constructed in accordance with the schematic in
A processor associated with an analysis computer analyzed the images of the sample surface acquired by the camera. The processor registered a sequence of the images, stacked the registered images along a zc direction to form a volume, and determined a sharpness of focus value for each (x,y) location in the volume using the modified Laplacian sharpness of focus metric described above. Using the sharpness of focus values, the processor computed a depth of maximum focus zm along the zc direction for each (x,y) location in the volume and determined, based on the depths of maximum focus zm, a three-dimensional location of each point on the surface of the sample. The computer formed, based on the three-dimensional locations, a three-dimensional model of the surface of the sample.
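The sharpness-of-focus and depth-of-maximum-focus steps above can be sketched as follows, assuming a pre-registered image stack. This is an illustrative reading of the modified-Laplacian technique, not the disclosed implementation:

```python
# Depth-from-focus sketch using a modified-Laplacian sharpness metric.
# Assumes a registered image stack of shape (nz, ny, nx), one slice per
# zc position; all function names are illustrative.
import numpy as np

def modified_laplacian(img):
    """Sum of absolute second differences in x and y (modified Laplacian)."""
    ml = np.zeros_like(img, dtype=float)
    ml[:, 1:-1] += np.abs(2 * img[:, 1:-1] - img[:, :-2] - img[:, 2:])
    ml[1:-1, :] += np.abs(2 * img[1:-1, :] - img[:-2, :] - img[2:, :])
    return ml

def depth_of_max_focus(stack, z_positions):
    """For each (x, y) location, pick the z at which sharpness peaks."""
    sharpness = np.stack([modified_laplacian(s) for s in stack])  # (nz, ny, nx)
    zm_index = np.argmax(sharpness, axis=0)
    return np.asarray(z_positions)[zm_index]  # depth map zm(x, y)
```

The resulting depth map zm(x, y) gives the three-dimensional location of each surface point, from which a surface model can be assembled.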
The reconstructed surface in the images shown in
Several samples of an abrasive material were scanned by the incremental process described in this disclosure. The samples were also scanned by an off-line laser profilometer using a confocal sensor. Two surface profiles of each sample were then reconstructed from the data sets obtained from the different methods, and the results were compared by registering the two reconstructions using a variant of the Iterated Closest-Point (ICP) matching algorithm described in Chen and Medioni, Object Modeling by Registration of Multiple Range Images, Proceedings of the IEEE International Conference on Robotics and Automation, 1991. The surface height estimates zs for each location (x, y) on the samples were then compared. Using a lens with a magnification of 2, Sample 1 showed a median range residual value of 12 μm, while Sample 2 showed a median range residual value of 9 μm. Even with an imprecise registration, the scans from the incremental processing technique described above matched relatively closely to a scan taken by the off-line laser profilometer.
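A minimal ICP registration and the median range residual used for the comparison above can be sketched as follows. Note that Chen and Medioni's variant uses point-to-plane matching; the point-to-point variant below is a simpler illustrative stand-in, and all names are assumptions:

```python
# Point-to-point ICP sketch for registering two surface scans (N x 3 point
# arrays), followed by the median range residual between matched heights.
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Iteratively pair nearest points and re-solve the rigid transform."""
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[np.argmin(d, axis=1)]  # brute-force nearest neighbours
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

def median_range_residual(a, b):
    """Median |z_a - z_b| over matched locations, as reported in this Example."""
    return float(np.median(np.abs(a[:, 2] - b[:, 2])))
```

After registration, comparing the z estimates point by point yields the median range residual (12 μm and 9 μm for the two samples above).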
In this example, the effect on the reconstructed 3D surface of the camera incidence angle θ (
First, surfaces reconstructed with smaller viewing angles exhibit larger holes in the estimated surface. This is especially pronounced behind tall peaks, as shown in
Second, it can also be observed that, while larger viewing angles (such as in
Based on these observations, as well as subjective visual inspection of all the results of this experiment, it appears that the middle viewing angle (38.1°) yields the most pleasing results of all the configurations evaluated in this Example. Sequences reconstructed in this manner seem to strike a balance between completeness and low noise levels.
Various embodiments of the invention have been described. These and other embodiments are within the scope of the following claims.
This application claims the benefit of U.S. Provisional Application No. 61/593,197, filed Jan. 31, 2012, the disclosure of which is incorporated by reference herein in its entirety.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US13/23789 | 1/30/2013 | WO | 00 |
Number | Date | Country |
---|---|---|
61593197 | Jan 2012 | US |