1. Field of the Invention
The present invention generally relates to taking and visualizing digital impressions of rigid or deformable objects through a clear elastomer that conforms to the shape of the measured object.
2. Description of Related Art
U.S. Pat. No. 8,411,140, entitled Tactile Sensor Using Elastomeric Imaging, filed on Jun. 19, 2009, and issued Apr. 2, 2013, (incorporated by reference herein) discloses a tactile sensor that includes a photosensing structure, a volume of elastomer capable of transmitting an image, and a reflective membrane (called a “skin” in the patent) covering the volume of elastomer. The reflective membrane is illuminated through the volume of elastomer by one or more light sources, and has particles that reflect light incident on the reflective membrane from within the volume of elastomer. The reflective membrane is geometrically altered in response to pressure applied by an entity touching the reflective membrane, the geometrical alteration causing localized changes in the surface normal of the membrane and associated localized changes in the amount of light reflected from the reflective membrane in the direction of the photosensing structure. The photosensing structure receives a portion of the reflected light in the form of an image, the image indicating one or more features of the entity producing the pressure.
This application provides methods of and systems for three-dimensional digital impression and visualization of objects through an elastomer.
Under one aspect of the invention, a method of estimating optical correction parameters for an imaging system includes providing an optical sensor system having an image capturing system, an illumination source, and a substantially optically clear elastomer. The elastomer has a first surface facing the image capturing system and a second surface facing away from the image capturing system. The image capturing system has a plurality of views of the second surface through the elastomer. The method also includes pressing an object of known surface topography against the second surface of the elastomer so that features of the surface topography are disposed relative to the second surface of the elastomer by predetermined distances and imaging a plurality of views of the surface topography of the object through the elastomer with the image capturing system. The method further includes estimating a three-dimensional model of at least a portion of the object based on the plurality of views of the surface topography of the object and estimating optical correction parameters based on the known surface topography of the object and the estimated three-dimensional model. The optical correction parameters correct distortions in the estimated three-dimensional model to better match the estimated three-dimensional model to the known surface topography.
Under another aspect of the invention, estimating the optical parameters includes mapping distorted measurements of three-dimensional features estimated from the plurality of views to known measurements of three-dimensional features from the known surface topography.
Under a further aspect of the invention, the methods also include establishing a reference feature using a target image positioned a known distance from the image capturing system and using the reference feature to determine the predetermined distances.
Under still another aspect of the invention, a method of visualizing at least one of a surface shape and a surface topography of an object includes providing an optical sensor system having an image capturing system, an illumination source, and a substantially optically clear elastomer. The elastomer has a first surface facing the image capturing system and a second surface facing away from the image capturing system. The image capturing system has a plurality of views of the second surface through the elastomer. The method also includes providing an alignment object on the second surface of the elastomer that has surface features and imaging a plurality of views of the surface features of the alignment object through the elastomer with the image capturing system. The method also includes estimating a set of transform parameters that align the images of the plurality of views. The method further includes pressing an object to be visualized into the second surface of the elastomer and imaging a plurality of views of at least one of a surface shape and a surface topography of the object to be visualized through the elastomer with the image capturing system. The method also includes applying the estimated set of transform parameters to the images of the plurality of views to create a plurality of transformed images and displaying at least two of the transformed images as a stereo image pair.
Under still a further aspect of the invention, a surface of the alignment object on the second surface of the elastomer is substantially planar when in contact with the second surface and includes an alignment image.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
In one embodiment of the present invention, a three-dimensional (3-D) imaging system is provided that captures multi-view images of a rigid or deformable object through an elastomer to visualize and quantify the shape and/or the surface topography of an object in two dimensions (2-D) and in three dimensions either under static or under dynamic conditions. In one implementation, the captured images or stream of images are used to stereoscopically visualize and quantitatively measure the micron scale, three-dimensional topography of surfaces (e.g., leather, abrasive, micro-replicated surface, optical film etc.), or to visualize and quantitatively measure the overall shape of large-scale three-dimensional structures (e.g., foot, hand, teeth, implant etc.). In at least one embodiment, a calibration correction procedure is provided to reduce distortion artifacts in the captured images and 3-D data due to the optical effect of changing thickness of the applied elastomer.
Optionally, the elastomer can have a reflective surface 120, of varying degrees of reflection directionality, as described in more detail below. However, the reflective surface is not required, as the system 100 can image objects based only on the appearance of the surface of the object 110 in contact with the elastomer 115. For example, if a human foot is being imaged, a sock can be placed on the foot before the foot is pressed into contact with the elastomer 115.
In some implementations, a glass plate 125 is placed in between the elastomer and the cameras. The glass plate 125 enables applying pressure uniformly on the elastomer 115. This pressure enables the system 100 to take an instantaneous impression of the measured object by the elastomer. Due to the applied pressure, the elastomer 115 conforms to the shape of the measured object 110 both at the macro and micro scales. In addition, the glass plate 125 provides support to the elastomer 115 when the object 110 is pressed against the elastomer 115, as shown in
Illumination of the imaged object 110 may be provided from the camera side of the elastomer 115 by light sources 130 (such as LEDs, for example). In addition to or in substitution of light sources 130, the imaged object can be illuminated through the edge of the glass plate 125 by light sources 135. Both of these options are shown in
The glass plate may also have light extraction micro features to provide simulated distant illumination, as shown in
Referring again to
The illumination provided by illumination sources 130 can be uniform, sequential, spatially or spectrally multiplexed. The light sources 130 can also implement gradient illumination whether that is defined spatially or spectrally. Illumination can also be linearly or circularly polarized, in which case orthogonal polarization may be used on the imaging path. Illumination may also be understood as creating a pattern or texture on a coated surface of the elastomer that could be used for quantitative 3-D reconstruction of the shape of the object.
For example, when illumination is provided within the hemisphere on the camera side, some of the illumination sources may not be sufficient to illuminate into deep structures, or they may create unwanted shadows. To reduce these unwanted effects, many illumination sources can be implemented to provide different illumination directions. In such an implementation, light is provided from substantially all possible illumination directions.
Further embodiments include creating sequential illumination by turning on one illumination source, or a segment of illumination sources 130, at a time. Spatially multiplexed illumination can be implemented by providing multiple illumination sources with different patterns turned on at the same time. Further still, spectrally multiplexed illumination can be implemented by providing illumination sources of different colors turned on at the same time. Certain implementations provide gradient illumination by spatially varying the intensity of the illumination sources within the hemisphere. Alternatively, this can be combined with illumination in different spectral bands (e.g., red, green, and blue channels implementing spatially varying illumination in the different directions, x, y, and z).
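The sequential and spectrally multiplexed schemes described above can be sketched as simple schedule generators. The function names and the frame representation below are illustrative assumptions for exposition and are not part of the disclosure:

```python
# Illustrative sketch (assumed representation): a "frame" is a list of
# booleans, one per illumination source, indicating which sources are on.

def sequential_schedule(n_sources, segment_size=1):
    """One source (or one segment of sources) on per frame; all others off."""
    frames = []
    for start in range(0, n_sources, segment_size):
        frame = [False] * n_sources
        for i in range(start, min(start + segment_size, n_sources)):
            frame[i] = True
        frames.append(frame)
    return frames

def spectral_multiplex(n_sources, colors=("red", "green", "blue")):
    """Assign each source a spectral band so several patterns can be
    captured simultaneously and separated by color channel."""
    return [colors[i % len(colors)] for i in range(n_sources)]

if __name__ == "__main__":
    # Six sources driven in segments of two yields three sequential frames.
    print(sequential_schedule(6, segment_size=2))
    print(spectral_multiplex(4))
```

A real implementation would drive LED hardware from these schedules; here the schedules are simply returned as data.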
Illumination can also be polarized to reduce specularity or sub-surface scattering. Optionally, imaging can be cross-polarized. Patterned or textured illumination can be used to implement structured light projection based 3-D reconstruction.
The clear elastomer 115 can be made from thermoplastic elastomers, polyurethane, silicone rubber, acrylic foam or any other material that is optically clear and can elastically conform to the shape of the measured object. Illustrative examples of suitable materials and designs of the elastomer 115 are found in U.S. Pat. No. 8,411,140. As mentioned above, the clear elastomer facing the imaged object can have, but need not have, an opaque reflective coating. The coating layer facing the cameras may have diffuse Lambertian, or specially engineered reflectance properties. The coating may also be patterned to facilitate registration of the images captured by multiple cameras.
The coating may be a stretchable fabric such as spandex, lycra, or a material similar in properties to these. The fabric may be dark to minimize secondary reflections, and can have monochrome or colored patterns to facilitate registration between the images. Also, as mentioned above, the object itself can be covered in a fabric, and this fabric covering (e.g., a sock on a foot) can have a textured or patterned surface. The pattern may encode spatial location on the fabric. For example, a matrix barcode (or two-dimensional barcode) may be provided to increase the efficiency of registration. Such an implementation would enable finding corresponding image regions without time-consuming and error-prone image registration methods (e.g., cross-correlation), as one need only read the position information encoded at each spatial location in the image.
In this context, “registration” generally means finding corresponding image regions in two or more images. Image registration, or finding correspondences between two images, is one of the first steps in multi-view stereo processing. The separation between corresponding (registered) image regions determines depth.
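The statement that the separation between registered regions determines depth can be made concrete with the standard pinhole stereo relation Z = f·B/d, where f is the focal length in pixels, B the baseline between views, and d the horizontal disparity in pixels. This relation assumes rectified views and is a textbook model, not a formula taken from the disclosure:

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Depth of a point from its stereo disparity, assuming rectified views.

    Z = f * B / d: larger separation (disparity) between registered image
    regions means the point is closer to the cameras.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

if __name__ == "__main__":
    # f = 1000 px, baseline = 50 mm, disparity = 25 px -> Z = 2000 mm.
    print(depth_from_disparity(1000.0, 50.0, 25.0))
```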
Visualization implementations include displaying 2-D images or a 2-D video stream of the object from a pre-selected camera, or displaying a 3-D image or 3-D video stream of the object captured by at least two image paths. A 3-D image or video stream may mean an anaglyph image or anaglyph video stream, or a stereo image or stereo video stream displayed using 3-D display technologies that may or may not require glasses. Other visualization techniques known by those having ordinary skill in the art are within the scope of the invention.
Certain implementations have separate cameras, with each camera having its own lens, to capture multi-view images. Meanwhile, other implementations have a single camera with a lens capable of forming a set of images from different perspectives, such as a lens with a bi-prism, a multi-aperture lens, a lens with pupil sampling or splitting, or a lens capturing a so-called integral or light-field image. Images through a single lens can be captured on separate sensors or on a single sensor. In the latter case, images may be overlapping, or spatially separated with well-defined boundaries between them.
In certain embodiments, the captured images go through multiple pre-processing steps. Such preprocessing can include lens distortion correction, alignment of multiple images onto a reference image to reduce stereo parallax, enforcement of horizontal image disparities, or finding corresponding sub-image regions for three-dimensional reconstruction.
One illustrative purpose of the pre-processing steps is to create high quality 3-D stereo image pairs for viewing the instantaneous impressions of the measured object as anaglyph images on any 2-D display (e.g., tablet computer or other display device). Such stereo visualization complements 3-D reconstruction of the shape of the measured object, and allows evaluating static or dynamic shapes of the object on any display for medical or industrial purposes. Alternatively, the created high quality 3-D stereo image pairs can be viewed on a 3-D display.
High quality stereo image pairs can be created from images captured by widely separated cameras even when the cameras have different lenses and sensors. Such cameras may capture an overlapping view with different magnification. In order to create high quality stereo image pairs from such raw images, first, the intrinsic camera and lens distortion parameters (e.g., focal length, skew, and distortion parameters) are determined by calibrating each camera using techniques known in the relevant fields. Next, the camera setup is calibrated in order to determine the relative pose and orientation between cameras. For this purpose, a backlit calibration target (having, e.g., a checkerboard pattern) can be placed behind a clear elastomer without the coating/reflective layer.
This procedure produces a set of lens distortion parameters that can later be used to “undistort” images captured by the camera(s). Distortions may be of barrel, pincushion, or other types. While the process above describes the checkerboard target 625 as stationary while the cameras are moved about it, one of skill in the art will understand that the cameras may remain stationary while the target is moved about the cameras.
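Barrel and pincushion distortion are commonly modeled with radial polynomial terms; a minimal sketch of such a model and its inversion (the “undistort” step) follows. The two-coefficient model and the fixed-point inversion are standard textbook choices, assumed here for illustration rather than taken from the disclosure:

```python
def distort(x, y, k1, k2):
    """Apply a two-coefficient radial distortion model to normalized image
    coordinates. k1 < 0 gives barrel distortion, k1 > 0 pincushion."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

def undistort(xd, yd, k1, k2, iters=20):
    """Invert the radial model by fixed-point iteration: repeatedly divide
    the distorted coordinates by the scale evaluated at the current
    estimate. Converges quickly for moderate distortion."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / scale, yd / scale
    return x, y
```

Applying `undistort` to the output of `distort` recovers the original coordinates to high precision for typical lens parameters.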
As mentioned above,
Next, the undistorted images captured by the cameras are aligned on top of each other using a homography that is recovered by registering an image of an overlapping region captured by one of the cameras onto the image of the same region captured by the other camera. To do this, local image features are detected in the undistorted L, C, and R images (step 320) and the undistorted L and R image features are registered to the undistorted C image features (step 325). Feature detection may be done by any of the standard feature detector methods such as SIFT (Scale Invariant Feature Transform) or Harris feature detector. The image registration can be accomplished using techniques known in the art. Next, the outlier correspondences are removed using epipolar constraints (step 330) and the homographies are fit onto the L-to-C correspondence and R-to-C correspondence (step 335).
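Fitting a homography onto a set of point correspondences (steps 325-335) is conventionally done with the Direct Linear Transform; a minimal sketch follows. It omits the feature detection, outlier rejection, and robust (e.g., RANSAC) fitting mentioned in the text, and assumes clean correspondences:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct Linear Transform: fit a 3x3 homography H mapping src -> dst.

    src, dst: Nx2 point arrays with N >= 4 non-degenerate correspondences.
    Each correspondence contributes two rows to the DLT system; the
    solution is the null-space vector of that system (smallest singular
    vector), reshaped to 3x3 and normalized.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map Nx2 points through H using homogeneous coordinates."""
    pts = np.asarray(pts, dtype=float)
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    q = ph @ H.T
    return q[:, :2] / q[:, 2:3]
```

In practice the correspondences come from registered L-to-C and R-to-C features, and a robust estimator would discard the outliers that survive the epipolar test of step 330.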
Because the homography is recovered when no object is pressed against the elastomer, the two images of the frontal surface of the elastomer (the surface facing away from the camera(s)) are brought into alignment. When an object is pressed against the elastomer, the images aligned by the previously recovered homography show the effect of stereo parallax, thereby creating a stereo disparity field between the images according to the shape of the object. This preprocessing step can, optionally, include a stereo rectification step by applying different homographies to the images such that the created image disparities are oriented primarily in the horizontal direction, thereby correcting for vertical mis-alignment between cameras.
A left and right image pair (e.g., L-C, C-R, or L-R) is selected for 3-D display (step 435) to create an anaglyph (step 440). The anaglyph can, optionally, be shown on a 3-D display (step 445). Or, also optionally, an anaglyph red, green, and blue image can be created for display by loading the left image to the red channel and the right image to the green and blue channels of a display (step 450), thereby showing the anaglyph on a standard video display (step 455). In implementations providing live stereo or anaglyph images, the undistortion and alignment steps are combined into a single processing step to create the stereo image pairs at the rate with which the cameras capture the images (e.g., video rate or 30 fps).
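The channel packing of step 450 (left image into red, right image into green and blue) can be written directly as an array operation. This is a minimal sketch assuming 8-bit RGB arrays of equal shape:

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Pack a stereo pair into a red/cyan anaglyph: the red channel comes
    from the left image, the green and blue channels from the right image
    (as in step 450). Both inputs are HxWx3 arrays of equal shape."""
    out = np.empty_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]   # red channel <- left image
    out[..., 1:] = right_rgb[..., 1:]  # green, blue channels <- right image
    return out
```

Viewed through red/cyan glasses on a standard display, each eye then sees only its intended view of the stereo pair.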
Referring again to
Since the dimensions of the ridge target 705 are known, so are the coordinates of points on the ridge surfaces 710 and gaps 715. The required correction parameters are computed as the difference between the measured and known coordinates of these points. In one embodiment, the reference plane parameters for the plane in which the gaps 715 lie are determined as a known distance from the top surface of the glass plate as shown in
Because the geometry of the ridge target 705 is known relative to the reference surface, one can measure how much the X, Y, and Z coordinates need to be shifted (corrected) to bring the measurement into alignment with the known geometry (again, relative to the reference surface). The procedure is repeated with different ridge heights to enable determining the required corrections as a function of image location (x, y), and image disparity (dx, dy) or measured depth (ZMeas). In certain embodiments, it is sufficient to estimate the dZ(x, y, dx, dy) or dZ(x, y, ZMeas) correction as the (x, y) coordinates of an image point together with the corrected depth (Z+dZ) are sufficient to compute the corresponding corrected X and Y coordinates.
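A simplified one-dimensional version of the dZ(ZMeas) correction described above can be sketched as a polynomial fit of the depth error against measured depth, built from the ridge-target measurements at known heights. The polynomial form and degree are illustrative assumptions; the disclosure leaves the functional form open:

```python
import numpy as np

def fit_depth_correction(z_measured, z_true, degree=2):
    """Fit dZ(Z_meas) = Z_true - Z_meas as a polynomial in Z_meas, using
    measurements of a target with known feature heights (cf. ridge target
    705). Returns the polynomial coefficients."""
    z_measured = np.asarray(z_measured, dtype=float)
    dz = np.asarray(z_true, dtype=float) - z_measured
    return np.polyfit(z_measured, dz, degree)

def correct_depth(z_measured, coeffs):
    """Apply the stored correction: Z_corrected = Z_meas + dZ(Z_meas)."""
    return z_measured + np.polyval(coeffs, z_measured)
```

A full implementation would additionally index the correction by image location (x, y), as described above, rather than by measured depth alone.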
After the correction parameters for the distortions introduced by the reference geometry are computed and stored, a decision is made as to whether to repeat the process (step 840) in order to provide dense sampling of the space of the correction parameters. If no further correction parameters are desired, the process 800 terminates (step 845). If further correction parameters are desired, then a different reference geometry can be used and the process repeated. A different reference geometry will introduce different distortion at different positions within the elastomer, thereby providing further correction parameters. Likewise, one may use the same reference geometry placed at a different arbitrary position relative to the cameras and pressed into the elastomer. Doing so would also introduce different distortions than at the previous position.
Certain aspects of the elastomer, optional reflective surface or membrane, light sources, fabric, and surface features of the elastomer disclosed in U.S. Pat. No. 8,411,140 can be used in conjunction with the embodiments disclosed herein. For example, in embodiments using an optional reflective membrane, the elastomer membrane can be made by adding reflective particles to the elastomer when it is in a liquid state, via solvent or heat, or before curing. This makes a reflective paint that can be attached to the surface by standard coating techniques such as spraying or dipping. The membrane may be coated directly on the surface of the bulk elastomer, or it may be first painted on a smooth medium such as glass and then transferred to the surface of the bulk material and bound there. Also, the particles (without binder) can be rubbed into the surface of the bulk elastomer, and then bound to the elastomer by heat or with a thin coat of material overlaid on the surface. Also, it may be possible to evaporate, precipitate, sputter, or otherwise attach thin films to the surface.
As described above, a reflective membrane on the surface of the elastomer is optional. Thus, the imaging of objects through a clear elastomer, with no reflective membrane, is within the scope of the invention. In such an embodiment, the system 100 of
As with the other systems set forth herein, the system without a reflective membrane can be used in conjunction with the various calibration, alignment, and correction processes set forth herein. Likewise, the system without a reflective membrane can provide images for use in stereo reconstruction and/or 3-D model estimation.
Furthermore, the embodiments herein need not rely only on reflection of light from the illumination sources as the image source for the one or more cameras of the system. A fluorescent pigment can be used in the surface of the elastomer in contact with the object to be imaged and that surface illuminated by Ultraviolet (UV) light or blacklight. If the blacklight comes at a grazing angle, it can readily reveal variations in surface normal. The material can be fairly close to Lambertian. To reduce interreflections, one would select a surface that appears dark to emitted wavelengths. This principle is true with ordinary light as well. In certain embodiments, if one is using a Lambertian pigment in the membrane, it is better for it to be gray than white, to reduce interreflections.
Blacklight or UV can be used to illuminate the resulting fluorescent surface, which would then serve as a diffuse source. In some cases, it would be useful to use a single short flash (for instance, recording the instantaneous deformation of an object against the surface) or multiple periodic (strobed) flashes (to capture rapid periodic events or to modulate one frequency down to another frequency.)
The techniques and systems disclosed herein may be implemented as a computer program product for use with a computer system or computerized electronic device. Such implementations may include a series of computer instructions, or logic, fixed either on a tangible/non-transitory medium, such as a computer readable medium (e.g., a diskette, CD-ROM, ROM, flash memory or other memory or fixed disk) or transmittable to a computer system or a device, via a modem or other interface device, such as a communications adapter connected to a network over a medium.
The medium may be either a tangible medium (e.g., optical or analog communications lines) or a medium implemented with wireless techniques (e.g., Wi-Fi, cellular, microwave, infrared or other transmission techniques). The series of computer instructions embodies at least part of the functionality described herein with respect to the system. Those skilled in the art should appreciate that such computer instructions can be written in a number of programming languages for use with many computer architectures or operating systems.
Furthermore, such instructions may be stored in any tangible memory device, such as semiconductor, magnetic, optical or other memory devices, and may be transmitted using any communications technology, such as optical, infrared, microwave, or other transmission technologies.
It is expected that such a computer program product may be distributed as a removable medium with accompanying printed or electronic documentation (e.g., shrink wrapped software), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server or electronic bulletin board over the network (e.g., the Internet or World Wide Web). Of course, some embodiments of the invention may be implemented as a combination of both software (e.g., a computer program product) and hardware. Still other embodiments of the invention are implemented as entirely hardware, or entirely software (e.g., a computer program product).
As will be apparent to one of ordinary skill in the art from a reading of this disclosure, the present disclosure can be embodied in forms other than those specifically disclosed above. The particular embodiments described above are, therefore, to be considered as illustrative and not restrictive. Those skilled in the art will recognize, or be able to ascertain, using no more than routine experimentation, numerous equivalents to the specific embodiments described herein. Thus, it will be appreciated that the scope of the present invention is not limited to the above described embodiments, but rather is defined by the appended claims; and that these claims will encompass modifications of and improvements to what has been described.
This application claims priority under 35 U.S.C. §119(e) to U.S. Provisional Application No. 61/714,762, filed Oct. 17, 2012, entitled Three-Dimensional Digital Impression and Visualization of Objects Through a Clear Elastomer, the contents of which are incorporated by reference herein.