A multi-channel 3D camera system obtains digital images of an object from multiple viewpoints, which can be used to generate a 3D image of the object. Multi-channel cameras offer the advantages of high accuracy and no moving parts compared with other methods for obtaining 3D images. However, the use of multiple channels requires physical space to accommodate those channels within a scanning wand incorporating the 3D camera system, which can affect the size and form factor of the wand. The complexity of using multiple channels can also increase the cost of the 3D system. As a result, the 3D image capturing market is driving the development of more compact and cost-effective 3D cameras that maintain high accuracy. Accordingly, a need exists for such an improved 3D camera system.
A first single channel 3D image capture apparatus, consistent with the present invention, includes an image sensor, a lens adjacent the image sensor, and an active optical component adjacent the lens and opposite the image sensor. An aperture component is located between the active optical component and the lens, and the aperture component has an aperture for allowing passage of light to the image sensor. The active optical component is changeable between first and second shapes. The first shape provides a first optical wavefront through the aperture and lens to the image sensor from a first view angle of an object, and the second shape provides a second optical wavefront through the aperture and lens to the image sensor from a second view angle of the object. The second optical wavefront is shifted by the active optical component on the image sensor with respect to the first optical wavefront in order to provide multiple view-angle images along a single optical channel.
A second single channel 3D image capture apparatus, consistent with the present invention, includes an image sensor, a lens adjacent the image sensor, and a mirror adjacent the lens and opposite the image sensor. An aperture component is located between the mirror and the lens, and the aperture component has an aperture for allowing passage of light to the image sensor. The mirror is changeable between first and second positions. The first position provides a first optical wavefront through the aperture and lens to the image sensor from a first view angle of an object, and the second position provides a second optical wavefront through the aperture and lens to the image sensor from a second view angle of the object. The second optical wavefront is shifted by the mirror on the image sensor with respect to the first optical wavefront in order to provide multiple view-angle images along a single optical channel.
The accompanying drawings are incorporated in and constitute a part of this specification and, together with the description, explain the advantages and principles of the invention. In the drawings,
Embodiments of the present invention use a single optical channel to capture multiple views of an object from varying viewpoints that can be used to generate a 3D image of the object. The single optical channel can use, for example, an active optical wedge or a moveable mirror to obtain the multiple views by creating virtually spatially separated apertures in a time sequential manner. An electronic digital image sensor captures a scene of a 3D object through the multiple virtual apertures to obtain different view-angle images. Software algorithms can rebuild the 3D scene into a 3D image or model based on the captured different view-angle images of the scene.
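The time-sequential acquisition described above can be sketched as a simple capture loop: toggle the active optical element between its two states and grab one frame in each. The `set_state` and `capture_frame` callables below are hypothetical placeholders for device-specific drivers, not part of the disclosed apparatus.

```python
def capture_view_pair(set_state, capture_frame):
    """Toggle the active optical element between its two states and
    grab one frame in each, yielding the pair of view-angle images.

    set_state: callable that drives the wedge/mirror to a named state
    capture_frame: callable that reads one frame from the image sensor
    """
    set_state("first")      # e.g. wedge off, or mirror position 1
    view1 = capture_frame()
    set_state("second")     # e.g. wedge on, or mirror position 2
    view2 = capture_frame()
    return view1, view2
```

The returned pair feeds the reconstruction algorithms that compare the two view-angle images of the same scene.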
Systems to generate 3D images or models based upon image sets from multiple views are disclosed in U.S. Pat. Nos. 7,956,862 and 7,605,817, both of which are incorporated herein by reference as if fully set forth. These systems can be included in a housing providing for hand-held use, and an example of such a housing is disclosed in U.S. Pat. No. D674,091, which is incorporated herein by reference as if fully set forth.
As shown in
The components of system 10 can be contained within a housing 11, which can have a variety of shapes. For example, housing 11 can be configured for hand-held use. Housing 11 can include a window 13 for receiving light from the object, and window 13 can be implemented, for example, as an aperture in housing 11 or with a transparent piece of material. A light source 15, such as one or more light emitting diodes (LEDs), can optionally be located on the housing adjacent window 13 for illuminating the object. System 10 can optionally include a mirror in front of wedge 16 and within or adjacent housing 11 to image the object at a non-zero angle to central axis 14, for example downward from housing 11 when scanning an object.
The electrically driven mirror or micro-mirror array 36 has on and off states controlled by mirror control 37. When mirror 36 is turned on by an electrical signal from mirror control 37, the single channel optics captures the wavefront of object point A(x, y, z) and forms an image A′(x′, y′, Δy=0) on image sensor 42. When the mirror is off and shifts to a different position, as represented by angle Θ, the optical channel samples a different wavefront of A(x, y, z) and forms an image A″(x″, y″, Δy) on image sensor 42. By analyzing the shift Δy of A″(x″, y″, Δy) with respect to A′(x′, y′, Δy=0), the spatial location A(x, y, z) of the object point can be determined. By obtaining images of the object at different views and repeating this computation, a 3D image or model of the object can be generated. An example of a rotatable mirror is the Digital Micromirror Device (DMD) product by Texas Instruments Incorporated.
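As a simplified illustration of how the shift Δy encodes depth, the two mirror states can be modeled as a pinhole stereo pair whose virtual apertures are separated by an effective baseline b. Under that standard approximation (not the patent's exact computation), depth follows z = f·b/Δy. The function name and the choice of model are assumptions for illustration only.

```python
def depth_from_shift(delta_y, baseline, focal_length):
    """Pinhole-stereo approximation: z = f * b / disparity.

    delta_y: measured image shift on the sensor (same units as below)
    baseline: effective separation of the two virtual apertures
    focal_length: focal length of the single channel optics
    Assumes delta_y is nonzero; all lengths in consistent units (e.g. mm).
    """
    return focal_length * baseline / delta_y
```

For example, with an assumed 3.0 mm focal length, a 1.0 mm virtual baseline, and a 0.05 mm measured shift, the model places the object point at 60 mm.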
The components of system 30 can be contained within a housing 31, which can have a variety of shapes. For example, housing 31 can be configured for hand-held use. Housing 31 can include a window 33 for receiving light from the object, and window 33 can be implemented, for example, as an aperture in housing 31 or with a transparent piece of material. A light source 35, such as one or more LEDs, can optionally be located on the housing adjacent window 33 for illuminating the object.
The active optical wedge shown in
The active optical wedge for the embodiment shown in
In optical systems 50, 66, and 82, the active optical wedge provides an optical wavefront along the z-axis through the aperture of the aperture component, which is focused onto the image sensor by the lenses. By switching between its on and off states, the active optical wedge provides shifted images of an object from the same perspective along a single optical channel and effectively provides two virtual channels. A single channel 3D system can alternatively use multiple active optical wedges or other active optical components. The aperture component in the single channel systems can be implemented with, for example, an opaque plate having a substantially circular aperture or an aperture of other shapes.
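The magnitude of the image shift produced by the wedge can be estimated with the standard thin-prism approximation: a wedge of apex angle α and refractive index n deviates rays by δ ≈ (n − 1)α, and the image on the sensor shifts by roughly f·tan δ. This is a textbook first-order estimate, not a figure taken from the disclosed designs; the parameter values in the usage note are illustrative assumptions.

```python
import math

def wedge_image_shift(wedge_angle_deg, n, focal_length_mm):
    """Estimate the on-sensor image shift caused by a thin optical wedge.

    Thin-prism deviation: delta = (n - 1) * alpha (alpha in radians).
    Resulting image shift on the sensor: approximately f * tan(delta).
    Returns the shift in mm.
    """
    alpha = math.radians(wedge_angle_deg)
    delta = (n - 1.0) * alpha
    return focal_length_mm * math.tan(delta)
```

For instance, an assumed 2° wedge of index 1.5 with a 3.0 mm focal length deviates rays by about 1° and shifts the image by roughly 0.05 mm, i.e. several pixels on a typical small-pitch sensor.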
Image sensor 107 can be implemented with, for example, any digital imager, such as a CMOS or CCD sensor, having approximately 1.6-3.0 megapixels or another resolution. The image sensor is positioned within the single channel 3D imager to be conjugate with the nominal object plane. The 3D system can generate a 3D image or model at a particular volume of object space depending on the optical design. For example, the system can map 3D object space from 5 mm to 15 mm if the optical design has a focal length of approximately 3.0 mm.
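The conjugate relationship between object plane and sensor plane follows the thin-lens equation 1/f = 1/d_o + 1/d_i. A minimal sketch, assuming an ideal thin lens (the real optical design is more complex), shows how the stated 5-15 mm object range maps to image distances for a 3.0 mm focal length:

```python
def image_distance(f_mm, object_distance_mm):
    """Solve the thin-lens equation 1/f = 1/d_o + 1/d_i for d_i.

    f_mm: focal length in mm; object_distance_mm: d_o in mm.
    Assumes d_o > f so a real image forms.
    """
    return 1.0 / (1.0 / f_mm - 1.0 / object_distance_mm)
```

With f = 3.0 mm, an object at 5 mm images at 7.5 mm behind the lens and an object at 15 mm images at 3.75 mm, so the sensor position determines which plane within the 5-15 mm volume is in sharpest focus.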
The image sensor can include a single sensor, as shown, partitioned into multiple partially overlapping image data regions. Alternatively, the image sensor can be implemented with multiple sensors with the image data regions distributed among them.