Different techniques are known for three dimensional imaging.
It is known to carry out three dimensional particle imaging with a single camera. This is also called quantitative volume imaging. One technique, described by Willert and Gharib, uses a special defocusing mask relative to the camera lens. This mask is used to generate multiple images from each scattering site on the item to be imaged. This site can include particles, bubbles or any other optically-identifiable image feature. The images are then focused onto an image sensor, e.g. a charge coupled device (CCD). This system allows the position and size of the scattering centers to be determined accurately in three dimensions.
Another technique is called aperture coded imaging. This technique uses off-axis apertures to measure the depth and location of a scattering site. The shifts in the images caused by these off-axis apertures are monitored, to determine the three-dimensional position of the site or sites.
U.S. Pat. No. 7,006,132 describes a geometric analysis in which a camera lens of focal length f is located at z=0. Two small apertures are placed within the lens, each a distance d/2 away from the optical centerline, which also corresponds to the z axis.
U.S. Pat. No. 7,006,132 describes using lens laws and self similar triangle analysis to find the geometrical center (X0,Y0) of the image pair, and then to solve for the image separation.
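As a rough sketch of the kind of relation such a lens-law and self-similar-triangle analysis yields (the symbols L, l, and b below are notational assumptions made here for illustration, not the patent's own nomenclature):

```latex
% Notation assumed for this sketch: f = focal length, d = aperture separation,
% L = in-focus (reference) object distance, l = lens-to-sensor distance,
% Z = distance to the scattering site, b = on-sensor separation of its image pair.
\[ \frac{1}{L} + \frac{1}{l} = \frac{1}{f}, \qquad \frac{1}{Z} + \frac{1}{z'} = \frac{1}{f} \]
% Self-similar triangles through the two apertures give the separation of the
% image pair about its geometrical center (X0, Y0):
\[ b = d\,l\,\left|\frac{1}{L} - \frac{1}{Z}\right| \]
% Inverting for a site beyond the reference plane (Z > L) recovers its depth:
\[ Z = \left(\frac{1}{L} - \frac{b}{d\,l}\right)^{-1} \]
```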
More generally, such a system can determine the pose of the camera that is obtaining the information, and use that pose together with the information from the camera to determine 3D information about structures, pieced together from multiple different images. US 2009/0295908 describes how the pose of the camera relative to the object can be measured simultaneously with defocusing by using a feature matching algorithm, either with the same apertures that are used for defocusing or with an additional set of apertures.
There are often tradeoffs in aperture coding systems.
The present application describes a lens system and assembly that is intended for attachment to an SLR camera or other camera with a replaceable lens assembly, and which includes coded apertures of a type that can be used for determining defocus information.
An embodiment describes using a camera for taking pictures and also for obtaining 3D information.
In the drawings:
A camera system is used with a conventional lens in a first embodiment to obtain a picture, and with a special lens assembly in a second embodiment to obtain defocus information. In the second embodiment, the lens and camera assembly is used with a computer to reconstruct a three-dimensional (3D) object from a number of different pictures at different camera orientations. The measuring camera can be handheld or portable. In one embodiment, the camera is handheld, and the computer reconstructs information indicative of the camera pose in order to stitch together information from a number of different camera exposures.
A system using defocusing to measure 3D objects was described by Willert & Gharib in 1992. Camera pose can be obtained by combining defocusing with a feature matching system such as the Scale Invariant Feature Transform (SIFT) or with an error minimization method (such as the Levenberg-Marquardt method). This is described in general in US 2009/0295908, entitled “Method and Device for High Resolution Three Dimensional Imaging Which Obtains Camera Pose Using Defocusing.”
However, the inventors realized that this could be improved in a measuring camera which has interchangeable lenses. An embodiment describes how this system would be used in interchangeable lens cameras, such as C mount video cameras and single lens reflex (SLR) cameras. More generally, any camera of any type which has interchangeable lenses can be used for this.
A first embodiment is shown in
The lens also may include a lens determination device 201 which indicates that the lens is a defocus-obtaining lens.
Object reconstruction can be done with defocusing, which uses two or more off-axis apertures to measure distance from the degree of defocusing of the image. Defocusing causes the images generated by each aperture to be displaced in a rigorously defined manner. See U.S. Pat. No. 7,006,132.
This technique can be implemented with any interchangeable lens camera. The 3D+pose information is encoded by the apertures.
Coding of the aperture(s) is required when using feature matching to measure camera pose. Color coding can be used in one embodiment. For example, three green coded apertures can be placed in a triangular pattern for defocusing with one red and one blue aperture placed on opposite sides for pose.
Alternatively, one blue and one red aperture can be used as shown in
Other forms of coding can be used—for example, polarization coding, position coding, or shape coding where the different apertures have different shapes.
When used with an appropriate color imaging camera, virtually any camera with interchangeable lenses can be converted into a 3D+pose measurement system when connected to a computer. Because all of the depth and position information is coded in the specially designed aperture, there is very little constraint on the camera. It only needs to have a mount for interchangeable lenses (like C-Mount, Nikon, Canon, or F-Mount), a color sensor, which is found in virtually every camera, and a method for outputting the images in a format that a computer can process. For example, the “raw” output from the camera may be streamed to a computer via USB.
In one embodiment, the imaging may be carried out using painted-on features, for example features that are applied to provide contrast. In an embodiment, the contrast may be provided by white and black particles sprayed on with an aerosol.
At 300, the system obtains image information at a first time t1. In an embodiment, this can capture multiple images through multiple apertures, in a way that allows distinguishing between the apertures. In the embodiment, color filters are used to separate between the channels, as described above. One of the apertures can be associated with a red filter to only or predominantly pass the red light. The other aperture can be associated with a blue filter to only or predominantly pass the blue light. This forms two channels of information, one having passed through each of two separated apertures.
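As a purely illustrative sketch (not the patent's implementation), the two color-coded channels can be separated simply by reading out the red and blue planes of a raw color frame; the file name here is hypothetical, and OpenCV/NumPy are assumed to be available:

```python
import cv2          # OpenCV, assumed available for image I/O
import numpy as np

# Hypothetical raw color frame captured at time t1 through the coded apertures.
frame = cv2.imread("frame_t1.png")                    # OpenCV loads in BGR order
blue_channel = frame[:, :, 0].astype(np.float32)      # light passed by the blue-filtered aperture
red_channel = frame[:, :, 2].astype(np.float32)       # light passed by the red-filtered aperture
# Each plane now carries the image formed through one of the two separated apertures.
```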
According to another embodiment, rather than separating the channels by colors, the apertures are arranged in a specified arrangement such as an equilateral triangle, and the processor 153 recognizes that equilateral triangle in the final image to find the different channels.
According to another embodiment, the different channels can be formed by modifying polarizations of different channels, by putting different polarizations on the different apertures and using those different polarizations as the different channels. The channels can also be formed in other ways, by providing different physical mask shapes, or by changing the aperture in time division, e.g., by moving it from place to place.
At 305, a robust feature detector is used to determine references on the current image frame using a reduced data set, here the blue channel. Note that the image frame obtained at any one time will be smaller than the overall scene being reconstructed. The references are referred to herein as keypoints, but any reference that can be recognized in different images can be used.
The robust feature detector finds the two-dimensional (x, y) positions of the keypoints as well as a feature-vector-style descriptor for those keypoints. That descriptor stays constant for the points under changes in scale, rotation, and lighting.
The feature-vector-style descriptor can be obtained by extracting interesting points on the object to provide a “feature description” of the object. This description has enough information that it can be used to identify the object when attempting to locate the object in other images containing other objects. To perform reliable recognition, the features extracted from the training image are selected to be robust to changes in image scale, noise, illumination, and local geometric distortion. This may use techniques, for example, described in U.S. Pat. No. 6,711,293. The commercially available scale invariant feature transform (SIFT) software can be used for this detection.
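For illustration, a minimal sketch of running a SIFT-style robust feature detector on the reduced (blue-channel) data set; it assumes an OpenCV build that includes SIFT, and reuses the channel arrays from the sketch above:

```python
import cv2
import numpy as np

# Convert the blue channel from the previous sketch to 8-bit for the detector.
blue_u8 = cv2.normalize(blue_channel, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

sift = cv2.SIFT_create()                              # robust feature detector
keypoints, descriptors = sift.detectAndCompute(blue_u8, None)

# Each keypoint carries its 2D position (x, y); each descriptor is a feature vector
# that remains stable under changes in scale, rotation, and lighting.
xy = np.array([kp.pt for kp in keypoints], dtype=np.float32)
```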
Three dimensional point information for each of the keypoints is also determined at 310. This can use, for example, a cross-correlation technique for determining 3D information from defocused points within the two channels.
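One possible sketch of this step: locate each blue-channel keypoint's counterpart in the red channel by normalized cross-correlation, then convert the measured separation b to a depth Z with the defocusing relation sketched earlier. The calibration constants and search parameters below are hypothetical placeholders, not values from the patent:

```python
import cv2
import numpy as np

# Hypothetical calibration constants for Z = 1 / (1/L - b/(d*l)).
L_REF = 0.50         # reference focal-plane distance (m), assumed
D_AP = 0.02          # aperture separation (m), assumed
L_SENSOR = 0.035     # lens-to-sensor distance (m), assumed
PIXEL_PITCH = 5e-6   # sensor pixel size (m), assumed

def depth_from_separation(b_pixels):
    """Defocusing relation: image-pair separation (pixels) -> distance to the site."""
    b = b_pixels * PIXEL_PITCH
    return 1.0 / (1.0 / L_REF - b / (D_AP * L_SENSOR))

def separation_at(blue_u8, red_u8, x, y, patch=15, search=40):
    """Cross-correlate a patch around a blue-channel keypoint against the red channel."""
    h = patch // 2
    if (y - h < 0 or y + h + 1 > blue_u8.shape[0]
            or x - h - search < 0 or x + h + search + 1 > red_u8.shape[1]):
        return None                                   # keypoint too close to the border
    template = blue_u8[y - h:y + h + 1, x - h:x + h + 1]
    window = red_u8[y - h:y + h + 1, x - h - search:x + h + search + 1]
    scores = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    return abs(int(np.argmax(scores)) - search)       # separation b in pixels

# e.g. (red_u8 prepared from red_channel the same way as blue_u8):
# seps = (separation_at(blue_u8, red_u8, int(px), int(py)) for px, py in xy)
# depths = [depth_from_separation(b) for b in seps if b is not None]
```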
At 315, a map between the different times t is formed. It matches the parts of the three dimensional information corresponding to the keypoints in order to obtain a transformation T (translation and rotation) between the times. This finds information indicative of the “pose” of the camera at this time.
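A minimal sketch of computing such a transformation T from two matched sets of 3D keypoints; the SVD-based (Kabsch) fit shown here is one standard choice used purely for illustration, not necessarily the method of the referenced patents:

```python
import numpy as np

def rigid_transform(points_a, points_b):
    """Best-fit rotation R and translation t with points_b ~ R @ points_a + t.

    points_a, points_b: (N, 3) arrays of matched 3D keypoints from two frames.
    """
    c_a, c_b = points_a.mean(axis=0), points_b.mean(axis=0)
    H = (points_a - c_a).T @ (points_b - c_b)     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_b - R @ c_a
    return R, t                                   # translation and rotation: the pose change
```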
At 320, the dense point cloud for each obtained frame is transformed using the pose information.
At 325, the dense point set is saved.
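Continuing the same illustrative sketch, the pose steps can be chained so that each frame's dense point cloud is carried into the coordinate system of the first frame and appended to the saved point set (function and variable names are assumptions carried over from the sketches above):

```python
import numpy as np

merged_cloud = []                          # saved dense point set, in first-frame coordinates
R_wc, t_wc = np.eye(3), np.zeros(3)        # accumulated world-from-current-frame pose

def add_frame(dense_points, R_step, t_step):
    """R_step, t_step map current-frame points into the previous frame's coordinates,
    e.g. rigid_transform(points_curr, points_prev) from the sketch above."""
    global R_wc, t_wc
    R_wc, t_wc = R_wc @ R_step, R_wc @ t_step + t_wc
    merged_cloud.append(dense_points @ R_wc.T + t_wc)   # transform and merge this frame

# After all frames have been added, np.vstack(merged_cloud) is the registered dense model.
```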
Although only a few embodiments have been disclosed in detail above, other embodiments are possible and the inventors intend these to be encompassed within this specification. The specification describes specific examples to accomplish a more general goal that may be accomplished in another way. This disclosure is intended to be exemplary, and the claims are intended to cover any modification or alternative which might be predictable to a person having ordinary skill in the art. For example, other forms of processing can be used. Other camera types and mounts besides the specific type herein can be used.
The cameras described herein can range from handheld portable units to machine vision cameras or underwater units.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the exemplary embodiments of the invention.
The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein, may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. The processor can be part of a computer system that also has a user interface port that communicates with a user interface, and which receives commands entered by a user, has at least one memory (e.g., hard drive or other comparable storage, and random access memory) that stores electronic information including a program that operates under control of the processor and with communication via the user interface port, and a video output that produces its output via any kind of video output format, e.g., VGA, DVI, HDMI, displayport, or any other form.
A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. These devices may also be used to select values for devices as described herein.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory storage can also be rotating magnetic hard disk drives, optical disk drives, or flash memory based storage drives or other such solid state, magnetic, or optical storage devices. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. The computer readable media can be an article comprising a machine-readable non-transitory tangible medium embodying information indicative of instructions that when performed by one or more machines result in computer implemented operations comprising the actions described throughout this specification.
Operations as described herein can be carried out on or over a website. The website can be operated on a server computer, or operated locally, e.g., by being downloaded to the client computer, or operated via a server farm. The website can be accessed over a mobile phone or a PDA, or on any other client. The website can use HTML code in any form, e.g., MHTML, or XML, and via any form such as cascading style sheets (“CSS”) or other.
Also, the inventors intend that only those claims which use the words “means for” are intended to be interpreted under 35 USC 112, sixth paragraph. Moreover, no limitations from the specification are intended to be read into any claims, unless those limitations are expressly included in the claims. The computers described herein may be any kind of computer, either general purpose, or some specific purpose computer such as a workstation. The programs may be written in C, Java, Brew, or any other programming language. The programs may be resident on a storage medium, e.g., magnetic or optical, e.g. the computer hard drive, a removable disk or media such as a memory stick or SD media, or other removable medium. The programs may also be run over a network, for example, with a server or other machine sending signals to the local machine, which allows the local machine to carry out the operations described herein.
Where a specific numerical value is mentioned herein, it should be considered that the value may be increased or decreased by 20%, while still staying within the teachings of the present application, unless some different range is specifically mentioned. Where a specified logical sense is used, the opposite logical sense is also intended to be encompassed.
The previous description of the disclosed exemplary embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these exemplary embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
This application is a continuation of U.S. patent application Ser. No. 12/853,181, filed Aug. 9, 2010, which claims priority from provisional application No. 61/232,947, filed Aug. 11, 2009, both of which are incorporated by reference herein in their entireties and for all purposes.
Number | Name | Date | Kind |
---|---|---|---|
4101917 | Ueda | Jul 1978 | A |
4264921 | Pennington et al. | Apr 1981 | A |
4512656 | Shinoda et al. | Apr 1985 | A |
4650466 | Luther | Mar 1987 | A |
4727471 | Driels et al. | Feb 1988 | A |
4879664 | Suyama et al. | Nov 1989 | A |
4948258 | Caimi | Aug 1990 | A |
5018854 | Rioux | May 1991 | A |
5031154 | Watanabe | Jul 1991 | A |
5071407 | Termin et al. | Dec 1991 | A |
5075561 | Rioux | Dec 1991 | A |
5168327 | Yamawaki | Dec 1992 | A |
5206498 | Sensui | Apr 1993 | A |
5216695 | Ross et al. | Jun 1993 | A |
5222971 | Willard et al. | Jun 1993 | A |
5235857 | Anderson | Aug 1993 | A |
5270795 | Blais | Dec 1993 | A |
5351078 | Lemelson | Sep 1994 | A |
5373151 | Eckel, Jr. et al. | Dec 1994 | A |
5476100 | Galel | Dec 1995 | A |
5496277 | Termin et al. | Mar 1996 | A |
5527282 | Segal | Jun 1996 | A |
5579444 | Dalziel et al. | Nov 1996 | A |
5604344 | Finarov | Feb 1997 | A |
5714762 | Li et al. | Feb 1998 | A |
5745067 | Chou et al. | Apr 1998 | A |
5864359 | Kazakevich | Jan 1999 | A |
5922961 | Hsu et al. | Jul 1999 | A |
5928260 | Chin et al. | Jul 1999 | A |
5986694 | Iino | Nov 1999 | A |
6045623 | Cannon | Apr 2000 | A |
6112029 | Suda | Aug 2000 | A |
6115553 | Iwamoto | Sep 2000 | A |
6157747 | Szeliski et al. | Dec 2000 | A |
6229913 | Nayar et al. | May 2001 | B1 |
6229959 | Suda et al. | May 2001 | B1 |
6262803 | Hallerman et al. | Jul 2001 | B1 |
6271918 | Blais | Aug 2001 | B2 |
6278847 | Gharib et al. | Aug 2001 | B1 |
6304284 | Dunton et al. | Oct 2001 | B1 |
6344048 | Chin et al. | Feb 2002 | B1 |
6519359 | Nafis et al. | Feb 2003 | B1 |
6545701 | Sinclair et al. | Apr 2003 | B2 |
6563543 | Doron | May 2003 | B1 |
6711293 | Lowe | Mar 2004 | B1 |
6748112 | Nguyen et al. | Jun 2004 | B1 |
6750904 | Lambert | Jun 2004 | B1 |
6765569 | Neumann et al. | Jul 2004 | B2 |
6912293 | Korobkin | Jun 2005 | B1 |
6915008 | Barman et al. | Jul 2005 | B2 |
6943349 | Adamec et al. | Sep 2005 | B2 |
6955656 | Bergheim et al. | Oct 2005 | B2 |
6965690 | Matsumoto | Nov 2005 | B2 |
7006132 | Pereira et al. | Feb 2006 | B2 |
7106375 | Venturino et al. | Sep 2006 | B2 |
7171054 | Fiete et al. | Jan 2007 | B2 |
7236622 | Chen et al. | Jun 2007 | B2 |
7260274 | Sawhney et al. | Aug 2007 | B2 |
7271377 | Mueller et al. | Sep 2007 | B2 |
7340077 | Gokturk et al. | Mar 2008 | B2 |
7372642 | Rohaly et al. | May 2008 | B2 |
7423666 | Sakakibara et al. | Sep 2008 | B2 |
7496226 | Negahdaripour et al. | Feb 2009 | B2 |
7565029 | Zhou et al. | Jul 2009 | B2 |
7605817 | Zhang et al. | Oct 2009 | B2 |
7612869 | Pereira et al. | Nov 2009 | B2 |
7612870 | Graff et al. | Nov 2009 | B2 |
7668388 | Bryll | Feb 2010 | B2 |
7715018 | Gharib et al. | May 2010 | B2 |
7715918 | Melvin | May 2010 | B2 |
7747151 | Kochi et al. | Jun 2010 | B2 |
7819591 | Rohaly et al. | Oct 2010 | B2 |
7826067 | Gharib et al. | Nov 2010 | B2 |
7894078 | Gharib et al. | Feb 2011 | B2 |
7916309 | Gharib et al. | Mar 2011 | B2 |
8089635 | Gharib et al. | Jan 2012 | B2 |
8179424 | Moller | May 2012 | B2 |
8190020 | Numako et al. | May 2012 | B2 |
8472032 | Gharib et al. | Jun 2013 | B2 |
8514268 | Gharib et al. | Aug 2013 | B2 |
8576381 | Gharib et al. | Nov 2013 | B2 |
8619126 | Gharib et al. | Dec 2013 | B2 |
8773507 | Gharib et al. | Jul 2014 | B2 |
20010031920 | Kaufman et al. | Oct 2001 | A1 |
20020038120 | Duhaylongsod et al. | Mar 2002 | A1 |
20030025811 | Keelan et al. | Feb 2003 | A1 |
20030096210 | Rubbert et al. | May 2003 | A1 |
20030125719 | Furnish | Jul 2003 | A1 |
20030160970 | Basu et al. | Aug 2003 | A1 |
20030210407 | Xu | Nov 2003 | A1 |
20040136567 | Billinghurst et al. | Jul 2004 | A1 |
20040155975 | Hart et al. | Aug 2004 | A1 |
20050025116 | Chen et al. | Feb 2005 | A1 |
20050104879 | Kaye et al. | May 2005 | A1 |
20050119684 | Guterman et al. | Jun 2005 | A1 |
20050168616 | Rastegar et al. | Aug 2005 | A1 |
20050251116 | Steinke et al. | Nov 2005 | A1 |
20050264668 | Miyamoto | Dec 2005 | A1 |
20060044546 | Lewin et al. | Mar 2006 | A1 |
20060092314 | Silverstein et al. | May 2006 | A1 |
20060098872 | Seo et al. | May 2006 | A1 |
20060209193 | Pereira et al. | Sep 2006 | A1 |
20060228010 | Rubbert et al. | Oct 2006 | A1 |
20060285741 | Subbarao | Dec 2006 | A1 |
20070008312 | Zhou et al. | Jan 2007 | A1 |
20070056768 | Hsieh et al. | Mar 2007 | A1 |
20070076090 | Alexander | Apr 2007 | A1 |
20070078500 | Ryan et al. | Apr 2007 | A1 |
20070146700 | Kowarz et al. | Jun 2007 | A1 |
20070188769 | Rohaly et al. | Aug 2007 | A1 |
20070195162 | Graff et al. | Aug 2007 | A1 |
20070236694 | Gharib et al. | Oct 2007 | A1 |
20080031513 | Hart | Feb 2008 | A1 |
20080091691 | Tsuji | Apr 2008 | A1 |
20080180436 | Kraver | Jul 2008 | A1 |
20080201101 | Hebert et al. | Aug 2008 | A1 |
20080218604 | Shikano et al. | Sep 2008 | A1 |
20080239316 | Gharib et al. | Oct 2008 | A1 |
20080259354 | Gharib et al. | Oct 2008 | A1 |
20080278570 | Gharib et al. | Nov 2008 | A1 |
20080278572 | Gharib et al. | Nov 2008 | A1 |
20080278804 | Gharib et al. | Nov 2008 | A1 |
20090016642 | Hart | Jan 2009 | A1 |
20090129667 | Ho et al. | May 2009 | A1 |
20090238449 | Zhang et al. | Sep 2009 | A1 |
20090295908 | Gharib et al. | Dec 2009 | A1 |
20100007718 | Rohaly, Jr. et al. | Jan 2010 | A1 |
20100094138 | Gharib et al. | Apr 2010 | A1 |
20110074932 | Gharib et al. | Mar 2011 | A1 |
Number | Date | Country |
---|---|---|
1175106 | Jan 2002 | EP |
2242270 | Mar 1991 | GB |
2655885 | Sep 1997 | JP |
2001-16610 | Aug 2002 | JP |
2001-61165 | Sep 2002 | JP |
2003-289293 | Oct 2003 | JP |
2004-191240 | Jul 2004 | JP |
WO 8800710 | Jan 1988 | WO |
WO 9641304 | Dec 1996 | WO |
WO 0069357 | Nov 2000 | WO |
WO 0186281 | Nov 2001 | WO |
WO 02096478 | Dec 2002 | WO |
WO 2006009786 | Jan 2006 | WO |
WO 2007041542 | Apr 2007 | WO |
WO 2007056768 | May 2007 | WO |
WO 2007095307 | Aug 2007 | WO |
WO 2007130122 | Nov 2007 | WO |
WO 2008091691 | Jul 2008 | WO |
Entry |
---|
Horstmeyer et al., Pupil plane multiplexing for multi-domain imaging sensors, 2008, The MITRE Corp. & Univ. of California/San Diego, pp. 10. |
Bando, Y., “How to Disassemble the Canon EF 50mm f/1.8 II Lens”, 2008, pp. 1-21. |
Chang, N. L., Efficient Dense Correspondences using Temporally Encoded Light Patterns, IEEE, Oct. 12, 2003. |
Dellaert, F., et al., Structure from Motion without Correspondence, Computer Vision & Pattern Recognition, 2000. |
El-Hakim S.F., et al., A System for Indoor 3-D Mapping and Virtual Environments, Proceedings of the SPIE, 1997. |
Favaro, P., et al., “Observing Shape from Defocused Images”, Int'l Journal of Computer Vision, vol. 52, No. 1, 2003, pp. 25-43. |
Guarnieri, A., et al., “3D Modeling of Real Artistic Objects with Limited Computers Resources”, Proc. Of XVIII CIPA Symposium on Architectural & Archaeological Photogrammetry, Oct. 1999. |
Hasinoff, S.W, “Variable-Aperture Photography”, Graduate Department of Computer Science, University of Toronto, Thesis submitted in 2008, pp. 1-180. |
Horn, E., et al., Toward Optimal Structured Light Patterns, 3DIM, 1997. |
Koninckx, T.P., et al., A. Graph Cut based Adaptive Structured Light approach for real-time Range Acquisition, 3EDPVT, 2004. |
Kordelas, G., et al., State-of-the-art Algorithms for Complete 3D Model Reconstruction, “Engage” Summer School, 2010. |
Lepetit, V., et al., “Monocular Model-Based 3D Tracking of Rigid Objects: A Survey”, Foundation and Trends in Computer Graphics and Vision, vol. 1, No. 1, 2005, pp. 1-89. |
Levenberg, K., “A Method for the Solution of Certain Non-Linear Problems in Least Squares”, Quarterly of Applied Mathematics, vol. II, No. 2, Jul. 1944. |
Levin, A., et al., “Image and Depth from a Conventional Camera with a Coded Aperture”, ACM Transactions on Graphics, vol. 26, No. 3, Jul. 2007, pp. 70-1-70-9. |
Li, S.Z., Markov Random Field Models in Computer Vision, Springer-Verlag, 1995. |
Lowe, D.G., Three-Dimensional Object Recognition from Single Two-Dimensional Images, Artificial Intelligence, vol. 31, No. 3, Mar. 1987, pp. 355-395. |
Lowe, D.G., Object Recognition from Local Scale-Invariant Features, Proc. of the Int'l Conference on Computer Vision, Sep. 1999. |
Makadia, A., et al., Fully Automatic Registration of 3D Point Clouds, Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2006. |
Marquardt, D.W., An Algorithm for Least-Squares Estimation of Nonlinear Parameters, Journal of the Society for Industrial and Applied Mathematics, vol. 11, No. 2, Jun. 1963, pp. 431-441. |
Mouaddib, E., et al., Recent Progress in Structured Light in order to Solve the Correspondence Problem in Stereo Vision, Proceedings of the 1997 IEEE, Apr. 1997. |
Neugebauer, P.J., Geometrical Cloning of 3D Objects via Simultaneous Registration of Multiple Range Images, Shape Modeling & Application, Mar. 1997. |
Nguyen, V.A., et al., Detection of the depth order of defocused images, Vision Research 45, 2005, pp. 1003-1011. |
Pagés, J., et al., “Implementation of a Robust Coded Structured Light Technique for Dynamic 3D Measurements”, ICIP, 2003. |
Pereira, F., et al., “Two-frame 3D particle tracking”, Measurement Science and Technology, vol. 17, 2006, pp. 1680-1692. |
Raij, A., et al., “PixelFlex2: A Comprehensive, Automatic, Casually-Aligned Multi-Projector Display”, PROCAMS, 2003. |
Raskar, R., et al., Multi-Projector Displays Using Camera-Based Registration, IEEE Visualization, 1999. |
Rocchini, C., et al., A low cost 3D scanner based on structured light, Computer Graphics Forum (Eurographics 2001 Conf. Issue). |
Rusinkiewicz, S., et al., Real-Time 3D Model Acquisition, ACM Transactions on Graphics, 2002. |
Salvi, J., et al., Pattern codification strategies in structured light systems, Pattern Recognition, 2004. |
Scharstein, D., et al., A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms, IJCV, 2002. |
Scharstein, D., et al., “High-Accuracy Stereo Depth Maps Using Structured Light”, IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, 2003, vol. 1, pp. 195-202. |
Sinofsky, “Measurement of Laser Beam Spreading in Biological Tissue Scattering”, SPIE, vol. 712, Lasers in Medicine (1986). |
Smith, E.R., et al., “Registration of combined range-intensity scans: Initialization through Verification”, Computer Vision and Image Understanding, vol. 110, 2008, pp. 226-244. |
Subbarao, M., et al., “Analysis of Defocused Image Data for 3D Shape Recovery using a Regularization Technique”, SPIE, 1997. |
Tardif, J., “Multi-projectors for arbitrary surfaces without explicit calibration and reconstruction”, DIM, 2003. |
Tardif, J., et al., “A MRF formulation for coded structured light”, Proceedings of the 5th Int'l Conf. on 3-D Digital Imaging & Modeling, 2005. |
Wang, Z., et al., “Extraction of the Corner of Checkerboard image”, Proceedings of the 7th World Congress on Intelligent Control and Automation, Jun. 25-27, 2008. |
Weisstein, E., Gray Code, http://mathworld.wolfram.com/GrayCode.html. |
Willert, C.E., et al., “Three-dimensional particle imaging with a single camera”, Experiments in Fluids, vol. 12, 1992, pp. 353-358. |
Williams, J.A., et al., “Multiple View 3D Registration: A Review and a New Technique”, Systems Man. & Cybernetics, vol. 10, 1999. |
Wu, M., et al., “Three-dimensional fluorescent particle tracking at micron-scale using a single Camera”, Experiments in Fluids, 2005, pp. 461-465. |
Yang, R., et al., PixelFlex: A Reconfigurable Multi-Projector Display System, IEEE Visualization, 2001. |
Zhang, S., et al., High-resolution, Real-time 3D Shape Acquisition, IEEE Workshop of real-time 3D sensors & their uses, 2004. |
WO PCT/US2008/000991 ISR, Jul. 28, 2009. |
WO PCT/US2008/000882 ISR, Jul. 5, 2009. |
WO PCT/US2005/021326 ISR, Feb. 1, 2007. |
WO PCT/US2009/004362 ISR, May 27, 2010. |
AU 2008244494 Examiner's First Report, Aug. 18, 2010. |
Battle, J., et al., “Recent Progress in Coded Structured Light as a Technique to Solve The Correspondence Problem: A Survey”, Pattern Recognition, 1998, vol. 31, No. 7, pp. 963-982. |
Kießling, A., “A Fast Scanning Method for Three-Dimensional Scenes”, IEEE Proceedings 3rd International Conference on Pattern Recognition, 1976, pp. 586-589. |
WO PCT/US2007/008598 ISR, Apr. 11, 2008. |
WO PCT/US2007/008598 IPRP, Oct. 8, 2008. |
WO PCT/US2008/000991 ISR, May 21, 2008. |
WO PCT/US2008/000991 IPRP, Jul. 28, 2009. |
WO PCT/US2008/000882 ISR, Mar. 20, 2009. |
WO PCT/US2008/000882 IPRP, Jul. 28, 2009. |
WO PCT/US2008/005311 ISR, Sep. 8, 2008. |
WO PCT/US2008/005311 IPRP, Oct. 27, 2009. |
WO PCT/US2008/005313 ISR, Sep. 8, 2008. |
WO PCT/US2008/005313 IPRP, Oct. 27, 2009. |
WO PCT/US2008/005314 ISR, Sep. 8, 2008. |
WO PCT/US2008/005314 IPRP, Oct. 27, 2009. |
WO PCT/US2008/005315 ISR, Sep. 8, 2008. |
WO PCT/US2008/005315 IPRP, Oct. 27, 2009. |
WO PCT/US2008/012947 ISR, Jul. 14, 2009. |
WO PCT/US2008/012947 IPRP, May 25, 2010. |
WO PCT/US2009/003167 ISR, Oct. 29, 2010. |
WO PCT/US2009/003167 IPRP, Mar. 1, 2011. |
WO PCT/US2009/004362 ISR, Apr. 8, 2010. |
WO PCT/US2009/004362 IPRP, Jan. 25, 2011. |
WO PCT/US2010/046908 ISR, Apr. 29, 2011. |
WO PCT/US2010/046908 IPRP, Feb. 28, 2012. |
WO PCT/US2010/057532 ISR, Oct. 25, 2011. |
WO PCT/US2010/057532 IPRP, Mar. 5, 2013. |
WO PCT/US2011/032580 ISR, Jul. 7, 2011. |
WO PCT/US2011/032580 IPRP, Oct. 23, 2012. |
WO PCT/US2012/046557 ISR, Oct. 2, 2012. |
WO PCT/US2012/046557 IPRP, Jan. 13, 2015. |
WO PCT/US2012/046484 ISR, Sep. 24, 2012. |
WO PCT/US2012/046484 IPRP, Jan. 13, 2015. |
Number | Date | Country | |
---|---|---|---|
20140368617 A1 | Dec 2014 | US |
Number | Date | Country | |
---|---|---|---|
61232947 | Aug 2009 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 12853181 | Aug 2010 | US |
Child | 14303988 | US |