The present disclosure generally relates to an inspection apparatus, method, and computer program product. More specifically, the disclosure relates to a machine vision inspection apparatus, method, and computer program product using three-dimensional information for inspection of a target object.
Over the past two or three decades, machine vision has been used increasingly and plays an important role in the design of automated manufacturing systems. A large variety of products, such as printed circuit boards (PCBs), integrated circuits, liquid crystal displays (LCDs), transistors, automotive parts, agricultural machines, and other products that are made in factories may need to be inspected during the production process. An improperly manufactured component may cause extensive damage to, render entirely or partially useless or ineffective, or otherwise impair a system containing that component. Because of the high cost associated with functional failure, there is a need to ensure that all components are properly manufactured before they are used. Machine vision systems have been used for quality control of products, such as by identifying defects of the products, including missing components, skewed components, reversed components, incorrectly placed components, or wrong-valued components. Variance in the placement and rotation of an object can result in position error and/or distortion error and negatively impact detection and accuracy, and variance among different objects on the same production line can likewise degrade detection and accuracy. Rapid detection and analysis of an object, and rapid assessment of its correct assembly, are desirable. Accordingly, there is a need for improved machine vision systems.
Through applied effort, ingenuity, and innovation, solutions to improve machine vision systems have been realized and are described herein. Inspection apparatus, methods, and non-transitory computer program products are described herein that provide improved machine vision systems, including systems that combine real-time three-dimensional information of a target object with color pixel information and are thereby configured to identify a defect of the target object, such as in a predetermined inspecting area, and to identify improper manufacturing of the target object. Embodiments of the disclosure combine stereoscopic 2D image-based features and objects with 3D depth and position information for the rapid detection and analysis of objects.
According to one exemplary embodiment of the subject disclosure, a method is described. The method comprises determining real-time three-dimensional information of a target object in a predetermined inspecting area based on depth information of at least one real-time image acquired by an image capturing system. The method further comprises projecting color pixel information of a real-time color image of the target object to a three-dimensional virtual model based on the real-time three-dimensional information. The real-time color image may be acquired by a color camera system. The method further comprises generating a color three-dimensional virtual model. The color three-dimensional virtual model may comprise the color pixel information.
According to one exemplary embodiment of the subject disclosure, an apparatus for machine vision inspection is described. The apparatus comprises a processor. The processor is configured to receive depth information of a target acquired by an image capturing system and determine real-time three-dimensional information of a target object in a predetermined inspecting area based on the depth information of the target. The processor is further configured to receive at least one real-time color image of the target acquired by a color camera system and project color pixel information of a real-time color image of the target object to a three-dimensional virtual model based on the real-time three-dimensional information and the real-time color image. The processor is further configured to generate a color three-dimensional virtual model. The color three-dimensional virtual model comprises the color pixel information.
According to one exemplary embodiment of the subject disclosure, a computer program product is described. The computer program product comprises a non-transitory computer readable storage medium and computer program instructions stored therein. The computer program instructions comprise program instructions configured to determine real-time three-dimensional information of a target object in a predetermined inspecting area based on depth information of at least one real-time image acquired by an image capturing system, project color pixel information of a real-time color image of the target object to a three-dimensional virtual model based on the real-time three-dimensional information, the real-time color image being acquired by a color camera system, and generate a color three-dimensional virtual model, wherein the color three-dimensional virtual model comprises the color pixel information.
These characteristics as well as additional features, functions, and details of various embodiments are described below. Similarly, corresponding and additional embodiments are also described below.
Having thus described some embodiments in general terms, reference will now be made to the accompanying drawings, which are not necessarily drawn to scale. The embodiments illustrated in the figures of the accompanying drawings herein are by way of example and not by way of limitation, and wherein:
The subject disclosure now will be described more fully hereinafter with reference to the accompanying drawings, in which preferred embodiments of the disclosure are shown. This disclosure may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. In this regard, reference may be made herein to a number of mathematical or numerical expressions or values, and to a number of positions of various components, elements or the like. It should be understood, however, that these expressions, values, positions or the like may refer to absolute or approximate expressions, values or positions, such that exemplary embodiments may account for variations that may occur in the machine vision inspection apparatus, such as those due to engineering tolerances. Like numbers refer to like elements throughout.
As used herein, the word “exemplary” refers to serving as an example, instance, or illustration. Any aspect, feature, function, design, etc. described herein as “exemplary” or as an “example” is not necessarily to be construed as preferred or advantageous over other aspects, features, functions, designs, etc. Rather, use of the word “exemplary” is intended to present concepts in a concrete fashion.
Example embodiments of the present disclosure may use, at least, cameras to obtain depth information about a target object and calculate the location (e.g., position) and orientation of the target object, such as by using known relative relationships between the cameras and known camera parameters to calculate a spatial relationship between the target object and the cameras. Example embodiments of the present disclosure may also use, at least, cameras to obtain color information about the target object and, based on the calculated spatial relationships between the color cameras and the target object, project the color information onto three-dimensional information of the target object. Color information may comprise grayscale information. Such real-time three-dimensional information of a target object may be used, for example, to accommodate variance in the placement and angle of rotation of a target object, such as to convert and/or correct position offset, viewing angles, and dimensional perspective for machine vision inspection, thereby facilitating rapid detection of a target object, or of different target objects, such as multiple different target objects on the same production line, and thereby reducing error due to placement and perspective distortions caused by the position and orientation of the target object. Further, use of multi-angle detection and perspective distortion correction can further improve the machine vision inspection of certain exemplary embodiments of the present disclosure.
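By way of illustration only, the determination of a target object's position and orientation from depth information may be sketched as follows. This is a minimal sketch that assumes the depth information has already been converted into a point cloud expressed in the depth camera's coordinate frame; the function name and the principal-component-analysis estimate are illustrative assumptions and are not asserted to be the specific algorithm of the disclosure.

```python
import numpy as np

def estimate_pose_from_point_cloud(points):
    """Estimate a coarse position and orientation of a target object.

    points: (N, 3) array of 3D points on the target object, expressed in
    the depth camera's coordinate frame.

    Returns (centroid, rotation), where rotation is a 3x3 matrix whose
    columns are the principal axes of the point cloud.
    """
    centroid = points.mean(axis=0)          # position estimate
    centered = points - centroid
    # The eigenvectors of the covariance matrix give a coarse orientation
    # (principal component analysis).
    covariance = centered.T @ centered / len(points)
    eigenvalues, eigenvectors = np.linalg.eigh(covariance)
    order = np.argsort(eigenvalues)[::-1]   # largest spread first
    rotation = eigenvectors[:, order]
    if np.linalg.det(rotation) < 0:         # enforce a right-handed frame
        rotation[:, -1] *= -1
    return centroid, rotation
```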
With reference to
Referring back to
The color camera system 104 may comprise at least one color camera, comprising one or more sensors, such as a color image sensor, which may be, for example, a Bayer sensor with an RGB color filter array (CFA), a FOVEON X3™ CMOS sensor, or a camera using three discrete color image sensors, such as three charge-coupled-device (3CCD) image sensors. The position of the at least one color camera of the color camera system 104 relative to the at least one image capturing camera of the image capturing system 102 may be fixed and predetermined. The position information may be provided to the vision machine apparatus 106. The vision machine apparatus 106 may determine three-dimensional information of the target object 108 relative to the at least one image capturing camera of the image capturing system 102 based on, for example, the depth information provided by the image capturing system 102. When the relative position between the at least one image capturing camera of the image capturing system 102 and the at least one color camera of the color camera system 104 is predetermined, the position and/or orientation of the target object 108 relative to the at least one color camera of the color camera system 104 may be determined by the vision machine apparatus 106 based on the depth information of the target object 108 and the position of the at least one color camera of the color camera system 104 relative to the at least one image capturing camera of the image capturing system 102.
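By way of illustration only, determining the pose of the target object 108 relative to the color camera from the depth-derived pose and the fixed, predetermined relative camera position may be expressed as a composition of rigid transforms. This is a minimal sketch assuming 4×4 homogeneous transform matrices; the variable names are illustrative.

```python
import numpy as np

def make_transform(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def target_pose_in_color_frame(T_color_depth, T_depth_target):
    """Compose rigid transforms to obtain the target's pose in the color camera frame.

    T_depth_target: pose of the target object 108 in the depth camera frame,
                    derived from the depth information.
    T_color_depth:  fixed, pre-calibrated pose of the depth camera in the
                    color camera frame (the predetermined relative position).
    """
    return T_color_depth @ T_depth_target
```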
The color camera system 104 may capture color pixel information in the real-time image in the predetermined inspecting area 110 and provide the color pixel information to the vision machine apparatus 106. Based on the position of the at least one color camera of the color camera system 104 relative to the target object 108, the vision machine apparatus 106 may project the color pixel information onto a three-dimensional virtual model at step S214 to generate a color three-dimensional virtual model having the color pixel information. The three-dimensional virtual model may be a real-time three-dimensional virtual model provided by the image capturing system 102, a reference three-dimensional virtual model previously provided by the image capturing system 102, or a predetermined three-dimensional virtual model, such as provided by a three-dimensional scanner, and stored in a computer-readable storage medium accessible to the vision machine apparatus 106. Alternatively, the three-dimensional virtual model may be a three-dimensional virtual model generated by a combination of at least two of a real-time three-dimensional virtual model provided by the image capturing system 102, a reference three-dimensional virtual model previously provided by the image capturing system 102, and a predetermined three-dimensional virtual model and stored in a computer-readable storage medium accessible to the vision machine apparatus 106.
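By way of illustration only, the projection of color pixel information onto the three-dimensional virtual model at step S214 may be sketched as projecting each model vertex into the real-time color image with a pinhole camera model and sampling the nearest pixel. The intrinsic matrix, the variable names, and the nearest-neighbour sampling are assumptions for illustration, not requirements of the disclosure.

```python
import numpy as np

def colorize_model(vertices, K, T_color_model, color_image):
    """Assign a color to each model vertex by projecting it into the color image.

    vertices:      (N, 3) vertices of the three-dimensional virtual model.
    K:             (3, 3) color camera intrinsic matrix.
    T_color_model: (4, 4) pose of the model in the color camera frame.
    color_image:   (H, W, 3) real-time color image.

    Returns an (N, 3) array of per-vertex colors, i.e. a simple color
    three-dimensional virtual model.
    """
    h, w = color_image.shape[:2]
    # Transform the vertices into the color camera frame.
    homogeneous = np.hstack([vertices, np.ones((len(vertices), 1))])
    cam_points = (T_color_model @ homogeneous.T).T[:, :3]
    # Pinhole projection onto the image plane.
    projected = (K @ cam_points.T).T
    u = projected[:, 0] / projected[:, 2]
    v = projected[:, 1] / projected[:, 2]
    # Nearest-neighbour color sampling; vertices that project outside the
    # image or lie behind the camera keep a default (zero) color.
    colors = np.zeros_like(vertices, dtype=float)
    valid = (cam_points[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors[valid] = color_image[v[valid].astype(int), u[valid].astype(int)]
    return colors
```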
The vision machine apparatus 106 may then analyze the color three-dimensional virtual model to find correspondence, or lack of correspondence, with a reference, in order to identify similarities and/or differences in color pixel information in the predetermined inspecting area 110. The reference may be a reference color image or a reference color three-dimensional model. As shown in
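By way of illustration only, such a correspondence check could be expressed as a per-vertex color difference against a reference color three-dimensional model. This is a minimal sketch; the absolute-difference metric, the threshold value, and the assumption that the two models share a common vertex ordering are illustrative only.

```python
import numpy as np

def find_defective_vertices(model_colors, reference_colors, threshold=30.0):
    """Flag vertices whose color deviates from a reference color 3D model.

    model_colors, reference_colors: (N, 3) per-vertex colors of the
    inspected model and of the reference model, assumed to share the same
    vertex ordering.

    Returns a boolean mask marking vertices whose color differs from the
    reference by more than `threshold` in any channel.
    """
    difference = np.abs(model_colors.astype(float) - reference_colors.astype(float))
    # Clusters of flagged vertices within the predetermined inspecting area
    # may indicate a missing, skewed, reversed, or wrongly placed component.
    return (difference > threshold).any(axis=1)
```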
The vision machine apparatus 106 may change the viewpoint of the color three-dimensional virtual model to a predetermined viewpoint at step S216. For example, as shown in
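By way of illustration only, changing the viewpoint of the color three-dimensional virtual model to a predetermined viewpoint at step S216 may be sketched as re-expressing the model in the coordinate frame of the predetermined viewpoint by a rigid transform, which removes the effect of the target object's placement and rotation. The variable names and the use of 4×4 homogeneous matrices are assumptions for illustration.

```python
import numpy as np

def reproject_to_viewpoint(vertices, T_current, T_target):
    """Re-express model vertices as seen from a predetermined viewpoint.

    vertices:  (N, 3) vertices of the color 3D virtual model in the current
               camera (viewpoint) frame.
    T_current: (4, 4) pose of the current viewpoint in a common world frame.
    T_target:  (4, 4) pose of the predetermined viewpoint in the same frame.

    Returns the vertices expressed in the predetermined viewpoint's frame,
    so the model can be rendered or compared independently of the placement
    and rotation of the target object.
    """
    # Map current-frame coordinates to world coordinates, then into the
    # predetermined viewpoint's frame.
    T_target_current = np.linalg.inv(T_target) @ T_current
    homogeneous = np.hstack([vertices, np.ones((len(vertices), 1))])
    return (T_target_current @ homogeneous.T).T[:, :3]
```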
Vision machine apparatus 106 may include circuitry, networked processors, or the like configured to perform some or all of the functions and processes described herein and may be any suitable processing device. In some embodiments, vision machine apparatus 106 may function as a “cloud” with respect to the image capturing system 102 and/or color camera system 104. In that sense, vision machine apparatus 106 may include one or more networked processing devices performing interconnected and/or distributed functions. To avoid unnecessarily overcomplicating the disclosure, vision machine apparatus 106 is shown and described herein as a single processing device.
In some embodiments, such as when circuitry 300 is included in vision machine apparatus 106, machine vision module 310 may also or instead be included with processor 302. As referred to herein, “module” includes hardware, software, and/or firmware configured to perform one or more particular functions. In this regard, the means of circuitry 300 as described herein may be embodied as, for example, circuitry, hardware elements (e.g., a suitably programmed processor, combinational logic circuit, integrated circuit, and/or the like), a computer program product comprising computer-readable program instructions stored on a non-transitory computer-readable medium (e.g., memory 304) that is executable by a suitably configured processing device (e.g., processor 302), or some combination thereof.
Processor 302 may, for example, be embodied as various means for processing including one or more microprocessors with accompanying digital signal processor(s), one or more processor(s) without an accompanying digital signal processor, one or more coprocessors, one or more multi-core processors, one or more controllers, processing circuitry, one or more computers, various other processing elements including integrated circuits such as, for example, an ASIC (application specific integrated circuit) or FPGA (field programmable gate array), or some combination thereof. Accordingly, although illustrated in
Whether configured by hardware, firmware/software methods, or by a combination thereof, processor 302 may comprise an entity capable of performing operations according to embodiments of the present disclosure while configured accordingly. Thus, for example, when processor 302 is embodied as an ASIC, FPGA, or the like, processor 302 may comprise specifically configured hardware for conducting one or more operations described herein. As another example, when processor 302 is embodied as an executor of instructions, such as may be stored in memory 304, the instructions may specifically configure processor 302 to perform one or more algorithms, methods, operations, or functions described herein. For example, processor 302 may be configured to determine real-time 3D information of a target object, project color pixel information onto a 3D virtual model of a target object, change the viewpoint of a 3D virtual model of a target object, or identify a defect of the target object based upon a reference, among other things.
Memory 304 may comprise, for example, volatile memory, non-volatile memory, or some combination thereof. Although illustrated in
Communications module 306 may be embodied as any component or means for communication embodied in circuitry, hardware, a computer program product comprising computer readable program instructions stored on a computer readable medium (e.g., memory 304) and executed by a processing device (e.g., processor 302), or a combination thereof that is configured to receive and/or transmit data from/to another device, such as, for example, a second circuitry 300 and/or the like. In some embodiments, communications module 306 (like other components discussed herein) can be at least partially embodied as or otherwise controlled by processor 302. In this regard, communications module 306 may be in communication with processor 302, such as via a bus. Communications module 306 may include, for example, an antenna, a transmitter, a receiver, a transceiver, network interface card and/or supporting hardware, and/or firmware/software for enabling communications. Communications module 306 may be configured to receive and/or transmit any data that may be stored by memory 304 using any protocol that may be used for communications. Communications module 306 may additionally and/or alternatively be in communication with the memory 304, input/output module 308, and/or any other component of circuitry 300, such as via a bus. Communications module 306 may be configured to use one or more communications protocols such as, for example, short messaging service (SMS), Wi-Fi (e.g., an 802.11 protocol), Bluetooth, radio frequency systems (e.g., 900 MHz, 1.4 GHz, and 5.6 GHz communication systems), infrared, GSM, GSM plus EDGE, CDMA, quadband, and other cellular protocols, VOIP, or any other suitable protocol.
Input/output module 308 may be in communication with processor 302 to receive an indication of an input and/or to provide an audible, visual, mechanical, or other output. In that sense, input/output module 308 may include means for implementing analog-to-digital and/or digital-to-analog data conversions. Input/output module 308 may include support, for example, for a display, touch screen, keyboard, button, click wheel, mouse, joystick, an image capturing device, microphone, speaker, biometric scanner, and/or other input/output mechanisms. In embodiments where circuitry 300 may be implemented as a server or database, aspects of input/output module 308 may be reduced as compared to embodiments where circuitry 300 may be implemented as an end-user machine or other type of device designed for complex user interactions. In some embodiments (like other components discussed herein), input/output module 308 may even be eliminated from circuitry 300. Alternatively, such as in embodiments wherein circuitry 300 is embodied as a server or database, at least some aspects of input/output module 308 may be embodied on an apparatus used by a user that is in communication with circuitry 300. Input/output module 308 may be in communication with memory 304, communications module 306, and/or any other component(s), such as via a bus. Although more than one input/output module and/or other component can be included in circuitry 300, only one is shown in
In some embodiments, machine vision module 310 may also or instead be included and configured to perform the functionality discussed herein related to determining real-time 3D information of a target object, projecting color pixel information onto a 3D virtual model of a target object, changing the viewpoint of a 3D virtual model of a target object, or identifying a defect of the target object based upon a reference, among other things. In some embodiments, some or all of the functionality of machine vision module 310 may be performed by processor 302. In this regard, the example processes discussed herein can be performed by at least one processor 302 and/or machine vision module 310. For example, non-transitory computer readable storage media can be configured to store firmware, one or more application programs, and/or other software, which include instructions and other computer-readable program code portions that can be executed to control processors of the components of circuitry 300 to implement various operations, including the examples shown herein. As such, a series of computer-readable program code portions may be embodied in one or more computer program products and can be used, with a device, server, database, and/or other programmable apparatus, to produce the machine-implemented processes discussed herein.
Any such computer program instructions and/or other type of code may be loaded onto a computer, processor, and/or other programmable apparatus's circuitry to produce a machine, such that the computer, processor, or other programmable circuitry that executes the code may be the means for implementing various functions, including those described herein. In some embodiments, one or more external systems (such as a remote cloud computing and/or data storage system) may also be leveraged to provide at least some of the functionality discussed herein.
As described above and as will be appreciated based on this disclosure, various embodiments may be implemented as methods, mediums, devices, servers, databases, systems, and the like. Accordingly, embodiments may comprise various forms, including entirely of hardware or any combination of software and hardware. Furthermore, embodiments may take the form of a computer program product on at least one non-transitory computer readable storage medium having computer readable program instructions (e.g., computer software) embodied in the storage medium. Any suitable computer-readable storage medium may be utilized including non-transitory hard disks, CD/DVD-ROMs, flash memory, optical storage devices, quantum storage devices, chemical storage devices, biological storage devices, magnetic storage devices, etc.
Embodiments have been described above with reference to components, such as functional modules, system components, and circuitry. Below is a discussion of an example process flow chart describing functionality that may be implemented by one or more components and/or means discussed above and/or other suitably configured circuitry.
According to one aspect of the subject disclosure, a vision machine apparatus 106 of exemplary embodiments of the subject disclosure generally operates under control of a computer program. The computer program for performing the methods of exemplary embodiments of the disclosure may include one or more computer-readable program code portions, such as a series of computer instructions, embodied or otherwise stored in a computer-readable storage medium, such as the non-volatile storage medium.
These computer program instructions may also be stored in a computer-readable storage device (e.g., memory 304) that may direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable storage device produce an article of manufacture including computer-readable instructions for implementing the functions described herein, such as the functions specified in the block(s) or step(s) of the flow chart of
Accordingly, blocks or steps of the flow chart support means and combinations of means for performing and/or implementing the specified functions, combinations of steps for performing and/or implementing the specified functions and program instruction means for performing and/or implementing the specified functions. It will also be understood that one or more blocks or steps of the flow chart, and combinations of blocks or steps in the flow chart, may be implemented by special purpose hardware-based computer systems which perform the specified functions or steps, or combinations of special purpose hardware and computer instructions.
It will be appreciated by those skilled in the art that changes could be made to the examples described above without departing from the broad inventive concept. It is understood, therefore, that this disclosure is not limited to the particular examples disclosed, but it is intended to cover modifications within the spirit and scope of the disclosure as defined by the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.