Method and apparatus of automatic optical inspection using scanning holography

Information

  • Patent Grant
  • Patent Number
    11,809,134
  • Date Filed
    Wednesday, June 3, 2020
  • Date Issued
    Tuesday, November 7, 2023
Abstract
Disclosed are a method and apparatus of automatic optical inspection using scanning holography. The apparatus for automatic optical inspection using scanning holography includes: a hologram capturer that takes a hologram of an object existing on an objective plate using a scanning hologram camera; a depth position/rotation angle extractor that extracts a depth position and a rotation angle about an objective surface of the objective plate on the basis of the hologram or the detected monitoring-light; a rotated coordinate system generator that generates a rotated coordinate system corresponding to the objective surface using the depth position and the rotation angle; and a hologram restorer that obtains an image of the object by restoring the hologram in a plane formed in a depth direction of the rotated coordinate system.
Description
ACKNOWLEDGEMENT

The present patent application was filed as part of the research project described below.

    • National Research & Development Project for Supporting Invention
    • Project Number: 1711081181
    • Name of Department: Ministry of Science and ICT
    • Specialized Institution for Research Management: Institution for Information & Communication Technology Promotion
    • Research Name: Digital Contents Technology Development (R&D)
    • Subject Name: Open Library Technology Development for Production & Simulation of Digital Hologram Contents
    • Contribution Ratio: 1/1
    • Main Agency: KETI
    • Research Period: Jan. 1, 2019 ˜ Dec. 31, 2019


CROSS-REFERENCE TO PRIOR APPLICATIONS

This application is a National Stage Patent Application of PCT International Patent Application No. PCT/KR2020/007189 (filed on Jun. 3, 2020) under 35 U.S.C. § 371, which claims priority to Korean Patent Application Nos. 10-2019-0066639 (filed on Jun. 5, 2019) and 10-2019-0066643 (filed on Jun. 5, 2019), which are all hereby incorporated by reference in their entirety.


BACKGROUND

The present disclosure relates to an apparatus and method of automatic optical inspection and, more particularly, to a scanning hologram camera apparatus and method that can obtain image information to which high-precision automatic optical inspection can be applied, regardless of rotation and defocus, by taking a hologram of an imaging object and numerically processing the hologram.


An apparatus and method of automatic optical inspection based on an optical microscope extract focused image information of an object on an objective plate as a digital signal and numerically analyze the digital signal using a computer, thereby inspecting the object for defects or distortion. However, in order to obtain a high-resolution image of an object using such an automatic optical inspection apparatus and method, the depth of focus of the objective lens decreases to the level of several micrometers, so the objective plate must be precisely aligned and the focus of the objective lens precisely and mechanically adjusted.


Further, in order to obtain a 3D image of an object, the objective plate is sequentially positioned in the depth direction, focused images are obtained at the sequential depth positions, and the images are then combined into a 3D image of the object. Alternatively, the objective lens is sequentially positioned in the depth direction, focused images are sequentially obtained in the depth direction, and the images are then combined into a 3D image of the object.


Therefore, an automatic inspection apparatus based on an existing optical imaging system that relies on mechanical alignment and focus adjustment not only requires an additional device for precise mechanical control, but also has difficulty performing ultrahigh-speed inspection because of the mechanical motions involved. For example, when inspecting an object on a conveyer belt, the object goes out of the region of the depth of focus of the objective lens due to shaking of the conveyer belt and is inclined by rotation at a defocused depth position.


In order to realign the object inclined in the defocused region into the region of the depth of focus of the objective lens, it is required not only to precisely detect the defocused depth position and the rotation angle of the inclination, but also to perform precise mechanical control using the detected depth position and rotation angle. In practice, it is structurally difficult to add a mechanical device to a conveyer belt that rotates an object on the belt or moves it in the depth direction, so a focused image is instead obtained by placing the object onto a precisely aligned objective plate using a robot arm or the like on the conveyer belt and then adjusting the focus through mechanical movement of the objective lens. Accordingly, there is a need for a hologram camera and a numerical processing method that can obtain image information to which high-precision automatic optical inspection can be applied without precise mechanical realignment, by taking a hologram of the object without precise mechanical calibration and numerically processing the hologram so that a focused, clear image is obtained regardless of rotation and defocus, even at rotated and defocused positions of the objective plate.


PRIOR ART DOCUMENT
Patent Document

Korean Patent No. 10-1304695 (2013.08.30)


SUMMARY

An embodiment of the present disclosure provides a method and apparatus of automatic optical inspection using scanning holography, the method and apparatus being able to obtain a depth position and a rotation angle of an objective plate by taking a single-shot hologram of an imaging object using a scanning hologram camera and by numerically processing the hologram.


An embodiment of the present disclosure provides a method and apparatus of automatic optical inspection using scanning holography, the method and apparatus being able to obtain a depth position and a rotation angle of an objective plate by taking a single-shot hologram of an imaging object using a scanning hologram camera and by using information obtained through monitoring-light.


An embodiment of the present disclosure provides a method and apparatus of automatic optical inspection using scanning holography, the method and apparatus being able to obtain image information, to which high-precision automatic optical inspection can be applied, regardless of precise mechanical calibration by obtaining a focused and clear image regardless of rotation and defocus at an inclined position of an objective plate.


In the embodiments, an apparatus for automatic optical inspection using scanning holography includes: a hologram capturer that takes a hologram of an object existing on an objective plate using a scanning hologram camera; a depth position/rotation angle extractor that extracts a depth position and a rotation angle about an objective surface of the objective plate on the basis of the hologram; a rotated coordinate system generator that generates a rotated coordinate system corresponding to the objective surface using the depth position and the rotation angle; and a hologram restorer that obtains an image of the object by restoring the hologram in a plane formed in a depth direction of the rotated coordinate system.


The apparatus may further include a light-monitoring processor that radiates monitoring-light toward a surface of any one of the objective plate and the object and detects monitoring-light reflected by the surface, in which the depth position/rotation angle extractor may extract a depth position and a rotation angle about the objective surface of the objective plate on the basis of the detected monitoring-light.


The scanning hologram camera may include a light source that generates an electromagnetic wave, a splitting unit that splits the electromagnetic wave, a scan unit that scans the object using interference beams generated by the split electromagnetic waves, and a light detection unit that detects a reflected, fluorescent, or transmitted beam from the object; and the hologram capturer may generate a complex-number hologram as the result of taking a hologram.


The light-monitoring processor may include: a monitoring-light generation module that is disposed on the objective plate and generates monitoring-light that is radiated to the object; and a light position detection module that detects the monitoring-light reflected by a surface of the object.


The light-monitoring processor may further include an optical element including an objective lens and a beam splitter on a travel path of the monitoring-light, and positions of the monitoring-light generation module and the light position detection module may be determined in accordance with operations therebetween and disposition of the optical element.


The monitoring-light generation module may generate at least two beams of light as monitoring-light, and the light position detection module may be composed of an array of imaging elements such as CCD or CMOS sensors.


The depth position/rotation angle extractor may calculate the position of monitoring-light, which is reflected by a surface of the object or the objective plate, as a vector.


The depth position/rotation angle extractor may extract a depth position in each of three regions spaced apart from each other and independently defined on the objective surface, and then extract a depth position and a rotation angle about the objective surface.


The depth position/rotation angle extractor may perform: a first step of restoring the hologram at each of sequential depth positions; a second step of calculating a focus metric in each of the three regions for restored images; and a third step of determining a depth position at which the focus metric is a maximum value as a depth position of each of the three regions.


The depth position/rotation angle extractor may obtain a depth position in each of the three regions as output by inputting hologram data into a Convolutional Neural Network (CNN) model and then may extract a depth position and a rotation angle of the objective surface.


The depth position/rotation angle extractor may input any one of a complex-number hologram obtained by the taking of a hologram, a real-number hologram corresponding to a real number part of the complex-number hologram, an imaginary-number hologram corresponding to an imaginary number part of the complex-number hologram, and an off-axis hologram combined using the complex-number hologram as the hologram data.


The depth position/rotation angle extractor may extract a depth position and a rotation angle about the objective surface using a CNN model generated through training in which a specific region of the hologram is used as input and a rotation angle of a rotated objective surface as output.


The depth position/rotation angle extractor may Fourier-transform the hologram into a spatial frequency region and then may use a region corresponding to the specific region in the spatial frequency region as input.


The depth position/rotation angle extractor may use at least one of a real number part, an imaginary number part, an amplitude part, and a phase part of a complex-number hologram about the specific region as the input.


The depth position/rotation angle extractor may extract a depth position and a rotation angle about the objective surface using gradient descent on the basis of a portion of or the entire region of the hologram.


The depth position/rotation angle extractor may guide a portion of or the entire region of the hologram for a rotation region in which the objective plate can be rotated and a depth region in which the objective plate can be positioned, and then may search out a rotation angle and a depth position at which a focus metric of the guided hologram has a maximum value.


The rotated coordinate system generator may generate a transform matrix that is generated using a depth position and a rotation angle extracted by the depth position/rotation angle extractor and transforms a reference coordinate system into a rotated coordinate system.


The hologram restorer may restore an image of the object by transforming the hologram into the rotated coordinate system and guiding the hologram in a depth direction of the rotated coordinate system.


The hologram restorer may obtain an image of the object by transforming the hologram through angular spectrum rotational transformation and guiding the hologram in a depth direction of a rotated coordinate system.


The hologram restorer may obtain a 3D image of the object by transforming the hologram into the rotated coordinate system and restoring the hologram in each of planes formed in a depth direction in the rotated coordinate system.


The hologram restorer may obtain an image of the object by restoring the hologram at a depth position of a reference coordinate system and then guiding the hologram to a plane formed in a depth direction of the rotated coordinate system.


The hologram restorer may obtain a 3D image of the object by restoring the hologram at a depth position of a reference coordinate system and then guiding the hologram to sequential planes formed in the depth direction of the rotated coordinate system.


The hologram restorer may restore the hologram in each of sequential planes formed in a depth direction of a reference coordinate system and then may interpolate the restored holograms to the rotated coordinate system.


The hologram restorer may obtain an image of the object by generating a 3D matrix using a depth position of each of the sequential planes as an axis for images restored in the sequential planes and interpolating the 3D matrix to axes of the rotated coordinate system.


In embodiments, a method of automatic optical inspection using scanning holography includes: taking a hologram of an object existing on an objective plate using a scanning hologram camera; extracting a depth position and a rotation angle about an objective surface of the objective plate on the basis of the hologram; generating a rotated coordinate system corresponding to the objective surface using the depth position and the rotation angle; and obtaining an image of the object by restoring the hologram in a plane formed in a depth direction of the rotated coordinate system.


The method of automatic optical inspection using scanning holography may further include radiating monitoring-light toward a surface of any one of the objective plate and the object and detecting monitoring-light reflected by the surface, in which the extracting of a depth position and a rotation angle may include extracting a depth position and a rotation angle about the objective surface of the objective plate on the basis of the detected monitoring-light.


The present disclosure can have the following effects. However, a specific embodiment is not intended to include all of the following effects or only the following effects, so the scope of the rights of the present disclosure should not be construed as being limited by the embodiment.


A method and apparatus of automatic optical inspection using scanning holography according to an embodiment of the present disclosure can obtain a depth position and a rotation angle of an objective plate by taking a single-shot hologram of an imaging object using a scanning hologram camera and by numerically processing the hologram.


The method and apparatus of automatic optical inspection using scanning holography according to an embodiment of the present disclosure can obtain a depth position and a rotation angle of an objective plate by taking a single-shot hologram of an imaging object using a scanning hologram camera and by using information obtained through monitoring-light.


The method and apparatus of automatic optical inspection using scanning holography according to an embodiment of the present disclosure can obtain image information, to which high-precision automatic optical inspection can be applied, regardless of precise mechanical calibration by obtaining a focused and clear image regardless of rotation and defocus at an inclined position of an objective plate.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a diagram illustrating the configuration of a system for automatic optical inspection using scanning holography according to the present disclosure.



FIG. 2 is a block diagram illustrating the functional configuration of the automatic optical inspection apparatus shown in FIG. 1.



FIG. 3 is a flowchart illustrating an automatic optical inspection process using scanning holography that is performed in the automatic optical inspection apparatus shown in FIG. 1.



FIG. 4 is a diagram illustrating an embodiment of automatic optical inspection equipment using a scanning hologram camera.



FIG. 5 is a diagram illustrating imaging of an inspection object using optical scanning holography.



FIG. 6 is a diagram illustrating images restored at sequential depth positions and depth position measurement regions.



FIG. 7 is a diagram illustrating a hologram and depth position measurement regions.



FIG. 8 is a diagram illustrating a reference coordinate system and a rotated coordinate system.



FIG. 9 is a diagram illustrating a hologram and a depth position measurement region.



FIG. 10 is a diagram illustrating a monitoring-light generation and detection apparatus for measuring a depth and a rotation angle.



FIG. 11 is a diagram illustrating a monitoring-light generation and detection apparatus for measuring a depth and a rotation angle and an additional optical structure.





DETAILED DESCRIPTION

The description of the present disclosure is merely an example for structural or functional explanation, and therefore, the scope of the present disclosure should not be construed as being limited by the embodiments described in the text. That is, since the embodiments can be variously modified and can have various forms, the scope of the present disclosure should be understood to include equivalents capable of realizing the technical ideas. Also, since the purposes or effects set forth in the present disclosure are not meant to be required of any specific embodiment, the scope of the present disclosure should not be construed as being limited thereto.


Meanwhile, the meaning of the terms described in the present application should be understood as follows.


The terms such as “the first”, “the second”, and the like, are intended to distinguish one element from another, and the scope of the right should not be limited by these terms. For example, the first component may be referred to as the second component, and similarly, the second component may also be referred to as the first component.


It is to be understood that when an element is referred to as being “connected” to another element, it may be directly connected to the other element, but other elements may also be present in between. On the other hand, when an element is referred to as being “directly connected” to another element, it should be understood that there is no other element in between. Other expressions that describe the relationship between elements, such as “between” and “just between” or “adjacent to” and “directly adjacent to”, should be interpreted likewise.


The singular expressions should be understood to include plural expressions unless the context clearly dictates otherwise. It is also to be understood that the terms “comprise”, “include”, “have”, and the like designate the presence of stated features, numbers, steps, operations, elements, parts, or combinations thereof, but do not preclude the presence or possible addition of one or more other features, numbers, steps, operations, elements, parts, or combinations thereof.


In each step, identification codes (e.g., a, b, c, etc.) are used for convenience of explanation, but the identification codes do not describe the order of the steps, and unless otherwise explicitly stated, the steps may occur in an order different from the stated order. That is, the steps may occur in the stated order, may be performed substantially at the same time, or may be performed in reverse order.


The present disclosure can be embodied as a computer-readable code on a computer-readable recording medium, and the computer-readable recording medium includes all kinds of recording devices for storing data, which can be read by a computer system. Examples of the computer-readable recording medium include ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical data storage device, and the like. In addition, the computer-readable recording medium may be distributed over network-connected computer systems so that computer readable codes can be stored and executed in a distributed manner.


All terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs, unless otherwise defined. Terms defined in commonly used dictionaries should be interpreted to be consistent with meaning in the context of the related art and cannot be interpreted as having ideal or overly formal meaning unless explicitly defined in the present application.



FIG. 1 is a diagram illustrating the configuration of a system for automatic optical inspection using scanning holography according to the present disclosure.


Referring to FIG. 1, a system 100 for automatic optical inspection using scanning holography may include a scanning hologram camera 110, an automatic optical inspection apparatus 130, and a database 150.


The scanning hologram camera 110 may be an apparatus that can take a hologram of an inspection object positioned on an objective plate. The scanning hologram camera 110 may be connected with the automatic optical inspection apparatus 130 through a network, and a plurality of scanning hologram cameras 110 may all be connected with the automatic optical inspection apparatus 130.


In an embodiment, the scanning hologram camera 110 may be included as a component of the automatic optical inspection apparatus 130, and in this case, the scanning hologram camera 110 may be an independent module that performs an operation of taking a hologram of an object.


In an embodiment, the scanning hologram camera 110 may include a light source that generates an electromagnetic wave, a splitting unit that splits the electromagnetic wave, a scan unit that scans an object using interference beams produced by the split electromagnetic waves, and a light detection unit that detects reflected, fluorescent, or transmitted beams from the object.


The light source may include various means that can generate an electromagnetic wave, such as a laser generator, a Light Emitting Diode (LED), and a low-coherence source such as halogen light having a small coherence length.


The splitting unit can split the electromagnetic wave, for example, a laser beam generated by the light source, into a first beam and a second beam. In an embodiment, the splitting unit may include an optical fiber coupler, a beam splitter, and a geometric phase lens, can be implemented in a type that transmits a beam by guiding it through free space, and can also split a beam into a first beam and a second beam in-line using a unit, such as a geometric phase lens, that can split a beam in-line.


The scan unit can scan an imaging object using interference beams (or an interference pattern) formed by split electromagnetic waves. The scan unit may be a mirror scanner, but is not necessarily limited thereto and may be replaced with well-known various scan units. For example, the scan unit can scan an imaging object by moving a Fresnel zone plate across the imaging object. In this case, the scan unit can adjust the scanning position in response to a control signal. Further, the scan unit can scan an imaging object by positioning the imaging object on an objective plate and horizontally moving the objective plate.


The light detection unit can detect a beam and convert it into a current signal. In this case, the light detection unit can generate a current in accordance with the intensity of the detected beam. The light detection unit may be implemented using a photodiode, but is not necessarily limited thereto and may include various light detection units such as a photo-multiplier tube. Further, the light detection unit may include a condenser that condenses reflected, fluorescent, or transmitted beams from an imaging object.


The automatic optical inspection apparatus 130 may be implemented as a server corresponding to a computer or a program that can obtain a clear image regardless of defocus and rotation by taking a hologram of an inspection object and numerically processing the hologram. The automatic optical inspection apparatus 130 can be wirelessly connected with an external system (not shown in FIG. 1), which performs independent operations, through Bluetooth, WiFi, or another network, and can transmit/receive data to/from the external system through the network.


In an embodiment, the automatic optical inspection apparatus 130 can keep information that is needed in an automatic optical inspection process in cooperation with the database 150. Meanwhile, the automatic optical inspection apparatus 130, unlike FIG. 1, may include the database 150 therein. Further, the automatic optical inspection apparatus 130 may include a processor, a memory, a user I/O device, and a network I/O device as physical components.


The database 150 may be a storage device that keeps various items of information that are required in a process of performing automatic optical inspection using scanning holography. The database 150 can keep information about a hologram of an inspection object obtained from the scanning hologram camera 110, but is not necessarily limited thereto, and can keep information collected and processed in various forms in the process by which the automatic optical inspection apparatus 130 numerically processes the obtained hologram to obtain a clear image of an inspection object regardless of defocus and rotation.



FIG. 2 is a block diagram illustrating the functional configuration of the automatic optical inspection apparatus shown in FIG. 1.


Referring to FIG. 2, the automatic optical inspection apparatus 130 may include a hologram capturer 210, a light-monitoring processor 220, a depth position/rotation angle extractor 230, a rotated coordinate system generator 240, a hologram restorer 250, and a controller 260.


The hologram capturer 210 can take a hologram of an object existing on an objective plate using the scanning hologram camera 110. In an embodiment, the hologram capturer 210 can generate a complex-number hologram as the result of imaging.


In an embodiment, the hologram capturer 210 can take a hologram of an object using an optical scanning hologram-based imaging technique. In more detail, the hologram capturer 210 can take a hologram of an object using an optical scanning hologram camera through the configuration shown in (a) of FIG. 5. Although reflected or fluorescent beams from an object are detected in (a) of FIG. 5, it is also possible to detect beams transmitted through an object by positioning a light detector under the objective surface. In that case, it is preferable that the objective plate is made of transparent glass or is bored at the portion corresponding to the object. The taken hologram can be expressed as the following equations 1 to 5.











$$i_{H1}(x,y)=\int O(x_0,y_0;z)\otimes\exp\!\left[j\frac{\pi}{\lambda z}\left(x_0^{2}+y_0^{2}\right)\right]dz \qquad [\text{Equation 1}]$$








where $O(x_0,y_0;z)$ is a 3D image of the object, given as the 3D distribution of the reflectance of the object, and $\otimes$ denotes convolution. Further, $(x,y)$ is the scan position of the scan beam designated by the scan unit, and $z$, the depth position of the object, is the distance from the focus of the spherical wave to the object.
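As an illustrative numerical sketch of the superposition in equation 1 (not the patent's implementation), the following Python code forms a hologram by convolving each depth slice of a synthetic object with the Fresnel zone plate for its depth; the grid size, pixel pitch, and FFT-based circular convolution are assumptions made only for this example.

```python
import numpy as np

def fresnel_zone_plate(nx, ny, pitch, wavelength, z):
    """Fresnel zone plate exp[j*pi/(lam*z)*(x^2+y^2)] sampled on an nx-by-ny grid."""
    x = (np.arange(nx) - nx // 2) * pitch
    y = (np.arange(ny) - ny // 2) * pitch
    X, Y = np.meshgrid(x, y)
    return np.exp(1j * np.pi / (wavelength * z) * (X**2 + Y**2))

def hologram_eq1(obj_slices, depths, pitch, wavelength):
    """Sketch of Equation 1: sum over depth slices of O(x0, y0; z) convolved
    with the FZP at that depth (circular convolution via FFT)."""
    ny, nx = obj_slices[0].shape
    holo = np.zeros((ny, nx), dtype=complex)
    for O, z in zip(obj_slices, depths):
        fzp = fresnel_zone_plate(nx, ny, pitch, wavelength, z)
        holo += np.fft.ifft2(np.fft.fft2(O) * np.fft.fft2(np.fft.ifftshift(fzp)))
    return holo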











$$i_{H2}(x,y)=\int O(x_0,y_0;z)\otimes\exp\!\left[j\frac{\pi d}{\lambda(d+z)z}\left(x_0^{2}+y_0^{2}\right)\right]dz \qquad [\text{Equation 2}]$$









    • where d is the distance between the focus of the first spherical wave and the focus of the second spherical wave. It is possible to correct distortion of a hologram due to zoom-out and zoom-in by adjusting d. As a method of adjusting d, the position and the focal distance of a lens can be changed in accordance with the imaging law of a lens.














$$i_{H3}(x,y)=\int O(x_0,y_0;z)\otimes\exp\!\left[j\frac{\pi d}{\lambda\left(z^{2}-\frac{d^{2}}{4}\right)}\left(x_0^{2}+y_0^{2}\right)\right]dz \qquad [\text{Equation 3}]$$

$$i_{H4}(x,y)=\int O(x_0,y_0;z)\otimes\exp\!\left[j\frac{2\pi f_{gp}}{\lambda(2f_{gp}+z)z}\left(x_0^{2}+y_0^{2}\right)\right]dz \qquad [\text{Equation 4}]$$

$$i_{H5}(x,y)=\int O(x_0,y_0;z)\otimes\exp\!\left[j\frac{2\pi M_{img}^{2}f_{gp}}{\lambda\left(2M_{img}^{2}f_{gp}+z_{img}\right)z_{img}}\left(M_{img}^{2}x_0^{2}+M_{img}^{2}y_0^{2}\right)\right]dz \qquad [\text{Equation 5}]$$









    • where $M_{img}$ is the zoom-out or zoom-in of the image by the first lens when the pattern of the polarization-sensitive lens (geometric phase lens) is imaged onto a surface of the object region, $z_{img}$ is the distance from the focus position of the second spherical wave to the object, and $2M_{img}^{2}f_{gp}$ is the distance between the foci of the adjusted first and second spherical waves.





The light-monitoring processor 220 can radiate monitoring-light toward the surface of any one of an objective plate and an object and can detect monitoring-light reflected by the surface. For example, the light-monitoring processor 220 can extract a depth position and a rotation angle of an objective plate by positioning a reflective plate on the objective plate and detecting, using an imaging element or the like, the position of light reflected when a laser is radiated to the reflective plate.


In an embodiment, the light-monitoring processor 220 may include a monitoring-light generation module disposed on an objective plate and generating monitoring-light that is radiated to an object, and a light position detection module detecting monitoring-light reflected by the surface of an object. Referring to FIG. 10, monitoring-light is radiated to an inspection object or an objective plate, and the monitoring-light is reflected by a surface of the inspection object and travels to a light position detection unit. The planar rotation angle of the inspection object is extracted from the position of the beam reaching the light position detection unit.


In an embodiment, the light-monitoring processor 220 may further include an optical element including an objective lens and a beam splitter on the travel path of the monitoring-light, and the positions of the monitoring-light generation module and the light position detection module may be determined in accordance with the interaction between them and the disposition of the optical element. Referring to FIG. 11, the light-monitoring processor 220 may include a lens, or a lens and a beam splitter, between the monitoring-light generation module and the light position detection module.


Accordingly, the light-monitoring processor 220 can have various geometric structures and can perform position vector analysis in accordance with the geometric structures. Further, the light-monitoring processor 220 can perform geometric analysis on a position vector in consideration of a depth difference between an objective plate and an inspection object.


In an embodiment, the monitoring-light generation module can generate at least two rays of light as monitoring-light, and the light position detection module may be composed of an array of imaging elements such as CCD or CMOS sensors. In FIG. 10, the monitoring-light generator (monitoring-light generation module) can generate and radiate two rays of light to an inspection object or an objective plate, and the two rays of light can be reflected by the inspection object or the objective plate and then travel to the light position detection unit (light position detection module). In particular, the light position detection unit can provide as output the position of the light arriving at an imaging element such as a CCD.


The position of light reflected by an inspection object or an objective plate is given by the reflection vector in the following equation 21.

$$\vec{r}_i=\vec{d}_i-2\left(\vec{d}_i\cdot\hat{z}_{tilt}\right)\hat{z}_{tilt};\quad i=\{1,2\} \qquad [\text{Equation 21}]$$

    • where $i=\{1,2\}$ indexes the first and second monitoring beams, which are parallel with each other, $\vec{d}_i$ is the projection direction vector of each of the first and second rays of monitoring-light, and $\hat{z}_{tilt}$ is a normal vector of the rotated inspection object or objective surface. The light position detection module may be composed of an array of light detection elements in a flat plate shape, such as a CCD or CMOS imaging element, and in this case, the position of the monitoring-light detected on the light detection plane of the light detection unit is given by the intersection equation of a plane and a ray as the following equation 22.











$$\vec{r}^{\,i}_{det\_plane}=\vec{r}^{\,i}_{obj\_plane}+\frac{\vec{r}^{\,i}_{obj\_plane}\cdot\vec{n}_{det\_plane}}{\vec{n}_{det\_plane}\cdot\vec{r}_i}\,\vec{r}_i;\quad i=\{1,2\} \qquad [\text{Equation 22}]$$









    • where $\vec{n}_{det\_plane}$ is a normal vector of the detection plane of the light position detection module, and $\vec{r}^{\,i}_{obj\_plane}$ is a position vector indicating the position at which the monitoring-light is reflected on the inspection object or the objective plate, that is, $\vec{r}^{\,i}_{obj\_plane}=l_{d}^{i}\vec{d}_i+\vec{r}^{\,i}_{mor\_st}$; $i=\{1,2\}$. In this case, $\vec{r}^{\,i}_{mor\_st}$; $i=\{1,2\}$ is a position vector indicating the start position of the first and second rays of monitoring-light generated by the monitoring-light generation module, and $l_{d}^{i}$; $i=\{1,2\}$ is the distance to the inspection object or the objective plate along the projection direction of the first and second rays of monitoring-light.
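A minimal numerical sketch of the geometry of equations 21 and 22 follows; the beam start points, projection directions, and plane positions are hypothetical values chosen only to illustrate the vector analysis.

```python
import numpy as np

def reflect(d, z_tilt):
    """Reflect a propagation direction d off a surface with unit normal z_tilt (Eq. 21)."""
    n = np.asarray(z_tilt, float)
    n = n / np.linalg.norm(n)
    d = np.asarray(d, float)
    return d - 2.0 * np.dot(d, n) * n

def intersect_plane(p0, r, plane_point, plane_normal):
    """Intersection of the ray p0 + t*r with a plane (ray-plane equation, Eq. 22)."""
    n = np.asarray(plane_normal, float)
    t = np.dot(np.asarray(plane_point, float) - p0, n) / np.dot(r, n)
    return p0 + t * r

# Hypothetical geometry: two parallel beams hitting a tilted objective surface
# through the origin, with the detector plane assumed at z = 10.
z_tilt = np.array([0.05, -0.02, 1.0]); z_tilt /= np.linalg.norm(z_tilt)
for start in (np.array([0.0, 0.0, 10.0]), np.array([5.0, 0.0, 10.0])):
    d = np.array([0.0, 0.0, -1.0])                              # projection direction
    hit = intersect_plane(start, d, np.zeros(3), z_tilt)        # point on surface
    r = reflect(d, z_tilt)                                      # reflected direction
    spot = intersect_plane(hit, r, np.array([0, 0, 10.0]), np.array([0, 0, 1.0]))
    print(spot)                                                 # detected position
```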





The depth position/rotation angle extractor 230 can extract a depth position and a rotation angle about the objective surface of an objective plate on the basis of a hologram. The objective surface may be a plane that is parallel with the objective plate. That is, the depth position/rotation angle extractor 230 can calculate a depth position and a rotation angle of an objective surface by numerically processing a hologram. In this case, the numerical method may include an extraction method based on 3-region analysis, an extraction method based on a CNN, and an extraction method based on gradient descent.


In an embodiment, the depth position/rotation angle extractor 230 can extract a depth position at each of three regions spaced apart from each other and independently defined on an objective surface, and then can extract a depth position and a rotation angle about the objective surface. For example, when an object on a conveyer belt is not horizontally positioned on an objective surface in FIG. 4, the depth position/rotation angle extractor 230 can extract a depth position and a rotation angle using a focus metric, and the rotated coordinate system generator 240 can generate a rotated coordinate system on the basis of the depth position and the rotation angle.


The three regions may be the regions indicated by small rectangles in FIG. 6 and may be defined as first, second, and third cell regions 610, 620, and 630, respectively. The three regions may be spaced apart from each other and independently defined on a plane including the objective surface and may be specific regions including three different positions. Further, the three regions may be independently defined on a plane corresponding to the objective surface, depending on a depth position.


In an embodiment, the depth position/rotation angle extractor 230 may perform a first step of restoring the hologram at each of sequential depth positions, a second step of calculating a focus metric in each of the three regions for the restored images, and a third step of determining the depth position at which the focus metric is maximum as the depth position of each of the three regions. The depth position/rotation angle extractor 230 can restore the holograms of equations 1 to 5 at sequential depth positions using digital back-propagation. FIG. 6 shows images restored at sequential depth positions.


As a detailed example of a method of restoring a hologram at each of sequential depth positions, the holograms of equations 1 to 5 are restored by convolving them, at the corresponding sequential depth positions, with the complex conjugates of the Fresnel zone plates of equation 51. This is given as the following equation 6. Further, holograms can be restored by convolving, at each depth position, the angular spectrum corresponding to each Fresnel zone plate of equation 51, can be restored by the Rayleigh-Sommerfeld method, and can be restored through various well-known digital back-propagation methods.
















$$fzp_1(x_0,y_0;z)=\exp\!\left[j\frac{\pi}{\lambda z}\left(x_0^{2}+y_0^{2}\right)\right]$$

$$fzp_2(x_0,y_0;z)=\exp\!\left[j\frac{\pi d}{\lambda(d+z)z}\left(x_0^{2}+y_0^{2}\right)\right]$$

$$fzp_3(x_0,y_0;z)=\exp\!\left[j\frac{\pi d}{\lambda\left(z^{2}-\frac{d^{2}}{4}\right)}\left(x_0^{2}+y_0^{2}\right)\right]$$

$$fzp_4(x_0,y_0;z)=\exp\!\left[j\frac{2\pi f_{gp}}{\lambda(2f_{gp}+z)z}\left(x_0^{2}+y_0^{2}\right)\right]$$

$$fzp_5(x_0,y_0;z)=\exp\!\left[j\frac{2\pi M_{img}^{2}f_{gp}}{\lambda\left(2M_{img}^{2}f_{gp}+z_{img}\right)z_{img}}\left(M_{img}^{2}x_0^{2}+M_{img}^{2}y_0^{2}\right)\right] \qquad [\text{Equation 51}]$$

$$O_{rec}(x,y;z)=i_{Hl}(x,y)\otimes fzp_l^{*}(x,y;z) \qquad [\text{Equation 6}]$$









    • where $l=\{1,2,3,4,5\}$ indexes the holograms obtained through equations 1 to 5, respectively, and the corresponding Fresnel zone plates of equation 51, and '*' denotes the complex conjugate.
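A minimal sketch of the back-propagation of equation 6, assuming the first hologram type (equation 1) and FFT-based circular convolution; grid and sampling parameters are illustrative assumptions.

```python
import numpy as np

def restore_eq6(holo, pitch, wavelength, z):
    """Digital back-propagation per Equation 6: convolve the hologram with the
    complex conjugate of the Fresnel zone plate for depth z (via FFT)."""
    ny, nx = holo.shape
    x = (np.arange(nx) - nx // 2) * pitch
    y = (np.arange(ny) - ny // 2) * pitch
    X, Y = np.meshgrid(x, y)
    fzp = np.exp(1j * np.pi / (wavelength * z) * (X**2 + Y**2))
    kernel = np.conj(fzp)                       # fzp_1*(x, y; z)
    return np.fft.ifft2(np.fft.fft2(holo) * np.fft.fft2(np.fft.ifftshift(kernel)))

# Sweep of candidate depths, as used for the sequential restorations in FIG. 6:
# stack = [restore_eq6(holo, pitch, lam, z) for z in np.linspace(z_min, z_max, 64)]
```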





Further, the depth position/rotation angle extractor 230 can find the depth positions at the three points (610, 620, and 630 in FIG. 6) of the images restored at sequential depth positions. A detailed method of finding a depth position is to restore the rotated holograms at sequential depth positions (FIG. 6) and then to calculate the focused depth positions in the first, second, and third cell regions of the images restored at the sequential depth positions. Various algorithms for calculating a focused depth position are well known in the computer vision field.


Further, the depth position/rotation angle extractor 230 can determine the depth positions at which the focus metrics of the first, second, and third regions are maximum as the depth positions of the first, second, and third regions. The depth position/rotation angle extractor 230 may use various focus metrics and, for example, may use the Tamura coefficient as a focus metric. The Tamura coefficient in the n-th region (n={1, 2, 3}) of an image restored at a depth position z is given as the following equation 7.











$$C_n(z)=\sqrt{\frac{\sigma(x,y;z)\big|_{n\text{th region}}}{\left\langle O_{rec}(x,y;z)\right\rangle\big|_{n\text{th region}}}};\quad n=\{1,2,3\} \qquad [\text{Equation 7}]$$









    • where $\sigma(x,y;z)|_{n\text{th region}}$ is the standard deviation in the n-th region (n={1, 2, 3}) of the image restored at a depth position z, and $\langle O_{rec}(x,y;z)\rangle|_{n\text{th region}}$ is the average in the n-th region (n={1, 2, 3}) of the image restored at a depth position z. Assuming that the depth positions at which the Tamura coefficient according to equation 7 is maximum in the first, second, and third regions are $z_1$, $z_2$, and $z_3$, the coordinates of the central points of the first, second, and third regions are $(x_1,y_1,z_1)$, $(x_2,y_2,z_2)$, and $(x_3,y_3,z_3)$, respectively. Although a hologram including the first, second, and third regions is restored to extract a depth position in the above description, it is also possible to find a depth position through the above method by restoring a portion including the first cell region, a portion including the second cell region, and a portion including the third cell region.
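A short sketch of the three-region search, assuming a Tamura-coefficient focus metric of the form sqrt(sigma/mean) and a user-supplied restoration function such as restore_eq6 above; the region slices are placeholders.

```python
import numpy as np

def tamura(region):
    """Tamura coefficient sqrt(sigma/mean) of a restored-intensity region (Eq. 7)."""
    a = np.abs(region)
    return np.sqrt(a.std() / a.mean())

def region_depths(holo, depths, regions, restore):
    """For each cell region (given as 2D slices), return the depth that maximizes
    the focus metric. `restore` is a callable restore(holo, z) -> restored image."""
    best = []
    for sl in regions:
        metrics = [tamura(restore(holo, z)[sl]) for z in depths]
        best.append(depths[int(np.argmax(metrics))])
    return best  # z1, z2, z3 for the first, second, and third regions

# Example regions (hypothetical pixel windows for the three cells of FIG. 6):
# regions = [(slice(10, 40), slice(10, 40)),
#            (slice(10, 40), slice(200, 230)),
#            (slice(200, 230), slice(100, 130))]
```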





Further, when the relative depth positions in the first, second, and third regions differ depending on the 3D distribution of an object, the depth position/rotation angle extractor 230 can extract a depth position about the objective surface by correcting the depth positions extracted through the above method using information about the relative depth positions. Referring to (b) of FIG. 5, when an object has a 3D distribution, the method described above extracts not the depth position of the objective surface, but the depth position in each corresponding region of the object. The depth position/rotation angle extractor 230 can recognize the 3D distribution of an object in advance and can extract a depth position of the objective surface of an objective plate by correcting an extracted depth position using the relative depth position information δd1 and δd2.


In an embodiment, the depth position/rotation angle extractor 230 can extract a depth position directly from a hologram without restoring images at sequential depth positions. The method used in this case may include a method of numerically analyzing the fringe of a hologram.


In more detail, the depth position/rotation angle extractor 230 can extract the depth positions of the first, second, and third regions by performing fringe analysis on a partial hologram including the first cell region, a partial hologram including the second cell region, and a partial hologram including the third cell region, without restoring the hologram. In this case, since the variation of the fringe of a hologram is linearly proportional to the focused depth position, the focused depth position is extracted by obtaining the variation of the fringe.


That is, the first, second, and third cell regions are extracted from the taken hologram of FIG. 7, the real number part and the imaginary number part of the hologram of each cell region are extracted and Fourier-transformed, and the Fourier-transformed real number parts are then combined by complex addition, whereby real number-only holograms are synthesized. This is given as the following equation 8.

$$H^{l}_{real-n}(k_x,k_y)=\mathrm{Re}\!\left[F\{\mathrm{Re}[i^{l}_{H-n}(x,y)]\}\right]+j\,\mathrm{Re}\!\left[F\{\mathrm{Im}[i^{l}_{H-n}(x,y)]\}\right] \qquad [\text{Equation 8}]$$

    • where $l=\{1,2,3,4,5\}$ indicate the holograms obtained through equations 1 to 5, respectively, and $n=\{1,2,3\}$ indicate the first, second, and third regions, respectively. $F\{\,\}$ is the 2D Fourier transform, $\mathrm{Re}[\,]$ and $\mathrm{Im}[\,]$ are the real and imaginary operators for extracting the real number part and the imaginary number part of a complex number, respectively, and $(k_x,k_y)$ are the spatial frequency axes. A Fresnel zone plate in which the variation of the fringe increases in accordance with the depth position is obtained by applying power fringe-adjusted filtering to the real number-only hologram. This is given as the following equation 9.











$$H_{PFF}(k_x)=\frac{1}{\left|\int H^{l}_{real-n}(k_x,k_y)\,dk_y\right|^{2}+\delta}\times\left[\int H^{l}_{real-n}(k_x,k_y)\,dk_y\right]^{2}=\exp\!\left(j\frac{\lambda f_l(z)}{2\pi}k_x^{2}\right) \qquad [\text{Equation 9}]$$









    • where δ is a small value for preventing a pole problem in the power fringe-adjusted filter, $l=\{1,2,3,4,5\}$ indicate the holograms obtained through equations 1 to 5, and $f_l(z)$, a depth position-related parameter, is given as the following equation 10 for the holograms obtained through equations 1 to 5.












$$\left\{\;f_1(z)=z,\quad f_2(z)=\frac{(d+z)z}{d},\quad f_3(z)=\frac{z^{2}-\frac{d^{2}}{4}}{d},\quad f_4(z)=\frac{(2f_{gp}+z)z}{2f_{gp}},\quad f_5(z)=\frac{\left(2M_{img}^{2}f_{gp}+z_{img}\right)z_{img}}{2M_{img}^{2}f_{gp}}\;\right\} \qquad [\text{Equation 10}]$$







It is possible to obtain a straight line in the joint space and spatial-frequency domain by computing the Wigner distribution of the Fresnel zone plate of equation 9. The slope of the straight line equals $f_l(z)$. Accordingly, $f_l(z)$ is obtained from the slope, and the focused depth position z is then obtained in accordance with equation 10. Alternatively, the axis of the Fresnel zone plate of equation 9 is converted into a new axis through interpolation ($k_x^{new}=k_x^{2}$ for $k_x\geq 0$) and equation 9 is Fourier-transformed along the new axis, which produces a peak signal at the position corresponding to $f_l(z)$ on the frequency axis. Accordingly, $f_l(z)$ is extracted from the peak position of the signal, and the focused depth position z is then obtained in accordance with equation 10.
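A minimal sketch of this peak-detection variant, assuming the chirp convention $\exp[j\lambda f_l(z)k_x^2/(2\pi)]$ written in equation 9; the scaling constants depend on the frequency convention and are therefore assumptions of this example.

```python
import numpy as np

def depth_parameter_from_fringe(h_pff, kx, wavelength):
    """Locate f_l(z) from the chirp exp[j*lam*f_l(z)/(2*pi)*kx**2] of Eq. 9.
    Resample onto k_new = kx**2 (kx >= 0, kx assumed sorted ascending) so the
    chirp becomes a single tone, then FFT and read off the peak frequency."""
    pos = kx >= 0
    k2 = kx[pos] ** 2                                # non-uniform new axis kx^2
    k_new = np.linspace(k2.min(), k2.max(), k2.size) # uniform resampling grid
    re = np.interp(k_new, k2, h_pff[pos].real)
    im = np.interp(k_new, k2, h_pff[pos].imag)
    spec = np.fft.fft(re + 1j * im)
    freqs = np.fft.fftfreq(k_new.size, d=k_new[1] - k_new[0])
    nu = abs(freqs[np.argmax(np.abs(spec))])         # peak on the new axis
    f_l = 4 * np.pi**2 * nu / wavelength             # invert lam*f_l/(2*pi) = 2*pi*nu
    return f_l                                       # z then follows from Eq. 10
```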


In an embodiment, the depth position/rotation angle extractor 230 can obtain the depth positions of the three regions as output by inputting hologram data into a Convolutional Neural Network (CNN) model, and can then extract a depth position and a rotation angle about the objective surface. In this case, the CNN model may be a neural network trained in advance by a training unit, using hologram data of a training object obtained at various 3D positions known in advance, together with the corresponding position information, as training data. In this case, the position information includes the depth position of the object. In this embodiment, a CNN is exemplified as the artificial neural network, but various types of neural networks may be used.
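As an illustration only (the patent does not specify an architecture), a small PyTorch CNN that regresses the three region depths from a two-channel hologram patch might look like the following; all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class DepthRegressionCNN(nn.Module):
    """Illustrative CNN: maps a 2-channel hologram patch (real and imaginary
    parts) to the depth positions of the three regions."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 3)   # one depth estimate per region

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Training pairs: holograms of a known object captured at known 3D positions,
# with the per-region depths as regression targets, e.g. nn.MSELoss().
```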


In an embodiment, the depth position/rotation angle extractor 230 can input, as the hologram data, any one of a complex-number hologram obtained through imaging, a real-number hologram corresponding to the real number part of the complex-number hologram, an imaginary-number hologram corresponding to the imaginary number part of the complex-number hologram, and an off-axis hologram combined using the complex-number hologram into the CNN model.


In an embodiment, the depth position/rotation angle extractor 230 can extract a depth position and a rotation angle about an objective surface using a CNN model generated through training in which a specific region for forming a hologram is input and a rotation angle of a rotated objective surface as output. Referring to FIG. 9, the depth position/rotation angle extractor 230 can train a CNN by using a specific region 910 of a hologram as input and the rotation angles θx, θy, and θz of the rotated plane 830 of FIG. 8 as output.


In an embodiment, the depth position/rotation angle extractor 230 can Fourier-transform a hologram into a spatial frequency region and then can use a region corresponding to a specific region in the spatial frequency region as input. That is, the depth position/rotation angle extractor 230 can Fourier-transform a hologram into a spatial frequency region, and can train the CNN by extracting a partial region from the spatial frequency region and by using the partial region as input of the CNN and the rotation angle of the rotated plane 830 of FIG. 8 as output.


In an embodiment, the depth position/rotation angle extractor 230 can use, as input, at least one of the real number part, the imaginary number part, the amplitude part, and the phase part of a complex-number hologram related to a specific region of the hologram. That is, given that a hologram in the spatial frequency region is complex-valued, the depth position/rotation angle extractor 230 can train the CNN through a known CNN training method by using the real number part, the imaginary number part, the amplitude part, and the phase part of the hologram in the spatial frequency region, or a combination of these parts, as input of the CNN and the rotation angle of the rotated plane 830 of FIG. 8 as output.


In an embodiment, the depth position/rotation angle extractor 230 can extract a depth position and a rotation angle about the objective surface using gradient descent on the basis of a portion of or the entire region of a hologram. That is, the depth position/rotation angle extractor 230 can search, through gradient descent, for the depth position and rotation angle at which the focus metric of a portion of or the entire hologram is maximum.


In an embodiment, the depth position/rotation angle extractor 230 can guide a portion of or the entire region of a hologram for a rotation region in which the objective plate can be rotated and a depth region in which the objective plate can be positioned, and then can search out a rotation angle and a depth position at which the focus metric of the guided hologram has a maximum value.


Referring to FIG. 8, when a coordinate system is rotated by $\theta_x$, $\theta_y$, and $\theta_z$ about the x-axis, y-axis, and z-axis of a reference coordinate system, respectively, the position in the rotated coordinate system is $\vec{r}_{tilt}=T\vec{r}$. In this case, T is the transform matrix given by the following equation 11, and $\vec{r}_{tilt}=(x_{tilt},y_{tilt},z_{tilt})$ is a position vector in the rotated coordinate system.











$$T=\begin{bmatrix}a_1&a_4&a_7\\a_2&a_5&a_8\\a_3&a_6&a_9\end{bmatrix}=R_z(\theta_z)\,R_y(\theta_y)\,R_x(\theta_x) \qquad [\text{Equation 11}]$$

where

$$R_x(\theta_x)=\begin{bmatrix}1&0&0\\0&\cos\theta_x&\sin\theta_x\\0&-\sin\theta_x&\cos\theta_x\end{bmatrix},\quad R_y(\theta_y)=\begin{bmatrix}\cos\theta_y&0&-\sin\theta_y\\0&1&0\\\sin\theta_y&0&\cos\theta_y\end{bmatrix},\quad R_z(\theta_z)=\begin{bmatrix}\cos\theta_z&\sin\theta_z&0\\-\sin\theta_z&\cos\theta_z&0\\0&0&1\end{bmatrix}$$

are rotation matrices, and $\theta_x$, $\theta_y$, and $\theta_z$ are rotation angles about the x-axis, y-axis, and z-axis, respectively, as shown in FIG. 8.
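A direct Python transcription of equation 11 (angles in radians) may help make the convention concrete; the usage comment is an assumption consistent with the labeling above.

```python
import numpy as np

def transform_matrix(theta_x, theta_y, theta_z):
    """T = Rz(theta_z) @ Ry(theta_y) @ Rx(theta_x) per Equation 11."""
    cx, sx = np.cos(theta_x), np.sin(theta_x)
    cy, sy = np.cos(theta_y), np.sin(theta_y)
    cz, sz = np.cos(theta_z), np.sin(theta_z)
    Rx = np.array([[1, 0, 0], [0, cx, sx], [0, -sx, cx]])
    Ry = np.array([[cy, 0, -sy], [0, 1, 0], [sy, 0, cy]])
    Rz = np.array([[cz, sz, 0], [-sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

# r_tilt = transform_matrix(tx, ty, tz) @ r transforms a reference-frame
# position into the rotated coordinate system; a1..a9 in Eq. 11 are the
# entries of T read column by column.
```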


When the specific region 910 of the hologram of FIG. 9 is $iHp_l(x,y,z)$ in the reference coordinate system, with $l=\{1,2,3,4,5\}$ indexing the holograms of equations 1 to 5, the hologram guided in the depth direction of the rotated coordinate system, obtained by transforming the hologram of the specific region from the reference coordinate system into the rotated coordinate system through angular spectrum rotational transformation, is given as the following equation 12.














$$\begin{aligned} iHp_l(x_{tilt},y_{tilt},z_{tilt}) &= \iint IHp_l(u,v)\Big|_{\substack{u=a_1u_{tilt}+a_2v_{tilt}+a_3w_{tilt}(u_{tilt},v_{tilt})\\ v=a_4u_{tilt}+a_5v_{tilt}+a_6w_{tilt}(u_{tilt},v_{tilt})}} \\ &\qquad\times\exp\!\left[j2\pi\left(u_{tilt}x_{tilt}+v_{tilt}y_{tilt}+w_{tilt}(u_{tilt},v_{tilt})\,z_{tilt}\right)\right] J(u_{tilt},v_{tilt})\,du_{tilt}\,dv_{tilt} \\ &= \mathfrak{I}^{-1}_{u_{tilt}\to x_{tilt},\,v_{tilt}\to y_{tilt}}\!\left[ IHp_l(u,v)\Big|_{\substack{u=a_1u_{tilt}+a_2v_{tilt}+a_3w_{tilt}(u_{tilt},v_{tilt})\\ v=a_4u_{tilt}+a_5v_{tilt}+a_6w_{tilt}(u_{tilt},v_{tilt})}} \times\exp\!\left[j2\pi w_{tilt}(u_{tilt},v_{tilt})\,z_{tilt}\right] J(u_{tilt},v_{tilt})\right] \end{aligned} \qquad [\text{Equation 12}]$$









    • where $\mathfrak{I}^{-1}_{u_{tilt}\to x_{tilt},\,v_{tilt}\to y_{tilt}}[\;]$ is the 2D inverse Fourier transform from the rotated spatial frequency axes to the rotated spatial axes, $IHp_l(u,v)=\mathfrak{I}_{x\to u,\,y\to v}[iHp_l(x,y)]$ is the 2D Fourier transform of $iHp_l(x,y)$, $(u,v)$ are the spatial frequency axes in the $(x,y)$-axial directions, $a_m;\ m=\{1,2,3,4,5,6\}$ are elements of the transform matrix T in equation 11, $(u_{tilt},v_{tilt})$ are the frequency axes in the $(x_{tilt},y_{tilt})$-axial directions, respectively, $w_{tilt}(u_{tilt},v_{tilt})=\left(\lambda^{-2}-u_{tilt}^{2}-v_{tilt}^{2}\right)^{1/2}$ is the spatial frequency axis in the $z_{tilt}$ direction, and










$$J(u_{tilt},v_{tilt})=\frac{(a_2a_6-a_3a_5)\,u_{tilt}}{w_{tilt}(u_{tilt},v_{tilt})}+\frac{(a_3a_4-a_1a_6)\,v_{tilt}}{w_{tilt}(u_{tilt},v_{tilt})}+(a_1a_5-a_2a_4)$$

is the Jacobian expressing the scaling of the integral area element under the coordinate conversion, that is, $du\,dv=\left|J(u_{tilt},v_{tilt})\right|du_{tilt}\,dv_{tilt}$.


When an image restored at $z_{tilt}$ in the depth direction on the rotated axes is obtained according to equation 12, the hologram in the rotated coordinate system may be obtained by resampling $IHp_l(u,v)$ at regular intervals through interpolation over the new $(u_{tilt},v_{tilt})$ axes and then applying a fast Fourier transform, or by performing a non-uniform discrete Fourier transform or a non-uniform fast Fourier transform on the non-equispaced data.
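A sketch of equation 12 using resampling of the angular spectrum follows; the linear interpolation scheme and the clipping of evanescent components are assumptions of this example, not prescriptions of the patent.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def rotate_spectrum(holo, pitch, wavelength, T, z_tilt):
    """Sketch of Equation 12: resample the angular spectrum onto the rotated
    frequency axes, weight by the Jacobian, propagate by z_tilt, and invert."""
    ny, nx = holo.shape
    ut = np.fft.fftshift(np.fft.fftfreq(nx, d=pitch))     # u_tilt axis
    vt = np.fft.fftshift(np.fft.fftfreq(ny, d=pitch))     # v_tilt axis
    Ut, Vt = np.meshgrid(ut, vt)
    Wt = np.sqrt(np.maximum(wavelength**-2 - Ut**2 - Vt**2, 0))  # w_tilt

    a = T.flatten(order="F")                  # a1..a9, column-wise as in Eq. 11
    U = a[0]*Ut + a[1]*Vt + a[2]*Wt           # u = a1*u_t + a2*v_t + a3*w_t
    V = a[3]*Ut + a[4]*Vt + a[5]*Wt           # v = a4*u_t + a5*v_t + a6*w_t
    Wsafe = np.where(Wt == 0, np.inf, Wt)
    J = ((a[1]*a[5] - a[2]*a[4]) * Ut / Wsafe
         + (a[2]*a[3] - a[0]*a[5]) * Vt / Wsafe
         + (a[0]*a[4] - a[1]*a[3]))           # Jacobian of the remapping

    spec = np.fft.fftshift(np.fft.fft2(holo))
    interp_r = RegularGridInterpolator((vt, ut), spec.real,
                                       bounds_error=False, fill_value=0)
    interp_i = RegularGridInterpolator((vt, ut), spec.imag,
                                       bounds_error=False, fill_value=0)
    pts = np.stack([V, U], axis=-1)
    spec_t = (interp_r(pts) + 1j * interp_i(pts)) \
             * np.exp(2j * np.pi * Wt * z_tilt) * np.abs(J)
    return np.fft.ifft2(np.fft.ifftshift(spec_t))
```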


In this example, the hologram was guided through angular spectrum rotational transformation, but it is possible to transform and guide a hologram to a rotated coordinate system using various other transformation methods.


The pattern of the specific region 910 of the hologram of FIG. 9, guided in a coordinate system rotated by $\theta_x$ and $\theta_y$ about the x-axis and the y-axis, can be obtained from equation 12. Accordingly, the hologram of the specific region 910 is guided by equation 12 over the rotation region in which the objective plate can be rotated, that is, $\Theta_x^{Min}\leq\theta_x\leq\Theta_x^{Max}$, $\Theta_y^{Min}\leq\theta_y\leq\Theta_y^{Max}$, and over the region of possible depth positions, that is, $z_{tilt}^{Min}\leq z_{tilt}\leq z_{tilt}^{Max}$, thereby finding the rotation angle and the depth position at which the focus metric of the guided hologram of the specific region 910 has a maximum value, where $(\Theta^{Min},\Theta^{Max})$ are the minimum and maximum values of the rotation angle region in which the objective plate can be rotated, and $(z_{tilt}^{Min},z_{tilt}^{Max})$ are the minimum and maximum values of the depth position at which the objective plate can be positioned in the rotated coordinate system. There are various well-known searching methods for finding the rotation angle and depth position in the rotated coordinate system at which a focus metric has a maximum value; gradient descent is used in this embodiment. In this case, a focus metric function may be used as the target function of the gradient descent. The Tamura coefficient, a representative focus metric, is exemplified. A target function for finding a rotation angle and a depth position using gradient descent is given as the following equation 121.










$$f(\vec{p})=-\sqrt{\frac{\sigma(x_{tilt},y_{tilt};\vec{p})}{\left\langle iHp_l(x_{tilt},y_{tilt};\vec{p})\right\rangle}};\quad l=\{1,2,3,4,5\} \qquad [\text{Equation 121}]$$









    • where $\vec{p}=(\theta_x,\theta_y,\theta_z,z_{tilt})$ denotes the rotation angles of the rotated coordinate system and the distance guided in its depth direction, $\left\langle iHp_l(x_{tilt},y_{tilt};\vec{p})\right\rangle$ is the average of the image $iHp_l(x_{tilt},y_{tilt};\vec{p})$ guided by $z_{tilt}$ in the depth direction of the coordinate system rotated by the angles $\theta_x$, $\theta_y$, and $\theta_z$ in accordance with equation 12, and $\sigma(x_{tilt},y_{tilt};\vec{p})$ is the corresponding standard deviation. A focused image is restored at the point of the domain at which the target function reaches its minimum, so the rotation angle and the depth position of the rotated coordinate system are obtained as the $\vec{p}$ attaining

$$\min_{\vec{p}\in\mathbb{R}^{4}} f(\vec{p}),$$

which is found through gradient descent. First, an initial value $\vec{p}_0$ is estimated, and a convergent $\vec{p}$ is then found by iterating equation 122.

$$\vec p_{k+1} = \vec p_k - \alpha_k \nabla f(\vec p_k);\quad k=\{0,1,2,3,\dots\} \qquad [\text{Equation }122]$$

    • where ∇ is the gradient operator, and $\alpha_k$ is the step size at iteration k, which can be determined by various methods including the Cauchy method, the Barzilai-Borwein method, and the Dai-Yuan method. For example, the step size according to the Barzilai-Borwein method is







$$\alpha_k = \frac{\left(\vec p_k - \vec p_{k-1}\right)^T\left[\nabla f(\vec p_k) - \nabla f(\vec p_{k-1})\right]}{\left\lVert \nabla f(\vec p_k) - \nabla f(\vec p_{k-1})\right\rVert^2}.$$
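For illustration, the following compact sketch combines the Tamura-coefficient target of equation 121 with the Barzilai-Borwein iteration of equation 122, assuming NumPy and a caller-supplied reconstruct(hologram, p) routine that guides the hologram with parameters p per equation 12; the names and the numerical central-difference gradient are assumptions, not the patent's implementation.

```python
import numpy as np

def tamura_target(p, hologram, reconstruct):
    """Negative Tamura coefficient (std / mean) of the image restored with
    p = (theta_x, theta_y, theta_z, z_tilt), as in equation 121."""
    img = np.abs(reconstruct(hologram, p))
    return -img.std() / img.mean()

def grad(f, p, eps=1e-6):
    """Central-difference numerical gradient of f at p."""
    g = np.zeros_like(p)
    for i in range(p.size):
        dp = np.zeros_like(p)
        dp[i] = eps
        g[i] = (f(p + dp) - f(p - dp)) / (2 * eps)
    return g

def bb_search(f, p0, alpha0=1e-3, iters=100):
    """Gradient descent with the Barzilai-Borwein step size of equation 122."""
    p_prev, g_prev = p0, grad(f, p0)
    p = p_prev - alpha0 * g_prev
    for _ in range(iters):
        g = grad(f, p)
        s, y = p - p_prev, g - g_prev
        alpha = abs(s @ y) / (y @ y + 1e-30)  # abs() guards the step sign
        p_prev, g_prev = p, g
        p = p - alpha * g
    return p

# Usage: p_opt = bb_search(lambda p: tamura_target(p, hologram, reconstruct),
#                          p0=np.zeros(4))
```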





In an embodiment, the depth position/rotation angle extractor 230 can calculate the position of monitoring-light reflected by the surface of an object or an objective plate as a vector.


The following equation 23 is obtained by substituting the position vector representing the reflected position of the monitoring-light on the inspection object or the objective plate into equation 21 and rearranging.











$$\vec r_{det\_plane}^{\,i} = l_d^{\,i}\,\vec d_i + \vec r_{mor\_st}^{\,i} + \frac{\left(l_d^{\,i}\,\vec d_i + \vec r_{mor\_st}^{\,i}\right)\cdot\vec n_{det\_plane}}{\vec d_i\cdot\vec n_{det\_plane}}\left[\vec d_i + 2\left(\vec d_i\cdot\hat z_{tilt}\right)\hat z_{tilt}\right];\quad i=\{1,2\} \qquad [\text{Equation }23]$$









    • where the start position vector $\vec r_{mor\_st}^{\,i}$ of the monitoring-light, the projection direction vectors $\vec d_i$ of the first and second rays of monitoring-light, and the detection plane normal vector $\vec n_{det\_plane}$ are known parameters determined by the geometric structures of the monitoring-light generation module and the light position detection module, and the position $\vec r_{det\_plane}^{\,i}$ of the monitoring-light detected on the light detection surface of the light detection module is the position detected according to the rotation of the inspection object or the objective plate. Accordingly, the position of monitoring-light detected on the detection surface and the normal vector $\hat z_{tilt}$ of the rotated inspection object or objective surface in the reference coordinate system are $\vec r_{det\_plane}^{\,i} = (x_{det\_plane}^i, y_{det\_plane}^i, z_{det\_plane}^i);\ i=\{1,2\}$ and $\hat z_{tilt} = (x_{\hat z\_tilt}, y_{\hat z\_tilt}, z_{\hat z\_tilt})$. Equation 23 can be rewritten as a matrix equation whose variables are the normal vector and the distances to the inspection object or the objective plate in the projection directions of the first and second rays of monitoring-light, and whose remaining entries are known parameters determined by the geometric structures of the monitoring-light generation module and the light position detection module, whereby it is given as the following equation 24.














$$\overline{\overline M}\begin{bmatrix} x_{\hat z\_tilt}\\ y_{\hat z\_tilt}\\ z_{\hat z\_tilt}\\ l_d^1\\ l_d^2 \end{bmatrix} = \begin{bmatrix} x_{det\_plane}^1\\ y_{det\_plane}^1\\ z_{det\_plane}^1\\ x_{det\_plane}^2\\ y_{det\_plane}^2\\ z_{det\_plane}^2 \end{bmatrix} \qquad [\text{Equation }24]$$







The variables $(x_{\hat z\_tilt}, y_{\hat z\_tilt}, z_{\hat z\_tilt}, l_d^1, l_d^2)$ are obtained by solving the simultaneous equations of equation 24 through various methods known in linear algebra. One of the simplest methods is to transform the system into reduced row echelon form through Gauss-Jordan elimination. It is also possible to form a square linear system by selecting five of the six rows of equation 24 and then multiplying both sides by the inverse matrix. Further, considering measurement error, equation 24 is an overdetermined system in which the transform matrix M has more rows than columns, and it can be estimated using various kinds of estimators based on regression analysis. As a very simple method, the following equation 241 is obtained using the ordinary least squares estimator.










$$\begin{bmatrix} x_{\hat z\_tilt}\\ y_{\hat z\_tilt}\\ z_{\hat z\_tilt}\\ l_d^1\\ l_d^2 \end{bmatrix} = \left(\overline{\overline M}^{\,T}\overline{\overline M}\right)^{-1}\overline{\overline M}^{\,T}\begin{bmatrix} x_{det\_plane}^1\\ y_{det\_plane}^1\\ z_{det\_plane}^1\\ x_{det\_plane}^2\\ y_{det\_plane}^2\\ z_{det\_plane}^2 \end{bmatrix} \qquad [\text{Equation }241]$$









    • where the superscript T denotes the transpose of a matrix and −1 denotes the inverse of a matrix. Although the ordinary least squares estimator is proposed here, the solution can also be obtained through various estimation methods known in the art, such as weighted least squares, generalized least squares, iteratively reweighted least squares, instrumental variables regression, and total least squares.
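As a sketch of the ordinary-least-squares estimation of equation 241, assuming NumPy and a system matrix M already assembled from the known geometry; the function name estimate_tilt is illustrative.

```python
import numpy as np

def estimate_tilt(M, r_det):
    """Ordinary least-squares estimate of (x_z_tilt, y_z_tilt, z_z_tilt,
    ld1, ld2) from equation 24; M is the 6x5 system matrix and r_det
    stacks the two detected positions on the detection plane."""
    # Equivalent to (M^T M)^{-1} M^T r_det of equation 241, but computed
    # stably via a least-squares solver.
    sol, *_ = np.linalg.lstsq(M, r_det, rcond=None)
    z_tilt = sol[:3] / np.linalg.norm(sol[:3])  # re-normalize the estimated normal
    ld1, ld2 = sol[3], sol[4]
    return z_tilt, ld1, ld2
```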





In an embodiment, the monitoring-light generation module can generate a single ray of light as monitoring-light, and the light position detection module may be composed of an array of imaging elements such as a CCD or CMOS sensor. The monitoring-light generator (monitoring-light generation module) generates and sends two rays of monitoring-light to an inspection object or an objective plate in FIG. 10; however, when the monitoring-light generator of FIG. 10 generates one ray of monitoring-light, that one ray (referred to as the single monitoring-light) travels to the light position detection unit (light position detection module) after being reflected by the inspection object or the objective plate. In particular, the light position detection unit can provide as output the position of the light arriving at an imaging element such as a CCD. When the monitoring-light generation module configured to generate one ray of monitoring-light generates one beam and the light position detection unit detects the position of that beam, the following equation 242 is obtained by substituting the position vector representing the reflected position of the single monitoring-light on the inspection object or the objective plate into equation 21 and rearranging.











$$\vec r_{det\_plane}^{\,1} = l_d^1\,\vec d_1 + \vec r_{mor\_st}^{\,1} + \frac{\left(l_d^1\,\vec d_1 + \vec r_{mor\_st}^{\,1}\right)\cdot\vec n_{det\_plane}}{\vec d_1\cdot\vec n_{det\_plane}}\left[\vec d_1 + 2\left(\vec d_1\cdot\hat z_{tilt}\right)\hat z_{tilt}\right] \qquad [\text{Equation }242]$$









    • where $\vec d_1$ is the projection direction vector of the single monitoring-light and $l_d^1$ is the distance to the inspection object or the objective plate in the projection direction of the single monitoring-light.





Equation 242 can likewise be rewritten as a matrix equation whose variables are the normal vector and the distance to the inspection object or the objective plate in the projection direction of the single monitoring-light, and whose remaining entries are known parameters determined by the geometric structures of the monitoring-light generation module and the light position detection module, whereby it is given as the following equation 243.











$$\overline{\overline M}\begin{bmatrix} x_{\hat z\_tilt}\\ y_{\hat z\_tilt}\\ z_{\hat z\_tilt}\\ l_d^1 \end{bmatrix} = \begin{bmatrix} x_{det\_plane}^1\\ y_{det\_plane}^1\\ z_{det\_plane}^1 \end{bmatrix} \qquad [\text{Equation }243]$$







In this case, equation 243 is an underdetermined system in which the transform matrix M has fewer rows than columns; its solution can be estimated using various kinds of estimators based on regression analysis, etc.


The rotated coordinate system generator 240 can generate a rotated coordinate system corresponding to an objective surface using a depth position and a rotation angle.


In an embodiment, the rotated coordinate system generator 240 can generate a rotated coordinate system through the following first method. The plane equation of a rotated plane crossing the depth positions (x1,y1,z1), (x2,y2,z2), and (x3,y3,z3) of the first, second, and third cell regions is given as the following equation 13.

$$\vec n\cdot\vec r - 1 = 0 \qquad [\text{Equation }13]$$

    • where $\vec r = (x,y,z)$ is a position vector in the non-rotated reference coordinate system, $\cdot$ is the inner product operator, and $\vec n = (n_x, n_y, n_z)$ is the normal vector of the plane crossing the first, second, and third cell regions, which is given as the following equation 14.










$$\vec n = A^{-1}\begin{bmatrix}1\\1\\1\end{bmatrix} \qquad [\text{Equation }14]$$









    • where $A^{-1}$ is the inverse of the matrix
$$A = \begin{bmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{bmatrix}$$
having the positions of the first, second, and third cell regions as its rows.


When a perpendicular coordinate system is formed in which the axis perpendicular to the rotated plane is defined as the depth direction of the rotated plane, the unit direction vector in the depth direction is given as the normalized normal vector of the rotated plane in the following equation 15.











$$\hat z_{tilt} = \frac{\vec n}{\lVert \vec n\rVert} \qquad [\text{Equation }15]$$







A unit vector in a direction parallel with the rotated plane is a unit vector whose start and end points are two points satisfying the plane equation of equation 13, and is given as the following equation 16.











$$\hat x_{tilt} = \frac{\vec r_1 - \vec r_2}{\lVert \vec r_1 - \vec r_2\rVert} \qquad [\text{Equation }16]$$









    • where $\vec r_1, \vec r_2$, two arbitrary positions on the rotated plane, are two position vectors that satisfy equation 13. The remaining unit direction vector of the rotated plane is the vector product of the unit direction vector in the depth direction and the unit direction vector parallel with the plane, and is given as the following equation 17.














$$\hat y_{tilt} = \hat z_{tilt}\times\hat x_{tilt} = \frac{\vec n}{\lVert\vec n\rVert}\times\frac{\vec r_1 - \vec r_2}{\lVert \vec r_1 - \vec r_2\rVert} \qquad [\text{Equation }17]$$







A rotated coordinate system having a unit direction vector rotated from the origin of a non-rotated reference coordinate system is defined as in FIG. 8. In FIG. 8, {circumflex over (x)}tilttilt,{circumflex over (z)}tilt are unit direction vectors in a rotated plane coordinate system and {circumflex over (x)},ŷ,{circumflex over (z)} are unit direction vectors in a non-rotated reference coordinate system. The position of a reference coordinate system is given as the following equation 18 in a rotated coordinate system in accordance with equations 15 to 17.











$$(x_{tilt}, y_{tilt}, z_{tilt}) = \left(\hat x_{tilt}\cdot\vec r,\ \hat y_{tilt}\cdot\vec r,\ \hat z_{tilt}\cdot\vec r\right) = \overline{\overline T}\,\vec r$$
where
$$\overline{\overline T} = \begin{bmatrix} a_1 & a_4 & a_7 \\ a_2 & a_5 & a_8 \\ a_3 & a_6 & a_9 \end{bmatrix} = \begin{bmatrix} \dfrac{\vec r_1 - \vec r_2}{\lVert \vec r_1 - \vec r_2\rVert} \\[2ex] \dfrac{\vec n}{\lVert\vec n\rVert}\times\dfrac{\vec r_1 - \vec r_2}{\lVert \vec r_1 - \vec r_2\rVert} \\[2ex] \dfrac{\vec n}{\lVert\vec n\rVert} \end{bmatrix} \qquad [\text{Equation }18]$$
is a transform matrix whose rows are the unit direction vectors of the rotated coordinate system.
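A minimal sketch of this first method, assuming NumPy; it composes equations 13 to 18 from the three cell-region depth positions (the function name rotated_frame is illustrative).

```python
import numpy as np

def rotated_frame(p1, p2, p3):
    """Build the rotated coordinate axes and the transform matrix T of
    equation 18 from the depth positions (x_i, y_i, z_i) of the three
    cell regions (equations 13-17)."""
    A = np.array([p1, p2, p3], dtype=float)        # rows = cell-region positions
    n = np.linalg.solve(A, np.ones(3))             # plane normal, equation 14
    z_tilt = n / np.linalg.norm(n)                 # depth-direction unit vector, eq. 15
    d = A[0] - A[1]                                # vector between two points on the plane
    x_tilt = d / np.linalg.norm(d)                 # in-plane unit vector, equation 16
    y_tilt = np.cross(z_tilt, x_tilt)              # remaining axis, equation 17
    T = np.array([x_tilt, y_tilt, z_tilt])         # rows project r onto the tilted axes
    return T

# A point r in the reference frame maps to T @ r in the rotated frame (equation 18).
```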


Since the positions of the centers of the first, second, and third cell regions are (x1,y1,z1), (x2,y2,z2), and (x3,y3,z3), and the reference coordinate system and the rotated coordinate system share an origin, the depth position of the object plane rotated from the horizontal origin ((x,y)=(0,0)) is given as the following equation 181 through the equation of a straight line crossing two points in 3D space.











$$z = \frac{(z_m - z_n)\,x_n}{x_n - x_m} + z_n;\quad n\neq m,\ n=\{1,2,3\},\ m=\{1,2,3\}$$
$$z = \frac{(z_m - z_n)\,y_n}{y_n - y_m} + z_n;\quad n\neq m,\ n=\{1,2,3\},\ m=\{1,2,3\} \qquad [\text{Equation }181]$$









    • where $(x_n,y_n,z_n);\ n=\{1,2,3\}$ and $(x_m,y_m,z_m);\ m=\{1,2,3\}$ are the positions of the centers of the first, second, and third cell regions.
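For instance, the x-direction form of equation 181 can be evaluated as in the following sketch (the helper name depth_at_origin is hypothetical):

```python
def depth_at_origin(pn, pm):
    """Depth of the rotated object plane at the horizontal origin
    (x, y) = (0, 0), using the x-direction form of equation 181 with
    two cell-region centers pn = (xn, yn, zn) and pm = (xm, ym, zm)."""
    (xn, yn, zn), (xm, ym, zm) = pn, pm
    return (zm - zn) * xn / (xn - xm) + zn
```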





In an embodiment, the rotated coordinate system generator 240 can generate a rotated coordinate system through the following second method. The rotated coordinate system generator 240 can obtain the transform matrix of equation 11 by applying the corresponding input to a trained CNN model and obtaining the rotation angle of the rotated plane as output. In this case, the CNN is trained with the rotation angle as its output for training efficiency, but it is also possible to obtain the transform matrix by training the CNN with the elements of the transform matrix as output. Further, the depth position at a certain position of the objective surface can also be found when extracting the rotation angle of the rotated plane using a CNN.


In another embodiment, the rotated coordinate system generator 240 trains a CNN using the rotation angles θx, θy, and θz of the rotated plane 830 of FIG. 8 and the depth position at the origin as output, and obtains the rotation angles and the depth position at the origin as the output of the trained CNN model, thereby being able to obtain the transform matrix according to equation 11 and the depth position of the rotated plane 830.
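A minimal PyTorch sketch of such a regression CNN is shown below; the architecture, layer sizes, and names are illustrative assumptions rather than the patent's model.

```python
import torch
import torch.nn as nn

class TiltRegressor(nn.Module):
    """Toy CNN that maps a 2-channel hologram (real/imaginary parts)
    to (theta_x, theta_y, theta_z, z0): the rotation angles of the
    rotated plane and the depth position at the origin."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 4)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Training pairs holograms with known (angles, depth) labels and minimizes
# a regression loss, e.g. nn.MSELoss(), between predictions and labels.
```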


In an embodiment, the rotated coordinate system generator 240 can generate a rotated coordinate system through the following third method. The plane equation of the rotated plane formed by an inspection object or an objective plate rotated using $(x_{\hat z\_tilt}, y_{\hat z\_tilt}, z_{\hat z\_tilt}, l_d^1, l_d^2)$, obtained by solving the simultaneous equations of equation 24, is given as the following equation 25.

$$\hat z_{tilt}\cdot\left(\vec r - \vec r_{obj\_plane}^{\,i}\right) = 0;\quad i=\{1,2\} \qquad [\text{Equation }25]$$

    • where $\hat z_{tilt}$ is the normal vector of the rotated plane, that is, $\hat z_{tilt} = x_{\hat z\_tilt}\hat x + y_{\hat z\_tilt}\hat y + z_{\hat z\_tilt}\hat z$, and $\vec r_{obj\_plane}^{\,i}$ is the position vector of the reflected monitoring-light on the inspection object or the objective plane, that is, $\vec r_{obj\_plane}^{\,i} = l_d^i\,\vec d_i + \vec r_{mor\_st}^{\,i};\ i=\{1,2\}$.


A unit vector in a direction parallel with the rotated plane is a unit vector whose start and end points are two points satisfying the plane equation of equation 25, and is given as the following equation 26.











$$\hat x_{tilt} = \frac{\vec r_1 - \vec r_2}{\lVert \vec r_1 - \vec r_2\rVert} \qquad [\text{Equation }26]$$









    • where $\vec r_1, \vec r_2$, two arbitrary points on the rotated plane, are two position vectors satisfying equation 25. The remaining unit direction vector of the rotated plane is the vector product of the unit direction vector in the depth direction and the unit direction vector parallel with the plane, and is given as the following equation 27.














$$\hat y_{tilt} = \hat z_{tilt}\times\hat x_{tilt} = \frac{\vec n}{\lVert\vec n\rVert}\times\frac{\vec r_1 - \vec r_2}{\lVert \vec r_1 - \vec r_2\rVert} \qquad [\text{Equation }27]$$







A rotated coordinate system having unit direction vectors rotated from the origin of the non-rotated reference coordinate system is defined as in FIG. 7. In FIG. 7, $\hat x_{tilt}, \hat y_{tilt}, \hat z_{tilt}$ are the unit direction vectors of the rotated plane coordinate system and $\hat x, \hat y, \hat z$ are the unit direction vectors of the non-rotated reference coordinate system. The position of the reference coordinate system is given in the rotated coordinate system as the following equation 28 in accordance with equations 25 to 27.











$$(x_{tilt}, y_{tilt}, z_{tilt}) = \left(\hat x_{tilt}\cdot\vec r,\ \hat y_{tilt}\cdot\vec r,\ \hat z_{tilt}\cdot\vec r\right) = \overline{\overline T}\,\vec r$$
where
$$\overline{\overline T} = \begin{bmatrix} a_1 & a_4 & a_7 \\ a_2 & a_5 & a_8 \\ a_3 & a_6 & a_9 \end{bmatrix} = \begin{bmatrix} \dfrac{\vec r_1 - \vec r_2}{\lVert \vec r_1 - \vec r_2\rVert} \\[2ex] \dfrac{\vec n}{\lVert\vec n\rVert}\times\dfrac{\vec r_1 - \vec r_2}{\lVert \vec r_1 - \vec r_2\rVert} \\[2ex] \dfrac{\vec n}{\lVert\vec n\rVert} \end{bmatrix} \qquad [\text{Equation }28]$$
is a transform matrix whose rows are the unit direction vectors of the rotated coordinate system.


The hologram restorer 250 can obtain an image of an object by restoring a hologram in a plane formed in the depth direction of a rotated coordinate system.


In an embodiment, the hologram restorer 250 restores an image by transforming the hologram obtained in the reference coordinate system (x,y,z), that is, $i_H^l(x,y,z);\ l=\{1,2,3,4,5\}$, using angular spectrum rotational transformation and guiding the hologram in the depth direction of the rotated coordinate system $(\hat x, \hat y, \hat z)$. The restored image is given as the following equation 19.












$$\hat i_H^l(\hat x,\hat y,\hat z) = \iint I_H^l(u,v)\Big|_{\substack{u\,=\,a_1\hat u + a_2\hat v + a_3\hat w(\hat u,\hat v)\\ v\,=\,a_4\hat u + a_5\hat v + a_6\hat w(\hat u,\hat v)}} \exp\!\left[j2\pi\left(\hat u\hat x + \hat v\hat y + \hat w(\hat u,\hat v)\,\hat z\right)\right] J(\hat u,\hat v)\, d\hat u\, d\hat v$$
$$= \mathfrak{I}^{-1}_{\hat u\to\hat x,\,\hat v\to\hat y}\!\left[\, I_H^l(u,v)\Big|_{\substack{u\,=\,a_1\hat u + a_2\hat v + a_3\hat w(\hat u,\hat v)\\ v\,=\,a_4\hat u + a_5\hat v + a_6\hat w(\hat u,\hat v)}} \exp\!\left[j2\pi\,\hat w(\hat u,\hat v)\,\hat z\right] J(\hat u,\hat v)\right] \qquad [\text{Equation }19]$$








where $\mathfrak{I}^{-1}_{\hat u\to\hat x,\,\hat v\to\hat y}[\ ]$ is the inverse 2D Fourier transform from the rotated spatial frequency axes to the rotated spatial axes, $I_H^l(u,v) = \mathfrak{I}_{x\to u,\,y\to v}[i_H^l(x,y)]$ is the 2D Fourier transform of $i_H^l(x,y)$, $(u,v)$ are the spatial frequency axes in the $(x,y)$-axial directions, $a_m;\ m=\{1,2,3,4,5,6\}$ are elements of the transform matrix $\overline{\overline T}$ of equation 18, $(\hat u,\hat v)$ are the spatial frequency axes in the $(\hat x,\hat y)$-axial directions, respectively, $\hat w(\hat u,\hat v) = \left(\lambda^{-2} - \hat u^2 - \hat v^2\right)^{1/2}$ is the spatial frequency axis in the $\hat z$ direction, and
$$J(\hat u,\hat v) = \left(a_2 a_6 - a_3 a_5\right)\frac{\hat u}{\hat w(\hat u,\hat v)} + \left(a_3 a_4 - a_1 a_6\right)\frac{\hat v}{\hat w(\hat u,\hat v)} + \left(a_1 a_5 - a_2 a_4\right)$$
is the Jacobian expressing the scaling of the integral area element under the coordinate system conversion, that is, $du\,dv = \lvert J(\hat u,\hat v)\rvert\, d\hat u\, d\hat v$.


When an image restored at $\hat z$ in the depth direction on the rotated axis is obtained according to equation 19, the hologram may be restored in the rotated coordinate system either by resampling $I_H^l(u,v)$ at regular intervals through interpolation over the new variables, the $(\hat u,\hat v)$ axes, and then applying the fast Fourier transform, or by applying a non-uniform transform, such as the non-uniform discrete Fourier transform or the non-uniform fast Fourier transform, to the non-equispaced data.
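One plausible NumPy/SciPy realization of equation 19 via interpolation onto the rotated frequency grid followed by a fast Fourier transform is sketched below; the sampling choices and names are assumptions, not the patent's implementation.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def angular_spectrum_rotate(ih, T, z_hat, wavelength, pitch):
    """Rotate the angular spectrum of a restored hologram ih with the
    transform matrix T (elements a1..a9 laid out as in equation 18),
    propagate by z_hat, and inverse-FFT, following equation 19."""
    n = ih.shape[0]
    f = np.fft.fftshift(np.fft.fftfreq(n, d=pitch))   # ascending frequency axis
    IH = np.fft.fftshift(np.fft.fft2(ih))
    interp_r = RegularGridInterpolator((f, f), IH.real, bounds_error=False, fill_value=0.0)
    interp_i = RegularGridInterpolator((f, f), IH.imag, bounds_error=False, fill_value=0.0)
    U, V = np.meshgrid(f, f)                          # rotated axes (u_hat, v_hat)
    W2 = wavelength**-2 - U**2 - V**2
    mask = W2 > 0                                     # propagating components only
    W = np.sqrt(np.where(mask, W2, 0.0))
    a1, a4, a7, a2, a5, a8, a3, a6, a9 = T.ravel()    # element layout of equation 18
    u = a1 * U + a2 * V + a3 * W                      # source-plane frequencies
    v = a4 * U + a5 * V + a6 * W
    q = np.stack([v, u], axis=-1)                     # (row, col) = (v, u) ordering
    vals = interp_r(q) + 1j * interp_i(q)
    Wsafe = np.where(mask, W, 1.0)
    J = (a2*a6 - a3*a5) * U / Wsafe + (a3*a4 - a1*a6) * V / Wsafe + (a1*a5 - a2*a4)
    spec = vals * np.exp(2j * np.pi * W * z_hat) * np.abs(J) * mask
    return np.fft.ifft2(np.fft.ifftshift(spec))
```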


In this embodiment, it is possible to obtain a 3D image of the object by transforming a hologram of a reference coordinate system into a rotated coordinate system and restoring the hologram in a plane formed in the depth direction in the rotated coordinate system.


In this example, the hologram was guided through conversion using angular spectrum rotational transformation, but the hologram may also be transformed and guided to a rotated coordinate system using various other conversion methods.


In an embodiment, the hologram restorer 250 can obtain an image of an object by restoring a hologram at a depth position of a reference coordinate system and then guiding the hologram to a plane formed in the depth direction of a rotated coordinate system.


That is, the hologram restorer 250 can obtain a focused image regardless of rotation by restoring the hologram obtained in the reference coordinate system (x,y,z), that is, $i_H^l(x,y,z);\ l=\{1,2,3,4,5\}$, at a depth position of the reference coordinate system in accordance with equation 6 and then obtaining an image guided to a plane perpendicular to the depth direction of the rotated coordinate system. In this case, guiding may be achieved by various digital guiding methods, including the Rayleigh-Sommerfeld method. The following equation 191 shows an image restored in the rotated plane by guiding an image restored at a specific depth position of the reference coordinate system to the plane of the rotated coordinate system corresponding to that depth position, using, for example, the Rayleigh-Sommerfeld method.











$$O_{rec}^{tilt}(x_{tilt}, y_{tilt}, z_{tilt}) = \frac{j}{\lambda}\iint O_{rec}(x,y,z)\,\frac{\exp\!\left(-j\dfrac{2\pi}{\lambda}\sqrt{(x - x_{tilt})^2 + (y - y_{tilt})^2 + (z - z_{tilt})^2}\right)}{\sqrt{(x - x_{tilt})^2 + (y - y_{tilt})^2 + (z - z_{tilt})^2}}\,dx\,dy \qquad [\text{Equation }191]$$









    • where λ is the wavelength of a laser used to record a hologram and Orec(x,y,z) is an image restored in accordance with equation 6.
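A direct-sum sketch of equation 191, assuming NumPy; it evaluates the guided field at a single rotated-plane point and is meant to illustrate the integral, not to be efficient (the function name is illustrative).

```python
import numpy as np

def rs_to_tilted_point(O_rec, xs, ys, z, xt, yt, zt, wavelength):
    """Direct Rayleigh-Sommerfeld sum of equation 191: propagate the image
    O_rec restored in the plane z of the reference frame to a single point
    (xt, yt, zt) of the rotated plane. Cost is O(N^2) per output point."""
    X, Y = np.meshgrid(xs, ys)
    R = np.sqrt((X - xt)**2 + (Y - yt)**2 + (z - zt)**2)
    k = 2 * np.pi / wavelength
    dx, dy = xs[1] - xs[0], ys[1] - ys[0]
    # (j / lambda) * sum of O * exp(-j k R) / R over the source plane.
    return (1j / wavelength) * np.sum(O_rec * np.exp(-1j * k * R) / R) * dx * dy
```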





In this embodiment, it is possible to obtain a 3D image of the object by restoring the hologram at a depth position of the reference coordinate system and then guiding it to sequential planes formed in the depth direction of the rotated coordinate system.


In an embodiment, the hologram restorer 250 can restore a hologram in sequential planes that are formed in the depth direction of a reference coordinate system, and then can interpolate the holograms into a rotated coordinate system.


In an embodiment, the hologram restorer 250 generates a 3D matrix using the depth position of each of sequential planes as an axis for images restored in the sequential planes and interpolates the 3D matrix to the axes of a rotated coordinate system, thereby being able to obtain a 3D image of an object.


That is, the hologram restorer 250 can restore the hologram obtained in the reference coordinate system (x, y, z), that is, $i_H^l(x,y,z);\ l=\{1,2,3,4,5\}$, in the reference coordinate system in accordance with equation 6 for sequential depth positions. The hologram restorer 250 can then generate a 3D matrix using the sequential depth positions as an axis for the images restored at those positions and interpolate the restored images to the axes of the rotated coordinate system, which is given as the following equation 20.

$$O_{rec}^{tilt}(\vec r_{tilt}) = O_{rec}(\vec r)\Big|_{\vec r_{tilt} = \overline{\overline M}\,\vec r} \qquad [\text{Equation }20]$$

    • where M is a transform matrix for transformation from a reference coordinate system to a rotated coordinate system, {right arrow over (r)}=(x, y, z) is a position vector of a reference coordinate system, and {right arrow over (r)}tilt=(xtilt, ytilt, ztilt) is a position vector of a rotated coordinate system.
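A minimal sketch of equation 20 as trilinear interpolation, assuming NumPy/SciPy and a reconstruction stack ordered (z, y, x); the names and axis conventions are assumptions. For complex-valued stacks on older SciPy versions, the real and imaginary parts may need to be interpolated separately.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def resample_to_tilted(stack, M, shape):
    """Equation 20 as interpolation: sample the 3D matrix of images
    restored at sequential depths (axes z, y, x) at the reference-frame
    coordinates r = M^{-1} r_tilt of each rotated-frame voxel."""
    Minv = np.linalg.inv(M)
    zt, yt, xt = np.indices(shape)                  # rotated-frame voxel grid
    r_tilt = np.stack([xt, yt, zt]).reshape(3, -1)  # (x, y, z) component order
    x, y, z = Minv @ r_tilt                         # back to the reference frame
    coords = np.stack([z, y, x])                    # match the stack's (z, y, x) axes
    vals = map_coordinates(stack, coords, order=1, mode='constant', cval=0.0)
    return vals.reshape(shape)
```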


The controller 260 can control the general operation of the automatic optical inspection apparatus 130 and can manage control flow or data flow among the hologram capturer 210, the light-monitoring processor 220, the depth position/rotation angle extractor 230, the rotated coordinate system generator 240, and the hologram restorer 250.



FIG. 3 is a flowchart illustrating an automatic optical inspection process using scanning holography that is performed in the automatic optical inspection apparatus shown in FIG. 1.


Referring to FIG. 3, the automatic optical inspection apparatus 130 can take a hologram of an object existing on an objective plate using the scanning hologram camera 110 through the hologram capturer 210 (step S310). The automatic optical inspection apparatus 130 can radiate monitoring-light toward the surface of any one of an objective plate and an object through the light-monitoring processor 220 and can detect monitoring-light reflected by the surface (step S330). The automatic optical inspection apparatus 130 can extract a depth position and a rotation angle about an objective surface of the objective plate on the basis of the detected monitoring-light through the depth position/rotation angle extractor 230 (step S350).


Further, the automatic optical inspection apparatus 130 can generate a rotated coordinate system corresponding to the objective surface using the depth position and the rotation angle extracted by the depth position/rotation angle extractor 230 through the rotated coordinate system generator 240 (step S370). The automatic optical inspection apparatus 130 can obtain an image of the object by restoring a hologram in a plane formed in the depth direction of the rotated coordinate system through the hologram restorer 250 (step S390).


The automatic optical inspection apparatus 130 according to an embodiment of the present disclosure can calculate the depth position and the rotation angle of the objective surface by numerically processing a hologram through the depth position/rotation angle extractor 230. In this case, the numerical method may include an extraction method based on 3D region analysis (method A), an extraction method based on a CNN (method B), an extraction method based on gradient descent (method C), and an extraction method based on light monitoring (method D), and may extract a depth position and a rotation angle by partially combining the methods.


For example, for the single monitoring-light in method D, the depth position/rotation angle extractor 230 can only estimate a rotation angle through the underdetermined system of equation 243. However, when only a depth position is extracted by method B or C, or a depth position is obtained by analyzing the first region or the first and second regions by method A, the depth position/rotation angle extractor 230 can finally extract the rotation angle by calculating the distance $l_d^1$ to the inspection object or the objective plate in the projection direction of the single monitoring-light, using the depth position together with the start position and projection direction $\vec d_1$ of the single monitoring-light in accordance with the geometric structure.


In addition, the depth position/rotation angle extractor 230 can obtain a rotation angle or a depth position as partial output from each of methods A to C and can generate a rotated coordinate system by combining the rotation angle and depth position information obtained through those methods.


Although the present disclosure was described above with reference to exemplary embodiments, it should be understood that the present disclosure may be changed and modified in various ways by those skilled in the art, without departing from the spirit and scope of the present disclosure described in claims.












DESCRIPTION OF THE REFERENCE CHARACTERS

100: system for automatic optical inspection using scanning holography
110: scanning hologram camera
130: automatic optical inspection apparatus
150: database
210: hologram capturer
220: light-monitoring processor
230: depth position/rotation angle extractor
240: rotated coordinate system generator
250: hologram restorer
260: controller
610: first cell region
620: second cell region
630: third cell region
810: reference plane
830: rotated plane
910: specific region








Claims
  • 1. An apparatus for automatic optical inspection using scanning holography, the apparatus comprising: a hologram capturer that takes a hologram of an object existing on an objective plate using a scanning hologram camera; a depth position/rotation angle extractor that extracts a depth position and a rotation angle about an objective surface of the objective plate on the basis of the hologram; a rotated coordinate system generator that generates a rotated coordinate system corresponding to the objective surface using the depth position and the rotation angle; and a hologram restorer that obtains an image of the object by restoring the hologram in a plane formed in a depth direction of the rotated coordinate system.
  • 2. The apparatus of claim 1, further comprising a light-monitoring processor that controls a monitoring-light generator to radiate monitoring-light toward a surface of any one of the objective plate and the object and detects monitoring-light reflected by the surface, wherein the depth position/rotation angle extractor extracts a depth position and a rotation angle about the objective surface of the objective plate on the basis of the detected monitoring-light.
  • 3. The apparatus of claim 1, wherein the scanning hologram camera includes a light source that generates an electromagnetic wave, a splitting unit that splits the electromagnetic wave, a scan unit that scans the object using interference beams generated by the split electromagnetic waves, and a light detection unit that detects a reflected, fluorescent, or transmitted beam from the object; and the hologram capturer generates a complex-number hologram as the result of taking a hologram.
  • 4. The apparatus of claim 2, wherein the scanning hologram camera includes a light source that generates an electromagnetic wave, a splitting unit that splits the electromagnetic wave, a scan unit that scans the object using interference beams generated by the split electromagnetic waves, and a light detection unit that detects a reflected, fluorescent, or transmitted beam from the object; and the hologram capturer generates a complex-number hologram as the result of taking a hologram.
  • 5. The apparatus of claim 1, wherein the depth position/rotation angle extractor extracts a depth position in each of three regions spaced apart from each other and independently defined on the objective surface, and then extracts a depth position and a rotation angle about the objective surface.
  • 6. The apparatus of claim 2, wherein the depth position/rotation angle extractor extracts a depth position in each of three regions spaced apart from each other and independently defined on the objective surface, and then extracts a depth position and a rotation angle about the objective surface.
  • 7. The apparatus of claim 5, wherein the depth position/rotation angle extractor performs: a first step of restoring the hologram at each of sequential depth positions; a second step of calculating a focus metric in each of the three regions for restored images; and a third step of determining a depth position at which the focus metric is a maximum value as a depth position of each of the three regions.
  • 8. The apparatus of claim 6, wherein the depth position/rotation angle extractor performs: a first step of restoring the hologram at each of sequential depth positions; a second step of calculating a focus metric in each of the three regions for restored images; and a third step of determining a depth position at which the focus metric is a maximum value as a depth position of each of the three regions.
  • 9. The apparatus of claim 5, wherein the depth position/rotation angle extractor obtains a depth position in each of the three regions as output by inputting hologram data into a Convolutional Neural Network (CNN) model and then extracts a depth position and a rotation angle of the objective surface.
  • 10. The apparatus of claim 6, wherein the depth position/rotation angle extractor obtains a depth position in each of the three regions as output by inputting hologram data into a Convolutional Neural Network (CNN) model and then extracts a depth position and a rotation angle of the objective surface.
  • 11. The apparatus of claim 1, wherein the depth position/rotation angle extractor extracts a depth position and a rotation angle about the objective surface using a CNN model generated through training in which a specific region for forming the hologram is used as input and a rotation angle of a rotated objective surface is used as output.
  • 12. The apparatus of claim 11, wherein the depth position/rotation angle extractor Fourier-transforms the hologram into a spatial frequency region and then uses a region corresponding to the specific region in the spatial frequency region as input.
  • 13. The apparatus of claim 1, wherein the depth position/rotation angle extractor extracts a depth position and a rotation angle about the objective surface using gradient descent on the basis of a portion of or the entire region of the hologram.
  • 14. The apparatus of claim 2, wherein the depth position/rotation angle extractor extracts a depth position and a rotation angle about the objective surface using gradient descent on the basis of a portion of or the entire region of the hologram.
  • 15. The apparatus of claim 1, wherein the hologram restorer restores an image of the object by transforming the hologram into the rotated coordinate system and guiding the hologram in a depth direction of the rotated coordinate system.
  • 16. The apparatus of claim 15, wherein the hologram restorer obtains an image of the object by transforming the hologram through angular spectrum rotational transformation and guiding the hologram in a depth direction of a rotated coordinate system.
  • 17. The apparatus of claim 1, wherein the hologram restorer obtains an image of the object by restoring the hologram at a depth position of a reference coordinate system and then guiding the hologram to a plane formed in a depth direction of the rotated coordinate system, wherein the hologram restorer obtains a 3D image of the object by restoring the hologram at a depth position of a reference coordinate system and then guiding the hologram to sequential planes formed in the depth direction of the rotated coordinate system.
  • 18. The apparatus of claim 2, wherein the hologram restorer obtains an image of the object by restoring the hologram at a depth position of a reference coordinate system and then guiding the hologram to a plane formed in a depth direction of the rotated coordinate system, wherein the hologram restorer obtains a 3D image of the object by restoring the hologram at a depth position of a reference coordinate system and then guiding the hologram to sequential planes formed in the depth direction of the rotated coordinate system.
  • 19. The apparatus of claim 1, wherein the hologram restorer restores the hologram in each of sequential planes formed in a depth direction of a reference coordinate system and then interpolates the restored holograms to the rotated coordinate system, wherein the hologram restorer obtains an image of the object by generating a 3D matrix using a depth position of each of the sequential planes as an axis for images restored in the sequential planes and interpolating the 3D matrix to axes of the rotated coordinate system.
  • 20. A method of automatic optical inspection using scanning holography, the method comprising: taking a hologram of an object existing on an objective plate using a scanning hologram camera; extracting a depth position and a rotation angle about an objective surface of the objective plate on the basis of the hologram; generating a rotated coordinate system corresponding to the objective surface using the depth position and the rotation angle; and obtaining an image of the object by restoring the hologram in a plane formed in a depth direction of the rotated coordinate system.
Priority Claims (2)
Number Date Country Kind
10-2019-0066639 Jun 2019 KR national
10-2019-0066643 Jun 2019 KR national
PCT Information
Filing Document Filing Date Country Kind
PCT/KR2020/007189 6/3/2020 WO
Publishing Document Publishing Date Country Kind
WO2020/246788 12/10/2020 WO A
US Referenced Citations (5)
Number Name Date Kind
11680655 Fatehi Jun 2023 B1
20170221230 Allinson Aug 2017 A1
20210312596 Hagiwara Oct 2021 A1
20220043251 Moore Feb 2022 A1
20220206629 Mochizuki Jun 2022 A1
Foreign Referenced Citations (5)
Number Date Country
H07-318331 Dec 1995 JP
10-2013-0081127 Jul 2013 KR
10-2014-0021765 Feb 2014 KR
10-2014-0121107 Oct 2014 KR
10-1529820 Jun 2015 KR
Non-Patent Literature Citations (1)
Entry
International Search Report for PCT/KR2020/007189 dated Sep. 3, 2020 from Korean Intellectual Property Office.
Related Publications (1)
Number Date Country
20220221822 A1 Jul 2022 US