Calibration methods for placement machines incorporating on-head linescan sensing

Information

  • Patent Grant
  • Patent Number: 6,535,291
  • Date Filed: Wednesday, June 7, 2000
  • Date Issued: Tuesday, March 18, 2003
Abstract
A method of calibrating a pick and place machine having an on-head linescan sensor is disclosed. The calibration includes obtaining z-axis height information of one or more nozzle tips via focus metric methods, including a Fourier transform method and a normalized correlation method. Additionally, other physical characteristics such as linear detector tilt, horizontal scale factor, and vertical scale factor are measured and compensated for in the process of placing the component. Nozzle runout, another physical characteristic, is also measured by a sinusoidal curve fit method, and the resulting runout calibration data are used to later place the component.
Description




COPYRIGHT RESERVATION




A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.




BACKGROUND OF THE INVENTION




The present invention relates to pick and place machines. More particularly, the present invention relates to a method of calibrating pick and place machines.




Pick and place machines are used by the electronics assembly industry to mount individual components on printed circuit boards. These machines automate the tedious process of placing individual electrical components on the circuit board. In operation, pick and place machines generally pick up individual components from a component feeder or the like, and place the components in their respective positions on a circuit board.




During the placement operation, it is generally necessary for the pick and place machine to look at, or image, a given component prior to placement in order to adjust the orientation of the component for proper placement. Such imaging allows precise adjustments to be made to the component's orientation, such that the component will be accurately placed in its desired position.




One standard type of pick and place machine uses a shadowing sensor, such as a LaserAlign® sensor available from CyberOptics® Corporation of Golden Valley, Minn. In a shadowing sensor, the object under test is rotated, and the effective width of the shadow (or an image of the shadow) is monitored on a detector. The dimensions of the object can be computed by monitoring the width of the shadow (or the shadow image). During start-up, the pick and place machine is calibrated so that any positional output from the sensor is mathematically related to the pick and place machine coordinate system. Once a correlation between the pick and place machine and the sensor output is known in X and Y, the pick and place machine can accurately place the object under test in its intended (X, Y) location on, say, a printed circuit board. There are also disclosed methods of calibrating the Z-height of a nozzle, so that the pick and place machine can repeatably place the object onto the intended place at the correct Z height. However, the methods disclosed for calibrating pick and place machines in (X, Y) and in Z are specific to the type of sensor in the pick and place machine.




Another type of pick and place machine uses an on-head linescan sensor to image the component while the placement head is traveling. An on-head sensor, as used herein, refers to a sensor which travels with the placement head in at least one dimension, so as to sense the orientation of the component while the component travels to the circuit board. This is in contrast to off-head systems, where the component is transported to a stationary station to sense the orientation of the component, and from there, the component is transported to the circuit board. A linescan sensor, as used herein, is an optical sensor comprised of a plurality of light sensitive elements that are arranged in a line such that the sensor acquires a single line of the image in a given time period. By translating the linescan sensor relative to the entire component and storing a plurality of the acquired lines, the component image is realized and X, Y and θ orientation is then calculated using this scanned image.




Placement machines incorporating on-head linescan sensing technology are very flexible in the types of components that they can place. The on-head linescan sensor is able to directly image components such as chip capacitors, Quad Flat Packs (QFP), Thin Small Outline Packages (TSOP), Ball Grid Arrays (BGA), Chip Scale Packages (CSP), and flip-chips. The video output of the linescan camera allows a video processor to compute the orientation of the component. Based on knowledge of the desired orientation of the component and the present orientation, the pick and place machine corrects the orientation of the component and places it on a printed circuit board. The linescan image can also provide inspection information about the component to be placed. Placement machines incorporating on-head linescan sensing are also very fast compared to off-head sensing technologies, since the step of visiting a fixed inspection station to measure pick-up offset errors is eliminated. To increase the accuracy of pick and place machines using on-head linescan sensing technology, however, careful calibration of the linescan sensor and its physical relationship to other parts of the placement machine should be performed.




SUMMARY OF THE INVENTION




A method of calibrating a pick and place machine having an on-head linescan sensor is disclosed. The calibration includes obtaining z-axis height information of one or more nozzle tips via focus metric methods, including a Fourier transform method and a normalized correlation method. Additionally, other physical characteristics such as linear detector tilt, horizontal scale factor, and vertical scale factor are measured and compensated for in the process of placing the component. Nozzle runout, another physical characteristic, is also measured by a sinusoidal curve fit method, and the resulting runout calibration data are used to later place the component.











BRIEF DESCRIPTION OF THE DRAWINGS





FIG. 1 is a top plan view of a pick and place machine.

FIG. 2 is a perspective view of a pick and place head in accordance with an embodiment of the present invention.

FIG. 3 is a block diagram of a method of calibrating a pick and place machine in accordance with an embodiment of the present invention.

FIG. 4 is a chart of contrast vs. nozzle Z-position.

FIG. 5 is a diagrammatic view of a rotated linescan sensor.

FIG. 6 is a diagrammatic view of sheared linescan images.

FIG. 7 is a diagrammatic view of a calibration target.

FIG. 8 is a diagrammatic view of a linescan sensor and calibration target in the X-Y coordinate system of the linescan sensor stage.

FIG. 9 is a diagrammatic view of a linescan sensor stage coordinate system as it relates to the coordinate system of a pick and place machine.

FIG. 10A is a diagrammatic view of components A and B as measured in a coordinate system of a linescan sensor.

FIG. 10B is a diagrammatic view of components A and B as measured in a coordinate system of a pick and place machine.

FIG. 11 is a diagrammatic view illustrating nozzle runout.

FIG. 12 is a top plan view illustrating nozzle tip positions at various angular orientations associated with the nozzle runout shown in FIG. 11.

FIG. 13 is a pair of charts showing nozzle tip position along the X′ and Y′ axes vs. nozzle angle θ.











DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS





FIG. 1 is a top plan view of pick and place machine 31 in accordance with an embodiment of the present invention. Pick and place machine 31 is adapted to mount a variety of electrical components such as chip resistors, chip capacitors, flip-chips, Ball Grid Arrays (BGA), Quad Flat Packs (QFP), and connectors on a workpiece 32 such as a printed circuit board.




As various methods are disclosed, it will be apparent that there are four relevant coordinate systems, and the interrelationships among them must be known in order to accurately and repeatably place a component in an intended location. Coordinates of the scanned image are denoted with a single prime (i.e., (X′, Y′, Z′)); coordinates of the linescan sensor stage are denoted without any prime notation (i.e., (X, Y, Z)); the pick and place machine coordinate system is denoted with a double prime (i.e., (X″, Y″, Z″)); and the coordinate system of a target is denoted with a triple prime (i.e., (X″′, Y″′, Z″′)).




The individual components are picked up from feeders 34, which are disposed on opposite sides of conveyor 33. Feeders 34 can be known tape feeders, or any other suitable device.




Pick and place head 37 is adapted to releasably pick up components from feeders 34 and transport such components to their respective mounting locations upon workpiece 32. Head 37 will be described in greater detail with respect to FIG. 2. Head 37 is movably disposed upon carriage 41, and is coupled to drive motor 42 via a ballscrew or other appropriate means. Thus, energization of motor 42 causes displacement of head 37 in the Y″ direction along carriage 41, as indicated by arrow 20. Motor 42 is coupled to encoder 43, which provides a Y″-axis position signal to a controller 39.




Skipping ahead to FIG. 2, linescan sensor 64 views the component from its underside, so a scanned image from sensor 64 may include detail of the underside of the component. The underside is typically the most detailed portion of the component, where fine pitch balls, columns, or other connection means may be present. The linescan sensor 64 also has the advantage of a variable field of view, with a variable resolution, so that as more detail is needed, the resolution and field of view may be appropriately adjusted.




Carriage 41 is mounted upon a pair of guide rails 46, and is movable along the X″ axis, as indicated by arrow 22. Carriage 41 is coupled to drive motor 49 such that energization of motor 49 causes a displacement of carriage 41 and head 37 along the X″ axis. Encoder 51 is coupled to motor 49 to provide an X″-axis position signal to the controller 39.




Pick and place machine 31 also includes controller 39, which receives the encoder position signals from encoders 43, 51, receives linescan image information from sensor 64 (shown in FIG. 2), and receives fiducial image data from camera 92 (shown in FIG. 2). As will be described in greater detail later in the specification, controller 39 computes physical characteristics for calibrating pick and place machine 31.




Other sorts of linescan sensors are adaptable for use with the present calibration methods. For example, some high-capacity pick and place machines have a turret system with a rotating head that sequentially places components picked up on a plurality of nozzles, all of which rotate around a central point in the rotating head. Some traditional X,Y translation gantry pick and place machines have also recently been modified so that the head has a small degree of movement in one dimension while the gantry is fixed in the other, orthogonal direction. Furthermore, it is understood that for any linescan sensor to scan an object of interest, there must either be movement of the sensor while the object is stationary, movement of the object of interest while the sensor is stationary, or movement of both sensor and object at the same time.





FIG. 2 is a perspective view of placement head 37 in accordance with an embodiment of the present invention. As can be seen, placement head 37 includes two vacuum pick-up nozzles 62, fiducial sensing camera 92, on-head linescan sensor 64, and linescan sensor stage 88. Nozzles 62 are coupled to pickup units 84 such that components 86A and 86B held by nozzles 62 can be translated up and down and rotated about their respective nozzle axes. Although two nozzles 62 are shown in FIG. 2, any suitable number of nozzles, including one nozzle, can be used to practice embodiments of the present invention.




Linescan sensor 64 is movably supported upon linear stage 88, such that linescan sensor 64 can move in the Y direction as indicated by arrow 21. A linear motor (not shown) provides the drive, but any mechanical arrangement for moving the stage 88 is acceptable. Fiducial camera 92 is disposed on head 37 and measures registration marks, or fiducials, on the workpiece. The locations of the fiducials are used in order to compute the placement location correction, as well as facilitate calibration, as will be described in greater detail later in the specification.




In order to calibrate pick and place machine 31, it is generally necessary to measure a number of physical characteristics of the pick and place machine. With these physical characteristics and knowledge of the mathematical relationship between the sensor coordinate system, the linescan stage coordinate system and the pick and place machine coordinate system, a processor in the system can compute instructions for moving the head to finely and accurately place the component in the intended location (this process is called "compensating" the position of the component). These characteristics include the Z-axis height of each nozzle tip on the placement head relative to some reference position of a Z position encoder in the pick and place machine, the location of each nozzle on the placement head, the effective axis of the linescan sensor, the horizontal scale factor of the linescan sensor, the vertical scale factor of the linescan sensor, and the runout of each nozzle.




Because all embodiments of the present invention utilize a linescan sensor for calibration, it is preferred that the first step in the calibration process be location of the Z-axis heights of the nozzle tips, such that later calibration steps can be performed at the focal plane of the linescan sensor, and thus be performed more accurately. Once the nozzle Z-heights are established, each nozzle can be adjusted to the proper Z-axis position so that all components and calibration targets are in best focus (i.e., positioned in the focal plane) when scanned by the linescan sensor.





FIG. 3 shows one method of calculating the Z-heights in accordance with an embodiment of the present invention. The method shown in FIG. 3 can be considered an autofocus method for reasons which will become apparent during the description of FIG. 3. Prior to beginning the method of FIG. 3, an illumination type is chosen that will highlight the sharp edges of each nozzle tip. Linescan sensors can employ sophisticated combinations of various illumination types, and a description of such illumination can be found in co-pending application Ser. No. 09/432,552, filed Nov. 3, 1999, entitled ELECTRONICS ASSEMBLY APPARATUS WITH IMPROVED IMAGING SYSTEM. Once the scanned image is acquired, a focus metric method is applied to the scanned image to provide a measure of the focus of the nozzle tips and, finally, to compute the Z elevation at which the nozzle tips are in best focus. Two embodiments of the focus metric method are presented here, but other methods can be equally suitable.




The procedure begins at block 100 by raising each nozzle so that each tip is known to be above the Z location of the plane of best focus for the linescan sensor. Next, a scan of all nozzle tips is performed by translating linescan sensor 64 in the Y direction and acquiring a single image of all the nozzle tips in a scanned image, as indicated by block 102.




At block 104, a focus metric method is applied to the scanned images of the nozzle tips and the result is stored along with some indication of the presently set Z height. Although two embodiments of focus metric methods are described herein, it is understood that any method which identifies an optimal Z height corresponding to the best focus of the nozzles will be adequate.




In a first embodiment of the focus metric method, a two dimensional Fourier transform is performed on the scanned images of the nozzle tips. Since the scanned images of the nozzle tips have a significant high-frequency content, a Fourier transform will permit analysis of the strength of the high frequency components, and thus the sharpness of the images. Other means for identifying high frequency portions of a scanned image may also be employed.




At block 108, the focus metric result from block 104 is compared to the previously stored focus metric results from images at previous nozzle Z positions. In this first embodiment of the focus metric method, the amplitude of selected high frequency spatial components in the Fourier transform of the scanned image is the measure of best focus. The amplitudes of the high frequency components for each nozzle increase until local maxima are achieved, which correspond roughly to the optimal Z height for best focus. After reaching the maxima, the amplitudes of the high frequency components begin to decrease. When these monitored amplitudes begin to decrease, the presently set Z height is less than the optimal Z height. As indicated in FIG. 3, if the Z height is not yet less than the optimal Z height, the nozzles are lowered at block 112 and the process starts again at block 102.




Otherwise, the process continues at block 110, where a fourth order polynomial is fit to the amplitude data in order to interpolate an optimal Z height for each nozzle that results in the highest contrast, and thus best focus. Curve fitting suppresses noise and allows the optimal focus point to be computed. Any suitable curve fitting method can be used to fit the results from the focus metric method to any suitable mathematical model. The result of the curve fitting preferably facilitates interpolation of the Z height position of best focus for each nozzle from the focus metric data.




Functions other than Fourier Transform amplitude may be used to measure the sharpness of the edges of the nozzles in the scanned image, and this information can then be used to compute when the image of the nozzle tips is in best focus.
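As a concrete illustration of this style of focus metric, the following Python sketch is offered; it is an assumption of this edit rather than code from the patent, relies on numpy, and needs at least five Z samples for the fourth order fit. It scores a scanned image by its high-frequency spectral energy and interpolates the best-focus Z height:

```python
import numpy as np

def high_freq_energy(image, cutoff=0.25):
    """Focus metric: summed spectral amplitude above a normalized
    spatial-frequency cutoff. Sharp (in-focus) nozzle edges produce
    strong high-frequency content."""
    spectrum = np.abs(np.fft.fft2(image))
    fy = np.fft.fftfreq(image.shape[0])
    fx = np.fft.fftfreq(image.shape[1])
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    return float(spectrum[radius > cutoff].sum())

def best_focus_z(z_heights, metrics):
    """Fit a fourth order polynomial to the (Z, metric) samples and
    return the interpolated Z at which the metric, and hence focus,
    is greatest."""
    coeffs = np.polyfit(z_heights, metrics, deg=4)
    z_fine = np.linspace(min(z_heights), max(z_heights), 1001)
    return float(z_fine[np.argmax(np.polyval(coeffs, z_fine))])
```

In the loop of FIG. 3, a metric of this kind would be evaluated after each scan at block 104; once it begins to fall, the collected (Z, metric) pairs would be passed to the curve fit of block 110.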




An alternate focus metric method can be used in block 104 of FIG. 3. In this alternative, a template (i.e., an expected image) of each nozzle tip is compared to the nozzle tip in the scanned image by a normalized correlation algorithm. The template can be constructed in software, or a previously recorded image of the nozzle tip that is in sharp focus can be used. The normalized correlation algorithm returns a score indicating the quality of the template match, and the score is stored as a measure of the focus at each Z-height. When the correlation score is maximized, the scanned image of the nozzle tips is in best focus. Various types of autofocus methods other than normalized correlation and the Fourier transform method are equally suitable.
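A minimal sketch of the correlation score used by this alternative metric, assuming the patch and template are same-sized grayscale numpy arrays (the function name is hypothetical):

```python
import numpy as np

def correlation_score(image_patch, template):
    """Normalized correlation between a scanned nozzle-tip patch and an
    in-focus template; returns a score in [-1, 1], where 1.0 indicates
    a perfect match."""
    a = image_patch - image_patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0
```

The Z height whose scan maximizes this score is taken as best focus, exactly as with the Fourier metric.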




The next preferred step in the calibration process is to make the effective axis of the linear detector within the linescan sensor perpendicular to the linescan sensor's direction of motion. If there is a tilt in this effective axis, then all images will appear to be sheared. FIG. 5 shows the effective axis 65 of the linear detector tilted at a greatly exaggerated angle θd with respect to the X axis. The Y axis in this figure is the direction of motion for linescan stage 88. Linescan stage 88 is omitted in FIG. 5 for clarity, but is shown in FIG. 2.





FIG. 6 shows the sheared images 87A and 87B of components 86A and 86B, respectively. Also labeled in FIG. 6 is the X′-Y′ coordinate system of the linescan sensor 64 as it appears in a captured video image. The procedure that follows uses the linescan sensor to scan a calibration target of known dimension that has been picked up by one of the nozzles. The detector tilt is calculated from this image of the calibration target.




If the detector tilt θd, calculated below, is larger than the allowable tolerance, the linescan sensor housing is rotated in the X-Y plane on its mechanical mount. The rotation is accomplished by unscrewing the bolts which fix sensor stage 88 into place on head 37, or by other suitable mechanical means. Then, the procedure of scanning, calculating the detector tilt, and rotating the linescan sensor housing repeats until the detector tilt is within tolerance limits. Additionally, the horizontal and vertical scale factors of the linescan sensor are measured in the same procedure.




To measure the detector tilt, vertical scale factor and horizontal scale factor of the linescan sensor, a calibration target of known dimension is used. Preferably, this target is made by high precision photolithographic techniques. An example of a suitable calibration target 128 is shown in FIG. 7. Features 130A through 130F are of known size, shape, and location in the X″′-Y″′ coordinate system. These features are also referred to as fiducial marks.




Another type of calibration target 128 that has been successfully used has an orthogonal grid of squares patterned on it.




In general, the features or grid will appear sheared, rotated, and offset in the linescan image.




In FIG. 8, calibration target 128 is rotated by an amount θg with respect to the X axis.




Referring now back to FIGS. 5 and 6, we see that points in the linescan image are transformed to points in the coordinate frame of stage 88 by the relationship:










$$\begin{bmatrix} X \\ Y \end{bmatrix} = \begin{bmatrix} h\cos\theta_d & 0 \\ h\sin\theta_d & v \end{bmatrix} \begin{bmatrix} X' \\ Y' \end{bmatrix} \qquad (1)$$













where h and v are the horizontal and vertical scale factors, respectively. Equation (1) accounts for both scale and shear.





FIG. 8 shows linescan sensor 64 prior to obtaining an image of calibration target 128. The calibration target is held by one of the vacuum nozzles (not shown) and is shown rotated by an amount θg. In the stage coordinate frame, the positions of features 130A through 130F, ignoring offsets, are given by the rotation equation:










$$\begin{bmatrix} X \\ Y \end{bmatrix} = \begin{bmatrix} \cos\theta_g & -\sin\theta_g \\ \sin\theta_g & \cos\theta_g \end{bmatrix} \begin{bmatrix} X''' \\ Y''' \end{bmatrix} \qquad (2)$$













Equating equations (1) and (2) gives:











$$\begin{bmatrix} h\cos\theta_d & 0 \\ h\sin\theta_d & v \end{bmatrix} \begin{bmatrix} X' \\ Y' \end{bmatrix} = \begin{bmatrix} \cos\theta_g & -\sin\theta_g \\ \sin\theta_g & \cos\theta_g \end{bmatrix} \begin{bmatrix} X''' \\ Y''' \end{bmatrix} \qquad (3)$$













To compute the linear detector tilt, horizontal scale factor, and vertical scale factor, a geometric transformation is used. One geometric transformation, known as the affine transformation, can accommodate translation, rotation, scaling and shear. Further information about the affine transformation is provided in the monograph by George Wolberg entitled "Digital Image Warping" (IEEE Computer Society Press, 1990).




Points in the X′-Y′ linescan sensor image coordinate frame are mapped into the X″′-Y″′ calibration target coordinate frame, preferably by the following affine transformation:










$$\begin{bmatrix} X''' \\ Y''' \end{bmatrix} = \begin{bmatrix} \alpha & \beta \\ \gamma & \delta \end{bmatrix} \begin{bmatrix} X' \\ Y' \end{bmatrix} + \begin{bmatrix} X'_0 \\ Y'_0 \end{bmatrix} \qquad (4)$$













where (X′0, Y′0) is the offset of the calibration target 128 origin and α, β, γ, δ describe the rotation, scale, and shear of the calibration target image. The location (X′, Y′) of each feature 130A through 130F is found in the linescan image by the normalized correlation method. Equation (4) is repeated for each feature 130A through 130F. The parameters α, β, γ, δ, X′0, Y′0 are then found by a known method such as the method of least squares, although other fitting methods are suitable.
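As a sketch of that least-squares step (not the patent's own code; numpy is assumed and the argument names are hypothetical), the six parameters of equation (4) can be estimated from the measured feature correspondences:

```python
import numpy as np

def fit_affine(img_pts, tgt_pts):
    """Least-squares estimate of the equation (4) parameters.
    img_pts: (N, 2) feature locations (X', Y') found in the linescan image.
    tgt_pts: (N, 2) known feature locations (X''', Y''') on the target, N >= 3.
    Returns alpha, beta, gamma, delta and the offsets X'0, Y'0."""
    A = np.column_stack([img_pts, np.ones(len(img_pts))])  # rows [X'  Y'  1]
    # One linear system per output coordinate of equation (4).
    (alpha, beta, x0), *_ = np.linalg.lstsq(A, tgt_pts[:, 0], rcond=None)
    (gamma, delta, y0), *_ = np.linalg.lstsq(A, tgt_pts[:, 1], rcond=None)
    return alpha, beta, gamma, delta, x0, y0
```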




Substituting equation (4) into equation (3) gives:











$$\begin{bmatrix} h\cos\theta_d & 0 \\ h\sin\theta_d & v \end{bmatrix} \begin{bmatrix} X' \\ Y' \end{bmatrix} = \begin{bmatrix} \cos\theta_g & -\sin\theta_g \\ \sin\theta_g & \cos\theta_g \end{bmatrix} \begin{bmatrix} \alpha & \beta \\ \gamma & \delta \end{bmatrix} \begin{bmatrix} X' \\ Y' \end{bmatrix} \qquad (5)$$













Again, the offsets are ignored since it is desired to only compute the detector tilt, horizontal scale factor, and the vertical scale factor.




If equation (5) holds for all vectors (X′, Y′), then:

$$\begin{bmatrix} h\cos\theta_d & 0 \\ h\sin\theta_d & v \end{bmatrix} = \begin{bmatrix} \cos\theta_g & -\sin\theta_g \\ \sin\theta_g & \cos\theta_g \end{bmatrix} \begin{bmatrix} \alpha & \beta \\ \gamma & \delta \end{bmatrix} \qquad (6)$$













Writing out the "northeast" (upper right) entry of equation (6) gives:






$$\beta\cos\theta_g - \delta\sin\theta_g = 0 \qquad (7)$$






Solving equation (7) for the tilt of the calibration target, θg, gives:










$$\theta_g = \tan^{-1}\!\left(\frac{\beta}{\delta}\right) \qquad (8)$$













By using standard trigonometric identities, equation (6) becomes












$$\frac{1}{\sqrt{\beta^2 + \delta^2}} \begin{bmatrix} \delta & -\beta \\ \beta & \delta \end{bmatrix} \begin{bmatrix} \alpha & \beta \\ \gamma & \delta \end{bmatrix} = \begin{bmatrix} h\cos\theta_d & 0 \\ h\sin\theta_d & v \end{bmatrix} \qquad (9)$$













Solving equation (9) for detector tilt θd gives:










$$\theta_d = \tan^{-1}\!\left(\frac{\alpha\beta + \gamma\delta}{\alpha\delta - \beta\gamma}\right) \qquad (10)$$













The horizontal and vertical scale factors are given by:








$$h = \sqrt{\alpha^2 + \gamma^2} \qquad (11)$$

$$v = \sqrt{\beta^2 + \delta^2} \qquad (12)$$
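The closed-form results above drop straight into code. A short sketch (again an assumption of this edit, not the patent's implementation; numpy assumed) that recovers the target tilt, detector tilt, and scale factors from affine parameters such as those returned by the fitting sketch earlier:

```python
import numpy as np

def tilt_and_scales(alpha, beta, gamma, delta):
    """Evaluate equations (8) and (10) through (12) from the fitted affine
    parameters; arctan2 is used in place of tan^-1 for quadrant safety."""
    theta_g = np.arctan2(beta, delta)                    # eq. (8)
    theta_d = np.arctan2(alpha * beta + gamma * delta,
                         alpha * delta - beta * gamma)   # eq. (10)
    h = np.hypot(alpha, gamma)                           # eq. (11)
    v = np.hypot(beta, delta)                            # eq. (12)
    return theta_g, theta_d, h, v
```

If theta_d exceeds the allowable tolerance, the sensor housing is rotated and the scan repeated, as described above.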






Another method for computing detector tilt θd can be performed by imaging a target with a clearly delineated pattern, such as a square or a rectangle. Once the scanned image of the target (which includes the pattern) is acquired, the slope of each of the line segments forming the pattern can be computed with commercially available machine vision software. With knowledge of the equations of at least two adjacent line segments in the rectangle, the angle between the two line segments can be computed and compared to the expected angle between them, the deviation giving θd. Alternatively, one can compute θd and the scale factors by performing a transformation on at least three points, each point formed by the intersection of line segments. Finally, the stage 88 is then mechanically adjusted by the angle θd, thereby compensating for the initial detector stage tilt in subsequent measurements.
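A toy sketch of this line-segment check, under the assumption of a rectangular pattern with edge endpoints already extracted by vision software (the coordinates below are invented for illustration only):

```python
import numpy as np

def segment_angle(p1, p2):
    """Orientation of the line segment from p1 to p2, in degrees."""
    (x1, y1), (x2, y2) = p1, p2
    return np.degrees(np.arctan2(y2 - y1, x2 - x1))

# Two adjacent edges of the imaged rectangle, e.g. endpoints from
# machine vision line fits (hypothetical values).
top = ((0.0, 0.0), (10.0, 0.05))    # nearly horizontal edge
side = ((0.0, 0.0), (0.20, 10.0))   # nearly vertical edge

# Shear distorts the nominal 90 deg corner; the deviation from 90 deg
# serves as an estimate of the detector tilt.
corner = segment_angle(*side) - segment_angle(*top)
print(f"estimated detector tilt: {corner - 90.0:.3f} deg")
```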




Once the linear detector tilt has been removed, the mapping of the linescan stage coordinate frame into the placement machine coordinate frame is determined. FIG. 9 shows an example where the coordinate axes of the on-head linescan sensor stage are tilted relative to the X″-Y″ axes of the placement machine. (In the present placement machine embodiment, the placement head moves in both the X″ and Y″ axes. In other placement machine embodiments, the placement head may move in only the X″ or Y″ axis.)




The procedure begins by picking up components labeled 86A and 86B as shown in FIG. 2. For simplicity, components 86A and 86B will be referred to as components A and B, respectively, hereinafter. For this calibration step, the components are preferably machined rectangular blocks. However, normal electrical components can also be used. Components A and B are then scanned by linescan sensor 64 and the center positions of components A and B are calculated. After components A and B have been scanned, they are placed on a target substrate. Fiducial camera 92 is then sequentially positioned over components A and B and their locations on the substrate are measured in the placement machine coordinate frame. Fiducial camera 92 travels, and also measures, in the placement machine coordinate frame because it is mounted to the placement head.





FIG. 10A shows the locations (X′A, Y′A) and (X′B, Y′B) of components A and B, respectively, as measured by linescan sensor 64 in the single prime linescan coordinate system. Line 132 between these two points makes an angle ε with respect to the Y′ axis. FIG. 10B shows the locations (X″A, Y″A) and (X″B, Y″B) of components A and B, respectively, as measured by fiducial camera 92 in the double prime coordinate system of placement machine 31. Line 134 between these two points makes an angle ω with respect to the Y″ axis. For this example, the two coordinate frames are rotated with respect to one another by an amount φ as given by equation (13). Equations (14) and (15) give expressions for ε and ω.






$$\phi = \varepsilon - \omega \qquad (13)$$

$$\varepsilon = \tan^{-1}\!\left(\frac{X'_B - X'_A}{Y'_B - Y'_A}\right) \qquad (14)$$

$$\omega = \tan^{-1}\!\left(\frac{X''_B - X''_A}{Y''_B - Y''_A}\right) \qquad (15)$$













Measurements made in the prime coordinate frame of linescan sensor 64 (X′, Y′) are converted into the double prime coordinate frame of placement machine 31 (X″, Y″) by a rotation and translation given by the following equation:










$$\begin{bmatrix} X'' \\ Y'' \end{bmatrix} = \begin{bmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{bmatrix} \begin{bmatrix} X' \\ Y' \end{bmatrix} + \begin{bmatrix} X_0 \\ Y_0 \end{bmatrix} \qquad (16)$$













The translation amounts X0 and Y0 may be calculated by substituting the measured locations of either component A or B into equation (16). Doing this for the measured location of component A gives:








$$X_0 = X''_A - X'_A\cos\phi + Y'_A\sin\phi \qquad (17)$$

$$Y_0 = Y''_A - X'_A\sin\phi - Y'_A\cos\phi \qquad (18)$$
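Gathering equations (13) through (18), a sketch (hypothetical names; numpy assumed) that computes the frame rotation and translation from the two pairs of component-center measurements:

```python
import numpy as np

def frame_mapping(a_p, b_p, a_pp, b_pp):
    """Equations (13)-(18): rotation phi and translation (X0, Y0) taking
    linescan (single prime) coordinates into placement machine (double
    prime) coordinates. Arguments are (x, y) component-center locations;
    the angles are measured from the Y axis, hence arctan2(dx, dy)."""
    eps = np.arctan2(b_p[0] - a_p[0], b_p[1] - a_p[1])        # eq. (14)
    omega = np.arctan2(b_pp[0] - a_pp[0], b_pp[1] - a_pp[1])  # eq. (15)
    phi = eps - omega                                         # eq. (13)
    x0 = a_pp[0] - a_p[0] * np.cos(phi) + a_p[1] * np.sin(phi)  # eq. (17)
    y0 = a_pp[1] - a_p[0] * np.sin(phi) - a_p[1] * np.cos(phi)  # eq. (18)
    return phi, (x0, y0)
```

Any subsequent linescan measurement (X′, Y′) can then be mapped into machine coordinates with equation (16).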






The accuracy of pick and place machine 31 is also improved by measuring the exact locations of the nozzles and measuring any mechanical runout of the nozzles as they are rotated. Runout refers to the offset of the nozzle tip from its effective axis of rotation, as measured in the plane of the nozzle tip. FIG. 11 shows a side view of a nozzle 62 with runout and the dotted line view of the same nozzle after it has been rotated 180°. To measure the nozzle locations and the associated runout, the nozzles are scanned and then their locations are computed by the normalized correlation method described earlier. The nozzles are then incremented in the θ direction and the procedure of scanning and measuring their locations by using the normalized correlation method is repeated until the nozzles have been rotated through 360°. FIG. 12 shows an example of one nozzle tip location for various nozzle angles. The circles labeled 1 through 6 in FIG. 12 are the nozzle tip images for this example. FIGS. 13A and 13B show the X′ and Y′ locations of the nozzle tip plotted against the θ position of the nozzle. Nozzle tip locations 1 through 6 are also labeled in FIGS. 13A and 13B. The nozzle runout axes and angles can be found from a best-fit sinusoidal curve to the X′ and Y′ locations as described below; FIGS. 13A and 13B also show these best-fit sinusoidal curves. The equations for the tip position of nozzle number k are given by:








$$X'_k = X'_{ck} + R_k\cos(\theta_k - \xi_k) \qquad (19)$$

$$Y'_k = Y'_{ck} + R_k\sin(\theta_k - \xi_k) \qquad (20)$$






where the center of rotation for nozzle number k is given by the coordinate (X′ck, Y′ck) and the radius of revolution is given by Rk. The angle of the nozzle is θk and ξk is an angular offset.




To solve equations (19) and (20) for the nozzle center position, the radius, and the angular offset, the following parameters ak and bk are defined:








$$a_k = R_k\cos\xi_k \qquad (21)$$

$$b_k = R_k\sin\xi_k \qquad (22)$$






Using the standard trigonometric angle-difference formulas, equations (19) and (20) become










$$\begin{bmatrix} X'_k \\ Y'_k \end{bmatrix} = \begin{bmatrix} \cos\theta_k & \sin\theta_k \\ \sin\theta_k & -\cos\theta_k \end{bmatrix} \begin{bmatrix} a_k \\ b_k \end{bmatrix} + \begin{bmatrix} X'_{ck} \\ Y'_{ck} \end{bmatrix} \qquad (23)$$













The method of least squares is then used to compute ak, bk, and the center of rotation for each nozzle. The radius of revolution and the angular offset are then given by:








$$R_k = \sqrt{a_k^2 + b_k^2} \qquad (24)$$

$$\xi_k = \tan^{-1}\!\left(\frac{b_k}{a_k}\right) \qquad (25)$$
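A sketch of that per-nozzle fit (assumed code, not the patent's), stacking the X′ and Y′ rows of equation (23) into one linear system and then applying equations (24) and (25):

```python
import numpy as np

def fit_runout(thetas, xs, ys):
    """Least-squares solution of equation (23) for one nozzle.
    thetas: commanded nozzle angles in radians; xs, ys: measured tip
    locations at each angle. Returns the center of rotation, the runout
    radius R_k (eq. 24), and the angular offset xi_k (eq. 25)."""
    c, s = np.cos(thetas), np.sin(thetas)
    one, zero = np.ones_like(thetas), np.zeros_like(thetas)
    # X' rows: [cos(t)  sin(t)  1  0]; Y' rows: [sin(t)  -cos(t)  0  1],
    # acting on the unknown vector (a, b, X'c, Y'c) of equation (23).
    A = np.vstack([np.column_stack([c, s, one, zero]),
                   np.column_stack([s, -c, zero, one])])
    rhs = np.concatenate([xs, ys])
    (a, b, xc, yc), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return (xc, yc), np.hypot(a, b), np.arctan2(b, a)
```

With six measurements as in FIG. 12, thetas would hold the six nozzle angles and xs, ys the tip locations found by normalized correlation.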













When a component must be rotated after having been measured by linescan sensor 64 and prior to placement, the difference in nozzle center position for the two angles is computed. The difference is then applied to the correction amount measured by the linescan sensor. Further information regarding the correction amount calculation can be found in the co-pending United States patent application listed above.




From FIGS. 13A and 13B, it should be apparent that nozzle runout can become a large error source when the component must be rotated through a large angle because it was picked up in a different orientation than that in which it is to be placed on the printed circuit board. It is typical to rotate a component approximately −90°, 90°, or 180° prior to placement. To reduce the amount of runout correction necessary, components may be pre-rotated to their approximate placement orientation before scanning with the linescan sensor. This pre-rotation can take place while the nozzle is retracted in a position for scanning, or while the nozzle is being retracted after part pick-up.




Although the present invention has been described with reference to preferred embodiments, workers skilled in the art will recognize that changes may be made in form and detail without departing from the spirit and scope of the invention. In particular, the calibration methods of the present invention can be readily expanded to multiple nozzles.



Claims
  • 1. A method of calibrating a pick and place machine having at least one nozzle, the method comprising: scanning the at least one nozzle with an on-head linescan sensor to provide a scanned image; calculating a physical characteristic of the at least one nozzle based at least in part on the scanned image; placing a component based at least in part on the calculated physical characteristic; and wherein the nozzle is displaced in a Z direction after the step of scanning the at least one nozzle, and wherein the step of scanning the at least one nozzle is repeated to provide a plurality of additional scanned images, where the physical characteristic is a Z-height of the at least one nozzle.
  • 2. The method of claim 1 further comprising performing a focus-metric method on the scanned image and on the plurality of additional scanned images, the result of the focus metric method used to compute the Z-height.
  • 3. The method of claim 2 where the focus-metric method includes analyzing a strength of high frequency components in the scanned image and in the plurality of scanned images.
  • 4. The method of claim 3 where the step of analyzing the strength of high frequency components includes performing a Fourier transform.
  • 5. The method of claim 3 where the step of analyzing the strength provides a plurality of measures of sharpness of the scanned image and the plurality of additional scanned images, and further where the step of computing the Z-height includes interpolating between the plurality of measures of sharpness.
  • 6. The method of claim 5 where the Z-height is computed from interpolating the plurality of measures of correlation.
  • 7. The method of claim 2 where the focus metric method includes comparing the scanned image and the plurality of additional scanned images to a template image to provide a plurality of measures of correlation, where the plurality of measures of correlation are used to compute the Z-height.
  • 8. The method of claim 1 further comprising picking up a component with the at least one nozzle and positioning the component on a Z-axis based on the Z-height.
  • 9. A method of calibrating a pick and place machine having at least one nozzle, the method comprising: scanning the at least one nozzle with an on-head linescan sensor to provide a scanned image; calculating a physical characteristic of the at least one nozzle based at least in part on the scanned image; placing a component based at least in part on the calculated physical characteristic; and wherein the linescan sensor is mounted on a placement head, where the physical characteristic is a position of the at least one nozzle on the placement head.
  • 10. The method of claim 9 where the position is indicated relative to two orthogonal axes.
  • 11. The method of claim 9 where the position is computed as a function of a linescan coordinate system, a stage coordinate system and a pick and place coordinate system.
  • 12. The method of claim 9 further comprising picking up a component with the at least one nozzle and positioning the component relative to a pick and place coordinate system based on the position.
  • 13. The method of claim 9 where the placement head includes an additional nozzle, where the physical characteristic indicates a position of the at least one nozzle and of the additional nozzle.
Non-Patent Literature Citations (15)
Entry
"SP-11-xxx40 Compact Line Scan Camera," downloaded from www.dalsa.com, pp. 1-6 (undated).
U.S. patent application Ser. No. 09/432,552, Case et al., filed Nov. 3, 1999.
U.S. patent application Ser. No. 09/434,320, Skunes, filed Nov. 4, 1999.
U.S. patent application Ser. No. 09/434,325, Case, filed Nov. 4, 1999.
U.S. patent application Ser. No. 09/524,071, Skunes, filed Mar. 13, 2000.
Copy of International Search Report from Application No. PCT/US01/07810, with international filing date of Mar. 13, 2001.
"A Stereo Imaging System for Dimensional Measurement," by Robert C. Chang, SPIE, vol. 2909, pp. 50-57 (undated).
"A New Sense for Depth of Field," by A. Pentland, IEEE Trans. Pattern Anal. Machine Intell. 9, pp. 523-531 (1987).
"Application of Modulation Measurement Profilometry to Objects With Surface Holes," by Likun et al., Applied Optics, vol. 38, No. 7, pp. 1153-1158.
"A Matrix Based Method for Determining Depth From Focus," by J. Ens and P. Lawrence, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, New York), pp. 600-609 (1991).
"Library of C/C++ Machine Vision Software Routines," Imaging Technology, pp. 63-68 (1999).
"A Perspective on Range Finding Techniques for Computer Vision," by R. A. Jarvis, IEEE Trans. Pattern Anal. Machine Intell. 5, pp. 122-139 (1983).
"Real Time Computation of Depth from Defocus," by Watanabe et al., SPIE, vol. 2599, pp. 14-25 (undated).
"Root-Mean Square Error in Passive Autofocusing and 3D Shape Recovery," by Subbarao et al., SPIE, vol. 2909, pp. 162-177 (undated).
"Pyramid Based Depth from Focus," by T. Darrell and K. Wohn, in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, New York), pp. 504-509 (1988).