The attached figures illustrate an overhead traveling camera inspection system, which comprises a camera 1, a lens 2, a prism 3, a carriage 4, a positional encoder 5, a linear bearing 6, and a linear actuator comprising a servomotor 7 and a screw drive 8.
The camera 1 is an electronic CCD camera of the type commonly used for machine vision, such as the Sony XC-ST30 or the Basler A202k; a variety of CCD cameras can be used.
The lens 2 is a typical machine vision lens. It can be a zoom lens.
The prism 3 is a pentaprism used to fold the optical path by 90 degrees so that the camera looks downward. This allows for a compact and rigid design. In another embodiment the prism is not needed because the camera is already oriented looking downward.
The carriage 4 is a structural member that can move horizontally. The carriage rigidly supports a camera 1, lens 2 and prism 3 and couples to the linear bearing 6 and screw drive 8. The carriage could be made out of a variety of materials and have a variety of shapes.
The positional encoder 5 is a rotary encoder that connects to the rotating shaft of the servomotor 7 to report the angular position of the shaft. The positional encoder consists of a stationary read head and a disk-shaped rule attached to the shaft. The rule contains indicator marks at highly accurate intervals. The read head optically senses the indicator marks as the shaft rotates and electronically reports the consequent positional location of the carriage. Either absolute or relative encoders can be used. Alternatively, a linear encoder could be placed along the linear bearing. Laser and other positional sensors could also be used.
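To make the encoder-to-position relationship concrete, the following is a minimal sketch of converting a shaft encoder reading to carriage travel; the counts-per-revolution and screw-lead values are illustrative assumptions, not values from this description.

```python
# Minimal sketch: converting rotary encoder counts to linear carriage
# position. The counts-per-revolution and screw-lead values below are
# illustrative assumptions, not values from the specification.

COUNTS_PER_REV = 10000   # encoder counts per shaft revolution (assumed)
SCREW_LEAD_MM = 5.0      # carriage travel per screw revolution (assumed)

def carriage_position_mm(encoder_counts: int) -> float:
    """Map the shaft encoder reading to horizontal carriage travel."""
    revolutions = encoder_counts / COUNTS_PER_REV
    return revolutions * SCREW_LEAD_MM

# Example: 250,000 counts -> 25 revolutions -> 125 mm of travel.
print(carriage_position_mm(250000))  # 125.0
```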
The linear bearing 6 consists of three stationary rods 20; six bushings 21 connected to the carriage ride on the rods, allowing the carriage to move smoothly in the horizontal direction. The linear bearing is about 2 meters long and supports the weight of the carriage. A variety of linear bearings and lengths would work.
The linear actuator comprises an electric servomotor 7 that turns a screw drive 8 to move the carriage. As the screw turns it moves a coupling connected to the carriage and hence moves the carriage. The linear actuator could alternatively utilize a linear motor, a belt drive, a chain drive or other possibilities.
The camera is connected to the lens. The pentaprism is located in front of the lens to deviate the line-of-sight by 90 degrees, making the camera mounting convenient, compact and rigid. The lens is attached to the carriage. Bushings are attached to the carriage. The linear bearing consists of three rods which pass through the bushings in the carriage. The rods are attached to a stationary frame. A screw drive nut is also attached to the carriage. The drive screw passes through the nut so that when the drive screw rotates, the nut moves horizontally and thus propels the carriage. The servomotor is attached to the frame. The shaft of the servomotor is attached to the drive screw. The shaft of the servomotor is also attached to the positional encoder. Various means of propulsion could be used to move the camera. Various linear bearings are possible.
An electronic controller such as a computer activates the linear actuator to move the carriage so the camera line-of-sight is above the pick and place output destination. The camera inspects the device after it is placed in its destination. If the camera is above a tray and the device passes, then the camera is moved to the next area of the tray to be inspected. If the device fails, then the carriage waits as the pick and place removes the bad device and puts another device in its place. The inspection and replacement sequence is repeated until a device passes. If the output destination is tape, then the carriage moves so that the camera can image a device just slightly downstream of the placement location. After the image(s) are taken, the tape can index forward. If the device passes inspection, then operation proceeds as normal. If the device fails, then the pick and place replaces the device and the carriage moves the camera to the location of the replaced device and inspects the device. If the device fails, then the replacement and inspection repeats. If the device passes, then the carriage may move back to its previous location for inspection.
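As an illustration only, the following sketch captures the inspect-and-replace sequence for a tray destination; camera, pick_and_place, and their methods are hypothetical interfaces standing in for the electronic controller, and the retry limit is an added safety assumption (the description repeats until a device passes).

```python
def inspect_tray_site(site, camera, pick_and_place, max_attempts=10):
    """Repeat inspection and replacement until a device passes.

    camera.move_to, camera.inspect, pick_and_place.remove_device, and
    pick_and_place.place_device are assumed controller operations.
    """
    camera.move_to(site)                      # line-of-sight over the site
    for _ in range(max_attempts):
        if camera.inspect(site):              # device passed inspection
            return True
        pick_and_place.remove_device(site)    # carriage waits while the bad
        pick_and_place.place_device(site)     # device is swapped for another
    return False                              # retry limit reached (assumed)
```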
Calibrating the machine can be accomplished as follows. The carriage 4 first moves the camera 1 to a calibration target 13. The camera then calibrates its pixel size and orientation. Machine vision software identifies the predetermined feature in the center of the target and determines its x location in the image (x1). The current output of the positional encoder is noted (xCameraEncoder1), and the x location of the target center feature relative to the encoder is computed as xCameraDatum=(xCameraEncoder1)+(x1). Next the carriage moves the camera to a predetermined feature on a tray stacker 10. Using the positional encoder 5, the machine knows roughly where to move the carriage to find this feature. The feature can be simply the edge of a rail on the tray stacker, a drilled hole, or some other feature. It could also be a first pocket in the tray. The camera 1 then takes a picture, and machine vision software identifies the feature and determines its x location in the image (x2). This location information is coupled with the current positional encoder information (xCameraEncoder2) to map the module's location relative to calibration target 13 as xTrayModule1=(xCameraEncoder2)+(x2)−xCameraDatum. The carriage is then moved to the other tray stackers to determine their locations in the same fashion. The locations of all of the machine modules, such as a vision system 11, an electrical tester, a taper module 12, and any other modules, can be determined in the same way.
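A minimal sketch of the calibration arithmetic just described, assuming image offsets and encoder readings have already been converted to common linear units via the pixel-size calibration:

```python
def camera_datum(x_camera_encoder_1: float, x1: float) -> float:
    """xCameraDatum = xCameraEncoder1 + x1: the target center
    feature's location relative to the encoder."""
    return x_camera_encoder_1 + x1

def module_location(x_camera_encoder_2: float, x2: float,
                    x_camera_datum: float) -> float:
    """xTrayModule1 = xCameraEncoder2 + x2 - xCameraDatum: a module
    feature's location relative to calibration target 13."""
    return x_camera_encoder_2 + x2 - x_camera_datum

# Example with illustrative numbers (mm): encoder at 100.0 with the
# target feature 2.5 from the image datum -> datum at 102.5; encoder
# at 600.0 with the module feature at 1.0 -> module at 498.5 relative
# to the calibration target.
datum = camera_datum(100.0, 2.5)            # 102.5
print(module_location(600.0, 1.0, datum))   # 498.5
```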
Additionally, each pick and place nozzle can be calibrated relative to the overhead camera's positional encoder. Pick and place nozzle 16 is supported by arm 17, which is attached to encoder 18, which reads rule marks on stationary rule 19. The camera or nozzle can be moved so that the nozzle is in the camera's field of view. A feature on the top of the nozzle can be identified and its location in the image measured (x3). The current camera encoder value is noted (xCameraEncoder3). The current nozzle location relative to the calibration target can be calculated as follows:
xCalibrationNozzleLocation=(xCameraEncoder3)+(x3)−xCameraDatum.
The nozzle has its own encoder 18, whose measurement axis is parallel to the camera's direction of travel. If the current reading on the nozzle encoder is Ψ1, then at any future time we can determine the nozzle's offset from the calibration target as:
Nozzle current X location=Ψ1−xCalibrationNozzleLocation.
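The nozzle relations above can be sketched as follows; psi stands for a nozzle encoder reading (Ψ in the text), and, as an assumption, all quantities share the same linear units:

```python
def calibration_nozzle_location(x_camera_encoder_3: float, x3: float,
                                x_camera_datum: float) -> float:
    """xCalibrationNozzleLocation = xCameraEncoder3 + x3 - xCameraDatum"""
    return x_camera_encoder_3 + x3 - x_camera_datum

def nozzle_offset_from_target(psi_1: float,
                              x_calibration_nozzle_location: float) -> float:
    """Nozzle current X location = Psi1 - xCalibrationNozzleLocation"""
    return psi_1 - x_calibration_nozzle_location
```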
We can also determine the location of any module relative to the nozzle's encoder. Viewing the nozzle's location from the traveling camera may not be ideal: the feature on the top of the nozzle might not accurately represent the center of the nozzle, the traveling camera's optical axis may not be coincident with the vertical stroke of the nozzle, or the nozzle may be out of focus because it lies in a different plane than the modules. Thus, another method to correlate the nozzle's location is to employ a stationary through-beam optical sensor. Emitter 14 is positioned opposite receiver 15 and in the same plane as the other modules. The camera is moved over the sensor and the sensor's location in the image is measured (x4). The location of the sensor barrel may be determined, or that of another feature that correlates to the sensor's location. This location information is coupled with the current positional encoder information (xCameraEncoder4) to map the sensor's location relative to calibration target 13 as:
xSensor=(xCameraEncoder4)+x4−xCameraDatum.
Next, nozzle 16 can be moved through the beam to trigger the sensor. As the nozzle moves, the nozzle encoder values are noted when the beam is interrupted and when it is restored. Averaging these values provides the center value for the nozzle (Ψ4). Consequently, at any future time we can calculate the nozzle's offset from the calibration target as:
Nozzle current X location=Ψ4−xSensor.
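A minimal sketch of the through-beam variant: xSensor maps the sensor into the calibration-target frame, and the nozzle center is the average of the encoder readings captured when the beam is interrupted and when it is restored (same unit assumptions as above):

```python
def sensor_location(x_camera_encoder_4: float, x4: float,
                    x_camera_datum: float) -> float:
    """xSensor = xCameraEncoder4 + x4 - xCameraDatum"""
    return x_camera_encoder_4 + x4 - x_camera_datum

def nozzle_center(psi_beam_broken: float, psi_beam_restored: float) -> float:
    """Center value for the nozzle (Psi4): average of the nozzle
    encoder readings at beam interruption and beam restoration."""
    return (psi_beam_broken + psi_beam_restored) / 2.0

def nozzle_offset_via_sensor(psi_4: float, x_sensor: float) -> float:
    """Nozzle current X location = Psi4 - xSensor"""
    return psi_4 - x_sensor
```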
In this way the encoder positions of the nozzle can be related to the locations of the calibration target and modules on the machine.
Additional automated calibration is possible. For calibrating the taper position, for example, a tape pocket can be found with a common machine vision algorithm. If the taper has its own encoder, then this data can be linked together. Alternatively, the position of a sensor on the taper, such as an optical through-beam sensor that senses the leading edge of a tape pocket, or a feature that corresponds to the sensor's location, such as a scribe line on a bracket, can be used to calibrate the taper module and the tape pocket location with the rest of the machine.
Other additional automated calibration is also possible. For example, the y position of a tray in a tray stacker can be determined by the same method described above, applied in the orthogonal direction. This y position can be compared to the y position of the nozzles in the images from the traveling camera. The traveling camera can locate a tray pocket or a device in a tray pocket and use this positional information to place a tray in the correct y location to be serviced by the pick and place nozzle.
With respect to the above description then, it is to be realized that the optimum dimensional relationships for the parts of the invention, to include variations in size, materials, shape, form, function and manner of operation, assembly and use, are deemed readily apparent and obvious to one skilled in the art, and all equivalent relationships to those illustrated in the drawings and described in the specification are intended to be encompassed by the present invention.
This application claims the benefit of provisional patent application Ser. No. 60/818,050 filed Jun. 30, 2006 by the present inventors.