The disclosure generally relates to a non-contact measurement method and system, and more specifically, to a method and system for determining positional characteristics related to a vehicle, such as wheel alignment parameters.
Position determination systems, such as machine vision measuring systems, are used in many applications. For example, wheels of motor vehicles may be aligned using a computer-aided, three-dimensional machine vision alignment apparatus and a related alignment method. Examples of 3D alignment are described in U.S. Pat. No. 5,724,743, titled “Method and apparatus for determining the alignment of motor vehicle wheels,” and U.S. Pat. No. 5,535,522, titled “Method and apparatus for determining the alignment of motor vehicle wheels,” both of which are commonly assigned to the assignee of the present disclosure and incorporated herein by reference in their entireties.
To determine the alignment status of the vehicle wheels, some aligners use directional sensors, such as cameras, to view alignment targets affixed to the wheels and determine the positions of the alignment targets relative to the alignment cameras. These types of aligners require one or more targets with known target patterns to be affixed to the subject under test in a known positional relationship. The alignment cameras capture images of the targets. From these images, the spatial locations of the wheels can be determined, as well as when the spatial locations of the vehicle or wheels are altered. Characteristics related to the vehicle body or wheels are then determined based on the captured images of the targets.
Although such alignment systems provide satisfactory measurement results, the need to attach targets to the subject under test introduces an additional workload for technicians and increases system cost. In addition, different attachment devices are needed to attach targets to different vehicle models, which further increases the cost of the systems and the complexity of inventory management.
Therefore, there is a need for a non-contact vehicle service system for obtaining characteristics related to a vehicle without using targets. There is another need to apply the same non-contact vehicle service system to different measurement purposes, such as alignment measurements or collision measurements.
This disclosure describes embodiments of a non-contact measurement system for determining spatial characteristics of objects, such as wheels of a vehicle.
An exemplary measurement system includes at least one image capturing device configured to produce at least two images of an object from different viewing angles, and a data processing system configured to determine spatial characteristics of the object based on data derived from the at least two images.
The at least one image capturing device may include a plurality of image capturing devices. Each of the plurality of image capturing devices corresponds to a wheel of a vehicle, and is configured to produce at least two images of the wheel from different viewing angles. The exemplary system further includes a calibration arrangement for producing information representative of relative positional relationships between the plurality of image capturing devices. The data processing system is configured to determine spatial characteristics of wheels of the vehicle based on the images produced by the plurality of image capturing devices, and the information representative of relative positional relationships between the plurality of image capturing devices.
In one aspect, the calibration arrangement includes a combination of at least one calibration camera and at least one calibration target. Each of the at least one calibration camera and the at least one calibration target is attached to one of the plurality of image capturing devices in a known positional relationship. Each of the at least one calibration camera is configured to generate an image of one of the at least one calibration target. In another aspect, the calibration arrangement includes a calibration target attached to each of the plurality of image capturing devices being viewed by a common calibration camera.
According to one embodiment, the information representative of relative positional relationships between the plurality of image capturing devices is generated based on images of a plurality of calibration targets. The positional relationship between the plurality of calibration targets is known. An image of each of the plurality of calibration targets is captured by one of the at least one image capturing devices or at least one calibration camera. Each of the at least one calibration camera is attached to one of the at least one image capturing devices in a known positional relationship.
According to another example of this disclosure, the measurement system further includes a platform for supporting the vehicle at a predetermined location on the platform. A plurality of docking stations are disposed at predetermined locations relative to the platform, and the positional relationships between the plurality of docking stations are known. Each of the plurality of image capturing devices is configured to be installed on one of the plurality of docking stations for capturing images of a wheel of the vehicle, and the data processing system is configured to determine spatial characteristics of the wheels of the vehicle based on the positional relationships between the plurality of docking stations and the images produced by the plurality of image capturing devices.
An exemplary measurement method of this disclosure obtains images of at least one wheel of a vehicle from two different angles, and determines spatial characteristics of the at least one wheel of the vehicle based on data related to the obtained images. In one embodiment, the exemplary method provides a plurality of image capturing devices. Each of the plurality of image capturing devices corresponds to one of the at least one wheel of the vehicle, and is configured to produce images of the corresponding wheel from two different angles. Calibration information representative of a relationship between the plurality of image capturing devices is produced. The spatial characteristics of the at least one wheel of the vehicle are determined based on the images produced by the plurality of image capturing devices and the information representative of relative positional relationships between the image capturing devices.
In one aspect, the calibration information is generated by calibration means including a combination of at least one calibration camera and at least one calibration target. Each of the at least one calibration camera and the at least one calibration target is attached to one of the plurality of image capturing devices in a known positional relationship. Each of the at least one calibration camera is configured to generate an image of one of the at least one calibration target.
In another aspect, the calibration information is generated by calibration means including a calibration target attached to each respective image capturing device. Each calibration target is viewed by a common calibration camera.
In accordance with an embodiment of this disclosure, the calibration information is generated based on images of a plurality of calibration targets. The positional relationship between the calibration targets is known. An image of each of the plurality of calibration targets is captured by one of the at least one image capturing devices or at least one calibration camera. Each of the at least one calibration camera is attached to one of the at least one image capturing devices in a known positional relationship.
According to another embodiment, the vehicle is supported by a platform at a predetermined location on the platform. The calibration information is generated by calibration means including a plurality of docking stations disposed at predetermined locations relative to the platform. The positional relationships between the plurality of docking stations are known. Each respective image capturing device is configured to be installed on one of the plurality of docking stations for capturing images of a corresponding wheel of the vehicle. The spatial characteristics of the at least one wheel of the vehicle are determined based on the positional relationships between the docking stations and the images produced by the image capturing devices.
Additional advantages of the present disclosure will become readily apparent to those skilled in this art from the following detailed description, wherein only the illustrative embodiments are shown and described, simply by way of illustration of the best mode contemplated. As will be realized, the disclosure is capable of other and different embodiments, and its several details are capable of modifications in various obvious respects, all without departing from the disclosure. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
The present disclosure is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to similar elements.
In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent, however, to one skilled in the art that the present disclosure may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present disclosure.
One technique for determining relative positions between the cameras is disclosed in U.S. Pat. No. 5,809,658, entitled “Method and Apparatus for Calibrating Cameras Used in the Alignment of Motor Vehicle Wheels,” issued to Jackson et al. on Sep. 22, 1998, which is incorporated herein by reference in its entirety. Additional devices, such as a calibration camera and a calibration target, can be attached to cameras 4 and 5, respectively, to provide real-time calibration of the relative position between cameras 4 and 5. Exemplary approaches for determining the relative position between cameras 4 and 5 and for real-time calibration are described in U.S. patent application Ser. No. 09/576,442, filed May 20, 2000 and titled “SELF-CALIBRATING, MULTI-CAMERA MACHINE VISION MEASURING SYSTEM,” the disclosure of which is incorporated herein by reference in its entirety.
Images captured by cameras 4 and 5 are sent to a data processing system, such as a computer (not shown), for further processing of the captured images in order to determine alignment parameters of the wheel under test based on the captured images. In one embodiment, the exemplary non-contact measurement system calculates spatial parameters of wheel 1 and tire 2 based on images of a selected portion on wheel 1 and tire 2, such as interface 3. If desired, other portions on wheel 1 and tire 2 can be selected and used, such as nuts 17.
Steps and mathematical computations used in calculating wheel parameters based on the images captured by cameras 4 and 5 are now described. Let the curve described by interface 3 be called the rim circle, and the plane in which this circle lies be called the rim plane. The data processing system sets up a three-dimensional (3D) coordinate system to describe the spatial characteristics of wheel 1 and tire 2. The rim plane may be defined by a point and three orthogonal unit vectors: the point and two of the unit vectors lie in the plane, and the third unit vector is normal to the plane. Let this point be the center of the rim circle. The point is described and defined by a vector from the origin of a Cartesian coordinate system, and the three unit vectors are described and defined relative to this system. Due to the symmetry of a circle, only the center and the normal unit vector are uniquely defined. The other two unit vectors, which are orthogonal to each other and to the normal and lie in the plane, can be rotated about the normal by an arbitrary angle without changing the rim circle center or normal, unless an additional feature in the plane can be identified to define the orientation of these two vectors.
Let this Cartesian coordinate system be called the Camera Coordinate System (CCS).
The focal point of the camera is the origin of the CCS, and the directions of the camera's rows and columns of pixels define the X and Y axes, respectively. The camera image plane is normal to the Z axis, at a distance from the origin called the focal length. Since the rim circle now lies in the rim plane, the only additional parameter needed to define the rim circle is its radius.
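By way of illustration and not limitation, the CCS definition above implies the familiar pinhole projection: a point (X, Y, Z) in the CCS projects to image-plane coordinates (F·X/Z, F·Y/Z). A minimal Python sketch, with illustrative names and values that are not part of the disclosure, is:

```python
# Minimal sketch of the CCS pinhole projection (illustrative names/values only).
def project_to_image_plane(point_ccs, focal_length):
    x, y, z = point_ccs
    if z <= 0:
        raise ValueError("point must lie in front of the camera")
    # A point at depth z projects along the ray through the focal point
    # onto the image plane located at distance F from the origin.
    return (focal_length * x / z, focal_length * y / z)

# Example: a point 1 m in front of a camera with a 6 mm focal length.
print(project_to_image_plane((0.10, 0.05, 1.0), 0.006))
```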
For any position and orientation of the rim circle relative to a CCS, and in a camera's field of view, the rim circle projects to a curve on the camera image plane. Using edge detection means well known in the optical imaging field, interface 3 will be defined as curves 8 and 9 (shown in the accompanying drawings).
As described earlier, cameras 4 and 5 are in a known positional relationship relative to each other.
Spatial characteristics of the 3D rim circle are determined based on two-dimensional (2D) curves in camera image planes of cameras 4, 5 by using techniques described below. Since the relative position and orientation of cameras 4 and 5 are known, if the position and orientation of the rim plane and circle are defined relative to one of the cameras' CCS, the position and orientation relative to the other camera's CCS is also defined or known. If the position and orientation of the rim plane and circle are so defined relative to the CCS of a selected one of cameras 4 and 5, then the curve of the rim circle may be projected onto the selected camera image plane, and compared to the measured curve in that camera image plane obtained from the edge detection technique. Changing the position and orientation of the rim plane and circle changes the curves projected onto the camera image planes, and hence changes the comparison with the measured curves.
The position and orientation of the rim plane and circle that generate projected curves on the camera image planes that best fit the measured curves is defined as the optimal solution for the 3D rim plane and circle, given the images and measured data.
The best fit of projected to measured curves is defined as follows:
The measured curves are defined by a series of points in the camera image plane by the edge detection process. For each such point on a measured curve, the closest point on the projected curve is determined. The sum of the squares of the distances from each measured point to the corresponding closest point on the projected curve is taken as a figure of merit. The best fit is defined as that position and orientation of the rim circle and plane that minimizes the sum of both sums of squares from both cameras. The fitting process adjusts the position and orientation of the rim plane and circle to minimize that sum.
To find the closest point on the projected curve to a measured point, both in the camera image plane, an exemplary mathematical approach as described below is used:
The contribution to the figure of merit from this camera is the sum of the squares of the distances from all measured points in the camera image plane to the corresponding closest points on the projected curve, as found by steps (1-3) above.
Detailed mathematical computations are now described. Define rp.c as the vector from the CCS origin to the center of the rim circle, rp.n as the unit vector normal to the rim plane, rr as the radius of the rim circle, F as the focal length of the camera, pm = (pm.x, pm.y) as a measured point on the camera image plane, and u = (pm.x, pm.y, F) as the vector from the CCS origin to that measured point. The rim plane is defined relative to the CCS by its center rp.c and its normal rp.n (Eq. 1).
Any point in the rim plane is defined by a vector r from the origin of the CCS:
r = rp.c + q (Eq. 2)
where q is a vector lying in the rim plane, from the rim plane center rp.c to r.
Since r is parallel to u:
r = k*u = rp.c + q (Eq. 3)
where k is a scalar value.
q is normal to the rim plane normal rp.n, since it lies in the rim plane, so:
q*rp.n = 0 (Eq. 4)
Taking the dot product of Eq. 3 with rp.n:
r*rp.n = k*(u*rp.n) = (rp.c*rp.n) (Eq. 5)
k = (rp.c*rp.n)/(u*rp.n) (Eq. 6)
From Eq. 3 and Eq. 6:
q = k*u − rp.c (Eq. 7)
Given the current parameters of the rim plane (rp.c and rp.n) and u (pm.x, pm.y, F), Eq. 6 defines k, and Eq. 7 defines q. The magnitude of q is the square root of q*q:
Q = √(q*q) (Eq. 8)
The closest point on the rim circle is defined by a vector from the center of the rim circle (and plane) parallel to q, but having the magnitude of the radius of the rim circle:
q′ = (rr/Q)*q (Eq. 9)
r′ = rp.c + q′ (Eq. 10)
Project this point onto the camera image plane:
k′*u′ = rp.c + q′ (Eq. 11)
Taking the Z-component in the CCS:
k′ = (rp.c.z + q′.z)/u′.z = (rp.c.z + q′.z)/F (Eq. 12)
u′.x = (rp.c.x + q′.x)/k′ = F*(rp.c.x + q′.x)/(rp.c.z + q′.z) (Eq. 13x)
u′.y = (rp.c.y + q′.y)/k′ = F*(rp.c.y + q′.y)/(rp.c.z + q′.z) (Eq. 13y)
The measured point pm should have been the projection onto the camera image plane of a point on the rim circle, so the difference between (pm.x, pm.y) and (u′.x, u′.y) on the camera image plane is a measure of the “goodness of fit” of the rim parameters (rp.c and rp.n) to the measurements. Summing the squares of these differences over all measured points gives a goodness-of-fit value:
Φ = Σ [(u′.xi − pm.xi)^2 + (u′.yi − pm.yi)^2], i = 1, …, N (Eq. 14)
where N is the number of measured points. A “least-squares fit” procedure, well known in the art, is used to adjust rp.c and rp.n, the defining parameters of the rim circle, to minimize Φ, given the measured data set {pm.xi, pm.yi} and the rim circle radius rr.
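By way of illustration and not limitation, the following Python sketch implements Eq. 6 through Eq. 14 for a single camera and adjusts rp.c and rp.n with a generic minimizer, assuming NumPy and SciPy are available; the function names, the flat six-element parameter vector, and the choice of optimizer are illustrative assumptions rather than the disclosed implementation.

```python
import numpy as np
from scipy.optimize import minimize

def phi_one_camera(rp_c, rp_n, rr, F, measured_pts):
    """Figure of merit of Eq. 14 for one camera.

    rp_c: rim circle center in the CCS; rp_n: rim plane normal;
    rr: rim circle radius; F: focal length;
    measured_pts: (N, 2) array of edge points (pm.x, pm.y) on the image plane.
    """
    n = rp_n / np.linalg.norm(rp_n)
    total = 0.0
    for pm_x, pm_y in measured_pts:
        u = np.array([pm_x, pm_y, F])              # ray toward the measured point
        k = np.dot(rp_c, n) / np.dot(u, n)         # Eq. 6
        q = k * u - rp_c                           # Eq. 7: in-plane vector
        q_prime = (rr / np.linalg.norm(q)) * q     # Eq. 9: closest point on rim circle
        r_prime = rp_c + q_prime                   # Eq. 10
        u_x = F * r_prime[0] / r_prime[2]          # Eq. 13x
        u_y = F * r_prime[1] / r_prime[2]          # Eq. 13y
        total += (u_x - pm_x) ** 2 + (u_y - pm_y) ** 2
    return total                                   # Eq. 14

def fit_rim(measured_pts, rr, F, initial_guess):
    """Least-squares adjustment of rp.c and rp.n (packed as a 6-vector)."""
    cost = lambda p: phi_one_camera(p[:3], p[3:], rr, F, measured_pts)
    result = minimize(cost, initial_guess, method="Nelder-Mead")
    rp_c, rp_n = result.x[:3], result.x[3:]
    return rp_c, rp_n / np.linalg.norm(rp_n)
```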
In a related embodiment, two cameras whose relative position is known by a calibration procedure can image the wheel and rim and the data sets from these two cameras can be used in the above calculation. In this case:
Φ = Φ0 + Φ1 (Eq. 15)
where Φ0 is defined as in Eq. 14, and Φ1 is similarly defined for the second camera, with the following difference: the rim plane parameters rp.c and rp.n used for the second camera are transformed from the CCS of the first camera into the CCS of the second camera.
The CCS of the second camera is defined (by a calibration procedure) by a vector from the center of the first camera CCS to the center of the second camera CCS (c1), and three orthogonal unit vectors (u01, u11, u21). Then:
rp.01 = (rp − c1)*u01 (Eq. 16.0)
rp.11 = (rp − c1)*u11 (Eq. 16.1)
rp.21 = (rp − c1)*u21 (Eq. 16.2)
(rp.01, rp.11, rp.21) are the equivalent x, y, z components of rp.c and rp.n to be used for the second camera in Eq. 1 through Eq. 14.
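By way of illustration, the transformation of Eq. 16 can be sketched in Python as follows; the function name and the packaging of the calibration data are illustrative assumptions.

```python
import numpy as np

def to_second_camera_ccs(rp, c1, u01, u11, u21):
    """Eq. 16: components, in the second camera's CCS, of a vector rp given in
    the first camera's CCS. c1 and the unit axes u01, u11, u21 are expressed
    in the first camera's CCS and come from the calibration procedure."""
    d = np.asarray(rp, dtype=float) - np.asarray(c1, dtype=float)
    return np.array([np.dot(d, u01), np.dot(d, u11), np.dot(d, u21)])
```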
As illustrated above, the rim plane and circle are now determined based on two curves, comprised of sets of measured points, in camera image planes, and thus spatial characteristics of the rim plane and circle are now known. As the rim plane and circle are part of the wheel assembly (including wheel 1 and tire 2), spatial characteristics of the wheel assembly can be determined based on the spatial characteristics of the rim plane and circle.
One application of the exemplary non-contact measurement system is to determine wheel alignment parameters of a vehicle, such as toe, camber, caster, etc.
A calibration process is performed to determine relative positions and angles between measurement pods 14. During the calibration process, an object with known geometrical characteristics is provided to be viewed by each measurement pod 14, such that each measurement pod 14 generates an image representing the relative position between the object and that measurement pod. For example, a solid 55 with known geometrical characteristics, shown in the accompanying drawings, may be used as the known object.
In addition to solid 55, other objects with known geometrical characteristics may also be used for the calibration process.
The computer derives the spatial characteristics of each wheel 54 based on the respective captured images using the approaches discussed in connection with embodiment 1. The computer creates and stores profiles for each wheel, including the tire interface, rings, edges, rotational axis, the center of wheel 54, etc., based on the captured images. As the relative positions between the sets of cameras and measurement pods are known, the computer determines the relative spatial relationships between the wheels based on the known relative positions between the sets of cameras/measurement pods and the spatial characteristics of each wheel. Wheel locations and angles are determined based on images captured by the measurement pods, and are translated to a master coordinate system, such as a vehicle coordinate system. Wheel alignment parameters are then determined based on the respective spatial characteristics of each wheel and/or the relative spatial relationships between the wheels.
For instance, after wheel locations and angles are determined and translated to a vehicle coordinate system, the computer creates a two-dimensional diagram of the wheels by projecting the wheels onto a projection plane parallel to the surface on which the vehicle rests. Axles of the vehicle are determined by drawing a line linking the wheel centers on opposite sides of the vehicle. The thrust line of the vehicle is determined by linking the middle points of the axles. Rear wheel toe angles are determined based on the wheel planes projected onto the projection plane.
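By way of illustration and not limitation, the following Python sketch derives a projected toe angle under assumed conventions (a vehicle coordinate system with X to the right, Y forward, and Z up, and wheel orientation represented by the rim-plane normal, i.e., the wheel spin axis); the names and numerical values are illustrative, not taken from the disclosure.

```python
import numpy as np

def toe_angle_deg(wheel_spin_axis, thrust_dir):
    """Signed toe angle (degrees) of one wheel relative to the thrust direction,
    measured after projection onto the ground (X-Y) plane."""
    n = np.array(wheel_spin_axis, dtype=float)[:2]
    t = np.array(thrust_dir, dtype=float)[:2]
    n /= np.linalg.norm(n)
    t /= np.linalg.norm(t)
    heading = np.array([-n[1], n[0]])   # wheel-plane heading, perpendicular to the spin axis
    cross = t[0] * heading[1] - t[1] * heading[0]
    return np.degrees(np.arctan2(cross, np.dot(t, heading)))

# Illustrative values: axle midpoints define the thrust direction,
# and one rear wheel's spin axis points almost along +X.
front_mid = np.array([0.00, 2.70, 0.0])
rear_mid = np.array([0.02, 0.00, 0.0])
thrust = front_mid - rear_mid
print(toe_angle_deg([0.999, 0.020, 0.0], thrust))
```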
Each measurement pod further includes calibration devices for determining relative positions between the measurement modules. For instance, measurement pod 14A includes a calibration target 58 and a calibration camera 57. Calibration camera 57 is used to view a calibration target 58 of another measurement pod 14B, and calibration target 58 on measurement pod 14A is viewed by the calibration camera 57 of another measurement pod 14D. Calibration target 58 and calibration camera 57 are pre-calibrated to the measuring cameras in their respective measurement pods. In other words, the relative positions between the calibration camera and target and the measurement cameras in the same measurement pod are known, and the corresponding data can be accessed by the computer. Since the relative positions between the measurement pods are determined by using the calibration targets and calibration cameras, and the relative positions between the measurement cameras and the calibration target and camera in each measurement pod are known, the relative spatial relationships between all of the cameras in the system can be determined. Wheel locations and angles are determined based on images captured by the measurement pods using the techniques described in connection with embodiment 1, and are translated to a master pod coordinate system, and further to a vehicle coordinate system.
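The chaining of these known relative positions can be illustrated with 4x4 homogeneous transforms, as in the Python sketch below; the matrix representation and helper names are illustrative assumptions, the disclosure requiring only that the pre-calibrated and measured relative poses be combined.

```python
import numpy as np

def make_pose(rotation_3x3, translation_3):
    """Pack a rotation and translation into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = rotation_3x3
    T[:3, 3] = translation_3
    return T

# Pose of pod 14B's measurement cameras expressed in pod 14A's camera frame,
# composed from three known poses:
#   T_a_cam__a_calibcam : pod 14A cameras -> calibration camera 57 (pre-calibrated)
#   T_a_calibcam__b_tgt : calibration camera 57 -> calibration target 58 on pod 14B
#                         (measured from the live calibration image)
#   T_b_tgt__b_cam      : target 58 on pod 14B -> pod 14B cameras (pre-calibrated)
def pod_b_in_pod_a(T_a_cam__a_calibcam, T_a_calibcam__b_tgt, T_b_tgt__b_cam):
    return T_a_cam__a_calibcam @ T_a_calibcam__b_tgt @ T_b_tgt__b_cam
```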
According to one embodiment, calibration target 58 and a calibration camera 57 of each measurement pod 14 are arranged in such a way that the vehicle under test does not obstruct a line-of-sight view of a calibration target by the corresponding calibration camera, such that dynamic calibrations can be performed even during the measurement process.
The computer determines the relative locations and angles between measurement pods 14 based on images of calibration target 60 of each measurement pod 14 that are captured by common calibration camera 59. Since the relative positions between measurement pods are now known, and the relative positions between the cameras and the calibration target 60 in each measurement pod 14 are predetermined, the relative spatial relationships between the cameras in the system can be derived. Wheel locations and angles are determined based on images captured by the measurement pods, and are translated to a master pod coordinate system, and further to a vehicle coordinate system.
In another embodiment, calibration target 60 in each measurement pod is substituted with a calibration camera, and the common calibration camera 59 is substituted with a common calibration target. Again, the calibration camera and measurement cameras of each measurement pod 14 are pre-calibrated. Thus, the relative positional relationships between measurement pods or cameras can be determined based on images of the common calibration target captured by the calibration cameras. Spatial characteristics of the wheels are determined using the techniques described in connection with embodiment 1.
Each measurement pod 14 includes at least one imaging device for producing at least two images of a wheel. For example, each measurement pod 14 includes two cameras 4, 5 arranged in a known positional relationship relative to each other. Similar to embodiments described above, system 800 further includes a data processing system, such as a computer (not shown), that receives, or has access to, images captured by the measurement pods 14. The positional relationships between the cameras 4, 5 and base 63 are established in a calibration process.
Locations of docking stations 62 are prearranged to accommodate vehicles of different dimensions, such that measurement pods 14 will be within an acceptable range of the vehicle wheels after installation. For example, a short-wheelbase vehicle might use docking stations 62A, 62B, 62C, and 62D, while a longer vehicle might use docking stations 62A, 62B, 62E, and 62F. By installing measurement pods 14 on predetermined docking stations 62, the relative positions between measurement pods 14 are known. The computer determines wheel alignment parameters or other types of parameters related to a vehicle under test using the methods and approaches described in the previous embodiments.
In embodiments 2-5 described above, although four measurement pods are shown for performing non-contact measurements on a vehicle having four wheels (one measurement pod for each wheel), these systems can perform the same functions using fewer measurement pods.
Another application of the exemplary non-contact measurement system is determining whether a wheel or vehicle body has an appropriate shape or profile. The computer stores data related to a prescribed shape or profile of a wheel or vehicle body. After the non-contact measurement system obtains a profile of a wheel or vehicle body under measurement, the measured profile is compared with the prescribed shape/profile to determine whether the shape complies with specifications. If the difference between the prescribed shape and the measured profile of the wheel or vehicle body under test exceeds a predetermined threshold, the computer determines that the wheel or vehicle body is deformed.
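A minimal Python sketch of this comparison step is given below, assuming the measured profile and the stored specification have already been brought into a common coordinate system and sampled at corresponding points; the 2 mm threshold and the function name are illustrative assumptions.

```python
import numpy as np

def check_deformation(measured_points, specified_points, threshold_m=0.002):
    """Return (is_deformed, per-point deviations) by comparing corresponding
    measured and specified 3D points against a distance threshold."""
    measured = np.asarray(measured_points, dtype=float)
    specified = np.asarray(specified_points, dtype=float)
    deviations = np.linalg.norm(measured - specified, axis=1)
    return bool(np.any(deviations > threshold_m)), deviations
```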
Images captured by cameras 18 and 19 are sent to a data processing system, such as a computer (not shown), for further processing. Representative images obtained by cameras 18 and 19 are shown in the accompanying drawings.
The computer also stores, or has access to, data related to specifications for the locations of many pre-identified points on the vehicle, such as points 20, 21, 22, 23, 26, and 27. Deviation of the spatial locations of the measured points from the specification is an indication of damage to the vehicle body or structure. A display of the computer may present prompts to a user regarding the existence of deformation, and provide guidance on correcting such distortion or deformation using methods well known in the collision repair field.
Steps and mathematical computations performed by the computer to determine the spatial locations of the points based on images captured by cameras 18, 19 are now described.
In a Camera Coordinate System (CCS), the origin lies at the focal point of the camera. Let a ray be defined by a vector P from the origin to a point on the ray, and a unit vector U in the direction of the ray. Then the vector from the origin to any point on the ray is given by:
R = P + (t*U) (22)
where t is a scalar variable. The coordinates of this point are the components of R in the CCS: Rx, Ry and Rz.
If there are two cameras, and thus two Camera Coordinate Systems are available, let CCS0 be the CCS of camera 18 and CCS1 be the CCS of camera 19. As described above, the relative position between cameras 18 and 19 is known. Thus, let C1 be the vector from the origin of CCS0 to the origin of CCS1, and U1X, U1Y and U1Z be the unit vectors of CCS1 defined relative to CCS0. Let R0 be a point on the image plane of camera 18, at pixel coordinates x0,y0. The coordinates of this point are (x0,y0,F0), where F0 is the focal length of the master camera. R0 is also a vector from the origin of CCS0 to this point. Let U0 be a unit vector in the direction of R0. Then:
U0 = R0/|R0| (23)
Let this be the unit vector of the path connecting point 23 and camera 18. For this path, P=0. Let R1 be a point on the second camera image plane, at pixel coordinates x1,y1. The coordinates of this point, in CCS1, are (x1,y1,F1), where F1 is the focal length of the second camera. R1 is also a vector from the origin of CCS1 to this point. Let U1 be a unit vector in CCS1 in the direction of R1. Then, in CCS0:
R1 = C1 + (x1*U1X) + (y1*U1Y) + (F1*U1Z) (24)
U1 = (R1 − C1)/|R1 − C1| (25)
Let U1 be the unit vector of a second path connecting point 23 and camera 19. In CCS0, P for the second path is C1. Coordinates of points on the first path are:
PR0 = t0*U0 (26)
Coordinates of points on the second path are:
PR1 = C1 + (t1*U1) (27)
The points of closest approach of these two paths are defined by:
t0 = ((C1*U0) − (U0*U1)(C1*U1))/D (28a)
t1 = ((C1*U0)(U0*U1) − (C1*U1))/D (28b)
D = 1 − (U0*U1)^2 (28c)
With PR0 and PR1 defined by equations 26 and 27, and with t0 and t1 derived from equations 28a and 28b, the distance between these points is:
d = |PR1 − PR0| (29)
and the point of intersection of the rays is defined as the midpoint:
PI = (PR1 + PR0)/2 (30)
Thus, using the approaches as described above, the computer determines spatial parameters of a point based on images captured by cameras 18 and 19.
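By way of illustration, the following Python sketch carries out equations 22 through 30 for one pair of matching pixels. The camera-to-camera calibration supplies C1 and the unit axes U1X, U1Y, and U1Z; the function name and argument layout are illustrative assumptions.

```python
import numpy as np

def triangulate(x0, y0, F0, x1, y1, F1, C1, U1X, U1Y, U1Z):
    """Closest-approach midpoint of the rays through matching pixels of
    cameras 18 (x0, y0) and 19 (x1, y1), all expressed in CCS0."""
    C1, U1X, U1Y, U1Z = (np.asarray(v, dtype=float) for v in (C1, U1X, U1Y, U1Z))
    R0 = np.array([x0, y0, F0], dtype=float)
    U0 = R0 / np.linalg.norm(R0)                                   # (23)
    R1 = C1 + x1 * U1X + y1 * U1Y + F1 * U1Z                       # (24)
    U1 = (R1 - C1) / np.linalg.norm(R1 - C1)                       # (25)
    D = 1.0 - np.dot(U0, U1) ** 2                                  # (28c)
    t0 = (np.dot(C1, U0) - np.dot(U0, U1) * np.dot(C1, U1)) / D    # (28a)
    t1 = (np.dot(C1, U0) * np.dot(U0, U1) - np.dot(C1, U1)) / D    # (28b)
    PR0 = t0 * U0                                                  # (26)
    PR1 = C1 + t1 * U1                                             # (27)
    gap = np.linalg.norm(PR1 - PR0)                                # (29) residual distance
    return (PR0 + PR1) / 2.0, gap                                  # (30) intersection point
```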
Laser 35 is aimed using a mirror 36 and a control device 37, controlled by the computer (not shown), so as to aim a ray of light 38 onto a region of interest on vehicle body 43, such as spot 39, which reflects a ray 40 into camera 34. The origin and orientation of ray 38 are known relative to the Camera Coordinate System (CCS) of camera 34 as ray 38 is moved under control of the computer.
By scanning the light around a point of interest, such as a known point 47, the point's position in the coordinate system of camera 34 is calculated. Likewise, by scanning the spot over the entire vehicle body 43, all features of interest may be mapped in the CCS of camera 34. The relative positions of the camera and the laser system, and its rotations, are calibrated by means common to the art of structured-light vision metrology. When datum points 45, 46, and 47 are identified and located in space, information related to the spatial parameters of the datum points is transposed into the vehicle's coordinate system (Vx, Vy, Vz). Other points of interest, such as point 44, may be expressed relative to the vehicle's coordinate system. The computer stores, or has access to, data related to specifications for the locations of many points on the vehicle. Deviation of the spatial locations of the measured points from the specification is an indication of damage to the vehicle body or structure. A display of the computer may present prompts to a user regarding the existence of deformation, and provide guidance on correcting such distortion or deformation using methods well known in the collision repair field.
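By way of illustration, one way to picture the transposition into the vehicle coordinate system is the Python sketch below, which builds an orthonormal (Vx, Vy, Vz) frame from three datum points such as points 45, 46, and 47 and re-expresses other measured points in that frame; the particular axis construction is an illustrative assumption, not the disclosed procedure.

```python
import numpy as np

def vehicle_frame(datum1, datum2, datum3):
    """Illustrative vehicle frame: datum1 is the origin, datum1->datum2 sets Vx,
    and datum3 fixes the Vx-Vy plane. Returns (origin, 3x3 rotation, rows Vx, Vy, Vz)."""
    d1, d2, d3 = (np.asarray(d, dtype=float) for d in (datum1, datum2, datum3))
    vx = (d2 - d1) / np.linalg.norm(d2 - d1)
    vz = np.cross(vx, d3 - d1)
    vz /= np.linalg.norm(vz)
    vy = np.cross(vz, vx)
    return d1, np.vstack([vx, vy, vz])

def to_vehicle_coords(point_ccs, origin, axes):
    """Express a point measured in the camera CCS in the vehicle coordinate system."""
    return axes @ (np.asarray(point_ccs, dtype=float) - origin)
```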
The detailed process and mathematical computations for determining spatial parameters of points of interest are now described. In the Camera Coordinate System (CCS), the origin lies at the focal point of camera 34. The Z axis is normal to the camera image plane, and the X and Y axes lie in the camera image plane. The focal length F of camera 34 is the normal distance from the focal point/origin to the camera image plane. The CCS coordinates of the center of the camera image plane are (0, 0, F).
Let a ray (a line in space) be defined by a vector P from the origin to a point on the ray, and a unit vector U in the direction of the ray. Then the vector from the origin to any point on the ray is given by:
R = P + (t*U) (1)
where t is a scalar variable. The coordinates of this point on the ray are the components of R in the CCS: Rx, Ry and Rz.
Two rays are considered: a first ray from the camera to a point of interest on the object, and a second ray from the light projector to the same point.
For the first ray, choose P as the origin of the CCS, so P=0, and let R0 be a point on the camera image plane, at pixel coordinates x0,y0. The coordinates of this point are (x0,y0,F0), where F0 is the focal length of the camera. R0 is also a vector from the origin of the CCS to this point. Let U0 be a unit vector in the direction of R0. Then:
U0 = R0/|R0| (2)
and the vector from the origin of the CCS to the point on the object is:
RP0 = t0*U0 (3)
As described earlier, the relative position and orientation of the light projector 54 relative to the CCS of camera 34 are predetermined by, for example, a calibration procedure. Therefore, points on the second ray are given by:
RL = PL + (tL*UL) (4)
PL and UL are known from the calibration procedure, as the movement of light is controlled by the computer.
The point on this second ray (the light ray) where it hits the 3D object is:
RPL = PL + (tL*UL) (5)
The points of closest approach of these two rays are defined by:
t0 = ((PL*U0) − (U0*UL)(PL*UL))/D (6a)
tL = ((PL*U0)(U0*UL) − (PL*UL))/D (6b)
D = 1 − (U0*UL)^2 (6c)
With RP0 and RPL defined by equations (3) and (5), and with t0 and tL derived from equation (6), the distance between these points is:
d = |RPL − RP0| (7)
The point of intersection of the rays is defined as the midpoint:
PI = (RPL + RP0)/2 (8)
A third measurement pod 14C is also used to measure the upper body reference points of the A-pillar 65, the B-pillar 66, and the corner of door 67. Measurement pod 14C may also be used to make redundant measurements of common points measured by pods 14A or 14B, in order to improve measurement accuracy, or to allow for blockage of some of the points of interest in some views caused by the use of clamping or pulling equipment. Although this system is shown with calibration cameras and targets on the pods, the relative pod positions may also be established by viewing of a common known object by the measurement pods or by an external camera system, or by the use of docking stations as described earlier.
The data processing system used in the above-described systems performs numerous tasks, such as processing positional signals, calculating relative positions, providing a user interface to the operator, displaying alignment instructions and results, receiving commands from the operator, sending control signals to reposition the alignment cameras, etc. The data processing system receives captured images from cameras and performs computations based on the captured images. Machine-readable instructions are used to control the data processing system to perform the functions and steps as described in this disclosure.
Data processing system 900 may be coupled via bus 902 to a display 912, such as a cathode ray tube (CRT), for displaying information to an operator. An input device 914, including alphanumeric and other keys, is coupled to bus 902 for communicating information and command selections to processor 904. Another type of user input device is cursor control 916, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 904 and for controlling cursor movement on display 912.
The data processing system 900 is controlled in response to processor 904 executing one or more sequences of one or more instructions contained in main memory 906. Such instructions may be read into main memory 906 from another machine-readable medium, such as storage device 910. Execution of the sequences of instructions contained in main memory 906 causes processor 904 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the disclosure. Thus, embodiments of the disclosure are not limited to any specific combination of hardware circuitry and software.
The term “machine readable medium” as used herein refers to any medium that participates in providing instructions to processor 904 for execution. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 910. Volatile media includes dynamic memory, such as main memory 906. Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 902. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
Common forms of machine-readable media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape or any other magnetic medium, a CD-ROM or any other optical medium, punch cards, paper tape or any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a data processing system can read.
Various forms of machine-readable media may be involved in carrying one or more sequences of one or more instructions to processor 904 for execution. For example, the instructions may initially be carried on a magnetic disk of a remote data processing system. The remote data processing system can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to data processing system 900 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal, and appropriate circuitry can place the data on bus 902. Bus 902 carries the data to main memory 906, from which processor 904 retrieves and executes the instructions. The instructions received by main memory 906 may optionally be stored on storage device 910 either before or after execution by processor 904.
Data processing system 900 also includes a communication interface 919 coupled to bus 902. Communication interface 919 provides a two-way data communication coupling to a network link 920 that is connected to a local network 922. For example, communication interface 919 may be an integrated services digital network (ISDN) card or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 919 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface 919 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
Network link 920 typically provides data communication through one or more networks to other data devices. For example, network link 920 may provide a connection through local network 922 to a host data processing system 924 or to data equipment operated by an Internet Service Provider (ISP) 926. ISP 926 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 929. Local network 922 and Internet 929 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 920 and through communication interface 919, which carry the digital data to and from data processing system 900, are exemplary forms of carrier waves transporting the information.
Data processing system 900 can send messages and receive data, including program code, through the network(s), network link 920 and communication interface 919. In the Internet example, a server 930 might transmit a requested code for an application program through Internet 929, ISP 926, local network 922 and communication interface 919. In accordance with embodiments of the disclosure, one such downloaded application provides for automatic calibration of an aligner as described herein.
The data processing system also has various signal input/output ports (not shown in the drawing) for connecting to and communicating with peripheral devices, such as a USB port, a PS/2 port, a serial port, a parallel port, an IEEE-1394 port, an infra-red communication port, etc., or other proprietary ports. The measurement modules may communicate with the data processing system via such signal input/output ports.
The disclosure has been described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the disclosure. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
This application claims the benefit of priority from U.S. provisional patent application No. 60/640,060, filed Dec. 30, 2004, the entire disclosure of which is incorporated herein by reference.