The present subject matter relates to equipment and techniques for measuring alignment of wheels of a vehicle.
Wheel alignment equipment is used to measure the alignment of the wheels of a vehicle. Based on the measurements, adjustments to be made to the vehicle and wheels are determined in order to bring the wheels into alignment. As part of the alignment measurement process, the alignment equipment commonly measures the relative alignment of wheels disposed on each side of the vehicle separately (e.g., on a left side and a right side). In order to relate measurements taken on one side of the vehicle with measurements taken on the other/opposite side of the vehicle, the alignment equipment generally needs to have a precise reference for relating the measurements taken on the one side to the measurements taken on the other/opposite side.
Alignment systems include conventional aligners, visual aligners, and self-calibrating aligners. In conventional aligners, a toe gauge is provided in one wheel alignment head attached to a vehicle wheel on one side of the vehicle. The toe gauge can measure an angle to another toe gauge provided in another wheel alignment head that is attached to a wheel on the other side of the vehicle. The aligner can then relate alignment measurements taken on the one side of the vehicle with alignment measurements taken on the other side of the vehicle based on the toe gauge measurement.
However, the toe gauges used in conventional aligners are attached to the alignment heads, and generally require use of a boom extending from the alignment head to see around the wheel to which the head is attached. The presence of such booms results in large, heavy, and expensive alignment heads, and the toe gauges can be obstructed easily by the vehicle body since they are in a fixed position on the alignment head (e.g., any rotation of the alignment head, for example resulting from the vehicle rolling forward or backward, may result in the toe gauge being obstructed).
In visual aligners (e.g., camera-based aligners), a solid beam mounted to a fixed structure (e.g., a shop wall) holds two alignment cameras each looking down a respective side of the vehicle. The relative position of the two alignment cameras is maintained fixedly by the solid beam and, once the relative position is measured and stored in memory, the relative position of the alignment cameras can be used to relate alignment measurements taken on the one side of the vehicle (by one alignment camera) with alignment measurements taken on the other side of the vehicle (by the other alignment camera).
However, the cameras of the visual aligners are fixedly attached to a large beam. The large beam can get in the way of shop operations, and its presence results in a system that is large, heavy, and expensive. Additionally, the large beam offers minimal configuration options, and any deformation of the beam results in alignment measurement inaccuracies.
In the case of self-calibrating aligners, a calibration camera is provided in addition to two alignment cameras each looking down a respective side of the vehicle. The calibration camera has a fixed and known relative position to one of the two alignment cameras, and the calibration camera is oriented so as to point across a width of the vehicle towards the other of the two alignment cameras. Specifically, the calibration camera is oriented so as to point towards a calibration target that is attached to the other alignment camera, where the calibration target itself has a fixed and known relative position to the other alignment camera. In this set-up, the calibration camera can, as often as is required, obtain an image of the calibration target. In turn, based on the known relative positions between the calibration camera and the one alignment camera and between the calibration target and the other alignment camera, the alignment system can precisely determine the relative positions of the two alignment cameras. The determined relative position information is used to relate measurements taken by the alignment cameras on both sides of the vehicle.
However, while the self-calibrating aligners address some of the drawbacks of the conventional and visual aligners noted above, the self-calibrating aligners rely on a calibration camera or a calibration target being attached to each alignment camera. As a result, the aligner generally needs to be set-up in such a manner that the calibration camera (attached to one alignment camera) can see the calibration target (attached to the other alignment camera) while the alignment cameras are each oriented to see vehicle wheel alignment targets on a respective side of the vehicle. This set-up complexity restricts the acceptable locations of the alignment cameras (each having one of the calibration camera and the calibration target attached thereto), and limits some of the acceptable locations where the system can be used.
In order to address the drawbacks detailed above, there exists a need for a side-to-side reference that can be used when measuring the alignment of a vehicle.
The teachings herein alleviate one or more of the above noted problems with conventional alignment systems.
In accordance with one aspect of the disclosure, a wheel alignment system comprises a pair of first and second passive heads, each comprising a target, each for mounting in association with one wheel of a first pair of wheels disposed on first and second sides, respectively, of a vehicle that is to be measured by operation of the wheel alignment system; a pair of reference targets for mounting to a stationary reference, the pair of reference targets including a first reference target disposed on one of the first and second sides of the vehicle, and a second reference target disposed on the other of the first and second sides of the vehicle; a pair of first and second active heads for mounting in association with the first and second sides of the vehicle, respectively, the first active head comprising a first image sensor, the second active head comprising a second image sensor, the first image sensor producing image data of the first passive head and of the first reference target, the second image sensor producing image data of the second passive head and of the second reference target; a first gravity sensor and a second gravity sensor, the first and second gravity sensors each disposed in a known relationship to a respective one of the first and second reference targets or a respective one of the first and second image sensors for measuring a sensed orientation relative to gravity on the first and second sides of the vehicle, respectively; and a data processor. The data processor is for performing the steps of calculating, using the image data, a plural number of poses of each of the first and second passive heads as the first pair of wheels is rotated; calculating a drive direction of the vehicle using the calculated poses of the first and second passive heads and the sensed orientation relative to gravity on the first and second sides of the vehicle; and calculating a wheel alignment measurement using the vehicle drive direction.
In some embodiments, calculating the drive direction of the vehicle comprises calculating a drive direction of the first side of the vehicle using the calculated poses of the first target, and a drive direction of the second side of the vehicle using the calculated poses of the second target; calculating a gravity direction on the first side of the vehicle using the measured orientation relative to gravity of the first gravity sensor, and a gravity direction on the second side of the vehicle using the measured orientation relative to gravity of the second gravity sensor; and transforming the drive direction and gravity direction of the first side of the vehicle into a common coordinate system with the drive direction and gravity direction of the second side of the vehicle.
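Although the disclosure does not prescribe a particular implementation, one way to perform the final transformation step is a TRIAD-style construction: the drive direction and the gravity direction are the same physical directions on both sides of the vehicle, so the rotation relating the two sides' coordinate systems can be solved from those two shared vectors. The following sketch illustrates this under that assumption; the function names and all numeric values are illustrative, not part of the disclosure.

```python
import numpy as np

def triad_basis(d, g):
    """Orthonormal basis built from the drive direction d and gravity direction g."""
    d = d / np.linalg.norm(d)
    b2 = np.cross(d, g)
    b2 = b2 / np.linalg.norm(b2)
    b3 = np.cross(d, b2)
    return np.column_stack([d, b2, b3])

def side_to_side_rotation(d1, g1, d2, g2):
    """Rotation mapping side-2 coordinates into side-1 coordinates, given the
    shared drive and gravity directions as measured on each side."""
    B1 = triad_basis(np.asarray(d1, float), np.asarray(g1, float))
    B2 = triad_basis(np.asarray(d2, float), np.asarray(g2, float))
    return B1 @ B2.T

# Illustrative check: side 2's coordinate system is side 1's rotated 10 degrees
# about the vertical, so both sides measure the same physical directions
# with different coordinates.
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
d1 = np.array([1.0, 0.0, 0.0])      # drive direction, side-1 coordinates
g1 = np.array([0.0, 0.0, -1.0])     # gravity direction, side-1 coordinates
d2 = R_true.T @ d1                  # the same directions, side-2 coordinates
g2 = R_true.T @ g1
R = side_to_side_rotation(d1, g1, d2, g2)  # recovers the side-to-side rotation
```

Once the rotation relating the two sides is known, any direction measured on one side can be expressed in the other side's coordinate system, which is what allows a single vehicle drive direction to be computed from the per-side measurements.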
In some embodiments, the first active head includes the first gravity sensor, and the second active head includes the second gravity sensor.
The active heads may be for mounting to a stationary reference. The first and second active heads may be for mounting to the vehicle that is to be measured by operation of the wheel alignment system. The first and second active heads may be for mounting in association with a second pair of wheels disposed on the first and second sides of the vehicle.
In accordance with a further aspect of the disclosure, a method for measuring an alignment of a vehicle includes attaching a pair of first and second passive heads, each comprising a target, in association with a first pair of wheels disposed on first and second sides, respectively, of the vehicle to be measured; providing a pair of reference targets mounted to a stationary reference, the pair of reference targets including a first reference target disposed on one of the first and second sides of the vehicle, and a second reference target disposed on the other of the first and second sides of the vehicle; capturing, using a first image sensor of a first active head mounted in association with the first side of the vehicle, image data of the first passive head and of the first reference target; capturing, using a second image sensor of a second active head mounted in association with the second side of the vehicle, image data of the second passive head and of the second reference target; measuring, using a first gravity sensor disposed in a known relationship to the first reference target or the first image sensor, an orientation relative to gravity on the first side of the vehicle; measuring, using a second gravity sensor disposed in a known relationship to the second reference target or the second image sensor, an orientation relative to gravity on the second side of the vehicle; processing the image data from the image sensors to calculate a plural number of poses of each of the first and second passive heads as the first pair of wheels is rotated; calculating a drive direction of the vehicle using the calculated poses of the first and second passive heads and the measured orientation relative to gravity on the first and second sides of the vehicle; and calculating a wheel alignment measurement using the vehicle drive direction.
In some embodiments, calculating the drive direction of the vehicle comprises calculating a drive direction of the first side of the vehicle using the calculated poses of the first target, and a drive direction of the second side of the vehicle using the calculated poses of the second target; calculating a gravity direction on the first side of the vehicle using the measured orientation relative to gravity of the first gravity sensor, and a gravity direction on the second side of the vehicle using the measured orientation relative to gravity of the second gravity sensor; and transforming the drive direction and gravity direction of the first side of the vehicle into a common coordinate system with the drive direction and gravity direction of the second side of the vehicle.
In accordance with a further aspect of the disclosure, a wheel alignment system comprises a pair of first and second passive heads, each comprising a target, each for mounting in association with one wheel of a first pair of wheels disposed on first and second sides, respectively, of a vehicle that is to be measured by operation of the wheel alignment system; a pair of reference targets for mounting to a stationary reference, the pair of reference targets including a first reference target disposed on one of the first and second sides of the vehicle, and a second reference target disposed on the other of the first and second sides of the vehicle; a pair of first and second active heads for mounting in association with the first and second sides of the vehicle, respectively, the first active head comprising a first image sensor, the second active head comprising a second image sensor, the first image sensor producing image data of the first passive head and of the first reference target, the second image sensor producing image data of the second passive head and of the second reference target; a first common direction sensor and a second common direction sensor, the first and second common direction sensors each disposed in a known relationship to a respective one of the first and second reference targets or a respective one of the first and second image sensors for measuring a common direction on the first and second sides of the vehicle, respectively; and a data processor. The data processor is for performing the steps of calculating, using the image data, a plural number of poses of each of the first and second passive heads as the first pair of wheels is rotated; calculating a drive direction of the vehicle using the calculated poses of the first and second passive heads and the sensed common direction on the first and second sides of the vehicle; and calculating a wheel alignment measurement using the vehicle drive direction.
In some embodiments, the first and second common direction sensors each comprise a magnetometer for measuring a direction to the magnetic north pole on one of the first and second sides of the vehicle, respectively. In some embodiments, the first and second common direction sensors each comprise a gyroscope for measuring a direction on one of the first and second sides of the vehicle, respectively. In some embodiments, the first and second common direction sensors each comprise an absolute orientation sensor for measuring a direction on one of the first and second sides of the vehicle, respectively.
Additional advantages and novel features will be set forth in part in the description which follows, and in part will become apparent to those skilled in the art upon examination of the following and the accompanying drawings or may be learned by production or operation of the examples. The advantages of the present teachings may be realized and attained by practice or use of various aspects of the methodologies, instrumentalities and combinations set forth in the detailed examples discussed below.
The drawing figures depict one or more implementations in accord with the present teachings, by way of example only, not by way of limitation. In the figures, like reference numerals refer to the same or similar elements.
In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant teachings. However, it should be apparent to those skilled in the art that the present teachings may be practiced without such details. In other instances, well known methods, procedures, components, and/or circuitry have been described at a relatively high-level, without detail, in order to avoid unnecessarily obscuring aspects of the present teachings.
The various systems and methods disclosed herein relate to improved equipment and approaches to performing vehicle wheel alignment, including improved equipment and approaches for performing alignment measurements of wheels disposed on opposite sides of a vehicle.
In order to address the drawbacks detailed above, a side-to-side reference is provided that can be used when measuring the alignment of a vehicle, and that is not necessarily attached to the wheel alignment heads or to the alignment cameras. The side-to-side reference can therefore be disposed or installed in many different locations, so as to be seen or referred to easily by the vehicle wheel alignment measuring system. Such a side-to-side reference may enable use of alignment heads with simplified streamlined designs (e.g., with lower complexity).
As shown, each target 200, 210 has a characteristic pattern thereon (e.g., on a surface thereof), such as a characteristic pattern formed by dots, circles, or other geometric shapes. The geometric shapes may be of the same or different colors or sizes. The pattern formed by the geometric shapes is generally rotationally asymmetric such that the rotational orientation of the target can be determined based on the observed pattern. More generally, while targets having patterns thereon are shown in
The side-to-side reference system 100 is shown as it may be used in a wheel alignment system 115 in the illustrative example of
In addition, in the illustrative example of
As noted above, the calibration camera 300 has a known relative position to the first reference target 200 in the active reference pod 105. The relative positional relationship can either be fixed at the time of manufacture and determined at that time, or fixed at a later time and measured through a calibration process. In some examples, the relative positional relationship can be adjustable, and can be measured through the calibration process following any adjustment in the positional relationship. In use, the relative positions of the first and second reference targets 200 and 210 can thus be determined based at least in part on the known (e.g., measured) relative position of the calibration camera 300 to the first reference target 200, and on the relative position of the calibration camera 300 to the second reference target 210 as determined based on one or more perspective images (and associated image data) of the second reference target 210 obtained using the calibration camera 300. In turn, wheel alignments can be determined based on the determined relative positions of the first and second reference targets 200 and 210 in combination with other alignment measurements. Specifically, when performing a wheel alignment measurement, the first and second reference targets 200 and 210 are positioned such that: (i) the calibration camera 300 can see the second reference target 210; and (ii) alignment cameras of the wheel alignment system can see the first and second reference targets 200 and 210. The relative positions of the first and second reference targets 200 and 210 can then be measured, e.g. based on one or more perspective images (and associated image data) of the second reference target 210 captured by the calibration camera 300.
Further, each alignment camera (e.g., 105a and 105b) of the wheel alignment system (e.g., 115) can see at least a respective one of the first and second reference targets 200 and 210, and the relative positions of the alignment cameras 105a and 105b can thus be determined based on the relative positions of the first and second reference targets 200 and 210 determined based on the images captured by the calibration camera 300. In this manner, measurements obtained by the alignment cameras 105a, 105b of the wheel alignment system 115 on opposite sides of the vehicle can be correlated to each other to determine the vehicle's overall wheel alignment measurements.
In one example, a wheel alignment system determines wheel alignments based on the determined relative positions of the first and second reference targets 200 and 210 in combination with other alignment measurements as detailed in the following paragraphs. In particular, a spatial relationship between the active reference pod 105 and the passive reference pod 110 is determined according to the image data produced by the reference image sensor 300 and including a perspective representation of at least one target of the passive reference pod 110. The determined spatial relationship is then used to establish the positional relationship between measurements performed by the alignment cameras 105a and 105b.
The wheel alignments are determined by transforming coordinates measured relative to one target (e.g., 200) into coordinates measured relative to the other target (e.g., 210). The transformation is performed using a chain of coordinate transformations from the first to the second reference target as depicted in the following Equation 1:
Trl=T1(T0)
Equation 1: Transformation from First Reference Target 200 to Second Reference Target 210
In Equation 1, T0 is the 3D rigid body transformation from the first reference target coordinate system 200 to the calibration camera coordinate system 300, T1 is the 3D rigid body transformation from the calibration camera coordinate system 300 to the second reference target coordinate system 210, and Trl is the composite 3D rigid body transformation from the first reference target coordinate system 200 to the second reference target coordinate system 210.
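The chain of transformations in Equation 1 can be illustrated numerically using 4×4 homogeneous transformation matrices (one of the formalisms contemplated herein). All numeric values below are illustrative placeholders, not measured data.

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Illustrative transforms (placeholder values):
# T0: first reference target -> calibration camera
# T1: calibration camera -> second reference target
T0 = make_transform(np.eye(3), np.array([0.5, 0.0, 2.0]))
T1 = make_transform(np.eye(3), np.array([0.0, 0.1, -1.5]))

# Equation 1: composite transform Trl from the first to the second reference target
Trl = T1 @ T0

# A point expressed in the first reference target's coordinate system can now be
# expressed in the second reference target's coordinate system in one step.
p_first = np.array([0.0, 0.0, 0.0, 1.0])   # origin of the first reference target
p_second = Trl @ p_first
```

The same composition extends directly to longer chains such as Equation 2 below, where an intermediate calibration target adds one more factor to the product.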
In Equation 1 and in all subsequent equations, each transformation Ti( ) denotes a three dimensional rigid body transformation (rotation and/or translation) from one coordinate system to another. A number of different coordinate transformation formalisms can be used to implement the transformations defined herein, including but not limited to: homogeneous transformation matrices, separate rotation matrices and translation vectors in Euclidean coordinates, rotations expressed as quaternions, etc. The disclosure is not limited to any specific coordinate transformation used or described, but can generally be used with any appropriate coordinate transformation.
With the known fixed relationship of the calibration camera 300 to the first reference target 200 in the active reference pod 105 and the known fixed relationship of the calibration target 310 to the second reference target 210 in the passive reference pod 110, the relative position of the first and second reference targets 200 and 210 can be determined based on a measurement of the position of the calibration target 310 with respect to the calibration camera 300 based on one or more perspective images (and associated image data) of the calibration target 310 captured by the calibration camera 300.
The chain of coordinate transformations used to transform coordinates expressed relative to the first reference target 200 to coordinates expressed relative to the second reference target 210 by way of an intermediate coordinate transformation is depicted in the following Equation 2:
Trl=T2(T1(T0))
Equation 2: Transformation from First to Second Reference Target Using an Intermediate Coordinate Transformation
In Equation 2, T0 is the 3D rigid body transformation from the first reference target coordinate system 200 to the calibration camera coordinate system 300, T1 is the 3D rigid body transformation from the calibration camera coordinate system 300 to the calibration target coordinate system 310, T2 is the 3D rigid body transformation from the calibration target coordinate system 310 to the second reference target coordinate system 210, and Trl is the composite 3D rigid body transformation from the first reference target coordinate system 200 to the second reference target coordinate system 210. Each transformation Ti in Equation 2 denotes a three dimensional rigid body transformation (rotation and/or translation) from one coordinate system to another.
In this example, the active and passive reference pods 105 and 110 and their first and second reference targets 200 and 210 are positioned such that the calibration camera 300 of the active reference pod 105 can see the calibration target 310 of the passive reference pod 110 and such that the alignment cameras (e.g., 105a and 105b in
During a wheel alignment procedure, the vehicle having its wheel alignment measured is commonly rolled forward and/or backward to cause the wheels to rotate, for example to measure run-out or compensation of the wheels. Specifically, measurements of wheel alignment targets of alignment heads mounted to the wheels are taken with the vehicle in a first position, the vehicle is then moved to a second position such that its wheels rotate forward or backward (e.g., by approximately 20° or more), and measurements of the wheel alignment targets mounted to the wheels are taken with the vehicle in the second position.
In general, in conventional and certain other types of aligners, in order for wheel alignment targets forming part of passive alignment heads mounted or attached to wheels of the vehicle (and/or for wheel alignment cameras or other wheel alignment sensors or measurement components forming part of active alignment heads mounted or attached to wheels of the vehicle) to maintain a proper orientation when the vehicle is in both the first and second positions, the wheel alignment targets (and/or wheel alignment cameras or other wheel alignment sensors) rotate around a shaft. Specifically, each target, camera, or sensor that is configured to be mounted or attached as part of an alignment head to a vehicle wheel is attached to a wheel clamp of the alignment head that can be securely clamped onto the wheel, and the target, camera, or sensor can rotate with respect to the wheel clamp around a rotation axis of the shaft. Thus, as the vehicle wheel rotates when the vehicle is moved forward or backward, the target, camera, or sensor rotates about the shaft to maintain a same orientation (e.g., a same orientation with respect to gravity or a vertical or horizontal reference). An angular measurement sensor attached to the shaft measures the angle of rotation of the wheel with respect to the target, camera, or sensor as the vehicle wheel is rotated when the vehicle moves from the first position to the second position. The presence of a rotation shaft, bearings, and other moving parts inside the wheel alignment heads increases cost and sensitivity to drops, and can add error into alignment measurements (e.g., as a result of stickiness in the bearings).
In visual aligners, alignment heads generally contain no moving parts or sensitive components. Instead, targets forming part of passive alignment heads are fixedly attached to the wheels of the vehicle, and positions of the targets are measured by alignment cameras located off the vehicle. Generally, the alignment cameras are installed at precisely calibrated positions on an external rig (e.g., including the aforementioned solid beam) that is attached to the floor, console, lift, or rack. This makes the existing vision based aligners more expensive, harder to move (e.g., between racks in a vehicle repair shop), and requires an unobstructed visual path between the cameras on the external rig and the targets mounted on the vehicle wheels.
To address the above-noted drawbacks in vehicle wheel alignment systems, a fixed active-head wheel alignment system includes a wheel alignment measuring head that can be fixedly attached to the vehicle wheels and that does not include a rotation shaft and bearings. In the fixed active-head wheel alignment system, the alignment measuring heads maintain fixed positions relative to their respective vehicle wheels and rotate when the vehicle wheels are rotated (e.g., when performing a compensation procedure). In the fixed active-head wheel alignment system, all parts of the wheel alignment measuring heads thus remain immobile with respect to the vehicle wheels when the vehicle wheels are rotated.
In the fixed active-head wheel alignment system, as illustratively shown in
In operation, the fixed active-head wheel alignment system can be used to perform an alignment procedure 350 such as that described in relation to
Note that in visual aligners, the axis of rotation of a wheel is determined by placing a target on the wheel and capturing images of the rotating target using a fixed camera. In contrast, in the present case, the axis of rotation that is determined is the axis of rotation of the camera itself, which corresponds to the axis of rotation of the wheel 200 on which the camera 100 is fixedly mounted. The position and orientation of the axis of rotation are determined in a coordinate system of the fixed target 400 (e.g., a fixed coordinate system that does not move as the wheel 200 and vehicle are moved between positions 1 and 2).
The mathematical description of the axis of rotation calculation for a rotating camera is as follows. There are two different calculation scenarios: (1) the axis of rotation for a camera that is rigidly attached to a wheel while observing a stationary reference target, and (2) the axis of rotation for a target that is rigidly attached to a wheel while being observed by a camera that is also rotating.
For the first scenario, in which the camera rotates while observing a stationary reference target:
V1=V01V0
Equation 3: Rotation of Alignment Camera 100 with Respect to a Fixed Reference Target 400 From an Initial to a Second Position
In Equation 3, V0 is the 3D rotation from a Fixed Reference Target Coordinate System 400 to an Alignment Camera Coordinate System 100 at the initial position, V1 is the 3D rotation from a Fixed Reference Target Coordinate System 400 to an Alignment Camera Coordinate System 100 at the second position, and V01 is the 3D rotation of the Alignment Camera Coordinate System 100 from the initial to the second position.
The following calculation can be performed to compute V01:
V01=V1V0−1
Equation 4: Computation of Composite 3D Rotation between Initial and Second Orientations of the Alignment Camera Coordinate System 100
The axis of rotation û is the principal axis about which all rotation occurs. It can be computed as the principal eigenvector of the rotation matrix V01 that rotates from an initial to a second orientation:
û=eig(V01)
Equation 5: Computation of the Axis of Rotation û of the Alignment Camera Coordinate System 100
In Equation 5, eig(V01) denotes the eigenvector/eigenvalue decomposition applied to the rotation matrix V01. This eigen-decomposition can be computed using a variety of standard methods, including but not limited to: characteristic polynomial root methods, QR decompositions, power iteration methods, and Rayleigh quotient iterations. û is the eigenvector corresponding to the largest individual eigenvalue computed in the eigen-decomposition.
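Equations 3 through 5 can be checked numerically. The sketch below uses illustrative placeholder rotations (in practice V0 and V1 would come from pose estimates of the fixed reference target at the two positions), and extracts the axis as the eigenvector of V01 for the real eigenvalue 1, which is one way of identifying the principal eigenvector of a 3D rotation.

```python
import numpy as np

def rot(axis, angle):
    """Rodrigues' formula: rotation matrix about a (normalized) axis."""
    a = np.asarray(axis, float)
    a = a / np.linalg.norm(a)
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def axis_of_rotation(V01):
    """Axis of a 3D rotation matrix: the eigenvector for the eigenvalue 1."""
    w, v = np.linalg.eig(V01)
    i = np.argmin(np.abs(w - 1.0))   # a rotation's eigenvalues are 1, e^(+i*t), e^(-i*t)
    u = np.real(v[:, i])
    return u / np.linalg.norm(u)

# Illustrative data (placeholder values, not measurements):
true_axis = np.array([0.0, 1.0, 0.0])            # wheel spin axis
V0 = rot([1.0, 1.0, 1.0], 0.3)                   # camera orientation, position 1
V1 = rot(true_axis, np.deg2rad(20.0)) @ V0       # Equation 3: V1 = V01 V0
V01 = V1 @ np.linalg.inv(V0)                     # Equation 4
u_hat = axis_of_rotation(V01)                    # Equation 5 (up to sign)
```

Note that an eigenvector is only defined up to sign, so the recovered axis may point in either direction along the true axis of rotation.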
The processing chain is similar for the scenario of a target that is rigidly attached to a wheel that is observed by a camera that is also rotating (see, e.g.,
P0=W0U0−1
Equation 6: 3D Orientation of a Wheel Mounted Target Coordinate System with Respect to a Fixed Reference Target Coordinate System at the Initial Position
In Equation 6, U0−1 is the inverse rotation from the Alignment Camera Coordinate System to the Fixed Reference Target Coordinate System at the initial position; that is, it is the rotation from the Fixed Reference Target Coordinate System to the Alignment Camera Coordinate System at the initial position. W0 is the rotation from the Alignment Camera Coordinate System to the Wheel Mounted Target Coordinate System at the initial position, and P0 is the rotation from the Fixed Reference Target Coordinate System to the Wheel Mounted Target Coordinate System at the initial position.
Likewise, a similar formula can be used to compute the orientation of the Wheel Mounted Target Coordinate system with respect to the Fixed Reference Coordinate System at the second position:
P1=W1U1−1
Equation 7: 3D Orientation of a Wheel Mounted Target Coordinate System with Respect to a Fixed Reference Target Coordinate System at the Second Position
In Equation 7, U1−1 is the inverse rotation from the Alignment Camera Coordinate System to the Fixed Reference Target Coordinate System at the second position; that is, it is the rotation from the Fixed Reference Target Coordinate System to the Alignment Camera Coordinate System at the second position. W1 is the rotation from the Alignment Camera Coordinate System to the Wheel Mounted Target Coordinate System at the second position, and P1 is the rotation from the Fixed Reference Target Coordinate System to the Wheel Mounted Target Coordinate System at the second position.
The rotation matrix P01 that rotates the Wheel Mounted target Coordinate System axes from P0 to P1 can be computed as:
P01=P1P0−1
Equation 8: 3D Rotation of the Wheel Mounted Target Coordinate System from the Initial to the Second Orientation
The axis of rotation ŵ is the three-dimensional vector about which all rotation occurs. As in the previous scenario, it can be computed as:
ŵ=eig(P01)
In Equation 9, eig(P01) denotes the eigenvector/eigenvalue decomposition applied to P01. This eigen-decomposition can be computed using a variety of standard methods, including but not limited to: characteristic polynomial root methods, QR decompositions, power iteration methods, and Rayleigh quotient iterations. ŵ is defined to be the eigenvector corresponding to the eigenvalue of P01 equal to one (the sole real, unit eigenvalue of a proper rotation); that is, the one direction the rotation leaves unchanged.
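As a sketch (not from the original disclosure), the axis can be extracted with a general-purpose eigensolver by selecting the eigenvector whose eigenvalue is closest to 1; the example rotation about the X axis is hypothetical:

```python
import numpy as np

def rotation_axis(P01):
    """Equation 9 sketch: axis of rotation of the 3x3 rotation matrix P01,
    i.e. the eigenvector whose eigenvalue equals 1 -- the direction left
    unchanged by the rotation."""
    eigvals, eigvecs = np.linalg.eig(P01)
    k = np.argmin(np.abs(eigvals - 1.0))  # eigenvalue closest to 1 (real, unit)
    axis = np.real(eigvecs[:, k])
    return axis / np.linalg.norm(axis)

# Hypothetical wheel-target rotation purely about the X axis;
# the recovered axis should be +/- [1, 0, 0].
theta = 0.5
P01 = np.array([[1.0, 0.0, 0.0],
                [0.0, np.cos(theta), -np.sin(theta)],
                [0.0, np.sin(theta),  np.cos(theta)]])
w_hat = rotation_axis(P01)
```

The sign of the recovered axis is ambiguous (ŵ and −ŵ describe the same rotation axis), so downstream code typically fixes a sign convention.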
Because the fixed target 400 takes up some of the FOV 300 of the camera 100 (see, e.g.,
In
For example, in the example of
In operation, as shown in the procedure 450 shown in
Note that once the relative position of the cameras 120 and 140 is fixed, a calibration procedure is performed to precisely determine the positions of the cameras relative to each other. Knowledge of the relative positions of the cameras is then used to determine the relative positions of targets imaged by one camera in the other camera's coordinate system. In examples in which the FOVs 320 and 340 of the cameras 120 and 140 overlap, the calibration can be performed by capturing an image of a target positioned in the overlap region of the FOVs with each camera and, based on the captured images, determining the relative positions of the cameras.
A process for transforming one camera's coordinates into the other can involve the following equation:
Tu=Tlu(Tl)
Equation 10: Transformation of a Target Coordinate System 400 from the Lower Camera Coordinate System 140 to the Upper Camera Coordinate System 120
In Equation 10, Tl is the pose of a Target Coordinate System 400 in the Lower Camera Coordinate System 140, Tu is the pose of the Target Coordinate System 400 in the Upper Camera Coordinate System 120, and Tlu is the 3D rigid body transformation from the Lower Camera Coordinate System 140 to the Upper Camera Coordinate System 120.
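Representing each pose as a 4×4 homogeneous rigid body transform, Equation 10 becomes a single matrix product. The sketch below uses hypothetical rotation and translation values (identity rotations and a 0.3 m camera offset) purely for illustration:

```python
import numpy as np

def make_pose(R, t):
    """Pack a 3x3 rotation and a 3-vector translation into a 4x4 rigid
    body transformation matrix."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical pose of target 400 in the lower camera coordinate system
T_l = make_pose(np.eye(3), np.array([0.0, 0.0, 2.0]))
# Hypothetical calibrated lower-to-upper camera transformation
T_lu = make_pose(np.eye(3), np.array([0.0, 0.3, 0.0]))

# Equation 10: pose of the same target in the upper camera coordinate system
T_u = T_lu @ T_l
```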
In examples in which the FOVs 320 and 340 of the cameras 120 and 140 do not overlap (e.g., so as to obtain a wider total FOV), the relative positions of the cameras can be determined using, for example, the first and second targets 200 and 210 described above. In such examples, one target (e.g., 200) is imaged with one camera (e.g., 120) while the other target (e.g., 210) is imaged using the other camera (e.g., 140), and the relative positions of the cameras 120 and 140 is determined based on the captured images of the targets 200 and 210 and the known relative positions of the targets 200 and 210.
As shown in
In operation, as shown in the procedure 550 of
In the foregoing description, the camera 100 is described as being attached to a rear wheel and the target 600 is described as being attached to a front wheel of the vehicle. However, the target could be attached to the rear wheel and the camera to the front wheel. Alternatively, the reference target 600 could be attached to a rack, floor, tripod, or other type of attachment, for example in situations in which only one wheel's axis of rotation is to be determined (e.g., the axis of rotation of the wheel on which the alignment camera is mounted).
The description of
For a four wheel alignment, a second set of an alignment camera and targets can be mounted on the other side of the vehicle, as shown in
In operation, as shown in the procedure 650 of
Specifically, the positions of the passive reference pod (including the right reference target 2400) and the right passive head (including wheel target 2500) are firstly determined from the image(s) captured by the right alignment camera 2300 in a coordinate system centered on the right alignment camera 2300; the determined positions are then transformed into coordinates centered on the passive reference pod and first reference target 2400; the transformed coordinates are then once again transformed into coordinates centered on the active reference pod and second reference target 2100 based on the determined relative positions of the active and passive reference pods and reference targets 2100 and 2400; and finally, the transformed coordinates are further transformed into coordinates centered on the left active head including alignment camera 2000.
A process for transforming coordinates from camera to target to target to camera can involve the following equation:
Tlcam_rref=Tcalcam_rref(Tlref_calcam(Tlcam_lref))
Equation 11: Transformation from Left Camera Coordinate System 2000 to the Right Reference Target Coordinate System
In Equation 11, Tlcam_lref is the 3D rigid body transformation from the Left Camera Coordinate System 2000 to the Left Reference Target Coordinate System 2100, Tlref_calcam is the 3D rigid body transformation from the Left Reference Target Coordinate System 2100 to the Calibration Camera Coordinate System 2700, Tcalcam_rref is the 3D rigid body transformation from the Calibration Camera Coordinate System 2700 to the Right Reference Target Coordinate System 2400, and Tlcam_rref is the 3D rigid body transformation from the Left Camera Coordinate System 2000 to the Right Reference Target Coordinate System 2400.
The transformation expressed in Equation 11 can be used to perform the coordinate transformation from the right wheel target 2500 to the left camera coordinate system 2000:
Tlcam_rw=Trcam_rw(Trcam_rref−1(Tlcam_rref))
Equation 12: Transformation from Left Camera Coordinate System 2000 to Right Wheel Target Coordinate System 2500
In Equation 12, Tlcam_rref is the 3D rigid body transformation from the Left Camera Coordinate System 2000 to the Right Reference Target Coordinate System 2400 (as computed in Equation 11 above), Trcam_rref−1 is the inverse of the 3D rigid body transformation from the Right Camera Coordinate System 2300 to the Right Reference Target Coordinate System 2400; that is, it is the 3D rigid body transformation from the Right Reference Target Coordinate System to the Right Camera Coordinate System. Trcam_rw is the 3D rigid body transformation from the Right Camera Coordinate System 2300 to the Right Wheel Target Coordinate System 2500, and Tlcam_rw is the 3D rigid body transformation from the Left Camera Coordinate System 2000 to the Right Wheel Target Coordinate System 2500.
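The chaining in Equations 11 and 12 can be checked numerically: if the pose of every frame in some arbitrary world frame is known, the pairwise transforms chained per Equation 11 must equal the direct left-camera-to-right-reference transform. The sketch below is illustrative only; the poses and the `xform` helper are hypothetical, not part of the original text:

```python
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def make_pose(R, t):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical poses of each frame in an arbitrary world frame
world = {
    "lcam":   make_pose(rot_z(0.10),  [0.0, 0.0, 0.0]),  # left camera 2000
    "lref":   make_pose(rot_z(0.30),  [0.5, 0.0, 0.2]),  # left ref target 2100
    "calcam": make_pose(rot_z(-0.20), [0.6, 0.1, 0.2]),  # calibration camera 2700
    "rref":   make_pose(rot_z(0.50),  [0.5, 1.8, 0.2]),  # right ref target 2400
}

def xform(a, b):
    """3D rigid body transformation taking coordinates in frame a to frame b."""
    return np.linalg.inv(world[b]) @ world[a]

# Equation 11: left camera -> left ref -> calibration camera -> right ref
T_lcam_rref = xform("calcam", "rref") @ xform("lref", "calcam") @ xform("lcam", "lref")
```

Because each intermediate frame cancels in the chain, the product matches the direct transform regardless of the particular poses chosen.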
Based on the determined relative positions of the various alignment heads and reference pods (including the various cameras and targets), both active alignment heads (including cameras 2000 and 2300) are thus able to measure positions of targets and transform the measured positions into the same coordinate system. Thus, the full vehicle alignment can be measured, for example by projecting the wheel axes into the vehicle base plane. The positions of targets measured using the left camera can be transformed into a coordinate system centered on the right camera in a similar manner. Alternatively, positions (and coordinates) can be transformed into a reference frame centered on one of the reference targets (e.g., 2100). In any case, since a common coordinate system is used, the alignment of all wheels can be measured.
In the examples of
In the various examples presented above, the reference targets (e.g., 400, 900, 900′, 2100, 2400) are positioned so as to be in the FOV of a corresponding alignment camera. In some examples, the alignment camera is positioned so as to also concurrently include an alignment target (e.g., 600, 600′, 2200, 2500) in its FOV. In order to position the reference targets at positions that are both in the FOV of the alignment camera and not occluded by the alignment targets in the FOV, the reference targets may be attached to the surface 1000 on which the vehicle is sitting (e.g., attached to the rack or vehicle lift, attached to the ground, or the like). In particular, the reference targets may be positioned so as to be in a consistent relative position with the alignment targets mounted to the wheels, but also positioned such that the reference targets move with the vehicle (e.g., when the vehicle is positioned on a rack or vehicle lift and the rack or lift is raised).
In the foregoing description and figures, the active and passive reference pods are described and shown as being optionally mounted with passive alignment heads to wheels of the vehicle being measured (see, e.g.,
Additionally, while the foregoing description and figures have described and shown active alignment heads including the cameras (or image sensors) as being mounted to wheels of the vehicle, the active alignment heads can alternatively be mounted to a fixed or stationary reference (e.g., ground, a rack or lift, or the like). For example,
In the example of
Various options exist for mounting the active or passive reference pods or other reference target(s) to the rack, vehicle lift, or the like. A first option, bolting a hanger bracket to the rack, may require drilling into the rack, which can compromise the structural integrity of the rack, may not be possible in some situations, and makes for a time-consuming installation. For these reasons, this first option may not be preferred. Ideally, the reference pods or targets would be removably attached to the rack, or slidable along it, so that they can be moved out of the way when not in use. For this purpose, a quick attachment method may be preferred: one that enables easy attachment of the reference pods or targets and that allows them to be moved out of the way while remaining attached to the rack.
To provide the above-identified advantages, a mount is shown in
As noted above, the use of the side-to-side reference discussed in relation to
The calibration process may be performed following manufacturing in the factory, or can be performed in the field (for example following the replacement of a part of the side-to-side reference, following possible damage to the side-to-side reference, or simply to confirm that the factory specifications are still accurate).
To perform the calibration, the procedure 850 shown in
Once the cameras and targets are in position, the camera 6000 captures a first image of first reference target 5600 and second reference target 4000 so as to measure the pose of first reference target 5600 and second reference target 4000, in step 853. The measured pose is used to calculate a rotational matrix from reference target 4000 to reference target 5600. Additionally, in step 855, the calibration camera 5700 captures a second image of the reference target 4000 so as to measure the pose of reference target 4000 with respect to the calibration camera 5700. With the rotation matrix of the reference target 4000 to the reference target 5600, and with the pose of reference target 4000 with respect to the calibration camera 5700, a rotation matrix relating measurements from the reference target 5600 to the calibration camera 5700 is determined. In this way, the fixed spatial relationship between the calibration camera 5700 and the alignment target 5600 attached thereto can be determined based on the captured first and second images. The rotational matrix can then be used to update the relative positions (rotation matrix) between the two reference targets 5600 and 4000 every time the calibration camera 5700 measures the pose of the reference target 4000, as during an alignment procedure.
In order to effect the appropriate coordinate transformations, two different coordinate transformations can be used: an "RTTP" transformation which, in combination with a transformation of target coordinates into calibration camera coordinates, can provide an "RCTP" result.
The transformation from a second to a first reference target can be computed as:
Tref2_ref1=Trefcam_ref1(Trefcam_ref2−1)
Equation 13: 3D Rigid Body Transformation from Second Reference Target Coordinate System 4000 to First Reference Target Coordinate System 5600
In Equation 13, Trefcam_ref2−1 is the inverse of the 3D rigid body transformation from the reference camera coordinate system 6000 to the second reference target coordinate system 4000; that is, it is the transformation from the second reference target coordinate system to the reference camera coordinate system. Trefcam_ref1 is the 3D rigid body transformation from the reference camera coordinate system 6000 to the first reference target coordinate system 5600, and Tref2_ref1 is the 3D rigid body transformation from the second reference target coordinate system 4000 to the first reference target coordinate system 5600.
The transformation from the second to the first reference target can be used (in conjunction with additional information) to compute the transformation from the first reference target 5600 to the calibration camera 5700. This process can be computed as:
Tref1_calcam=Tref2_ref1(Tcalcam_ref2)
Equation 14: 3D Rigid Body Transformation from First Reference Target Coordinate System 5600 to Calibration Camera Coordinate System 5700
In Equation 14, Tcalcam_ref2 is the 3D rigid body transformation from the calibration camera coordinate system 5700 to the second reference target coordinate system 4000, Tref2_ref1 is the 3D rigid body transformation from the second reference target coordinate system 4000 to the first reference target coordinate system 5600 (as computed previously in Equation 13), and Tref1_calcam is the 3D rigid body transformation from the first reference target coordinate system 5600 to the calibration camera coordinate system 5700.
In this section, an alternative embodiment of the wheel aligner is described. In this alternative embodiment, depicted in
The disclosed alignment systems and methods operate based on a calculation of a parameter called “drive direction,” which is the direction in which a vehicle is moving. Since a vehicle can be assumed to be a rigid body, each wheel (and each axle) has the same drive direction. Consequently, an alignment parameter of one wheel or one axle can be compared to the same parameter of another wheel or axle by equating their drive direction. For example, each axle's toe can be compared to each other axle's toe by equating each axle's drive direction. Therefore, the relative toe of two axles can be measured (i.e., the axle scrub), without all the cameras of a typical visual aligner seeing both axles at the same time, or without wheel position or orientation information from one side of the vehicle to the other.
A basic concept of drive direction alignment is to measure geometric properties of interest for wheel alignment without directly measuring lateral (i.e., “left to right”) position or orientation information about system components. Rather, the disclosed aligners indirectly measure information that couples measurements from left and right sides, allowing measurements from one side of the vehicle to be transformed into a common coordinate system with measurements from the other side of the vehicle. This can be accomplished by measuring two or more directions in common from both sides of the vehicle.
This basic principle will be explained with reference to
In the embodiment depicted in
If the output format is a set of (θX, θY) inclination angles, these angles must be converted to a 3D gravity vector to be used in the processing chain described above. This can be accomplished in a variety of ways. In one embodiment, an initial vector denoting the orientation of gravity in the inclinometer coordinate system is encoded as a 3D vector X=0, Y=0, Z=1. This 3D vector is then made to rotate about the inclinometer X axis by the rotation angle θX. The rotated 3D vector is then rotated about the inclinometer Y axis by the rotation angle θY. This rotated 3D vector now describes the orientation of gravity in the inclinometer coordinate system, given that the inclinometer sits at an inclination of (θX, θY), and can be used in the described processing chain.
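The conversion just described can be sketched directly: start from the vector (0, 0, 1), rotate about the inclinometer X axis by θX, then rotate the result about the Y axis by θY (a minimal sketch; the angle values in the test are hypothetical):

```python
import numpy as np

def gravity_from_inclination(theta_x, theta_y):
    """Convert a (theta_x, theta_y) inclinometer reading (radians) into a
    3D gravity vector in the inclinometer coordinate system: start from
    (0, 0, 1), rotate about the X axis by theta_x, then rotate the result
    about the Y axis by theta_y."""
    g = np.array([0.0, 0.0, 1.0])
    cx, sx = np.cos(theta_x), np.sin(theta_x)
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0,  cx, -sx],
                   [0.0,  sx,  cx]])
    cy, sy = np.cos(theta_y), np.sin(theta_y)
    Ry = np.array([[ cy, 0.0,  sy],
                   [0.0, 1.0, 0.0],
                   [-sy, 0.0,  cy]])
    return Ry @ (Rx @ g)
```

Note that the rotation order (X then Y) matters; swapping it yields a slightly different vector for nonzero angles, so the order must match the inclinometer's angle convention.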
The above discussion assumes that a three-dimensional wheel alignment procedure is performed. The present disclosure is not, however, restricted to purely 3D alignments. It may be desirable to perform 2D alignment measurements. In such a scenario, gravity is measured not as a 3D vector or as a set of 2D angles, but as an elevation angle from a single axis sensor. Under such a configuration, it is assumed that all tilt between cameras is in the vehicle camber direction. The measured inclination angles on both sides of the vehicle are then used to adjust the relative left to right tilt angles of cameras on both sides of the vehicle. This relative tilt angle between the sides of the vehicle is then used as an offset to measure camber angles on both sides of the vehicle to a common reference. Deviations of drive direction measurements from both cameras in the camber direction are ignored.
On both sides of the vehicle 3030 we must express gravity direction and drive direction in a common coordinate system. This means that geometric quantities measured in one coordinate system must be transformed to the same coordinate basis so that they can be used in downstream calculations. In the system depicted in
The transformation from the inclinometer coordinate system to the camera coordinate system is a well-known transformation which requires a calibration that quantifies how measurements from the inclinometer coordinate system are transformed to the camera coordinate system. The calibration consists of a 3D rotation from the inclinometer coordinate system to the camera coordinate system (or the inverse). At run-time, the measured 3D gravity vector in each inclinometer coordinate system is rotated by the inclinometer to camera coordinate system rotation calibration. The net effect is that the gravity, measured in the inclinometer coordinate system, is now expressed in the camera coordinate system on each side of the vehicle.
To express gravity direction in the reference coordinate system, an additional transformation is required, from the camera coordinate system to its associated reference coordinate system. The cameras rigidly attached to the rear wheels depicted in
In the embodiment depicted in
At a series of vehicle roll angles, the pose of the reference target coordinate system is measured concurrently with the Wheel target coordinate system. With pose of these two targets measured at the same camera pose, we can transform the pose of the Wheel target coordinate system into the reference target coordinate system. The pose of the camera coordinate system and of the Wheel target coordinate systems are expressed in their respective reference coordinate systems at multiple positions during the course of the vehicle roll.
Upon completion of the rolling motion, the measured 3D locations of the wheel targets and cameras at all positions are used to calculate the vehicle drive direction. To calculate drive direction, target and/or camera positions must be measured in at least two distinct vehicle rolling positions. Depending on the phase angle at which the wheel-mounted targets and/or cameras are mounted on the rolling vehicle, it may be advantageous to perform some orthogonalizations of the measured target/camera coordinates. If the targets or cameras are attached to the frame or body of the vehicle, or positioned at the centers of their respective wheels, they should travel in a straight line as the vehicle rolls. But if, for example, the targets are positioned off-center on the vehicle wheels, they will in general trace out a cycloidal trajectory. For this scenario, the direction of best-fit lines through the target centers will depend on the phase angle of the target on the wheel at the various data acquisition positions. In other words, the target will oscillate with some translation component in directions that are orthogonal to the true vehicle drive direction. This same consideration applies to the location of the camera coordinate systems in their respective reference coordinate systems.
These deviations from the true vehicle drive direction can be subtracted from the measured target locations by reference to external measurements that are approximately orthogonal to vehicle direction. For example, by using the gravity plane or the plane along which the vehicle rolls, the normal of the gravity plane or the rolling plane can be used as a direction to remove the orthogonal component of the target or camera oscillations. This reduces the uncontrolled variability in the measurement of vehicle drive direction, enabling a more accurate and repeatable drive direction measurement.
Once target and/or camera positions have been orthogonalized as described above (if needed), the array of 3D center locations are then used as input to a well-known least squares calculation algorithm. The optimal drive direction is computed using least squares methods to determine the primary direction of target and/or camera motion on each side of the vehicle. The net result of this calculation, carried out independently for the left and right sides, are vehicle drive directions DDL, DDR measured in each of the left and right reference coordinate systems.
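This least-squares step can be sketched as follows: remove the component of each measured center along the rolling-plane normal (the orthogonalization described above), then take the principal direction of the centered points via an SVD. The data below are synthetic, simulating an off-center target's cycloidal oscillation:

```python
import numpy as np

def drive_direction(points, plane_normal=None):
    """Least-squares drive direction: the principal direction through a
    series of measured 3D target/camera centers (rows of `points`).
    If `plane_normal` is given (e.g., the rolling-plane normal), the
    component of each point along that normal is removed first, which
    suppresses the cycloidal oscillation of off-center wheel targets."""
    pts = np.asarray(points, dtype=float)
    if plane_normal is not None:
        n = np.asarray(plane_normal, dtype=float)
        n = n / np.linalg.norm(n)
        pts = pts - np.outer(pts @ n, n)  # orthogonalize against the normal
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)    # first right-singular vector gives
    d = vt[0]                             # the direction of greatest extent
    return d / np.linalg.norm(d)

# Synthetic roll: motion along a known direction plus a small vertical
# oscillation, as an off-center target would produce.
t = np.linspace(0.0, 1.0, 8)
true_dir = np.array([1.0, 2.0, 0.0]) / np.sqrt(5.0)
points = np.outer(t, true_dir) + 0.05 * np.outer(np.sin(8 * t), [0.0, 0.0, 1.0])
dd = drive_direction(points, plane_normal=[0.0, 0.0, 1.0])
```

Run independently on the left-side and right-side point sets, this yields the per-side drive directions DDL and DDR.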
It must also be noted that for vehicles with front wheel steer (either because the front wheels are turned, or because individual front toe angles are badly out of spec), wheel targets imaged while attached to the front wheels will experience slightly different trajectories. This problem compounds when rolling distances are larger and the vehicle turns through a larger arc. For shorter rolling distances, however, the effect of steer angle should be quite limited.
In the event that vehicle steer is not negligible, the effects of steer can be detected and compensated for in various ways. One method is to calculate the axis of rotation of the wheel mounted targets between successive positions in the rolling motion, and to use the deviation of the wheel axes with wheel roll angle to determine the steer axis and steer angle. With the steer axis and angle, the nonlinear target trajectories can then be corrected for independently on each side of the vehicle, resulting in steer-adjusted drive directions.
The problem of determining the optimal rotation between left and right camera coordinate systems is an instance of what is known to those in the art as Wahba's Problem. The basic question of this method is: given two or more directions measured in an initial coordinate system, and those same directions measured in a second coordinate system, what is the optimal rotation between the two coordinate systems? This problem can be solved in various ways. If the number of common directions measured in two coordinate systems is exactly two, the so-called Triad method can be used to solve for the optimal rotation between the two coordinate systems. For two or more measurements in common in both coordinate systems, more general solution methods such as the Kabsch algorithm, Davenport's Q-method, and other computational algorithms, are used to determine the optimal rotation between coordinate systems. The details of the methods vary, but the essence of all such methods is to solve for the rotation that minimizes the least-squares error when rotating from one coordinate system to the other. Most methods incorporate a singular value decomposition of the 3D covariance matrix of the pairs of corresponding 3D vectors.
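A compact sketch of the Kabsch solution to Wahba's Problem follows: build the 3×3 covariance of the corresponding direction pairs, take its SVD, and apply a determinant correction to guarantee a proper rotation. The two example directions stand in for drive direction and gravity and are hypothetical:

```python
import numpy as np

def kabsch(dirs_a, dirs_b):
    """Kabsch algorithm: rotation R minimizing sum ||R a_i - b_i||^2
    over corresponding direction pairs (rows of dirs_a and dirs_b)."""
    A = np.asarray(dirs_a, dtype=float)
    B = np.asarray(dirs_b, dtype=float)
    H = A.T @ B                        # 3x3 covariance of the vector pairs
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])         # reflection guard: force det(R) = +1
    return Vt.T @ D @ U.T

# Two common directions as seen in the left frame, and the same physical
# directions as seen in the right frame (related by a known test rotation).
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dirs_left = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
dirs_right = dirs_left @ R_true.T      # rows b_i = R_true @ a_i
R = kabsch(dirs_left, dirs_right)
```

With exactly two non-parallel directions the recovered rotation is unique, consistent with the requirement discussed below.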
As depicted in
It must be emphasized that two or more unique common directions are required to calculate a unique 3D rotation between the two coordinate systems. With no common directions between the two coordinate systems, we have no information at all to constrain the rotation between them. With only one common direction between both coordinate systems, we do not have enough information to determine a unique rotation between coordinate systems.
It must also be emphasized that the two or more common directions used to determine the optimal rotation between coordinate systems must be distinct, non-parallel directions; parallel directions effectively point in the same direction and contribute no independent information. Ideally, the common directions are orthogonal or nearly so: the more orthogonal the directions are to each other, the greater the amount of unique information that is incorporated into the calculation of the optimal left to right rotation solution.
The embodiment described above uses cameras 3010L, 3010R and inclinometers 3012L, 3012R to measure vehicle drive direction and gravity direction, respectively. However, the basic principle of correlating two coordinate systems based on measurement of two or more common directions can be extended in various ways.
The disclosed “drive direction” aligner uses vehicle drive direction and gravity direction as the common directions to measure on both sides of the vehicle. The core concept of determining the relative left and right sides of the vehicle, however, does not require these two directions. Any two or more common directions can be used to perform alignment measurements in the manner described. One could employ, for example, a magnetometer to use the measured direction to the magnetic north pole as a common direction that will be (for all practical purposes) the same on both sides of the vehicle. Another sensor which could be employed is a gyroscope, where the left side and right side gyroscopes are configured so as to measure a common direction. Still another well-known sensor that could be used is an absolute orientation sensor, which combines the measurements of plural independent sensors (such as an accelerometer, magnetometer, and gyroscope) to measure orientation with respect to an absolute reference direction. Any common direction measuring sensor can be used, provided that its common direction measurements can be transformed into common direction measurements from other sensors on each side of the vehicle. These are just some examples of other ways in which common directions can be measured on both sides of the vehicle.
In the measurement system described, two corresponding directions are measured on both sides of the vehicle to determine the left side to right side transformation. The number of corresponding directions need not be restricted to two, however. Arbitrarily many corresponding directions can be used to determine the left to right orientation. The calculation algorithms employed are not restricted to two common directions, so long as the additional directions in common are not parallel and thus provide complementary information to restrict the optimal solution.
As described, at least two 3D common directions are required to determine a unique 3D rotation between left and right sides of the vehicle. However, it is possible to retain some of the functionality of the system described if only one corresponding direction is measured on left and right sides of the vehicle. For example, it is possible to determine 2D rotations from just one common measured direction. This may be useful, for example, in a scenario wherein wheel alignment measurements are desired in a strictly 2D mode of operation.
As described, measurement of the gravity direction on both sides of the vehicle is performed with a conventional inclinometer. There are various other ways, however, in which gravity direction can be measured without using an inclinometer. Accelerometers could be used in lieu of inclinometers to measure gravity direction. Plumb lines or similar free-hanging masses could also be used to provide a measure of gravity direction. If the cameras themselves can be secured such that they do not rotate with respect to the vehicle rolling surface plane, one can perform a prior calibration step to determine the normal of the rolling surface in each of the left and right camera coordinate systems. This normal direction can then be used to provide a common reference direction for both sides of the vehicle.
In the embodiments described herein, targets of a predetermined geometry are fixed to a vehicle and measured with cameras to determine vehicle drive direction. Targets are not required, however, as there are various ways in which 3D drive direction can be determined without reference to them. One example is to use stereo vision techniques. For example, if stereo cameras are used on each side of the vehicle on the rear wheels, and can be positioned such that the front wheel surfaces are visible with sufficient resolution, textured feature points on the front wheel surfaces can be detected and matched in all cameras in each stereo camera array. With the detection and matching of corresponding feature points in the stereo camera array, 3D positions of feature points on the wheel surface can be measured and tracked as the vehicle rolls. These feature points can then be used in an analogous manner to a target with a predetermined geometry.
It is possible to use additional techniques other than stereo vision to measure vehicle drive direction without employing a target with a predetermined geometry. One could use structured light projection techniques to determine the 3D positions of feature points throughout the vehicle rolling motion; those feature points can then be used in an analogous manner to a target with a predetermined geometry.
One could also use “structure from motion” techniques to determine the 3D geometry of textured vehicle feature points from a single camera, provided some additional constraints about camera motion. With such techniques, a single camera effectively becomes a stereo camera array.
In the embodiment of
Given the above measurements, calibrations, and intermediate transformations, how does one calculate wheel alignment angles of interest from such a measurement system? Once key equivalences are established, the basic geometric quantities of interest are much the same as in traditional wheel alignment measurement systems that directly measure right side to left side transformations.
Runout compensation of the wheel mounted targets is performed in the same manner as prescribed in traditional wheel alignment systems. The concept and calculation of runout is discussed, for example, in U.S. Pat. No. 5,535,522. The core concept is to observe the orientation change of a coordinate system that is rigidly mounted to a vehicle wheel. The orientation change of this coordinate system as the wheel rolls allows for a calculation of the optimal wheel axis of rotation. The only addition to this concept in a “drive direction” aligner is a downstream step in the processing chain where all wheel axes are transformed into a common coordinate system (i.e., from the right side of the vehicle to the left side) using the optimal right side to left side rotation.
The notion of a vehicle coordinate system (VCS) is a commonly used concept in wheel alignment. See, for example, U.S. Patent Application Publication 2017/0097229. The VCS serves as a frame of reference in which alignment angles can be expressed. In the prior art, camber angles are commonly defined with respect to the VCS (X, Y) plane, and individual toe angles are commonly defined with respect to the GCL (Geometric Center Line) or the thrust line of the vehicle.
In the prior art, the geometric center line (GCL) is calculated as the direction from the middle of the rear wheels to the middle of the front wheels. This is depicted in
A typical GCL measurement process when direct measurements are made between left and right sides is depicted in
In a drive direction aligner described herein, a mathematically equivalent GCL direction can be measured despite not directly measuring the left to right side transformation. The vector from the center of the left rear wheel 3312 to the left front wheel 3310 is denoted by 3314. The vector from the center of the right rear wheel 3313 to the right front wheel 3311 is denoted by 3315. When rear to front wheel vectors 3314 and 3315 are averaged, the vector is mathematically equivalent to the previously described GCL vector 3316.
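The claimed equivalence is straightforward to verify numerically: averaging the per-side rear-to-front vectors gives exactly the prior-art middle-of-rears to middle-of-fronts vector. The wheel-center coordinates below are hypothetical and assumed already expressed in one common coordinate system:

```python
import numpy as np

# Hypothetical wheel centers (meters) in a common coordinate system
left_rear   = np.array([0.0, 0.0, 0.0])
left_front  = np.array([0.1, 2.8, 0.0])
right_rear  = np.array([1.6, 0.0, 0.0])
right_front = np.array([1.5, 2.8, 0.0])

# Per-side rear-to-front vectors (3314 and 3315), averaged to give GCL 3316
v_left  = left_front - left_rear
v_right = right_front - right_rear
gcl = 0.5 * (v_left + v_right)

# Prior-art construction: middle of the rear wheels to middle of the fronts
gcl_prior = 0.5 * (left_front + right_front) - 0.5 * (left_rear + right_rear)
```

The two constructions agree identically because vector averaging and midpoint differencing are the same linear operation.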
The thrust direction 3317 is calculated based on the rear toe angles with respect to the GCL 3316. The front toe angles are calculated with respect to the thrust direction 3317.
To measure camber in a way that is independent of the tilt of the rolling surface with respect to gravity, we must measure the tilt of the rolling surface (e.g., an alignment lift) with respect to gravity. After we have performed this calibration, we can characterize the orientation of the plane of the alignment lift in the inclinometer coordinate system, and from there (using other calibrations and live measurements) transform the normal of the alignment lift to other coordinate systems.
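The tilt quantity described above reduces to the angle between the measured gravity vector and the calibrated lift normal, both expressed in the inclinometer coordinate system. An illustrative Python/NumPy sketch (function name hypothetical):

```python
import numpy as np

def lift_tilt_deg(gravity_incl, lift_normal_incl):
    # Angle between measured gravity and the calibrated lift normal,
    # both expressed in the inclinometer coordinate system.
    g = np.asarray(gravity_incl, float)
    n = np.asarray(lift_normal_incl, float)
    g = g / np.linalg.norm(g)
    n = n / np.linalg.norm(n)
    # abs() makes the result independent of which way the normal points.
    return np.degrees(np.arccos(abs(np.dot(g, n))))
```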
There are various methods by which this measurement of the lift orientation with respect to gravity can be performed. One method is depicted in
The three mutually orthonormal 3D Cartesian basis vectors that define the orientation of the VCS are defined from the geometric quantities defined above. The Y axis of the VCS, corresponding to the longitudinal axis of the vehicle, is defined as the GCL. The Z axis of the VCS corresponds to the vertical dimension of the vehicle, and is approximately aligned with the direction of gravity. We use the previously performed calibration of the alignment lift with respect to gravity to determine the transformation from the measured gravity vector to the orientation of the alignment lift normal in the inclinometer coordinate system. The alignment lift normal is transformed from the inclinometer coordinate system to the left camera coordinate system—this transformed vector constitutes the Z axis of the VCS. The alignment lift normal is further orthogonalized to remove the component that is parallel to the measured vehicle drive direction. The VCS X axis is then defined as the cross product of the VCS Y axis and the VCS Z axis.
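The basis construction described above can be sketched as follows (illustrative Python/NumPy only; the transformation of the lift normal from the inclinometer coordinate system into the camera coordinate system is assumed to have already been applied, and the function name is hypothetical):

```python
import numpy as np

def build_vcs(gcl_dir, lift_normal):
    # Y axis: vehicle longitudinal direction (the GCL), normalized.
    y = np.asarray(gcl_dir, float)
    y = y / np.linalg.norm(y)
    # Z axis: lift normal with its component along the drive direction
    # removed (Gram-Schmidt orthogonalization), then normalized.
    n = np.asarray(lift_normal, float)
    z = n - np.dot(n, y) * y
    z = z / np.linalg.norm(z)
    # X axis: cross product of Y and Z completes the right-handed basis.
    x = np.cross(y, z)
    return x, y, z
```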
Once the VCS has been determined and all wheel axes have been measured and transformed into the VCS, the alignment angles can then be determined in a well-known manner. The wheel axes are projected onto various 2D planes of the vehicle coordinate system. Camber angle is defined from the elevation angle of the wheel axes with respect to the VCS (X, Y) plane. The previously described tilt angle of the alignment lift with respect to gravity must also be incorporated and subtracted from the calculated camber angles. Rear toe angles are calculated with respect to the Geometric Center Line 3316 as described above. Front wheel toe angles are defined with respect to the vehicle thrust line 3317 as described above.
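As a sketch of the projection step (illustrative only, with hypothetical names and an assumed axis convention: X lateral, Y longitudinal along the GCL, Z vertical, wheel axis pointing outboard), camber and toe can be read off a unit wheel axis expressed in the VCS:

```python
import numpy as np

def camber_and_toe_deg(wheel_axis):
    # wheel_axis: unit vector of the wheel's axis of rotation in the VCS.
    a = np.asarray(wheel_axis, float)
    a = a / np.linalg.norm(a)
    # Camber: elevation of the axis out of the VCS (X, Y) plane.
    camber = np.degrees(np.arcsin(a[2]))
    # Toe: in-plane rotation of the axis away from purely lateral.
    toe = np.degrees(np.arctan2(a[1], abs(a[0])))
    return camber, toe
```

In the full procedure, the lift-tilt correction described above would additionally be subtracted from the camber value, and front toe would be referenced to the thrust line rather than the GCL.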
As shown in
In operation, as shown in the procedure 3500 of
Additionally, also in step 3505, the position (and orientation) of the axis of rotation of each front wheel (i.e., the wheel on which the passive head having the alignment target is mounted) is calculated by first transforming the front wheel target's position into the corresponding reference target's coordinate system at each vehicle position, and then calculating the axis of rotation from the change of position of the front wheel target in the corresponding reference target coordinate system. Based on these computations, the positions of the two wheel axes on one side of the vehicle are determined. A similar process can be performed on the other side of the vehicle to determine the positions of the two wheel axes on that side.
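The transformation into the reference target's coordinate system can be sketched with homogeneous transforms (illustrative Python/NumPy only; names are hypothetical). Expressing the wheel target's pose relative to the reference target cancels any motion of the camera itself between captures:

```python
import numpy as np

def pose_to_matrix(R, t):
    # 4x4 homogeneous transform from a rotation matrix R and translation t.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def target_in_reference(T_cam_ref, T_cam_tgt):
    # Both the reference target and the wheel target are posed in the
    # camera frame; composing the transforms expresses the wheel target
    # in the reference target's coordinate system.
    return np.linalg.inv(T_cam_ref) @ T_cam_tgt
```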
Next, as discussed herein above, a drive direction of the vehicle is calculated using the calculated poses of the first and second wheel targets 3080L, 3080R and the sensed orientation relative to gravity on the first and second sides of the vehicle. In step 3506, an orientation relative to gravity on the first side of the vehicle is measured using first gravity sensor 3102L attached to the first camera 3010L; and an orientation relative to gravity on the second side of the vehicle is measured using second gravity sensor 3102R attached to the second camera 3010R.
A drive direction of the first side of the vehicle is then calculated using the calculated poses of the first wheel target 3080L, and a drive direction of the second side of the vehicle is calculated using the calculated poses of the second wheel target 3080R, at step 3507. The drive direction and gravity direction of the first side of the vehicle are then transformed into a common coordinate system with the drive direction and gravity direction of the second side of the vehicle to obtain the vehicle drive direction, in step 3508. Wheel alignment measurements are then calculated as discussed herein above using the vehicle drive direction calculation, at step 3509.
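One way the side-to-side transformation of step 3508 can be realized is a classic two-vector (TRIAD-style) construction: both sides observe the same physical drive direction and the same gravity direction, each in its own camera frame, which determines the rotation between the frames. This Python/NumPy sketch is illustrative only and is not asserted to be the method of the original disclosure:

```python
import numpy as np

def triad_basis(v1, v2):
    # Orthonormal right-handed basis built from two non-parallel vectors.
    t1 = v1 / np.linalg.norm(v1)
    t2 = np.cross(v1, v2)
    t2 = t2 / np.linalg.norm(t2)
    t3 = np.cross(t1, t2)
    return np.column_stack([t1, t2, t3])

def right_to_left_rotation(drive_L, grav_L, drive_R, grav_R):
    # The rotation R satisfying drive_L = R @ drive_R and
    # grav_L = R @ grav_R, via the TRIAD construction.
    A = triad_basis(drive_L, grav_L)
    B = triad_basis(drive_R, grav_R)
    return A @ B.T
```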
In the foregoing description, the active heads 3020L, 3020R are described as each being attached to a rear wheel and the targets 3080L, 3080R are described as each being attached to a front wheel of the vehicle. However, the targets 3080L, 3080R could be attached to the rear wheels and the active heads 3020L, 3020R to the front wheels. The reference targets 3070L, 3070R could be attached to a rack, floor, tripod, or other type of attachment.
The host computer platform is communicatively connected to the alignment or calibration cameras through wired or wireless communication links. For this purpose, the host computer platform and each camera have a wired or wireless communication transceiver therein. In particular, in the case of a wireless communication link, the host computer platform and the camera(s) each have a wireless transceiver through which a wireless communication link can be established. The wired or wireless communication links are used to communicate captured image data from the camera(s) to the host computer platform, and can also be used to communicate control commands or software updates from the host computer platform to the camera(s).
In operation, when a wheel alignment procedure or a calibration procedure is performed, the CPU of the host computer platform causes one or more connected alignment and/or calibration camera(s) to capture images. Typically, the images are captured so as to show therein one or more alignment or reference targets according to which positions can be determined. A single image or plural images are captured, including images captured prior to and following movement of the vehicle, particularly in situations in which an axis of rotation of a wheel is to be determined.
The host computer platform can store the captured images in memory. Additionally, known positional relationships (when known) are stored in memory including, for example, a known positional relationship between a calibration camera (e.g., 300) and a reference target (e.g., 200) of a side-to-side reference system 100; a known positional relationship between a calibration target (e.g., 310) and a reference target (e.g., 210); a known positional relationship between two cameras (120, 140) that are mounted together in a camera assembly; and the like.
The host computer platform is operative to process the captured images in order to identify the alignment targets or reference targets therein, and to determine the position of the alignment targets or reference targets relative to the cameras based on the locations of the targets in the captured images. For example, methods such as those described in U.S. Pat. Nos. 7,313,869; 7,369,222; and 7,415,324, which are hereby incorporated by reference in their entireties, can be used for this purpose. In turn, the host computer platform can determine the alignment of the vehicle wheels based on the determined positions of the targets relative to the cameras and the further steps detailed herein, including steps based on the stored positional relationship data described above.
As such, aspects of the alignment measurement methods detailed above may be embodied in programming stored in the memory of the computer platform and configured for execution on the CPU of the computer platform. Furthermore, data on alignment targets including known relative position data of targets and/or cameras, data on alignment and calibration cameras, and the like, may be stored in the memory of the computer platform for use in computing alignment measurements.
The drawing figures presented in this document depict one or more implementations in accord with the present teachings, by way of example only, not by way of limitation. In the figures, like reference numerals refer to the same or similar elements.
In the foregoing detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant teachings. However, it should be apparent to those skilled in the art that the present teachings may be practiced without such details. In other instances, methods, procedures, components, and/or circuitry have been described at a relatively high-level, without detail, in order to avoid unnecessarily obscuring aspects of the present teachings.
Unless otherwise stated, all measurements, values, ratings, positions, magnitudes, sizes, and other specifications that are set forth in this specification, including in the claims that follow, are approximate, not exact. They are intended to have a reasonable range that is consistent with the functions to which they relate and with what is customary in the art to which they pertain.
The scope of protection is limited solely by the claims that now follow. That scope is intended and should be interpreted to be as broad as is consistent with the ordinary meaning of the language that is used in the claims when interpreted in light of this specification and the prosecution history that follows and to encompass all structural and functional equivalents. Notwithstanding, none of the claims are intended to embrace subject matter that fails to satisfy the requirement of Sections 101, 102, or 103 of the Patent Act, nor should they be interpreted in such a way. Any unintended embracement of such subject matter is hereby disclaimed.
Except as stated immediately above, nothing that has been stated or illustrated is intended or should be interpreted to cause a dedication of any component, step, feature, object, benefit, advantage, or equivalent to the public, regardless of whether it is or is not recited in the claims.
It will be understood that the terms and expressions used herein have the ordinary meaning as is accorded to such terms and expressions with respect to their corresponding respective areas of inquiry and study except where specific meanings have otherwise been set forth herein. Relational terms such as first and second and the like may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms “comprises,” “comprising,” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by “a” or “an” does not, without further constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The Abstract is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.
While the foregoing has described what are considered to be the best mode and/or other examples, it is understood that various modifications may be made therein and that the subject matter disclosed herein may be implemented in various forms and examples, and that the teachings may be applied in numerous applications, only some of which have been described herein. It is intended by the following claims to claim any and all applications, modifications and variations that fall within the true scope of the present teachings.
This application claims the benefit of U.S. patent application Ser. No. 16/904,407, filed Jun. 17, 2020, which claims the benefit of U.S. patent application Ser. No. 16/423,503, filed May 28, 2019, now U.S. Pat. No. 10,692,241, which claims the benefit of U.S. patent application Ser. No. 15/678,825, filed Aug. 16, 2017, now U.S. Pat. No. 10,347,006, which claims the benefit of U.S. Provisional Patent Applications No. 62/375,716, filed Aug. 16, 2016, and No. 62/377,954, filed Aug. 22, 2016, the disclosures of which are hereby incorporated by reference in their entireties.
Number | Date | Country
--- | --- | ---
62377954 | Aug 2016 | US
62375716 | Aug 2016 | US
Relation | Number | Date | Country
--- | --- | --- | ---
Parent | 16423503 | May 2019 | US
Child | 16904407 | | US
Parent | 15678825 | Aug 2017 | US
Child | 16423503 | | US
Relation | Number | Date | Country
--- | --- | --- | ---
Parent | 16904407 | Jun 2020 | US
Child | 17358555 | | US