1. Field of the Invention
The present invention relates to a position detection apparatus (an encoder) which detects a position.
2. Description of the Related Art
Conventionally, rotary encoders (position detection apparatuses) have been known which detect a position (a rotational displacement) of an object by reading a predetermined pattern of a scale that is attached to a rotational shaft of the object and configured to rotate in accordance with a rotation of the object. In such a rotary encoder, when the rotation center and the pattern center of the scale are shifted from each other, a periodic error (an eccentric error) occurs that has the characteristics of a sinusoidal wave with one period per rotation.
Japanese Patent Laid-Open No. (“JP”) 6-58771 discloses a position detection apparatus which includes two sensors arranged at positions different by 180 degrees from each other with respect to a rotational shaft and corrects an eccentric error by averaging signals obtained from the two sensors.
The position detection apparatus disclosed in JP 6-58771 can correct the eccentric error to improve detection accuracy. However, it requires the two sensors to be arranged at positions shifted by 180 degrees from each other with respect to the rotational shaft. This leads to an increase in the size of holding members of the sensors, which prevents miniaturization of the position detection apparatus.
The present invention provides a small-sized and highly-accurate position detection apparatus, lens apparatus, image pickup system, and machine tool apparatus.
A position detection apparatus as one aspect of the present invention is configured to detect a position of an object, and includes a scale which includes a pattern circumferentially and periodically formed on a circle whose center is a predetermined point, the scale being configured to rotate depending on a displacement of the object, a sensor unit relatively movable with respect to the scale, and a signal processor configured to process an output signal of the sensor unit to obtain position information of the object. The sensor unit includes a first detector configured to detect a first partial pattern formed in a first region apart from the predetermined point by a first distance in a radial direction on a half line starting from the predetermined point of the scale, and a second detector configured to detect a second partial pattern formed in a second region apart from the predetermined point by a second distance different from the first distance. The signal processor is configured to reduce an error component contained in the position information due to a difference between a rotation center of the scale and the predetermined point, based on a first detection signal outputted from the first detector and on a second detection signal outputted from the second detector.
A lens apparatus as another aspect of the present invention includes a lens displaceable in an optical axis direction, and the position detection apparatus.
An image pickup system as another aspect of the present invention includes the lens apparatus and an image pickup apparatus including an image pickup element configured to perform a photoelectric conversion of an optical image formed via the lens.
A machine tool apparatus as another aspect of the present invention includes a machine tool including at least one of a robot arm and a conveyer configured to convey an object to be assembled, and the position detection apparatus configured to detect at least one of a position and an attitude of the machine tool.
Further features and aspects of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Exemplary embodiments of the present invention will be described below with reference to the accompanying drawings. In the drawings, the same elements will be denoted by the same reference numerals and duplicate descriptions thereof will be omitted.
First of all, referring to
As illustrated in
On the scale 10, a track 11 (a pattern) having a reflection portion (a black portion in
Of the optical paths from the light source 23 to the light receiving portions 21 and 22, the position at which the light reaches the track 11 is a read position on the track 11. Hereinafter, the length (distance) from the pattern center O of the track 11 to a read position on the track 11 is referred to as a "detection radius". As illustrated in
Each of the light receiving portions 21 and 22 includes a plurality of light receiving elements arrayed in a length measurement direction (a direction orthogonal to a plane of paper of
Subsequently, referring to
As illustrated in
Subsequently, an angle detection operation of the signal processor 40 will be described. First, the A/D converter 41 samples two pairs of two-phase sinusoidal signals (analog signals) corresponding to the light receiving portions 21 and 22 to convert them to digital signals. Then, the phase detection processing unit 42 performs an arc-tangent calculation with respect to the sampled two pairs of two-phase sinusoidal signals (the digital signals) to determine a phase. Since a two-phase sinusoidal signal is equivalent to a sine signal “sin” and a cosine signal “cos”, a phase can be determined by performing an arc-tangent calculation. A description will be given below, with phases (the first detection signal and the second detection signal) corresponding to the light receiving portions 21 and 22 being denoted as θ1 and θ2, respectively.
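As a non-limiting illustration of the arc-tangent calculation described above, the following minimal sketch (Python; the function and variable names are illustrative and not part of any embodiment) converts one sampled two-phase sinusoidal pair into a phase in the range 0 to 2π:

```python
import math

def detect_phase(sin_value, cos_value):
    """Return the phase in [0, 2*pi) of one two-phase sinusoidal sample.

    sin_value, cos_value: the sampled sine-phase and cosine-phase outputs
    of one light receiving portion (illustrative names).
    """
    # atan2 resolves the quadrant; the modulo shifts the result into [0, 2*pi).
    return math.atan2(sin_value, cos_value) % (2.0 * math.pi)

# Example: samples corresponding to the phases theta1 and theta2.
theta1 = detect_phase(1.0, 0.0)   # approximately pi/2
theta2 = detect_phase(0.0, -1.0)  # approximately pi
```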
The angle detection processing unit 43 (the position detection processing unit) detects an angle based on the phase θ1 determined by the phase detection processing unit 42. The phase θ1 continuously changes from 0 to 2π over each pair of the reflection portion and the non-reflection portion of the track 11, and then returns from 2π to 0 at the transition to the subsequent pair of the reflection portion and the non-reflection portion. The angle detection processing unit 43 detects this transition to calculate an amount of phase change (an amount of phase shift), and determines the angle based on the amount of phase change.
For instance, a case will be described in which the track 11 has 90 periods in 360 degrees, that is, a phase shift of 2π is equivalent to four degrees. In this case, assuming that an initial phase of π/2 becomes 3π/2 after the phase transitions by two periods in the direction in which the angle increases, a total phase change of 5π occurs, and thus, by converting the phase to an angle, it can be determined that the phase change is equivalent to an angle of 10 degrees. More generally, the amount of phase change can be determined by detecting the phase at fixed intervals and accumulating the difference between the latest phase and the phase detected immediately before it. The phase change amount s(i) accumulated up to the i-th detection of the phase θ(i) is represented by the following Expression (1), where the relationships s(0)=0 and θ(0)=0 are satisfied and symbol i denotes a natural number.
s(i) = s(i−1) + (θ(i) − θ(i−1)) (1)
Then, the angle detection processing unit 43 converts the phase change amount s(i) to an angle (a position) depending on the period of the track 11. Where symbol k denotes the ratio of a period and an angle of the track 11, the angle is represented as k·s(i). As described above, the angle detection processing unit 43 (a position detection processing unit) obtains position information of the object based on the first detection signal outputted from the light receiving portion 21.
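A minimal sketch of the accumulation of Expression (1) and the conversion to an angle is given below (Python; illustrative names). The sketch handles the 2π-to-0 transition by wrapping each successive difference into [−π, π), which assumes the phase is sampled at least twice per pattern period:

```python
import math

TWO_PI = 2.0 * math.pi

def accumulate_phase(phases):
    """Accumulate the phase change s(i) of Expression (1).

    phases: detected phases theta(1), theta(2), ... in [0, 2*pi),
    sampled at fixed intervals, with theta(0) = 0 assumed.
    Each difference is wrapped into [-pi, pi) so that the 2*pi -> 0
    transition between pattern periods is accumulated correctly.
    """
    s = 0.0
    previous = 0.0
    for theta in phases:
        delta = (theta - previous + math.pi) % TWO_PI - math.pi
        s += delta
        previous = theta
    return s

# Track with 90 periods per rotation: one period (2*pi of phase) = 4 degrees,
# so the conversion ratio k is 4 / (2*pi) degrees per radian of phase change.
k = 4.0 / TWO_PI
angle_deg = k * accumulate_phase([math.pi / 2, math.pi, 3 * math.pi / 2])
```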
The eccentricity detection processing unit 44 (an eccentric error calculating unit) calculates an error e1 contained in the phase θ1 by using detection radii r1 and r2 and the phases θ1 and θ2. Referring now to
In this embodiment, the radii (detection radii) corresponding to the light receiving portions 21 and 22 are r1 and r2, respectively. The angles of the read positions on the track 11 for the light receiving portions 21 and 22 are equal to each other with reference to the rotational shaft 30. Accordingly, the maximum values of the detection errors caused at the light receiving portions 21 and 22 are ε/r1 and ε/r2, respectively. For both light receiving portions, the relationship between the rotation angle and the detection error is an error profile that has one period per rotation of the scale 10 and the same phase. Where e is the error contained in a detected angle for a detection radius r and an eccentricity ε, the error e is represented by the following Expression (2). In Expression (2), symbol α is a constant.
e=(ε/r)·sin(θ+α) (2)
The difference indicated by the dashed line (C) in
θ1−θ2=(ε/r1−ε/r2)·sin(θ+α) (3)
The difference indicated by the dashed line (C) also has an error profile in which its phase is the same as that of each of the dotted line (A) and the dotted line (B) and its amplitude is different from that of each of the dotted line (A) and the dotted line (B). The ratio of an amplitude indicated by the dotted line (A) and an amplitude indicated by the dotted line (B) depends only on the detection radii r1 and r2. Therefore, the error e1 (the eccentric error) can be calculated as represented by the following Expression (4).
e1=(ε/r1)·sin(θ+α)=(θ1−θ2)·(r2/(r2−r1)) (4)
As described above, the eccentricity detection processing unit 44 (the eccentric error calculating unit) calculates an eccentric error based on the first detection signal outputted from the light receiving portion 21, the second detection signal outputted from the light receiving portion 22, the radius r1 (the first distance), and the radius r2 (the second distance).
The angle correction processing unit 45 (a position correcting unit) subtracts the error e1 determined by the eccentricity detection processing unit 44 from the angle k·s(i) determined by the angle detection processing unit 43 to determine a corrected angle (an angle in which an eccentric error has been reduced). In other words, the angle correction processing unit 45 (the position correcting unit) subtracts the eccentric error calculated by the eccentricity detection processing unit 44 from the position information obtained by the angle detection processing unit 43 to obtain corrected position information. As a result, the signal processor 40 of this embodiment can reduce the error (the eccentric error) to detect the angle with higher accuracy.
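The eccentric-error calculation of Expression (4) and the subsequent subtraction can be sketched as follows (Python; illustrative names, assuming θ1 and θ2 are the angles detected via the regions at the radii r1 and r2):

```python
def eccentric_error(theta1, theta2, r1, r2):
    """Eccentric error contained in theta1, per Expression (4).

    Because the two error profiles share the same phase, their difference
    scales only with the detection radii r1 and r2, so the error at radius
    r1 is (theta1 - theta2) * r2 / (r2 - r1).
    """
    return (theta1 - theta2) * r2 / (r2 - r1)

def corrected_angle(theta1, theta2, r1, r2):
    """Subtract the eccentric error from the angle detected at radius r1."""
    return theta1 - eccentric_error(theta1, theta2, r1, r2)
```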
While, in this embodiment, the angles of the read positions on the track 11 by the light receiving portions 21 and 22 are equal to each other relative to the rotational shaft 30, this embodiment is not limited to this. Even when the read positions on the track 11 by the light receiving portions 21 and 22 differ from each other (even when the angles differ), the relative offset between the rotation angle detected by the light receiving portion 21 and that detected by the light receiving portion 22, which is caused by their relative displacement with respect to the eccentricity, can be specified. For this reason, the eccentric error correction can be performed even in this case.
For instance, a case will be described in which, as illustrated in
(ε/r1)·sin(θ+α1)+c1 (5)
(ε/r2)·sin(θ+α2)+c2 (6)
In Expressions (5) and (6), since the shift amount of the detected positions in the circumferential direction due to the inclination of the sensor 20 is sufficiently small compared to the detected rotation angle, the relationship α1≈α2 is satisfied. Therefore, the difference θ1−θ2 between the phases θ1 and θ2 is approximated as represented by the following Expression (7), where φ (=c1−c2) denotes the offset between the two detected phases. The phase error corresponding to the light receiving portion 21 can then be determined by the following Expression (8).
θ1−θ2≈(ε/r1−ε/r2)·sin(θ+α1)+φ (7)
(ε/r1)·sin(θ+α)=(θ1−θ2−φ)·(r2/(r2−r1)) (8)
If there is an alignment error between the scale 10 and the sensor 20, the term φ (the angle difference), which equals the average of the maximum value and the minimum value of Expression (7), has a value other than zero. In this embodiment, determining this amount (the angle difference φ) allows a shift amount of the read positions, that is, an alignment shift, to be detected and corrected (reduced). As described above, the signal processor 40 is further capable of reducing a positional difference between the region (the first region) at the radius r1 (the first distance) and the region (the second region) at the radius r2 (the second distance) in the length measurement direction, which is a direction perpendicular to the radial direction of the scale 10. This difference is an error caused by a position shift between the regions at the radii r1 and r2 in the circumferential direction. Using the term (ε/r1−ε/r2)·sin(θ+α1) in Expression (7), derived under the relationship α1≈α2, allows the eccentricity amount to be detected and the eccentric error to be corrected (reduced).
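A minimal sketch of this offset handling (Python; illustrative names) estimates φ as the average of the extremes of θ1−θ2, assuming the difference has been sampled over at least one full rotation, and then applies Expression (8):

```python
def alignment_offset(diff_samples):
    """Estimate the angle difference phi of Expression (7).

    diff_samples: values of (theta1 - theta2) collected over at least one
    full rotation. The sinusoidal term cancels in the average of its
    maximum and minimum, leaving the offset phi.
    """
    return (max(diff_samples) + min(diff_samples)) / 2.0

def eccentric_error_with_offset(theta1, theta2, phi, r1, r2):
    """Eccentric error at radius r1 per Expression (8), with phi removed."""
    return (theta1 - theta2 - phi) * r2 / (r2 - r1)
```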
Next, referring to
The correction table 46 previously stores a plurality of positions (position information), i.e., a plurality of detected angles, and a corrected value (a value to be used to reduce an eccentric error) corresponding to each position (position information). The angle correction processing unit 45 determines an angle in which an error (an eccentric error) has been corrected, i.e. corrected position information, based on the position information obtained by the angle detection processing unit 43, i.e. the angle (the detected angle) and the corrected value stored in the correction table 46.
Where j is an angle determined by the angle detection processing unit 43, c(j) is a corrected value corresponding to the angle j stored in the correction table 46, and x(j) is a corrected angle determined by the angle correction processing unit 45, the corrected angle x(j) can be determined as represented by the following Expression (9).
x(j)=j−c(j) (9)
In this embodiment, the corrected value can be stored in the correction table 46 by storing the combination of the angle j detected by the angle detection processing unit 43 and the error e (the corrected value c(j)) determined by the eccentricity detection processing unit 44. Furthermore, instead of storing all combinations of the angle j and its corresponding corrected value in the correction table 46, some values can be thinned out (disregarded) without being stored. In this case, the corrected value c(j) corresponding to the angle may not exist. To cope with this situation, for instance, using the corrected values c(k) and c(l) corresponding to angles k and l (k<j<l) stored in the correction table 46, the angle correction processing unit 45 determines the corrected value c(j) corresponding to the angle j by linear interpolation to perform the correction, as represented by the following Expression (10).
c(j)=(c(k)·(l−j)+c(l)·(j−k))/(l−k) (10)
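The table-based correction of Expressions (9) and (10) can be sketched as follows (Python; illustrative names, assuming the detected angle j lies between two stored angles of the table):

```python
import bisect

def interpolated_correction(table_angles, table_values, j):
    """Correction value c(j) by the linear interpolation of Expression (10).

    table_angles: sorted list of stored angles.
    table_values: corrected values c(.) at those angles.
    If j is stored exactly, its value is returned; otherwise the two
    neighbouring entries k < j < l are interpolated.
    """
    idx = bisect.bisect_left(table_angles, j)
    if idx < len(table_angles) and table_angles[idx] == j:
        return table_values[idx]
    k, l = table_angles[idx - 1], table_angles[idx]
    ck, cl = table_values[idx - 1], table_values[idx]
    return (ck * (l - j) + cl * (j - k)) / (l - k)

def table_corrected_angle(j, table_angles, table_values):
    """Corrected angle x(j) = j - c(j) of Expression (9)."""
    return j - interpolated_correction(table_angles, table_values, j)
```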
A fitting to a function may be performed to reduce data volume to be stored in the correction table 46. An error e (an eccentric error) has one period per one rotation of the scale 10, and a detection radius r is constant. This makes it possible to approximately determine the error e as represented by the following Expression (11).
e = (ε/r)·sin(j+α) (11)
In this case, the correction table 46 needs only to store values of an eccentricity amount ε and a constant α.
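When the fitted form of Expression (11) is used instead of a full table, the correction reduces to evaluating one sinusoid from the two stored constants; a brief sketch (Python; illustrative names, with the detection radius r assumed known):

```python
import math

def fitted_correction(j, eccentricity, alpha, r):
    """Eccentric error e of Expression (11) at the detected angle j."""
    return eccentricity / r * math.sin(j + alpha)

# Only the eccentricity amount and the constant alpha need to be stored.
```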
Next, referring to
As illustrated in
The grating patterns of the track 12 have the pitch P1 of 544 periods per rotation and the pitch P2 of 128 periods per rotation. Similarly, the grating patterns of the track 13 have the pitch Q1 of 495 periods per rotation and the pitch Q2 of 132 periods per rotation. In this embodiment, to detect a relative displacement between the scale 10a and the sensor 20, the light receiving portions 21 and 22 classify the outputs of the light receiving elements into four types, A(+), B(+), A(−), and B(−). Then, the light receiving portions 21 and 22 output two-phase pseudo-sinusoidal signals A and B by using the relationships A=A(+)−A(−) and B=B(+)−B(−). In this embodiment, the sensor 20 has a function of selecting the array of the light receiving elements to be used, which enables the sensor 20 to selectively detect the pitches P1 and P2 and the pitches Q1 and Q2.
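As a brief illustration of the signal formation just described (Python; the grouping of element outputs into A(+), B(+), A(−), and B(−) is assumed to have been done already, and the names are illustrative):

```python
# Periods per rotation of the grating patterns in this embodiment.
PERIODS = {"P1": 544, "P2": 128, "Q1": 495, "Q2": 132}

def two_phase_signals(a_plus, b_plus, a_minus, b_minus):
    """Combine the four element-output groups into the two-phase signals A and B."""
    return a_plus - a_minus, b_plus - b_minus
```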
Subsequently, referring to
The sensor 20 outputs, as signals, the intensities of the reflected light at the respective positions on the light receiving portions 21 and 22. Therefore, even when the detection period of the sensor 20 and the period (pitch) of a pattern formed on the scale 10a differ slightly from each other, the sensor 20 outputs a signal with a period corresponding to the pitch of the pattern formed on the scale 10a. Accordingly, where the detection period of the sensor 20 is P1, the light receiving portion 21 outputs a two-phase pseudo-sinusoidal signal with the pitch P1 and the light receiving portion 22 outputs a two-phase pseudo-sinusoidal signal with the pitch Q1. Similarly, where the detection period of the sensor 20 is 4×P1, the light receiving portion 21 outputs a two-phase pseudo-sinusoidal signal with the pitch P2 and the light receiving portion 22 outputs a two-phase pseudo-sinusoidal signal with the pitch Q2. As described above, the light receiving portion 21 (the first detector) detects (the pattern of) the track 12 and the light receiving portion 22 (the second detector) detects (the pattern of) the track 13.
Subsequently, referring to
First, the sensor 20 is configured to output two pairs of two-phase pseudo-sinusoidal signals corresponding to the patterns (the grating patterns) with the pitches P1 and Q1 included in the tracks 12 and 13, with the detection period of the sensor 20 being set to P1. Then, the A/D converter 41 samples these four signals. Subsequently, the A/D converter 41 samples the two pairs of two-phase pseudo-sinusoidal signals corresponding to the patterns with the pitches P2 and Q2 formed on the tracks 12 and 13, with the detection period of the sensor 20 being set to 4×P1.
The phase detection processing unit 42 determines four phases θP1, θQ1, θP2, and θQ2 based on the four pairs of two-phase pseudo-sinusoidal signals sampled by the A/D converter 41. Similarly to the first and second embodiments, the four phases θP1, θQ1, θP2, and θQ2 are determined by the arc-tangent calculation. An absolute-type detection processing unit 47 performs a vernier calculation with respect to the four phases θP1, θQ1, θP2, and θQ2 to determine an angle.
Subsequently, referring to
θP3=MOD(θP2×4,2π) (12)
θP4=MOD(θP1−θP3,2π) (13)
In Expressions (12) and (13), symbol MOD (x, y) denotes a residue of x divided by y. In this case, the phase signal θP3 and the phase difference signal θP4 are determined as illustrated in
The phase θP2 and the phase difference signal θP4 have 128 periods and 32 periods, respectively. On the other hand, the phase difference signal θP4 is derived from θP3, which is a value calculated by multiplying the phase θP2 by four as represented by Expression (12). Therefore, the amount of error is also quadrupled, and thus the error contained in the phase difference signal θP4 is larger than that contained in the phase θP2. In order to cope with this, as represented by the following Expression (14), a signal θP5 with 32 periods which has the same accuracy as that of the phase θP2 is determined.
θP5=ROUND((4×θP4−θP2)/(2π))×2π/4+θP2/4 (14)
In Expression (14), symbol ROUND(x) denotes rounding off x to the nearest integer.
Similarly, with respect also to the phase θP1 with 544 periods and the signal θP5 with 32 periods, as represented by the following Expression (15), a signal θP6 with 32 periods which has the same accuracy as that of the phase θP1 is determined.
θP6=ROUND((17×θP5−θP1)/(2π))×2π/17+θP1/17 (15)
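A minimal sketch of the synthesis of Expressions (12) through (15) is given below (Python; illustrative names). Python's built-in round is used for ROUND; its tie-breaking behaviour at exact halves differs from conventional rounding, which does not affect the illustration:

```python
import math

TWO_PI = 2.0 * math.pi

def p_side_synthesis(theta_p1, theta_p2):
    """Synthesize the 32-period signals thetaP5 and thetaP6 from
    thetaP1 (544 periods) and thetaP2 (128 periods), per Expressions (12)-(15).
    """
    theta_p3 = (theta_p2 * 4.0) % TWO_PI       # Expression (12)
    theta_p4 = (theta_p1 - theta_p3) % TWO_PI  # Expression (13)
    # Expression (14): restore the accuracy of thetaP2 in the 32-period signal.
    theta_p5 = round((4.0 * theta_p4 - theta_p2) / TWO_PI) * TWO_PI / 4.0 + theta_p2 / 4.0
    # Expression (15): restore the accuracy of thetaP1 (544 = 17 * 32 periods).
    theta_p6 = round((17.0 * theta_p5 - theta_p1) / TWO_PI) * TWO_PI / 17.0 + theta_p1 / 17.0
    return theta_p5, theta_p6
```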
The phases θQ1 and θQ2 have 495 periods per rotation and 132 periods per rotation, respectively. Therefore, as represented by the following Expression (16), a phase signal θQ3 is determined by multiplying a phase θQ2 by four and then normalizing the value by 2π. Then, as represented by the following Expression (17), a phase difference signal θQ4 of the phase θQ1 and the phase signal θQ3 is determined. Furthermore, as represented by the following Expression (18), a signal θQ5 having the same accuracy as that of the phase θQ2 is determined. In addition, as represented by the following Expression (19), a signal θQ6 having the same accuracy as that of the phase θQ1 is determined.
θQ3=MOD(θQ2×4,2π) (16)
θQ4=MOD(θQ3−θQ1,2π) (17)
θQ5=ROUND((4×θQ4−θQ2)/(2π))×2π/4+θQ2/4 (18)
θQ6=ROUND((15×θQ5−θQ1)/(2π))×2π/15+θQ1/15 (19)
Since the signals θP6 and θQ6 have 32 periods per rotation and 33 periods per rotation respectively, a phase difference θ7 between the signals θP6 and θQ6 is determined as represented by the following Expression (20).
θ7=MOD(θQ6−θP6,2π) (20)
The phase difference θ7 represents an angle, since it is a signal with one period per rotation, corresponding to the difference between the signal with 32 periods per rotation and the signal with 33 periods per rotation. The phase difference θ7, however, has a larger error than each of the signals θP6 and θQ6. In order to cope with this, signals θP8 and θQ8, which have the same accuracy as the signals θP6 and θQ6 respectively, are determined as represented by the following Expressions (21) and (22). Each of the signals θP8 and θQ8 is a signal with one period per rotation, which represents an angle.
θP8=ROUND((32×θ7−θP6)/(2π))×2π/32+θP6/32 (21)
θQ8=ROUND((33×θ7−θQ6)/(2π))×2π/33+θQ6/33 (22)
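The combination of Expressions (20) through (22) can be sketched as follows (Python; illustrative names, assuming the Q-side signals have been synthesized analogously to the P-side per Expressions (16)-(19)):

```python
import math

TWO_PI = 2.0 * math.pi

def absolute_angle_signals(theta_p6, theta_q6):
    """Combine the 32-period signal thetaP6 and the 33-period signal thetaQ6
    into the one-period-per-rotation signals thetaP8 and thetaQ8,
    per Expressions (20)-(22).
    """
    theta_7 = (theta_q6 - theta_p6) % TWO_PI  # Expression (20)
    # Expressions (21) and (22): restore the accuracy of thetaP6 and thetaQ6.
    theta_p8 = round((32.0 * theta_7 - theta_p6) / TWO_PI) * TWO_PI / 32.0 + theta_p6 / 32.0
    theta_q8 = round((33.0 * theta_7 - theta_q6) / TWO_PI) * TWO_PI / 33.0 + theta_q6 / 33.0
    return theta_p8, theta_q8
```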
The eccentricity detection processing unit 44 determines an eccentric error based on the signals θP8 and θQ8. Since the signals θP8 and θQ8 represent angles, the relationship of θP8=θQ8 is satisfied when they do not have an eccentricity, and on the other hand, an error depending on the eccentricity is contained when they have the eccentricity. A difference θP8−θQ8 between the signal θP8 and the signal θQ8 is represented by the following Expression (23) as in the case of Expression (3).
θP8−θQ8=(ε/r1−ε/r2)·sin(θ+α) (23)
From Expression (23), the signal θP8 corresponding to the light receiving portion 21 can be regarded as containing an error represented by the following Expression (24). The eccentricity detection processing unit 44 calculates this error.
(ε/r1)·sin(θ+α)=(θP8−θQ8)·(r2/(r2−r1)) (24)
The angle correction processing unit 45 determines an angle by subtracting the error (the eccentric error) determined by the eccentricity detection processing unit 44 from the signal θP8. This series of operations allows an error-corrected position (an error-reduced position) to be determined.
The encoder 100a of this embodiment is an absolute-type position detection apparatus capable of detecting an absolute position. "Absolute position" as used in this embodiment means a relative position of a pattern (or of an object to be measured having the pattern thereon) with respect to a detector (the sensor unit), or a relative position of a moving object to be measured with respect to a fixed part. The absolute-type position detection apparatus can detect this relative position (the "absolute position" of this embodiment) in a measurement performed by the detector. On the other hand, the encoders of the first and second embodiments, unlike the absolute-type position detection apparatus described in this embodiment, are incremental-type encoders capable of detecting only a position shift (a change of position) in a measurement by the detector. An incremental-type position detection apparatus can also determine an absolute position by additionally using a detection result of a separately provided origin detection apparatus (an apparatus capable of uniquely determining a relative position).
In this embodiment, as in the case of the first embodiment, when the read positions of the light receiving portions 21 and 22 on the tracks are shifted from each other, an offset amount occurs in Expression (23). To cope with this, the encoder 100a may be configured to detect and correct the offset amount. Alternatively, a correction table may be used to correct the error, as in the second embodiment.
Next, referring to
In
The drive lens 52 constituting the lens unit is, for example, a focus lens for autofocus, and is displaceable in a Y direction along the optical axis OA (the optical axis direction). The drive lens 52 may be another drive lens such as a zoom lens. A cylindrical body 50 (a movable portion) of the position detection apparatus in each of the embodiments described above is connected to an actuator (not illustrated in the drawing) configured to drive the drive lens 52. A rotation of the cylindrical body 50 around the optical axis OA by the actuator or by hand causes the scale 10 to be relatively displaced with respect to the sensor unit 53, and this relative displacement causes the drive lens 52 to be driven in the Y direction (an arrow direction), that is, the optical axis direction. A signal (an encoder signal) depending on the position (displacement) of the drive lens 52 obtained from the sensor unit 53 of the position detection apparatus (the encoder) is output to the CPU 54. The CPU 54 generates a drive signal to move the drive lens 52 to a desired position, and the drive lens 52 is driven based on the drive signal.
The position detection apparatus of each embodiment is also applicable to various kinds of apparatuses other than the lens apparatus or the image pickup apparatus. For instance, a machine tool apparatus can be configured by a machine tool including a movable member such as a robot arm or a conveyer to convey an object to be assembled, and the position detection apparatus of each embodiment which detects a position or an attitude of the machine tool. This enables highly-accurate machining by detecting a position of the robot arm or the conveyer with high accuracy.
Embodiments of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions recorded on a storage medium (e.g., non-transitory computer-readable storage medium) to perform the functions of one or more of the above-described embodiment(s) of the present invention, and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiments. The computer may comprise one or more of a central processing unit (CPU), micro processing unit (MPU), or other circuitry, and may include a network of separate computers or separate computer processors. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
According to each of the embodiments described above, since a plurality of sensors can be arranged adjacently in the radial direction in order to correct an eccentric error, the holding members of the sensors can be miniaturized. Moreover, when necessary, a tilt of each sensor (an attachment tilt of the sensor with respect to the half line starting from the pattern center) can be detected and corrected. Thus, according to each embodiment, a small-sized and highly-accurate position detection apparatus, lens apparatus, image pickup system, and machine tool apparatus can be provided.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2013-052747, filed on Mar. 15, 2013, which is hereby incorporated by reference herein in its entirety.