CONTROL DEVICE, POSITION CONTROL SYSTEM, POSITION CONTROL METHOD, AND RECORDING MEDIUM

Information

  • Publication Number
    20180365791
  • Date Filed
    February 13, 2018
  • Date Published
    December 20, 2018
Abstract
A control device, a position control system, a position control method, and a recording medium are provided. The control device includes a feedback control unit and a position determination unit. The position determination unit acquires an encoder value from a driver at a predetermined cycle. The position determination unit acquires a measurement position obtained by measuring a positional relationship between a control object position and a target position by image processing, at a time interval that is longer than the cycle. The position determination unit determines an estimation interpolation position corresponding to the control object position by using the encoder value and the measurement position. The feedback control unit outputs, to the driver, control data for aligning the control object position at the target position by using the estimation interpolation position determined by the position determination unit.
Description
CROSS-REFERENCE TO RELATED APPLICATION

This application claims the priority benefit of Japanese application serial no. 2017-117319, filed on Jun. 15, 2017. The entirety of the above-mentioned patent application is hereby incorporated by reference herein and made a part of this specification.


BACKGROUND
Technical Field

The present disclosure relates to a technique for position control using a servo motor and an image sensor.


Description of Related Art

In factory automation (FA), various techniques (positioning techniques) for aligning a predetermined position (control object position) of a workpiece or the like at a target position by using a servo motor have been put into practical use.


Methods of measuring a difference (distance) between the control object position and the target position include a method using encoder values of the servo motor, a method using image processing by an image sensor, and the like.


In the method using the encoder values, a moving distance of the control object position is calculated by sequentially adding the encoder values that are obtained at a predetermined cycle.


In the method using the image processing, the distance between the control object position and the target position is calculated (measured) by capturing an image including the control object position and the target position and analyzing the captured image.


In the method using image processing, it has been recognized that a delay may occur due to the time required for the image processing and the like, and that this delay may affect the accuracy of position control. Therefore, in Japanese Patent Application Laid-open No. 2015-213139 (Patent Document 1), the distance measured by the image processing is corrected by using a Kalman filter.


However, in the method described in Patent Document 1, a model for the delay has to be set in the Kalman filter. Therefore, a high technical level is needed to perform highly accurate correction, and performing such correction is not easy. In addition, highly accurate correction cannot be performed unless an initial value is suitably set, and even when highly accurate correction is performed, it takes time until the corrected distance is output stably. Therefore, this method is not suitable for FA, which requires high-speed and highly accurate control.


In addition, it is possible to perform high-speed positioning control in a case in which the distance is determined using only an encoder value. However, by effectively taking advantage of the distance measured by image processing, further improvement in positioning accuracy can be expected.


Therefore, an aspect of the present disclosure is to realize high-speed and highly accurate positioning control using a simple configuration and processing.


SUMMARY

A control device according to the present disclosure includes a position determination unit and a feedback control unit. The position determination unit repeatedly acquires position-related data related to a control object from a drive device that moves the control object. The position determination unit acquires measurement data obtained by measuring a positional relationship between the control object and a target position by image processing, at a time interval that is longer than an acquisition interval of the position-related data. The position determination unit determines an estimated position corresponding to a control object position by using the position-related data and the measurement data. The feedback control unit outputs control data for aligning the control object at the target position to the drive device by using the estimated position determined by the position determination unit.
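For orientation only, the data flow summarized above can be sketched as a short loop. The following Python sketch is illustrative and not part of the disclosure; the interface names (read_encoder, try_read_measurement, send_command) are assumptions made for the example.

```python
# Illustrative sketch of the summarized data flow (all interface names are assumed).
# Position-related data arrives every control cycle; measurement data obtained by
# image processing arrives at a longer interval and may be absent on most cycles.

def control_loop(drive, image_sensor, target_sp, determine_position, feedback):
    estimate = None
    while True:
        pvm = drive.read_encoder()                 # position-related data, every cycle
        pvv = image_sensor.try_read_measurement()  # measurement data, or None this cycle
        estimate = determine_position(pvm, pvv, estimate)  # position determination unit
        mv = feedback(target_sp, estimate)         # feedback control unit
        drive.send_command(mv)                     # control data to the drive device
```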





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a functional block diagram illustrating a configuration of a position control system according to a first embodiment of the present disclosure.



FIG. 2(A) and FIG. 2(B) are appearance perspective views illustrating an application example of the position control system according to the first embodiment of the present disclosure.



FIG. 3 is a flowchart illustrating processing performed by a control device of the position control system according to the first embodiment of the present disclosure.



FIG. 4(A) is a graph illustrating temporal change in an encoder value PVm and a measurement position PVv, and FIG. 4(B) is a graph illustrating temporal change in an estimation interpolation position PV.



FIG. 5 is a flowchart illustrating a method of estimating and calculating an encoder value at an imaging time.



FIG. 6 is a diagram illustrating a concept of calculating an encoder value at the imaging time.



FIG. 7(A) is a diagram illustrating temporal change in distances between a target position and a control object position in a case in which estimation interpolation is performed (a configuration of the present application) and in a case in which estimation interpolation is not performed (a comparative configuration), FIG. 7(B) is a diagram illustrating temporal change in a measured distance in a case in which estimation interpolation is not performed, and FIG. 7(C) is a diagram illustrating temporal change in a measured distance and an estimation interpolation distance (the distance between the target position and the control object position) in a case in which estimation interpolation is performed.



FIG. 8 is a functional block diagram illustrating a configuration of a position control system according to a second embodiment of the present disclosure.





DESCRIPTION OF THE EMBODIMENTS

In this configuration, the position determination unit estimates the position-related data at an imaging time of an image used in the image processing. The position determination unit determines the estimated position by using the position-related data at the imaging time, the measurement data, and the position-related data at the same time as the time of the measurement data.


With this configuration, the control object position at a current time is corrected by using the estimated position-related data at the imaging time.


In addition, the position determination unit determines the estimated position by subtracting the estimated position-related data at the imaging time from a sum value of the measurement data and the position-related data at the same time in the control device.


With this configuration, a specific example of the aforementioned correction computation is illustrated. In this manner, the estimated position is calculated by simple computation.


In addition, the position determination unit estimates the position-related data at the imaging time by an interpolation value of position-related data at a plurality of times close to the imaging time in the control device.


With this configuration, a specific example of the aforementioned correction computation is illustrated. In this manner, the estimated position is calculated by simpler computation.


In addition, the position determination unit calculates the interpolation value by using a delay time related to the imaging and a delay time related to transmission of the position-related data in the control device.


With this configuration, a more specific example of the aforementioned correction computation is illustrated. In this manner, the control object position is accurately calculated by simpler computation.


According to the present disclosure, high-speed and highly accurate positioning control can be realized using a simple configuration and processing.


A positioning control technique according to a first embodiment of the present disclosure will be described with reference to the drawings. FIG. 1 is a functional block diagram illustrating a configuration of a position control system according to the first embodiment of the present disclosure. Note that in FIG. 1, the solid line arrows represent that components are connected by a signal line or the like, and the dotted line arrow represents a configuration in which information of an object is acquired in a non-contact manner.


As illustrated in FIG. 1, the position control system 1 includes a controller 10, a driver 20, a motor 30, and an image sensor 40.


The controller 10 is a so-called programmable logic controller (PLC), for example, and performs various kinds of FA control. The controller 10 includes a feedback control unit 11 and a position determination unit 12.


The feedback control unit 11 uses a target position SP and an estimation interpolation position PV corresponding to a control object position (corresponding to the "estimated position" of the present disclosure), and executes computation such as proportional-integral-derivative (PID) control such that the target position SP and the estimation interpolation position PV coincide with each other. The feedback control unit 11 calculates an operation amount MV by performing this computation. The feedback control unit 11 outputs the operation amount MV to the driver 20.


The position determination unit 12 calculates the estimation interpolation position PV corresponding to the control object position by using an encoder value PVm from the driver 20 and a measurement position PVv from the image sensor 40, although detailed processing will be described later. The position determination unit 12 outputs the estimation interpolation position PV to the feedback control unit 11.


The feedback control is repeatedly executed at a control cycle Tc. That is, the feedback control unit 11 outputs the operation amount MV at every control cycle Tc. Note that the operation amount MV is any of a command position, a command speed, and a command torque for the driver 20.
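As an illustration of the kind of computation named above, a minimal discrete PID controller executed once per control cycle Tc is sketched below. The gains and the sign convention are assumptions made for the example; the disclosure only states that the computation may be PID control and that MV may be a command position, speed, or torque.

```python
class PID:
    """Minimal discrete PID controller, executed once per control cycle Tc (sketch)."""

    def __init__(self, kp, ki, kd, tc):
        self.kp, self.ki, self.kd, self.tc = kp, ki, kd, tc
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, sp, pv):
        """Compute the operation amount MV from the target SP and the estimate PV."""
        error = sp - pv                                   # deviation to be driven to zero
        self.integral += error * self.tc
        derivative = (error - self.prev_error) / self.tc
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```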


The driver 20 performs operation control for the motor 30 by using the operation amount MV. In addition, the driver 20 acquires the encoder value PVm from the motor 30 and outputs the encoder value PVm to the position determination unit 12. At this time, the driver 20 outputs the encoder value PVm to the position determination unit 12 at the same cycle as the control cycle Tc.


The motor 30 is a servo motor, for example, and operates in response to operation control by the driver 20. A stage 91 is moved by the operations of the motor 30, and the control object position is thereby moved.


The driver 20 and the motor 30 are provided to correspond to the number of moving directions of the control object position, that is, the number of movable axes of the position control.


The image sensor 40 images the stage 91, more specifically, a region including the control object position and the target position SP on the stage 91. At this time, the image sensor 40 captures the image in response to an imaging trigger TRp from the controller 10. The controller 10 outputs the imaging trigger TRp at a time interval that is longer than the control cycle Tc. That is, the image sensor 40 captures an image at a time interval that is longer than the control cycle Tc.


The image sensor 40 calculates the measurement position PVv of the control object position with respect to the target position SP (a measurement distance between the target position SP and the control object position) by performing image processing on the captured image. The image sensor 40 outputs the measurement position PVv to the position determination unit 12.


Such a position control system 1 is applied to automation of an operation as illustrated in FIG. 2(A) and FIG. 2(B), for example. FIG. 2(A) is an appearance perspective view illustrating an application example of the position control system according to the first embodiment of the present disclosure, and FIG. 2(B) is a plan view illustrating an application example of the position control system according to the first embodiment of the present disclosure. In FIG. 2(B), illustration of a screw clamping machine is omitted.


As illustrated in FIGS. 2(A) and 2(B), a screw hole 920 is provided in a workpiece 92. The workpiece 92 is attached to a top surface of the stage 91. The stage 91 is a so-called XY table and is movable along two orthogonal axes that are parallel to the top surface. A drive shaft of the motor 31 is fixed to one of two movable tables that are included in the stage 91, and a drive shaft of the motor 32 is fixed to the other. In this manner, the stage 91 is moved in the directions of the two orthogonal axes by driving the motor 31 and the motor 32. Therefore, the position of the screw hole 920 becomes the control object position.


A screw clamping machine 93 is arranged above the stage 91. A screw 930 is attached to a lower end of the screw clamping machine 93. The screw clamping machine 93 is moved in an up-down direction by a drive unit, illustration of which is omitted. The position at which the screw clamping machine 93, when lowered, comes into contact with the stage 91 becomes the target position SP.


A camera 401 and a camera 402 are arranged in an outer circumference of the stage 91. The camera 401 and the camera 402 are a part of components of the image sensor 40. The camera 401 and the camera 402 are arranged such that a tip end of the screw 930 and the screw hole 920 are within an imaging range. That is, the camera 401 and the camera 402 are arranged such that the control object position and the target position SP are within the imaging range. The camera 401 is a sensor that mainly images movement of the position of the screw hole 920 due to driving of the motor 31, and the camera 402 is a sensor that mainly images movement of the position of the screw hole 920 due to driving of the motor 32. In order to realize this, it is only necessary that an imaging axis of the camera 401 is orthogonal to a direction in which the stage 91 is moved by the motor 31, and that an imaging axis of the camera 402 is orthogonal to a direction in which the stage 91 is moved by the motor 32 as illustrated in FIG. 2(B), for example. Note that these orthogonal relationships may include errors.


The camera 401 and the camera 402 output captured images to an image processing unit. The image processing unit is a part of the components of the image sensor 40. The image processing unit calculates the measurement position PVv corresponding to the distance between the tip end of the screw 930 and the screw hole 920, that is, the aforementioned distance between the control object position and the target position SP, by performing image analysis.


In the position control system 1 with such a configuration, the position determination unit 12 of the controller 10 calculates the estimation interpolation position PV by performing the processing as illustrated in the flowchart in FIG. 3. FIG. 3 is a flowchart illustrating processing performed by the control device of the position control system according to the first embodiment of the present disclosure.


The position determination unit 12 initializes the estimation interpolation position PV and the encoder value PVm (S11).


When an output of the encoder value PVm from the driver 20 is started, the position determination unit 12 sequentially receives an input of the encoder value PVm (S12).


The position determination unit 12 detects whether or not the measurement position PVv has been obtained from the image sensor 40 (S13). If the measurement position PVv has been obtained at this time (S13: YES), the position determination unit 12 detects whether or not the measurement position PVv is a normal value (S14). If the measurement position PVv is a normal value (S14: YES), the position determination unit 12 accepts an input of the measurement position PVv (S15).


If the input of the measurement position PVv is accepted, the position determination unit 12 estimates an encoder value PVms at the imaging time of the image from which the measurement position PVv was calculated (S16). A specific estimation method will be described later. Note that in a case in which an exposure time of the cameras 401 and 402 is long, the imaging time is set to an intermediate time between the exposure start time (the time at which the shutters of the cameras are opened) and the exposure end time (the time at which the shutters are closed).


The position determination unit 12 calculates the estimation interpolation position PV by using the measurement position PVv, the encoder value PVm at the same time, and the encoder value PVms at the imaging time of the image from which the measurement position PVv was calculated (S17). Specifically, the position determination unit 12 calculates the estimation interpolation position PV by using the following (Equation 1) in Step S17.






PV=PVv+(PVm−PVms)  (Equation 1)
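As a purely illustrative numerical reading of (Equation 1), with assumed values that are not taken from the disclosure: if the image processing yields PVv = −0.50 mm for an image captured at the imaging time, the encoder value estimated at that imaging time is PVms = 12.30 mm, and the encoder value at the current calculation time is PVm = 12.45 mm, then PV = −0.50 + (12.45 − 12.30) = −0.35 mm. The 0.15 mm of movement that occurred after the image was captured is thus reflected in the estimate.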


Once the position determination unit 12 has calculated the estimation interpolation position PV, it outputs the estimation interpolation position PV to the feedback control unit 11 (S18). In addition, the position determination unit 12 updates and stores the estimation interpolation position PV as a reference estimation interpolation position PVp and the encoder value PVm at that time as a reference encoder value PVmp.


If the measurement position PVv has not been obtained from the image sensor 40 at this time (S13: NO), the position determination unit 12 detects whether or not the measurement position PVv has been output once or more in the past (S20). In addition, if the measurement position PVv is not a normal value (S14: NO), the position determination unit 12 similarly detects whether or not the measurement position PVv has been output once or more in the past (S20).


If the measurement position PVv has been output once or more (S20: YES), the position determination unit 12 calculates the estimation interpolation position PV by using the encoder value PVm, the reference estimation interpolation position PVp, and the reference encoder value PVmp (S21). Specifically, the position determination unit 12 calculates the estimation interpolation position PV by using the following (Equation 2) in Step S21.






PV=PVp+(PVm−PVmp)  (Equation 2)


If the measurement position PVv has not been output even once (S20: NO), the position determination unit 12 maintains the estimation interpolation position PV as an initial value.


Every time the encoder value PVm is input, the position determination unit 12 executes the aforementioned processing, and if the control is not to be ended (S19: NO), the position determination unit 12 returns to Step S12 (the acceptance of the input of a new encoder value PVm) and continues the processing. If the control is to be ended (S19: YES), the position determination unit 12 ends the processing.
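The per-cycle flow described above (Steps S11 to S21, with Equation 1 and Equation 2) can be condensed into a short sketch. The Python below is an illustrative reading, not the disclosed implementation; estimate_pvms_at_imaging_time stands in for the estimation of FIG. 5, and pvv is assumed to be None on cycles in which no normal measurement position is available.

```python
from dataclasses import dataclass

@dataclass
class EstimatorState:
    pv: float = 0.0         # estimation interpolation position PV (S11: initial value)
    pvp: float = 0.0        # reference estimation interpolation position PVp
    pvmp: float = 0.0       # reference encoder value PVmp
    seen_pvv: bool = False  # whether a normal measurement position PVv has arrived yet

def update_estimate(state, pvm, pvv, estimate_pvms_at_imaging_time):
    """One control-cycle update of PV, following the flow of FIG. 3 (S12 to S21)."""
    if pvv is not None:                                  # S13/S14: a normal PVv was obtained
        pvms = estimate_pvms_at_imaging_time()           # S16: encoder value at the imaging time
        state.pv = pvv + (pvm - pvms)                    # S17: Equation 1
        state.pvp, state.pvmp = state.pv, pvm            # update the reference values
        state.seen_pvv = True
    elif state.seen_pvv:                                 # S20: PVv has been output at least once
        state.pv = state.pvp + (pvm - state.pvmp)        # S21: Equation 2
    # otherwise PV keeps its initial value (S20: NO)
    return state.pv                                      # S18: output PV to the feedback control unit
```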


By executing such processing, the controller 10 can calculate the estimation interpolation position PV by using the highly accurate measurement position PVv obtained by the image processing at the time at which that measurement position is input, and can thereby realize highly accurate positioning control. Here, the measurement position PVv is input at a long time interval. However, the controller 10 calculates the estimation interpolation position PV and performs position control at every input of the encoder value PVm, which is input at a short cycle, between the times at which measurement positions PVv that are adjacent on the time axis are input. In this manner, highly accurate and short-cycle positioning control can be performed. In addition, since the controller 10 performs this processing using only the aforementioned basic arithmetic operations, it is possible to realize high-speed and highly accurate positioning control using a simple configuration and processing.


At this time, the controller 10 performs the aforementioned processing, thereby calculating the estimation interpolation position as illustrated in FIG. 4(A). Therefore, the controller 10 can highly accurately calculate the estimation interpolation position PV by using the measurement position PVv obtained by the image processing. FIG. 4(A) is a graph illustrating temporal change in the encoder value PVm and the measurement position PVv, and FIG. 4(B) is a graph illustrating temporal change in the estimation interpolation position PV.


As illustrated in FIG. 4(A), the measurement distance corresponding to the measurement position PVv is updated at a long cycle, since time for image capturing and image processing by the image sensor 40 is needed. Therefore, even if the measurement position PVv is obtained at the time tn at which the estimation interpolation position PV is calculated, the measurement position PVv is obtained from an image captured at an imaging time tv1 in the past, before the calculation time tn, and the measurement position PVv is a result of highly accurately calculating the control object position at that imaging time tv1. Note that the time tc0 is the control start time.


A time (tn−tv1) has elapsed from the imaging time tv1 to the calculation time tn, and the control object position has moved during this time. Therefore, correction has to be performed by the amount of the movement of the control object position.


Here, the encoder value PVm is updated at a shorter cycle than the cycle at which the measurement position PVv is calculated, since the control cycle of the motor 30 is short. Using this, the position determination unit 12 of the controller 10 performs the computation represented by (Equation 1). Specifically, the position determination unit 12 acquires the encoder value PVms at the imaging time tv1 and the encoder value PVm at the calculation time tn. The position determination unit 12 calculates the estimation interpolation position PV at the calculation time tn by adding the amount of change ΔPVm (PVm−PVms) of the encoder value corresponding to the time (tn−tv1) to the measurement position PVv. Although the estimation interpolation position PV may become discontinuous at the calculation time tn, it is preferable in this case to apply smoothing processing (for example, moving average processing) to the estimation interpolation position PV, since this smooths the temporal change in the estimation interpolation position PV.
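Where the paragraph above suggests smoothing the discontinuity at the calculation time tn, one simple option is a moving average over the most recent estimates. The sketch below is illustrative only; the window length is an assumed example value, not a value given in the disclosure.

```python
from collections import deque

class MovingAverage:
    """Simple moving-average filter for smoothing the estimation interpolation position PV."""

    def __init__(self, window=5):            # window length chosen only for illustration
        self.buffer = deque(maxlen=window)

    def filter(self, pv):
        self.buffer.append(pv)
        return sum(self.buffer) / len(self.buffer)
```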


By using this processing, the estimation interpolation position PV highly accurately reflects the control object position at the calculation time tn. Therefore, it is possible to perform highly accurate positioning control.


Further, the position determination unit 12 of the controller 10 calculates the encoder value PVms at the imaging time by using the processing described below. FIG. 5 is a flowchart illustrating a method of estimating and calculating the encoder value at the imaging time. FIG. 6 is a diagram illustrating a concept of calculating the encoder value at the imaging time.


As illustrated in FIG. 5, the position determination unit 12 acquires the imaging time (S61). The position determination unit 12 acquires encoder values PVm at a plurality of times close to the imaging time (S62). The position determination unit 12 calculates an interpolated value of the encoder values PVm at the plurality of times as the encoder value PVms at the imaging time (S63). Note that in a case in which the imaging time coincides with the time at which the encoder value is calculated, the encoder value may be used without any change.


Specifically, the position determination unit 12 calculates the encoder value PVms at the imaging time by using the concept illustrated in FIG. 6. The calculation times t recur at a cycle Tc, and the cycle Tc corresponds to the control cycle of the controller 10.


The position determination unit 12 acquires an encoder value PVm(n) at the time tn at which the measurement position PVv is obtained and the estimation interpolation position PV is calculated. The position determination unit 12 acquires an imaging time tvi in the past, before the calculation time tn. The position determination unit 12 detects two times in the vicinity of the imaging time tvi, for example, a past calculation time t(n−k) and a past calculation time t(n−k+1) with the imaging time tvi interposed therebetween on the time axis.


The position determination unit 12 acquires an encoder value PVm(n−k) at the calculation time t(n−k) and an encoder value PVm(n−k+1) at the calculation time t(n−k+1). The encoder values in the past can be stored by providing a storage unit in the controller 10.


The position determination unit 12 calculates an encoder value PVms(ni) at the imaging time tvi by using the interpolation value of the encoder value PVm(n−k) and the encoder value PVm(n−k+1).


Specifically, the position determination unit 12 calculates the encoder value PVms(ni) at the imaging time tvi by using the following (Equation 3).






PVms(ni)=PVm(n−k)+Kk*(PVm(n−k+1)−PVm(n−k))  (Equation 3)


Here, Kk is an interpolation coefficient. In a case in which Tc−Ted≤Tsd<2Tc−Ted where Tc represents the control cycle, Ted represents a transmission delay time of the encoder value PVm, and Tsd represents a transmission delay time of the imaging trigger TRp, the interpolation coefficient Kk is calculated by using the following (Equation 4).






Kk={Tsd−(Tc−Ted)}/Tc  (Equation 4)
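As a purely illustrative check with assumed values that are not taken from the disclosure: if Tc = 1 ms, Ted = 0.2 ms, and Tsd = 1.3 ms, the condition Tc−Ted≤Tsd<2Tc−Ted (0.8 ms ≤ 1.3 ms < 1.8 ms) is satisfied, and Kk = {1.3−(1−0.2)}/1 = 0.5; that is, the imaging time is estimated to lie halfway between the calculation times t(n−k) and t(n−k+1).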


By using such a method of calculating the interpolation value, it is possible to highly accurately calculate the encoder value PVms(ni) at the imaging time tvi. In this manner, it is possible to calculate a more highly accurate estimation interpolation position PV and more highly accurate positioning control becomes possible.
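Read together, (Equation 3) and (Equation 4) amount to a linear interpolation between the two stored encoder values that bracket the imaging time. The following sketch is one possible reading in Python; the argument names are assumptions and the function is not part of the disclosure.

```python
def encoder_value_at_imaging_time(pvm_nk, pvm_nk1, tc, ted, tsd):
    """Estimate PVms(ni) at the imaging time by linear interpolation (Equations 3 and 4).

    pvm_nk  : encoder value PVm(n-k) at the calculation time just before the imaging time
    pvm_nk1 : encoder value PVm(n-k+1) at the calculation time just after the imaging time
    tc      : control cycle Tc
    ted     : transmission delay time Ted of the encoder value
    tsd     : transmission delay time Tsd of the imaging trigger
    Assumes Tc - Ted <= Tsd < 2*Tc - Ted, as stated for (Equation 4).
    """
    kk = (tsd - (tc - ted)) / tc                 # interpolation coefficient Kk (Equation 4)
    return pvm_nk + kk * (pvm_nk1 - pvm_nk)      # (Equation 3)
```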



FIG. 7(A) is a diagram illustrating temporal change in distance between the target position and the control object position in a case in which estimation interpolation is performed (the configuration of the present application) and in a case in which estimation interpolation is not performed (a comparative configuration). FIG. 7(B) is a diagram illustrating temporal change in the measurement distance in a case in which estimation interpolation is not performed. FIG. 7(C) is a diagram illustrating temporal change in the measurement distance and the estimation interpolated distance (the distance between the target position and the control object position) in a case in which estimation interpolation is performed.


As illustrated in FIG. 7(A), performing the estimation interpolation makes it possible to increase the speed of the position control of the distance between the target position and the control object position. In addition, as illustrated in FIGS. 7(B) and 7(C), the measurement distance (corresponding to the measurement position PVv) also converges at a high speed when the estimation interpolation control is performed, and it is possible to perform high-speed and highly accurate position control toward the target position. The reason is as follows. Since the measurement position PVv is updated at a long cycle and is position information that includes a delay, overshooting or vibration tends to occur, and the gain of the feedback control cannot be significantly strengthened. Meanwhile, it becomes possible to strengthen the gain of the feedback control by using the estimation interpolation position PV.


Next, a position control technique according to a second embodiment of the present disclosure will be described with reference to a drawing. FIG. 8 is a functional block diagram illustrating a configuration of a position control system according to the second embodiment of the present disclosure.


As illustrated in FIG. 8, a position control system 1A according to the second embodiment is different from the position control system 1 according to the first embodiment in processing performed by a driver 20A and an image sensor 40A. Since other processing performed by the position control system 1A is similar to that of the position control system 1, description of similar portions will be omitted.


The controller 10, the driver 20A, and the image sensor 40A have synchronized time measurement functions.


The driver 20A outputs the encoder value PVm to the position determination unit 12 at an independently determined cycle. At this time, the driver 20A associates each encoder value PVm with the time tm at which it was acquired and outputs them together.


The image sensor 40A captures an image at an independent time interval and outputs the measurement position PVv and the imaging time ti to the position determination unit 12.


The position determination unit 12 calculates the estimation interpolation position PV by performing processing similar to that in the aforementioned first embodiment, using the time tm at which each encoder value was acquired and the imaging time ti.
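Because every encoder value carries its acquisition time tm and every measurement carries its imaging time ti in this embodiment, the encoder value at the imaging time can be interpolated directly from the timestamped history instead of from the delay times. The sketch below is illustrative only; the data layout and function name are assumptions.

```python
import bisect

def encoder_value_at_time(times, values, ti):
    """Interpolate the encoder value PVm at the imaging time ti from timestamped samples.

    times  : acquisition times tm of the encoder values, in ascending order
    values : encoder values PVm acquired at those times
    """
    i = bisect.bisect_left(times, ti)
    if i == 0:
        return values[0]                  # ti precedes the stored history
    if i == len(times):
        return values[-1]                 # ti is newer than the latest stored sample
    t0, t1 = times[i - 1], times[i]
    v0, v1 = values[i - 1], values[i]
    k = (ti - t0) / (t1 - t0)             # fraction of the interval elapsed at ti
    return v0 + k * (v1 - v0)             # linear interpolation between the two samples
```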


With such a configuration and processing as well, it is possible to realize high-speed and highly accurate positioning control using a simple configuration and processing, in a manner similar to that of the first embodiment.

Claims
  • 1. A control device comprising: a position determination unit that repeatedly acquires position-related data related to a control object from a drive device that moves the control object, acquires measurement data obtained by measuring a positional relationship between the control object and a target position by image processing, at a time interval that is longer than an acquisition interval of the position-related data, and determines an estimated position of the control object by using the position-related data and the measurement data; and a feedback control unit that outputs control data for aligning the control object at the target position to the drive device by using the estimated position determined by the position determination unit, wherein the position determination unit estimates the position-related data at an imaging time of an image used in the image processing, and determines the estimated position by using the position-related data at the imaging time, the measurement data, and the position-related data at the same time as the time of the measurement data.
  • 2. The control device according to claim 1, wherein the position determination unit determines the estimated position by adding a difference between the position-related data at the same time as the time of the measurement data and the position-related data at the imaging time to the measurement data.
  • 3. The control device according to claim 1, wherein the position determination unit estimates the position-related data at the imaging time using an interpolation value of the position-related data at a plurality of times close to the imaging time.
  • 4. The control device according to claim 3, wherein the position determination unit calculates the interpolation value by using a delay time related to the imaging and a delay time related to transmission of the position-related data.
  • 5. The control device according to claim 4, wherein the position-related data is acquired at a constant cycle, and the position determination unit calculates the interpolation value by using the cycle.
  • 6. A position control system comprising: the control device according to claim 1; the drive device; and an image sensor that performs the image processing.
  • 7. A position control method comprising: a process of repeatedly acquiring position-related data related to a control object from a drive device that moves the control object; a process of acquiring measurement data obtained by measuring a positional relationship between the control object and a target position by image processing, at an interval that is longer than an acquisition interval of the position-related data; a position determining process of determining an estimated position of the control object by using the position-related data and the measurement data; and a feedback control process of outputting control data for aligning the control object at the target position by using the estimated position determined in the position determining process to the drive device, wherein in the position determining process, the position-related data at an imaging time of an image used in the image processing is estimated, and the estimated position is determined by using the position-related data at the imaging time, the measurement data, and the position-related data at the same time as the time of the measurement data.
  • 8. A non-transitory recording medium, storing a position control program that causes a computer to execute: processing of repeatedly acquiring position-related data related to a control object from a drive device that moves the control object; processing of acquiring measurement data obtained by measuring a positional relationship between the control object and a target position by image processing, at an interval that is longer than an acquisition interval of the position-related data; position determining processing of determining an estimated position of the control object by using the position-related data and the measurement data; and feedback control processing of outputting control data for aligning the control object at the target position by using the estimated position determined in the position determining processing to the drive device, wherein in the position determining processing, the position-related data at an imaging time of an image used in the image processing is estimated, and the estimated position is determined by using the position-related data at the imaging time, the measurement data, and the position-related data at the same time as the time of the measurement data.
  • 9. The control device according to claim 2, wherein the position determination unit estimates the position-related data at the imaging time using an interpolation value of the position-related data at a plurality of times close to the imaging time.
  • 10. A position control system comprising: the control device according to claim 2; the drive device; and an image sensor that performs the image processing.
  • 11. A position control system comprising: the control device according to claim 3; the drive device; and an image sensor that performs the image processing.
  • 12. A position control system comprising: the control device according to claim 4; the drive device; and an image sensor that performs the image processing.
  • 13. A position control system comprising: the control device according to claim 5; the drive device; and an image sensor that performs the image processing.
  • 14. A position control system comprising: the control device according to claim 9; the drive device; and an image sensor that performs the image processing.
Priority Claims (1)
Number Date Country Kind
2017-117319 Jun 2017 JP national