The present invention relates to the field of control and more particularly that of robotic control applications using a matrix optical sensor.
In industry, it is known to mount cameras, such as matrix optical sensors, on robots. For many applications, the positions of the robots' end effectors must be known precisely. In the case of an optical sensor, the position of the optical center of the camera serves as the optical reference for the robot.
A common example of such an application is the control of a surface by thermography. On large parts, several acquisitions must then be made from different points of view using an infrared camera positioned on a robotic arm.
It is known to use matrix-sensor inspection (e.g. in the infrared range) on composite parts, but mainly in the laboratory, or in production on surfaces with a relatively simple geometry, i.e. surfaces without curvature or variations in relief.
The development of a method for controlling parts with complex geometry under industrial conditions requires, in particular, mastery of the viewing area and of the robot trajectory.
The control of the viewing area relies on precisely positioning the surface to be controlled at a given focusing distance from the optical center of the camera, in accordance with the depth of field of the camera.
The design of the robot trajectory is often carried out by teach-in or by experimental methods directly on the part to be controlled.
A method of controlling a surface of interest of a part by means of a camera to be mounted on a carrier robot, the camera comprising a sensor and optics associated with an optical center C, an angular aperture and a depth of field PC and defining a sharpness volume, the method comprising the following steps:
- generating a three-dimensional virtual model of the surface of interest;
- generating a three-dimensional virtual model of the sharpness volume;
- paving the three-dimensional virtual model of the surface with unit models of the three-dimensional virtual model of the sharpness volume.
This method makes it possible to automatically define the passage points for the robot, and consequently a predefined trajectory allowing it to successively move the camera to the acquisition points. The advantage of this method is that it can be carried out entirely in a virtual environment, whereas the usual procedure consists of creating a trajectory by experimental learning directly on the part.
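By way of illustration, the following minimal sketch shows how such passage points could be derived once each unit-model placement is known. The pose representation, the approach-offset convention and all names are assumptions of this example, not elements of the patent.

```python
import numpy as np

def passage_points(unit_model_poses, approach_offset=0.05):
    """Derive an ordered list of robot passage points from unit-model poses.

    Each pose is assumed to be a (center, direction) pair giving the
    position of the optical center C and the optical-axis direction of
    one shot; this representation is illustrative only.
    """
    waypoints = []
    for center, direction in unit_model_poses:
        center = np.asarray(center, dtype=float)
        direction = np.asarray(direction, dtype=float)
        direction /= np.linalg.norm(direction)
        # Approach point slightly behind the acquisition point along the
        # optical axis, so the robot reaches each shot from a safe offset.
        waypoints.append((center - approach_offset * direction, direction))
        waypoints.append((center, direction))
    return waypoints
```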
According to an example, the generation of the three-dimensional virtual model of the sharpness volume includes operations of determining, from the characteristics of the optics, the first sharp plane PPN, the last sharp plane DPN and the dimensions of the field of view at the focusing distance.
This three-dimensional virtual model of the sharpness volume provides a simple, virtual representation of the optics parameters, and is directly related to the characteristics of the optics.
According to a preferred embodiment, the surface is located between the first sharp plane PPN and the last sharp plane DPN of each unit model of the three-dimensional virtual model of the sharpness volume.
This particular positioning is facilitated by the use of a three-dimensional virtual model of the sharpness volume, and guarantees a sharp image at each acquisition during the surface control.
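As an illustrative sketch, the following function checks this positioning for one unit model, assuming the surface is sampled as points and the sharp planes are described by their distances from the optical center C along the optical axis; this representation is an assumption of the example, not the patent's own test.

```python
import numpy as np

def surface_within_sharpness(points, C, axis, d_ppn, d_dpn):
    """Check that sampled surface points lie between the first sharp plane
    PPN and the last sharp plane DPN of one unit model.

    points: (N, 3) surface samples covered by the unit model.
    C, axis: optical center and optical-axis direction of the camera.
    d_ppn, d_dpn: distances [C,PPN] and [C,DPN] along the axis.
    """
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    # Signed depth of each point along the optical axis, measured from C.
    depths = (np.asarray(points, dtype=float) - C) @ axis
    return np.all((depths >= d_ppn) & (depths <= d_dpn))
```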
According to a particular feature, the generation of the three-dimensional virtual model of the sharpness volume comprises an operation of dividing said model into a working area strictly included therein and a peripheral overlapping area surrounding the working area. In the paving operation, the unit models of the three-dimensional virtual model of the sharpness volume can then be distributed so as to overlap two by two in said peripheral areas.
The generation of a working area makes it easier and faster to position the unit volumes of the sharpness volume. Indeed, the working area makes it possible to delimit an overlapping area in which the unit volumes overlap, which also gives the operator control over the desired level of overlap.
According to a particular characteristic, the position of each unit model of the three-dimensional virtual model of the sharpness volume is defined at least by the distance d between a singular point P of the three-dimensional virtual model of the surface to be controlled and its orthogonal projection on the first sharp plane PPN or on the last sharp plane DPN. This feature gives the operator control over the distance between the camera and the surface to be controlled. Indeed, depending on the geometrical characteristics of the surface to be controlled, it may be relevant to constrain the distance d. Controlling this distance makes it possible to control the spatial resolution of the acquired images.
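For illustration, the distance d can be evaluated as an ordinary point-to-plane distance; the point-plus-normal representation of the sharp plane below is an assumption of this sketch.

```python
import numpy as np

def distance_to_plane(P, plane_point, plane_normal):
    """Distance d between a singular point P and its orthogonal projection
    on a sharp plane (PPN or DPN), the plane being represented here by one
    of its points and its normal vector."""
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    return abs(np.dot(np.asarray(P, dtype=float) - plane_point, n))
```

A constraint on d can then be enforced during paving by rejecting or adjusting placements for which d leaves the allowed range.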
According to another characteristic, the singular point P can be the barycentre of the three-dimensional virtual model of the sharpness volume.
According to a particular feature, in the paving operation, the position of each unit model of the three-dimensional virtual model of the sharpness volume is defined by the angle between an X-axis associated with the three-dimensional virtual model of the sharpness volume and the normal N to the surface of interest at the point of intersection of the X-axis and the surface. The X-axis is, for example, an axis of symmetry of the three-dimensional virtual model of the sharpness volume. This feature gives the operator control over the angular orientation of each unit model of the three-dimensional virtual model of the sharpness volume, and thus over the orientation of the shot on certain areas of the surface to be controlled.
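A minimal sketch of this angular constraint, assuming the X-axis and the normal N are available as vectors:

```python
import numpy as np

def axis_normal_angle(x_axis, normal):
    """Angle (in degrees) between the X-axis of a unit model and the
    surface normal N at their intersection point; a zero angle means the
    camera views the surface head-on."""
    x = np.asarray(x_axis, dtype=float)
    n = np.asarray(normal, dtype=float)
    x /= np.linalg.norm(x)
    n /= np.linalg.norm(n)
    # Clip guards against rounding slightly outside [-1, 1].
    return np.degrees(np.arccos(np.clip(np.dot(x, n), -1.0, 1.0)))
```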
The invention will be better understood, and other details, characteristics and advantages of the invention will become readily apparent, upon reading the following description, given by way of a non-limiting example with reference to the appended drawings, wherein:
The present invention relates to a method for controlling a surface 1 of interest of a part 2 by means of a camera 3 mounted on a carrier robot 4. The mounting of the camera 3 on the carrier robot 4 can, for example, be carried out using tooling 5, as shown in the appended drawings.
The part 2 can for example be a mechanical part.
The camera 3 comprises a sensor and optics associated with an optical center C, an angular aperture and a depth of field PC, and defining a sharpness volume 6, as shown in the appended drawings.
The method includes the steps of:
- generating a three-dimensional virtual model of the surface 1 of interest;
- generating a three-dimensional virtual model of the sharpness volume 6;
- paving the three-dimensional virtual model of the surface 1 with unit models of the three-dimensional virtual model of the sharpness volume 6.
For each position of said unit models, it is then possible to automatically calculate passage points for the robot, and consequently a predefined trajectory allowing it to successively move the camera to the acquisition points.
For each position of a unit model of the three-dimensional virtual model of the sharpness volume 6, the position of the optical axis of the corresponding camera 3 differs. Three optical axes, Y, Y′ and Y″, are shown as examples in the appended drawings.
According to a preferred embodiment, the generation of the three-dimensional virtual model of the sharpness volume 6 includes operations of determining, from the characteristics of the optics, the first sharp plane PPN, the last sharp plane DPN and the dimensions of the field of view at the focusing distance I.
According to a particular feature, the surface 1 is located, during paving, between the first sharp plane PPN and the last sharp plane DPN of each unit model of the three-dimensional virtual model of the sharpness volume 6, as shown in the appended drawings.
The geometric characteristics of the camera 3 are supplier data. These include, in particular, the angular aperture, the dimensions of the sensor and the number and pitch of its pixels.
The focusing distance I is user-defined. The geometry of the sharpness volume 6 can be adjusted by a calculation that makes it possible to manage the overlapping areas 7.
Each position of a unit model of the three-dimensional virtual model of sharpness volume 6 on the surface 1 corresponds to a shooting position.
Thus, the generation of the three-dimensional virtual model of the sharpness volume 6 may additionally include an operation of dividing it into a working area 8 strictly included therein and a peripheral overlapping area 7 surrounding the working area 8. An example of a sharpness volume 6 divided into a working area 8 and an overlapping area 7 is shown in the appended drawings.
The geometry and dimensions of the working area 8 are governed by the geometry of the generated sharpness volume 6 and by a parameter setting the desired percentage of overlap in each image. This parameter can be modulated by the operator. This dividing step makes it easy to manage the desired level of overlap between two acquisitions.
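As a sketch, assuming the overlap parameter acts as a simple linear reduction of the field of view (one plausible convention; the patent's exact rule is not reproduced here):

```python
def working_area(hfov_mm, vfov_mm, overlap_pct):
    """Shrink the field of view to a working area, leaving a peripheral
    overlap band of overlap_pct percent on each image dimension."""
    k = 1.0 - overlap_pct / 100.0
    return hfov_mm * k, vfov_mm * k

# Example: working_area(320.0, 240.0, 20.0) -> (256.0, 192.0)
```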
For each type of sensor, equations are used to calculate the dimensions of the working area 8.
As an example, the following equations are given for applications in the visible range, and in particular when using silver halide (film) sensors.
The calculation of the working area at a focusing distance I is governed by equations (1) and (2), which give the horizontal field of view (HFOV) and the vertical field of view (VFOV) in millimetres, respectively:
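Equations (1) and (2) are not reproduced in this excerpt. Purely for orientation, under a thin-lens model they would take the classical form below, where f is the focal length, a symbol assumed here and not defined in the excerpt:

```latex
% Plausible classical form of equations (1) and (2) (thin-lens model,
% focal length f assumed; not the patent's reproduced text):
\begin{align}
\mathrm{HFOV} &= n_h \, p \, \frac{I - f}{f} \tag{1} \\
\mathrm{VFOV} &= n_v \, p \, \frac{I - f}{f} \tag{2}
\end{align}
```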
nh being the number of horizontal pixels, nv the number of vertical pixels and p the distance between the centers of two adjacent pixels on the acquired images.
The depth of field PC is the difference between the distance from C to the last sharp plane DPN, noted [C,DPN], and the distance from C to the first sharp plane PPN, noted [C,PPN], as shown in equation (3):
PC = [C,DPN] − [C,PPN]   (3)
The equations for determining the distances [C,DPN] and [C,PPN] vary depending on the sensor. For example, for a film camera, these distances are calculated by equations (4) and (5), where D is the diagonal of the sensor calculated by equation (6), c is the perimeter of the circle of confusion defined by equation (7), and H is the hyperfocal distance:
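Equations (4) to (8) are likewise not reproduced in this excerpt. As a hedged reconstruction, the classical film-camera relations read as follows, where f is the focal length and N the aperture number (both assumed symbols), and where c = D/1730 is one common circle-of-confusion convention rather than the patent's own definition:

```latex
% Hedged reconstruction of equations (4) to (8): classical film-camera
% depth-of-field relations, with focal length f and aperture number N
% assumed; c = D/1730 is one common convention.
\begin{align}
[C,\mathrm{PPN}] &= \frac{H\,I}{H + (I - f)} \tag{4} \\
[C,\mathrm{DPN}] &= \frac{H\,I}{H - (I - f)} \tag{5} \\
D &= p\,\sqrt{n_h^2 + n_v^2} \tag{6} \\
c &= \frac{D}{1730} \tag{7} \\
H &= \frac{f^2}{N\,c} + f \tag{8}
\end{align}
```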
The variables calculated by the equations (4) to (8) may vary depending on the type of sensor used. They are given here as an example.
In the case where the operator has selected a non-zero overlap percentage, the positions of the sharpness volumes 6 are set so that they overlap two by two in the overlapping areas 7 during the paving operation on the surface 1. An example of overlapping between sharpness volumes 6 is shown in the appended drawings.
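The following sketch paves a flat rectangular surface with shot centers overlapping two by two; it is a simplified, flat-surface stand-in for the patent's paving of a three-dimensional surface model, and all names are illustrative.

```python
def grid_of_shots(surf_w, surf_h, hfov, vfov, overlap_pct):
    """Pave a flat surf_w x surf_h surface (mm) with shot centers so that
    adjacent views overlap two by two by overlap_pct percent."""
    assert 0 <= overlap_pct < 100, "overlap must leave a positive step"
    step_x = hfov * (1.0 - overlap_pct / 100.0)
    step_y = vfov * (1.0 - overlap_pct / 100.0)
    centers = []
    y = vfov / 2.0
    while y - vfov / 2.0 < surf_h:          # rows until the far edge is covered
        x = hfov / 2.0
        while x - hfov / 2.0 < surf_w:      # columns until the right edge is covered
            centers.append((x, y))
            x += step_x
        y += step_y
    return centers
```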
The use of a sharpness volume enables control of the viewing area and facilitates the integration of constraints such as the distance between the camera 3 and the surface 1, normality to the surface, centering on a particular point of the surface 1, and the control of the working area 8 and the overlapping area 7.
According to a particular feature, the position of each unit model of the three-dimensional virtual model of the sharpness volume 6 is defined at least by a distance d, which can be the distance d1 between a singular point P of the three-dimensional model of the surface 1 of interest and its orthogonal projection on the plane PPN, as shown in the appended drawings.
Priority claim: FR 1757011, filed July 2017 (national).
PCT filing: PCT/FR2018/051888, filed 23 July 2018 (WO).