There are many industrial robotic applications which require a robot manipulator's end-effector to fully cover a 3D surface region in constrained motion. Constrained surface coverage in this context is focused on placing commonly used coverage patterns (such as raster, spiral, or dual-spiral) onto the surface for the manipulator to follow. The manipulator must continuously satisfy surface task constraints imposed on the end-effector while maintaining manipulator joint constraints. While there is substantial research for coverage on planar surfaces, methods for constrained coverage of 3D (spatial) surfaces are limited to certain (parametric or spline) surfaces.
A generalized approach and methodology for addressing surface traversal and coverage of a 3-dimensional (3-D) object receives a 3-D wireframe or similar Cartesian-based representation, and converts the 3-D representation to a uv system or mapping.
Often employed for texture mapping, uv grid systems define a two-dimensional form of an object, often referred to as “unfolding” of an object. Configurations herein define a uv grid system directly on the 3-D object for computing a coverage path, typically an aggregation of raster passes to traverse an entire 3-D surface. Given a robotic manipulator, a 3D freeform surface, and task constraints, the approach determines whether there exists a feasible continuous motion plan to cover the surface, and if so, produces a uniform coverage path that best satisfies the task constraints resulting from the physical object and robot kinematics.
Configurations herein are based, in part, on the observation that surface modeling and analysis of 3-D objects is often undertaken for tasks such as surface textures in computer graphics, as well as practical uses in industrial tasks such as painting, spray coating, abrasive blasting, polishing, shotcreting and others. Unfortunately, conventional approaches to surface analysis and mapping often partition an object surface into a series or set of surfaces, typically based on mathematical representations of the different partitions. UV mapping may be employed, which is the 3D modeling process of projecting a 3D model's surface to a 2D image, often for texture mapping. This is computationally intensive and requires manual segregation of the partitions for individual treatment. Accordingly, configurations herein substantially overcome the shortcomings of conventional approaches by defining a uv system directly on the freeform surface, often defined by a point cloud or 3-D Cartesian representation.
In further detail, in a robotic environment having a task object represented by a 3D surface, a method for planning a coverage path includes receiving a representation of the task object in a 3-dimensional coordinate system, where the task object is defined by a surface, and evaluating the task object for determining a feasibility of a coverage pattern, such that the coverage pattern defines a continuous traversal of the surface. From the uv representation, an application computes a coverage path for traversing the surface according to the coverage pattern, where the coverage path is defined by the uv coordinate set representing the surface. The singular path results from computing a uv grid directly on the surface from the representation of the task object, where the representation is a 3-dimensional freeform surface and the uv grid defines the surface of the task object. No constituent faces or sides of the object need be partitioned, or “unwrapped.”
The foregoing and other objects, features and advantages of the invention will be apparent from the following description of particular embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention.
Robotically driven surface coverage presents a path planning problem on 3D surfaces. Such surface coverage planning may be invoked for many industrial applications, such as painting, spray coating, abrasive blasting, polishing, shotcreting, laser ablation and others. Configurations herein provide a principled and general approach that includes an automatic robotic system to find feasible robotic end-effector paths for covering a 3D freeform surface, based on provided key parameters related to the specific task. Therefore, the approach enables a human worker who only has the domain knowledge of a specific coverage task to operate the general and automatic robotic system effectively for completing the task.
The use of robots for surface coverage applications is common in many industries including those of automobiles, furniture, aircraft, construction, agriculture, and appliances. Industrial manipulator coverage applications (such as spray coating/painting, abrasive blasting, polishing, shotcreting, laser ablation, etc.) require a manipulator's end-effector to traverse the entire surface once while satisfying task criteria in terms of application thickness, cycle time, and material waste. The quality of production is usually determined by manipulator tool coverage uniformity and how the material is applied to the surface (i.e., angle of approach, surface offset, etc.). Manual generation of a continuous and even coverage tool path on a 3D freeform surface is complex, time consuming, ad hoc, and difficult to optimize. Conventional approaches generally expect a human operator having substantial knowledge of robotic manipulation. With the need for rapid and efficient production and repairs in related industries, it would be beneficial to enable automated and optimal robotic coverage on 3D freeform surfaces, as defined herein.
In a conventional approach to surface mapping and coverage, uv coordinates are typically applied to each individual face of a task object. This means a shared spatial vertex position can have different UV coordinates for each of its triangles, so adjacent triangles can be cut apart and positioned on different areas of the texture map. The resulting decomposition into multiple surface regions complicates processing and prevents a single coverage path from being defined across the entire surface, a property referred to as singularity, discussed further below. Such a conventional uv mapping process at its simplest requires three steps: unwrapping the mesh, creating the texture, and applying the texture to a respective polygon face. In the claimed approach, in contrast, no such unwrapping needs to be performed, and no disparate individual treatment of multiple faces is needed.
Configurations herein address the following fundamental problem: given a manipulator, a 3D freeform surface, and task constraints, how to enable a human worker, who is a task expert but not trained in robotics, to use automatic algorithms to determine the feasibility for the end-effector to traverse the entire surface continuously while maintaining the task and manipulator constraints, and if it is feasible, produce an evenly spaced, continuous coverage path that best satisfies task constraints.
In a robotic environment as in
Generation of the uv grid 105 involves discretization of the Cartesian points or references defining the surface.
Coverage feasibility refers to the manipulator's end-effector 140 path continuously satisfying both task and manipulator constraints, which depends on manipulator kinematics and surface parameters, while covering an entire surface. Violation of manipulator constraints causes either no solution for inverse kinematics or singularity configurations that prevent the end-effector from moving smoothly along the surface.
In general, the position and orientation constraints on the manipulator or its end-effector are either equality or inequality constraints. Equality constraints on application surface coverage include end-effector offset and uniform thickness, which in turn constrain the end-effector position and velocity. Inequality constraints include limits on tool angle to the surface, which constrain the end-effector orientation; manipulator joint limits, which constrain the joint positions; and velocity change limits, which constrain both the end-effector and manipulator trajectories. To avoid expensive inverse kinematic calculations, the disclosed approach searches for the existence of a feasible end-effector path between uv-cells in the manipulator's joint space. Feasibility checking regarding freeform surfaces includes deriving constraints on 3-D polygonal meshes and improving the end-effector joint space search to be more efficient based on information from the manipulator Jacobian, as described below.
If the feasibility check determines that a feasible continuous coverage path exists for a 3D freeform surface given the task and manipulator constraints, a coverage pattern is placed on the uv grid and mapped to the Cartesian space to facilitate coverage motion determination. Coverage patterns such as raster scans or Archimedean spirals are generally used for constrained surface coverage applications due to their constant separation between adjacent turns, thus resulting in uniform coverage.
Finally, given that a feasible coverage path exists (from feasibility checking) and a coverage pattern, the application 130 generates the end-effector's coverage path that satisfies the position and orientation constraints. The initial end-effector position can be determined by applying an offset d from the surface (as required by the position task constraint), and the initial end-effector orientation can be determined as the optimal tool angle to the surface (usually the surface normal, as required by the orientation constraint). An initial end-effector path can be created in terms of the sequence of end-effector positions and orientations corresponding to the centers of uv-cells along the coverage pattern. However, such an initial path may not be feasible, and our system finds a feasible path by detecting singularities along the initial path and modifying the end-effector configurations, mainly through altering the end-effector orientations, to avoid singularities.
Configurations herein consider a 3D polygon mesh representation of a physical freeform surface, S. The physical surface of an object can be scanned using modern 3D scanning technology and approximated as a 3D polygon mesh, which is the most common way to represent a general surface when no other, more precise model exists. Additionally, S must fit within the workspace of a robotic arm to allow the end-effector to cover the surface. A large surface may be segmented into smaller surfaces S using a hierarchical approach.
The application 130 computes the uv grid by receiving the representation as a 3-dimensional polygon mesh defining a non-intersecting free form surface, as in
The application 130 defines and considers these types of general 3D freeform surfaces that can be generated from some transformation of a planar, non self-intersecting freeform curve C, along or about some axis, called in
The curve C can be known, approximated, or computed from input. It should be noted that the above surface types share an important property: curve C is theoretically the same along each axis of translation or about each axis of rotation. For a SoTR, the surface can be a continuous combination of translations and rotations of curve C, which may occur simultaneously, as long as the rotation axis remains in the same direction and the translation axis is always orthogonal to the rotation axis. These surface types capture many surfaces found in the real world.
An interface for the disclosed approach provides a human worker, who is a task-domain expert but not a robotics expert, with a visual and numerical representation of the spatial and orientation arrangement between the surface S and the manipulator. The application 130 denotes Sf as the coordinate frame of S. The human worker may initially adjust the position and orientation of Sf with respect to the manipulator base (world frame), thus changing the arrangement between S and the manipulator. Additionally, Sf may be positioned by the human worker with respect to surface mesh vertices to facilitate automatic generation of the uv grid.
To enable automatic uv grid generation, the human worker first determines the surface S as one of the three types defined. Then, they align the z-axis of the surface frame Sf with axis , and our underlying automatic system helps the human worker to verify if Sf is placed accurately by showing the resulting curve C 301-A . . . 301-C (301 generally).
The application 130 generates uv space for the three types of freeform surfaces, surface of translation (SoT), surface of rotation (SoR), and surface of translation and rotation (SoTR), in turn. For each type of surface, the surface coordinate frame Sf is set such that the z axis is along the axis , as shown in
For a uv space for SoT, let C0 be the curve C on the xy plane at the beginning of translation (obtained by the human worker), at z = z0.
The approach sweeps (i.e., translates) a plane perpendicular to the z axis, starting from the xy plane at z0, in the direction of increasing z. The sweeping pauses at every triangle vertex encountered and records the z value as zi, i=1, . . . , m. The approach then obtains the corresponding curve Ci as the intersection of the swept plane (parallel to the xy plane) with surface S at zi.
Let (xi,j, yi,j, zi,j) be the coordinates of the j th triangle vertex on the curve Ci, for j=0, . . . , ni. These points share the same u coordinate:
For j=1, . . . , ni, let Δxi,j=xi,j−xi,j−1 and Δyi,j=yi,j−yi,j−1. We can compute vi,j by initializing vi,0=0 and:
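The referenced equations are not reproduced in this text. By way of non-limiting illustration only, one reading consistent with the definitions above takes u as the translation offset and accumulates chord lengths along each curve Ci for v; the following expressions are an assumption for illustration, not a quotation of the original equations:

```latex
u_i = z_i - z_0, \qquad
v_{i,j} = v_{i,j-1} + \sqrt{\Delta x_{i,j}^{2} + \Delta y_{i,j}^{2}},
\qquad j = 1, \ldots, n_i .
```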
For a uv space for SoR:
Denote the angle of rotation as ϕ and the radius (i.e., vertex distance from the axis of rotation) as r(z), which is an unknown function of z due to the freeform curve C. Let C0 indicate the curve C on the plane of ϕ=ϕ0.
Now, by increasing ϕ and rotating the corresponding plane (which goes through the z axis), our method pauses the rotating plane at every triangle vertex encountered and records the ϕ value as ϕi, i=1, . . . , n. Our algorithm then obtains the corresponding curve Ci as the intersection of the plane with surface S at ϕi.
Let (xi,j, yi,j, zj) be the local coordinates of the j th triangle vertex on the curve Ci, for j=0, . . . , ni. Compute its u coordinate as:
For j=1, . . . , ni, let Δzj=zj−zj−1 and Δrj=rj−rj−1. We can compute vi,j by initializing vi,0=0:
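The corresponding equations are likewise elided here. A hedged reconstruction, offered only as an assumption consistent with the surrounding definitions, treats u as the arc length swept by the rotation at the vertex radius and v as an accumulated chord length along the curve:

```latex
r_j = \sqrt{x_{i,j}^{2} + y_{i,j}^{2}}, \qquad
u_{i,j} = r_j\,(\phi_i - \phi_0), \qquad
v_{i,j} = v_{i,j-1} + \sqrt{\Delta z_j^{2} + \Delta r_j^{2}},
\qquad j = 1, \ldots, n_i .
```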
For a uv space for SoTR: in this case, all rotation axes are parallel to the z-axis, and all translation axes are orthogonal to the z-axis. Let C0 be on plane P that also includes the z axis.
The SoTR surface and the xy plane intersect at curve U, which intersects C0 at point p0. The approach sweeps the plane along U while keeping it perpendicular to U at each point; the sweeping pauses at every triangle vertex encountered on S by the swept plane and records the Cartesian position of the plane's intersection point with U. Let pi indicate the i th intersection point between the swept plane and U, and let the corresponding C curve be Ci, for i=1, . . . , m.
Let (xi,j, yi,j, zi,j) be the coordinates of the j th triangle vertex on the curve Ci, for j=0, . . . , ni. For i=1, . . . , m, let Δjxi,j=xi,j−xi−1,j and Δjyi,j=yi,j−yi−1,j. By initializing u0,j=0, we can compute the u coordinate of the point as:
Now, let Δixi,j=xi,j−xi,j−1, Δiyi,j=yi,j−yi,j−1, and Δizi,j=zi,j−zi,j−1, for j=1, . . . , ni. By initializing vi,0=0,
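As with the other two surface types, the equations themselves are not reproduced here. The delta definitions above suggest, as an assumption for illustration only, chord-length accumulation in both grid directions:

```latex
u_{i,j} = u_{i-1,j} + \sqrt{(\Delta_j x_{i,j})^{2} + (\Delta_j y_{i,j})^{2}}, \qquad
v_{i,j} = v_{i,j-1} + \sqrt{(\Delta_i x_{i,j})^{2} + (\Delta_i y_{i,j})^{2} + (\Delta_i z_{i,j})^{2}} .
```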
As mentioned above, one of the advantages of the disclosed approach is discretization of the uv space. For every triangle on the surface S, its vertex with position τi = [τi,x, τi,y, τi,z]^T, i=1, 2, 3, now has uv coordinates [τi,u, τi,v]^T. Note that the uv coordinates at each of those points are real numbers as computed above, even though those discrete points are indexed by integers.
Additionally, for any other point s inside the triangle, its coordinates in either Cartesian space or uv space can be found based on its position relative to the triangle vertices. Let ps indicate point s's position in Cartesian space; point s's barycentric coordinates, wi, can be found from the following equations:
From the barycentric coordinates, point s's uv-space coordinates [u, v] can be determined. In general, the barycentric coordinates wi can be used to find either the Cartesian or the uv-space coordinates of a point inside a triangle from the corresponding vertex coordinates of the triangle.
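By way of non-limiting illustration, the following sketch shows how standard barycentric weights of a triangle may be recovered from a Cartesian point and then used to interpolate uv coordinates from the triangle vertices; the function names are illustrative assumptions, not part of the disclosed system.

```python
import numpy as np

def barycentric_weights(p_s, tau1, tau2, tau3):
    """Solve w1*tau1 + w2*tau2 + w3*tau3 = p_s with w1 + w2 + w3 = 1.
    A least-squares solve is used, assuming p_s lies on (or very near) the triangle."""
    A = np.column_stack([tau1, tau2, tau3])            # 3x3 matrix of vertex positions
    A = np.vstack([A, np.ones(3)])                     # append the partition-of-unity row
    b = np.append(np.asarray(p_s, dtype=float), 1.0)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w                                           # [w1, w2, w3]

def uv_from_barycentric(w, uv1, uv2, uv3):
    """Interpolate the uv coordinates of the interior point from the vertex uv coordinates."""
    return w[0] * np.asarray(uv1) + w[1] * np.asarray(uv2) + w[2] * np.asarray(uv3)
```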
To discretize the uv space into a grid with uniform cells, let Δu and Δv be the desired dimensions of a uv-cell. We denote a uv-cell with the cell center coordinates [uj, vk], where j and k are integer indices for the discretization, even though the uv-space coordinates uj and vk are real numbers from the original continuous space.
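A minimal sketch of this discretization, with assumed helper names, maps a continuous uv point to its integer cell indices and back to the real-valued cell center:

```python
import math

def uv_to_cell(u, v, delta_u, delta_v):
    """Map a continuous uv point to the integer indices (j, k) of its uv-cell."""
    return math.floor(u / delta_u), math.floor(v / delta_v)

def cell_center(j, k, delta_u, delta_v):
    """Return the real-valued uv coordinates [u_j, v_k] of the center of cell (j, k)."""
    return (j + 0.5) * delta_u, (k + 0.5) * delta_v
```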
To perform feasibility checking, it is important to first introduce a general position and orientation constraint formulation and next present an improved joint-space feasibility checking algorithm, which in turn determines end-effector coverage path feasibility. In this step, determination of the feasibility further includes identifying task feasibility based on the kinematics of the robot for movement along the entirety of the uv coordinate system representing the surface of the task object; in other words, whether the robot can physically traverse the surface 103 through proper manipulation within the range of movement of the robotic arm. The process 130 then identifies manipulator constraints indicative of a manipulator coupled to the robot attaining placement within a predetermined offset distance of the surface of the task object; in other words, whether the end-effector (paint sprayer, welding torch, or other tool) can attain a proper orientation with respect to the surface, generally at an appropriate offset and angle.
In general, the end-effector's position and orientation task constraints are related, in that the distance constraint from the end-effector position to the surface S should be measured along the end-effector approach vector, which is determined by the end-effector's orientation task constraint. In a particular configuration, the end-effector's position and orientation constraints were introduced independently, which prohibited deviations of the approach vector from the optimal surface normal in order to satisfy the end-effector position constraint (to maintain constant offset) with respect to the surface. As a result, it was difficult to handle transitions around sharp edges and vertices.
In order to achieve smoother transitions around edges and vertices of a 3D freeform surface (in polygonal mesh) during surface traversal, shown in
We denote the end-effector position in Cartesian space with respect to the surface coordinate frame as p and the end-effector approach vector as the unit vector a (along the z axis of the end-effector), such that:
Let p′ be on the uv-cell [uj, vk], which is on a triangle of the surface with vertex positions τ1, τ2, and τ3. Using the task constraint of Eq. (13), we have
With end-effector position p expressed in terms of the triangle parameters of the uv-cell [uj, vk] and the angle α as in equations (13)-(15), we can relate an n-dimensional robotic manipulator's link parameters, l, and joint variables, q, directly to the task constraint equations using forward kinematics to obtain joint space task constraints.
The manipulator joint configurations that satisfy the joint space task constraints Eq. (16) form what we call a J-manifold. We can discretize the J-manifold into an n-dimensional grid, where each cell, called a J-cell, corresponds to a joint configuration q. The distance between two neighboring J-cell joint configurations is dq, such that each joint variable's value qi in q is increased or decreased by δq or remains the same to reach its neighboring J-cell. Thus, a given J-cell has 3^n−1 neighboring J-cells. Through forward kinematics, a J-cell corresponds to a feasible end-effector configuration, an E-cell, with position p and orientation R. We denote the space of all E-cells as the E-manifold.
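By way of illustration, the 3^n−1 neighbors of a J-cell can be enumerated by perturbing each joint variable by −δq, 0, or +δq and discarding the all-zero step; this sketch assumes a simple array representation of q and is not the disclosed implementation:

```python
import itertools
import numpy as np

def neighbor_j_cells(q, delta_q):
    """Yield the 3**n - 1 neighboring J-cell joint configurations of q (length n)."""
    n = len(q)
    for steps in itertools.product((-delta_q, 0.0, delta_q), repeat=n):
        if any(s != 0.0 for s in steps):        # skip the all-zero step (the cell itself)
            yield np.asarray(q, dtype=float) + np.asarray(steps)
```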
An E-cell with position p maps to a point on the surface with position p′, by Eq. (13), and p′ can be converted to uv space. Note that multiple E-cells with different a (approach) vectors may map to the same surface position p′ or the same uv coordinates. Mappings between the J-manifold, E-manifold, and the uv grid are illustrated in
The coverage feasibility checking process of Table I explores the uv grid, using a tree search, reaching every uv-cell once. Starting from an initial uv-cell, uv1, which inversely maps to a J-cell, jc1, the algorithm stores neighbor uv-cells in Neighbors and randomly generates a non-zero dq by randomly assigning a value from {−δq, 0, δq} to each joint. It calls the process of Table II to search the J-manifold, moving through neighboring J-cells, until a corresponding E-cell is reached that maps inside a neighboring uv-cell (i.e., the uv-cell whose center is closest to the corresponding uv point of that E-cell), establishing that there is neighboring feasibility continuity of the manipulator to cover the two uv-cells.
The joint space search of Table II is made more efficient by allowing the search to continue in a direction, dq, instead of a random direction for each J-cell transition, thus narrowing the search space. If, following a direction dq, the search reaches a joint configuration q that satisfies the joint space task constraints, then dq is used again until the constraints are no longer satisfied. Once the constraints are no longer satisfied, dq is adjusted using the Jacobian matrix J(q) to order the joint variables qi by how little they affect the resulting change dx in the end-effector configuration. Starting with the least significant joint variable, the adjustment procedure generates a new value that has not previously been used for that joint in dq, trying to obtain a new dq that is close to the previous dq, in the hope of satisfying the joint space task constraints with as little deviation from the original search direction as possible.
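Tables I and II are not reproduced here. The following is a schematic sketch of the directed joint-space search described above, not the claimed algorithm itself; satisfies_constraints, maps_into_cell, and adjust_direction are assumed stand-ins for the joint-space constraint check, the forward-kinematics mapping to the uv grid, and the Jacobian-based direction adjustment.

```python
import numpy as np

def joint_space_search(q_start, dq, target_uv_cell, satisfies_constraints,
                       maps_into_cell, adjust_direction, max_steps=10000):
    """Follow direction dq through neighboring J-cells until an E-cell maps into
    target_uv_cell, re-adjusting dq when the joint space task constraints fail."""
    q = np.asarray(q_start, dtype=float)
    for _ in range(max_steps):
        q_next = q + dq
        if not satisfies_constraints(q_next):
            dq = adjust_direction(q, dq)        # e.g., perturb the least significant joint first
            if dq is None:                      # no remaining direction to try
                return None
            continue
        q = q_next
        if maps_into_cell(q, target_uv_cell):   # forward kinematics -> surface point -> uv-cell
            return q
    return None
```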
This process of Table I continues until every reachable uv-cell pair continuity is checked via a tree search, and a uv space connectivity graph, G, is built based on the neighboring feasibility continuity results. If the resulting graph is a single connected component, we determine that there is at least one end-effector path that can cover the surface S continuously while maintaining constraints. Upon determining the feasibility of a coverage pattern, the application 130 computes the coverage path for defining a continuous traversal of the surface of the task object relative to the uv coordinate set.
The worst case time complexity of the uv grid search depends on the worst case time complexity of the feasibility search between two adjacent uv-cells, which is essentially a depth-first search (DFS) in the joint space. The worst case time complexity of the DFS is O(b^m), where b is the maximum branching factor and m is the maximum depth of search. b ≤ 3^n−1, where n is the degrees of freedom of the manipulator. Usually, b ≪ 3^n−1 because (1) not all joints move during the transition from one uv-cell to an adjacent one, and (2) many joint configurations cause a violation of constraints. m depends on the joint limits and the value of δq. Essentially, if δq is used to discretize the high-dimensional joint space into a hyper-grid, m is bounded by the number of J-cells. Thus, the worst-case time complexity of the uv grid search algorithm is O(N·b^m), where N is the number of uv-cells. The actual running time for the feasibility check between two adjacent uv-cells takes just a few milliseconds.
A common raster pattern is a set of parallel, straight “scan” lines which are separated perpendicularly by a distance, ω, and connected at their ends in an alternating manner. We denote the direction of the lines as the start direction; for example, a raster pattern may be created on the uv grid with a horizontal start direction. In general, we can determine if a start direction will result in a raster coverage pattern that covers the surface uninterrupted by using a systematic check on the scan lines. If each scan line intersects the surface boundaries only twice, then that start direction is suitable for the uv grid surface, as shown in
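A simple form of this scan-line test is sketched below under the assumption that a helper returning the number of boundary crossings per scan line is available; the helper name is illustrative only.

```python
def start_direction_ok(scan_lines, boundary_crossings):
    """A start direction is suitable if every scan line crosses the surface boundary
    exactly twice (one entry, one exit), so the raster pattern covers the surface
    uninterrupted; boundary_crossings(line) is an assumed helper."""
    return all(boundary_crossings(line) == 2 for line in scan_lines)
```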
We denote UV as a coverage pattern on the uv grid, with uvi being the i th uv point of the pattern such that UV=[uv1, uv2, . . . , uvM]. Each uvi indicates a center of a uv-cell; uvi and uvi+1 indicate centers of neighboring uv-cells, for 1≤i<M. Note that the UV coverage pattern should only cross between uv-cells which are connected in the connectivity graph of the uv grid.
We denote H as the coverage pattern UV expressed in Cartesian coordinates, with hi being the position of uvi on the pattern such that H=[h1, h2, . . . , hM].
The application 130 must then perform singularity detection by computing whether a raster scan along the coverage path follows an unbroken continuous sequence along a uv coordinate set representing the surface. Let {R1, R2, . . . , RM} denote a sequence of end-effector orientations, such that Ri is the rotation matrix of the manipulator initialized with the end-effector's z-axis along the normal bi of the triangle that contains the i th waypoint uvi of the coverage pattern. This is to best satisfy the orientation task constraint.
We compute the end-effector's position pi corresponding to each waypoint uvi with position hi in the Cartesian space as:
Now we initialize the end-effector coverage path P, s.t. P=[E1, E2, . . . , EM], where Ei, for i=1, . . . , M, is a homogeneous transformation matrix consisting of position pi and rotation matrix Ri. We denote Q as the sequence of corresponding joint states of P, with qi being the i th joint state on the path such that Q=[q1, q2, . . . , qM].
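The expression for pi above is elided. A plausible construction, offered here only as an assumption, offsets each Cartesian waypoint hi by the task offset d along the containing triangle's unit normal bi and aligns the end-effector z-axis with bi, with the sign convention of the normal left task-dependent:

```python
import numpy as np

def initial_waypoint(h_i, b_i, d):
    """Build an assumed initial end-effector pose E_i for waypoint h_i with unit normal b_i."""
    b_i = np.asarray(b_i, dtype=float)
    b_i = b_i / np.linalg.norm(b_i)
    p_i = np.asarray(h_i, dtype=float) + d * b_i     # offset d along the normal (assumed sign)
    z_axis = b_i                                     # end-effector z-axis along the normal, per the text
    x_axis = np.cross(z_axis, [0.0, 0.0, 1.0])       # any axes orthogonal to z complete R_i
    if np.linalg.norm(x_axis) < 1e-9:                # normal parallel to world z: use another helper axis
        x_axis = np.cross(z_axis, [0.0, 1.0, 0.0])
    x_axis = x_axis / np.linalg.norm(x_axis)
    y_axis = np.cross(z_axis, x_axis)
    E_i = np.eye(4)
    E_i[:3, :3] = np.column_stack([x_axis, y_axis, z_axis])
    E_i[:3, 3] = p_i
    return E_i                                       # homogeneous transform (position p_i, rotation R_i)
```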
Even if a feasible coverage path exists (as determined above), the generated initial path P is not necessarily a feasible coverage path. The following subsections describe singularity detection on the initial coverage path P and, if singularities are detected, how we alter path P to avoid singularities and best satisfy task constraints.
Even if the surface S is placed within the robot's workspace, depending on the shape and size of S, there can be internal singularities that prevent the manipulator from following the initial path. Two kinds of singularities have to be considered: joint space singularities and Cartesian space singularities.
We refer to joint space singularities along a joint space path as those configurations that exceed a certain joint limit, which result in failures of motion/trajectory planning. We refer to Cartesian space singularities along a Cartesian space path as those end-effector configurations that cannot be reached with the imposed task constraints.
The disclosed approach checks the end-effector path P for such singularities through the corresponding manipulator joint space path Q. Linear interpolation is performed between joint configurations qi and qi+1, for i=1, . . . , M−1, on path Q, such that:
for a small ϵ. Forward kinematics yields corresponding Cartesian configurations. If the end-effector position pi
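The interpolation equation and the remainder of this check are elided above. A hedged sketch of the procedure as described, with assumed helper names, walks each joint-space segment in small steps and flags waypoints whose interpolated configurations violate joint limits or the Cartesian task constraints:

```python
import numpy as np

def detect_singularities(Q, forward_kinematics, within_joint_limits,
                         satisfies_task_constraints, eps=0.05):
    """Return indices of waypoints near which the interpolated joint path violates
    joint limits or the end-effector task constraints (helper names are assumptions)."""
    singular_indices = []
    for i in range(len(Q) - 1):
        q_a, q_b = np.asarray(Q[i], dtype=float), np.asarray(Q[i + 1], dtype=float)
        for t in np.arange(eps, 1.0 + 1e-9, eps):
            q_t = (1.0 - t) * q_a + t * q_b          # linear interpolation in joint space
            pose = forward_kinematics(q_t)
            if not within_joint_limits(q_t) or not satisfies_task_constraints(pose):
                singular_indices.append(i + 1)       # flag the downstream waypoint
                break
    return singular_indices
```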
It is preferable, if not mandatory, to avoid singularity. For every singularity encountered along the path (either a joint space singularity or a Cartesian space singularity), our planner alters the end-effector orientations in a neighborhood of the singularity configuration, called orientation smoothing, until a singularity free path is obtained.
For each singularity encountered along the end-effector initial coverage path P, say at the i th waypoint, our method re-orients the end-effector approach vector ai−j closer to the previous waypoint's approach vector a(i−j)−1 with:
where
for j=0 to i−1. The path can be smoothed up to N times, whereby the differences between neighboring approach vectors are reduced to 2^−N of the original difference. Satisfaction of task constraints is checked after orientation smoothing at each point. The worst case time complexity for each smoothing pass is O(M), and the total algorithm execution time is proportional to how many singularities are detected along the initial coverage path P.
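The smoothing update itself is not reproduced here. Consistent with the statement that N passes reduce neighboring approach-vector differences to 2^−N of the original, one assumed realization pulls each affected approach vector halfway toward its predecessor and renormalizes, working back from the singular waypoint:

```python
import numpy as np

def smooth_orientations(a, i):
    """One assumed smoothing pass over approach vectors a[0..i], working back from
    the singular waypoint i: blend each a[i-j] halfway toward a[(i-j)-1] and renormalize."""
    a = [np.asarray(v, dtype=float) for v in a]
    for j in range(0, i):                            # j = 0 .. i-1, as in the text
        k = i - j
        blended = 0.5 * (a[k] + a[k - 1])
        a[k] = blended / np.linalg.norm(blended)
    return a
```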
The complete manipulator path planner is shown in Table III, which alters the initial manipulator path to create a singularity free path, satisfying equality task constraints, and best satisfying inequality task constraints. If it is not possible to create such a path under the current coverage pattern, the planner returns “no feasible solution”. Another coverage pattern can be tried.
Other suitable 2D coverage patterns can be applied depending on the resulting uv grid to cover the surface. It is significant that generation of the uv grid allows evenly spaced application of (different) coverage patterns. A coverage path at turns of a coverage pattern can be further smoothed by interpolating between adjacent uv-cells in the initial coverage pattern.
To produce a smooth coverage along a uniform coverage pattern on the surface S, the corresponding end-effector path P has non-uniform transitions from one configuration to the next. This is a result of Eq. (17), which relates the position and orientation constraints such that the end-effector's position offset from the surface is along its approach direction. This is important for producing a smooth and uniform surface coverage.
While the system and methods defined herein have been particularly shown and described with references to embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.
This patent application claims the benefit under 35 U.S.C. § 119 (e) of U.S. Provisional Patent App. No. 63/529,190, filed Jul. 27, 2023, entitled “ROBOTIC PATH PLANNING,” incorporated herein by reference in entirety.
This invention was developed with U.S. Government support under contract No. W911NF1920108, awarded by the Army Research Lab, and under NRT grant 1922761, awarded by the NSF (National Science Foundation).