A number of existing product and simulation systems are offered on the market for the design and simulation of objects, e.g., humans, parts, and assemblies of parts, amongst other examples. Such systems typically employ computer aided design (CAD) and/or computer aided engineering (CAE) programs. These systems allow a user to construct, manipulate, and simulate complex three-dimensional models of objects or assemblies of objects. These CAD and CAE systems, thus, provide a representation of modeled objects using edges, lines, faces, polygons, or closed volumes. Lines, edges, faces, polygons, and closed volumes may be represented in various manners, e.g., non-uniform rational basis-splines (NURBS).
CAD systems manage parts or assemblies of parts of modeled objects, which are mainly specifications of geometry. In particular, CAD files contain specifications, from which geometry is generated. From geometry, a representation is generated. Specifications, geometries, and representations may be stored in a single CAD file or multiple CAD files. CAD systems include graphic tools for representing the modeled objects to designers; these tools are dedicated to the display of complex objects. For example, an assembly may contain thousands of parts. A CAD system can be used to manage models of objects, which are stored in electronic files.
CAD and CAE systems use a variety of CAD and CAE models to represent objects. These models may be programmed in such a way that the models have the properties (e.g., physical, material, or other physics-based properties) of the underlying real-world object or objects that the models represent. CAD/CAE models may be used to perform simulations of the real-world objects that the models represent.
Simulating a human interacting with an object is a common simulation task implemented and performed by CAD and CAE systems. Performing these simulations requires setting grasping parameters. These parameters include the locations where the human model grasps the object model and the finger positioning on that object (i.e., the grasp itself). For instance, instantiating and positioning a digital human model (DHM) in a scene to simulate a manufacturing task typically requires specifying how to grasp the object(s) being manufactured, e.g., assembled.
While grasp is a popular topic in the field of digital human modeling, no solution exists which can automatically determine grasping for objects, e.g., unknown objects, while accounting for posture of the DHM performing the grasping.
An embodiment provides a grasp planner for unknown objects grasped by a DHM. Such a grasp planner takes into account final DHM posture when choosing the preferred grasp. This is particularly useful to achieve plausible DHM posture. Embodiments may be implemented in existing ergonomics frameworks, such as the Smart Posturing Engine (SPE™) framework available from Dassault Systèmes, which automatically places and postures a DHM in a 3D environment; such embodiments focus on grasping objects in virtual manufacturing contexts. Moreover, embodiments can also be implemented in existing ergonomics applications, such as Dassault Systèmes'/DELMIA's "Ergonomic Workplace Design" application that helps manufacturing engineers design safe and efficient workplaces.
Another embodiment is directed to a computer-implemented method of determining position and orientation of an end effector of a DHM for grasping an object. Such an embodiment begins by receiving (i) a computer-based model of an object, (ii) a computer-based model of an environment, and (iii) an indication of position of a DHM in the environment. Next, an oriented bounding box surrounding the received model of the object is determined, where the oriented bounding box includes a plurality of faces. For each of the plurality of faces, a candidate grasp location, a candidate grasp orientation, and a candidate grasp type are determined and, then, from amongst the plurality of faces, one or more graspable faces is determined based on: (a) the determined candidate grasp location of each face, (b) the determined candidate grasp orientation of each face, (c) the received model of the environment, and (d) dimensions of each face. From amongst the determined one or more graspable faces, an optimal graspable face is identified based on a predetermined grasping hierarchy and the received indication of position of the DHM in the environment. An inverse kinematic solver is then utilized to determine position and orientation of an end effector of the DHM grasping the object based on the determined candidate grasp location, the determined candidate grasp orientation, and the determined candidate grasp type of the determined optimal graspable face.
According to an embodiment, determining the oriented bounding box comprises determining a minimum bounding box surrounding the received model of the object and determining a principal axis of inertia of the object based on the received model of the object. Such an embodiment orients the determined minimum bounding box based on the determined principal axis of inertia and sets the oriented minimum bounding box as the oriented bounding box surrounding the received model of the object. Yet another embodiment determines a candidate grasp orientation for a given face of the plurality of faces by setting the candidate grasp orientation for the given face based on the determined principal axis of inertia of the object.
An embodiment determines a candidate grasp location for a given face of the plurality of faces by, first, calculating a geometrical center of the object based on the received model of the object. Such an embodiment then projects from the calculated geometrical center of the object to the given face and sets location of an intersection of the projection and the given face as the candidate grasp location for the given face.
Another embodiment determines a candidate grasp type for a given face of the plurality of faces by calculating length of a first edge and a second edge of the given face, wherein the first edge and the second edge are perpendicular to each other. Such an embodiment also calculates length of a face edge normal to the first edge and the second edge. In turn, the candidate grasp type for the given face is determined based on: (i) the calculated length of the first edge, (ii) the calculated length of the second edge, and (iii) the calculated length of the face edge normal to the first edge and the second edge.
According to an embodiment, each determined candidate grasp type is one of: a pinch type, a medium-wrap type, and a precision sphere type.
As noted above, an embodiment determines one or more graspable faces based on: (a) the determined candidate grasp location of each face, (b) the determined candidate grasp orientation of each face, (c) the received model of the environment, and (d) the dimensions of each face. According to an embodiment, such an embodiment identifies a given face as a graspable face if (i) the end effector of the DHM, at the determined candidate grasp location in the determined candidate grasp orientation, does not collide with an element in the model of the environment and (ii) dimensions of the given face do not exceed a threshold.
In yet another embodiment, the DHM includes a left end effector and a right end effector. Such an embodiment may further include receiving an indication of the end effector, from amongst the left end effector and the right end effector, of the DHM grasping the object. This indication may be used to select the predetermined grasping hierarchy.
Embodiments may also configure the inverse kinematic solver. For instance, one such embodiment configures the inverse kinematic solver to have an unconstrained rotation degree of freedom along an axis normal to the determined optimal graspable face.
Another embodiment applies a respective label to each face of the plurality of faces. In one such embodiment, the respective label of each face is a function of position of the DHM in relation to the face. In such an embodiment, the predetermined grasp hierarchy may indicate a preferred order of graspable faces as a function of each respective label.
Embodiments can simulate physical interaction between the DHM and the object using the determined position and orientation of the end effector. Such functionality can be used to design, amongst other examples, real-world manufacturing lines, and modify/improve real-world environments to improve, for instance, ergonomics.
Yet another embodiment is directed to a system that includes a processor and a memory with computer code instructions stored thereon. In such an embodiment, the processor and the memory, with the computer code instructions, are configured to cause the system to implement any embodiments or combination of embodiments described herein.
Another embodiment is directed to a cloud computing implementation for determining position and orientation of an end effector of a DHM for grasping an object. Such an embodiment is directed to a computer program product executed by a server in communication across a network with one or more clients. The computer program product comprises program instructions which, when executed by a processor, cause the processor to implement any embodiments or combination of embodiments described herein.
The foregoing will be apparent from the following more particular description of example embodiments, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments.
A description of example embodiments follows.
Digital Human Models (DHMs) offer the unique possibility to simulate worker tasks in a three-dimensional (3D) environment. This is particularly useful in the manufacturing world because such simulations allow users to, amongst other examples, detect ergonomic problems before production lines are built and detect and correct ergonomic problems in existing production lines. This does not replace traditional ergonomics, but can help detect problems in the virtual stage of the design phase to avoid costly changes on the production line in the real world.
Today, different DHMs are available in commercial products: DELMIA Ergonomics (Dassault Systèmes), Jack™ (Badler 1999), and Santos® Pro (VSR 2004). Zhou (2009) explained that the biggest challenge in DHM applications is the low efficiency of manikin positioning in 3D, due to the time-consuming processes of creating postures manually and moving each joint separately. Jack (Cort 2019) and IMMA (Hanson 2014) proposed methods to automatically posture a manikin in a 3D environment. However, the posture prediction process in these existing methods is not fully automatic because the user must place the manikin close to the object before the posture is resolved. Nevertheless, these methods are a step forward in reducing the time taken by the manikin posture creation phase.
Dassault Systèmes released an application called "Ergonomic Workplace Design" (EWD) that helps manufacturing engineers design safe and efficient workplaces in 3D. The Smart Posturing Engine (SPE™) technology was developed to reach that particular goal. The SPE is a framework that performs autonomous posturing of a DHM based on minimal user inputs (Lemieux 2017), (Lemieux 2016), (Zeighami 2019).
Embodiments, which can be implemented as part of the SPE™, focus on the grasp planning portion of automatic posture generation. Bohg (2013) divided the grasp problem into three categories based on whether the object to grasp is: (1) known, (2) familiar, or (3) unknown. Known objects are previously encountered objects for which grasps have been previously generated. Familiar objects are new objects that can be grasped in a way similar to a known object. Unknown objects are objects for which there is no prior grasp experience.
As explained by Zhou (2009), grasp planners typically try to find the best hand location on the object without considering the final DHM posture. Such methods often produce results with unrealistic final postures when reaching for the object.
A grasping algorithm was described in Bourret 2019 to automatically grasp tools that were considered known objects. The objective of this tool grasping algorithm was to produce a better DHM posture when grasping the tools by allowing the hand a range of motion on the object. A method has also been proposed to automatically find grasping cues on familiar tools, so as to allow the grasp planner to grasp familiar objects automatically (Macloud 2019), (Macloud 2021).
Embodiments introduce a complementary grasp planner for grasping, e.g., with a single hand, unknown objects, which may be referred to herein as “parts”. Like methods used for known and familiar objects, embodiments provide a grasp planner that accounts for different aspects of the DHM final posture when choosing the proper way to grasp the unknown object. Amongst other applications, embodiments determine a visually plausible grasp on unknown objects in a manufacturing context.
The method 100 starts at step 101 by receiving (i) a computer-based model of an object, (ii) a computer-based model of an environment, and (iii) an indication of position of a DHM in the environment. Next, at step 102, an oriented bounding box surrounding the received model of the object is determined. In such an embodiment, the determined oriented bounding box includes a plurality of faces. In turn, at step 103, for each of the plurality of faces, a candidate grasp location, a candidate grasp orientation, and a candidate grasp type are determined. Then, at step 104, from amongst the plurality of faces, one or more graspable faces is determined based on: (a) the determined candidate grasp location of each face, (b) the determined candidate grasp orientation of each face, (c) the received model of the environment, and (d) dimensions of each face. From amongst the determined one or more graspable faces, an optimal graspable face is identified at step 105 based on a predetermined grasping hierarchy and the received indication of position of the DHM in the environment. An inverse kinematic solver is then utilized at step 106 to determine position and orientation of an end effector of the DHM grasping the object based on the determined candidate grasp location, the determined candidate grasp orientation, and the determined candidate grasp type of the determined optimal graspable face.
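For illustration only, the overall flow of the method 100 may be sketched as a short function in which each step is supplied as a callable; all names below are hypothetical placeholders and do not correspond to any particular implementation or API.

```python
# Structural sketch of method 100; every step is injected as a callable so that
# no real implementation is implied. All names are illustrative placeholders.
def plan_grasp(object_model, environment_model, dhm_position,
               compute_obb, candidate_grasp, is_graspable,
               select_optimal_face, ik_solve):
    obb_faces = compute_obb(object_model)                          # step 102
    candidates = {face: candidate_grasp(face, object_model)        # step 103
                  for face in obb_faces}
    graspable = [face for face in obb_faces                        # step 104
                 if is_graspable(candidates[face], environment_model, face)]
    best_face = select_optimal_face(graspable, dhm_position)       # step 105
    return ik_solve(candidates[best_face])                         # step 106
```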
The method 100 is computer-implemented and, as such, the models and indication received at step 101 may be received from any memory or other such data source that is communicatively coupled, or capable of being communicatively coupled, to the processor(s) implementing the method 100. In embodiments, the models received at step 101 may be any computer-based models known in the art. For instance, according to an embodiment, the model of the object and the model of the environment are each CAD models. Moreover, the indication of position received at step 101 indicates the location of the DHM in the three-dimensional space of the environment as represented by the model of the environment.
According to an embodiment of the method 100, determining the oriented bounding box at step 102 comprises determining a minimum bounding box surrounding the received model of the object and determining a principal axis of inertia of the object based on the received model of the object. Such an embodiment, at step 102, orients the determined minimum bounding box based on the determined principal axis of inertia and sets the oriented minimum bounding box as the oriented bounding box surrounding the received model of the object. In an embodiment of the method 100, the oriented bounding box is determined at step 102 using the functionality described hereinbelow in relation to
Step 103 of the method 100 determines a candidate grasp location, a candidate grasp orientation, and a candidate grasp type for each face of the bounding box determined at step 102.
In an embodiment, a candidate grasp orientation for a given face of the plurality of faces is determined at step 103 by setting the candidate grasp orientation for the given face based on a determined principal axis of inertia of the object. Another embodiment of the method 100 implements the functionality described hereinbelow in relation to
An example implementation of the method 100 determines a candidate grasp location for a given face of the plurality of faces at step 103 by, first, calculating a geometrical center of the object based on the model of the object received at step 101. Such an embodiment projects from the calculated geometrical center of the object to the given face and sets location of an intersection of the projection and the given face as the candidate grasp location for the given face. Such functionality may be implemented for each face of the plurality of faces of the bounding box. In an example embodiment, candidate grasp locations are determined at step 103 utilizing the functionality described hereinbelow in relation to
Embodiments of the method 100 may identify, at step 103, one of a plurality of different grasp types for each face.
Another embodiment of the method 100 determines a candidate grasp type for a given face of the plurality of faces at step 103 by calculating length of a first edge and a second edge of the given face and calculating length of a face edge normal to the first edge and the second edge. In such an embodiment, the first edge and the second edge are perpendicular to each other. In turn, the candidate grasp type for the given face is determined at step 103 based on: (i) the calculated length of the first edge, (ii) the calculated length of the second edge, and (iii) the calculated length of the face edge normal to the first edge and the second edge. An example of such functionality is described hereinbelow in relation to
At step 104, the method 100 determines one or more graspable faces based on: (a) the determined candidate grasp location of each face, (b) the determined candidate grasp orientation of each face, (c) the received model of the environment, and (d) the dimensions of each face. According to an embodiment of the method 100, the determining at step 104 identifies a given face as a graspable face if (i) the end effector of the DHM, at the determined candidate grasp location in the determined candidate grasp orientation, does not collide with an element in the model of the environment and (ii) dimensions of the given face do not exceed a threshold. An embodiment of the method 100 implements the functionality described hereinbelow in relation to
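Independently of any referenced figure, a rough sketch of these two checks for a single face could look as follows; the size threshold is an assumed illustrative value, and the collision result is assumed to come from whatever collision engine the CAD/CAE system provides.

```python
MAX_FACE_DIMENSION = 0.5  # metres; illustrative threshold only, not a value from the embodiment

def is_graspable(face_dimensions, hand_collides_with_environment, max_dim=MAX_FACE_DIMENSION):
    """face_dimensions: (width, height) of the bounding box face.
    hand_collides_with_environment: True if the end effector, placed at the candidate
    location and orientation for this face, collides with the environment model."""
    accessible = not hand_collides_with_environment
    small_enough = max(face_dimensions) <= max_dim
    return accessible and small_enough
```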
At step 105, the method 100 determines an optimal graspable face based on a predetermined grasping hierarchy and the received indication of position of the DHM in the environment. Table 1, described herein below, is an example hierarchy that may be used in embodiments. According to an embodiment, the indicated position of the DHM dictates the hierarchy that is utilized at step 105 to determine the optimal graspable face.
In yet another embodiment of the method 100, the DHM includes a left end effector and a right end effector. Such an embodiment may further include receiving, e.g., at step 101, an indication of the end effector, from amongst the left end effector and the right end effector, of the DHM grasping the object. Such an embodiment may select the predetermined grasping hierarchy used at step 105 based on the received indication of the end effector. In other words, such an embodiment uses a different hierarchy depending on the end effector (right or left) performing the grasping.
Another embodiment of the method 100 applies a respective label to each face of the plurality of faces. In such an embodiment, each label is a function of position of the DHM in relation to the face. In such an embodiment, the predetermined grasp hierarchy utilized at step 105 indicates a preferred order of graspable faces as a function of the labels. This hierarchy can be used to select the optimal face as a function of each respective label. An example of such functionality is described hereinbelow in relation to
Embodiments of the method 100 may configure the inverse kinematic solver used at step 106. For instance, one such embodiment configures the inverse kinematic solver to have an unconstrained rotation degree of freedom along an axis normal to the determined optimal graspable face.
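A conceptual sketch of how such a configuration might be expressed as an inverse kinematic goal is given below; IKGoal is a hypothetical container and does not represent the SPE or any real solver API.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class IKGoal:
    position: np.ndarray     # target grasp location on the optimal face
    orientation: np.ndarray  # nominal hand orientation (3x3 rotation matrix)
    free_axis: np.ndarray    # axis about which rotation is left unconstrained

def make_grasp_goal(location, orientation, face_normal):
    # The solver may rotate the hand about the face normal to improve the DHM posture.
    normal = np.asarray(face_normal, dtype=float)
    return IKGoal(position=np.asarray(location, dtype=float),
                  orientation=np.asarray(orientation, dtype=float),
                  free_axis=normal / np.linalg.norm(normal))
```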
Yet another example embodiment of the method 100 simulates physical interaction between the DHM and the object using the determined position and orientation of the end effector. Results of such a simulation may, amongst other examples, be used to improve ergonomics for a human in a real-world environment. For instance, if the method 100 is implemented during the design stage of a manufacturing line, results of the simulation may be used to improve ergonomics in the design and ultimately the real-world manufacturing line that is built. Similarly, the method 100 can be used to evaluate an existing real-world manufacturing line. In such an embodiment, the models received at step 101 are based on measurements of the real-world manufacturing line and a simulation performed using the determined grasp from step 106 indicates behavior of the human in the real-world environment. The determined behavior may, for instance, indicate that there is an ergonomics issue with the manufacturing line and a shelf should be lowered so that the human can more easily grasp the object. In this way, embodiments can be used to improve real-world environments.
Virtual Environment Example
Amongst other examples, embodiments provide methodologies to determine grasps of unknown objects in manufacturing contexts. One such example context is the production line environment 220 illustrated in
Inputs And Outputs
The inputs of an embodiment are: a 3D model of an object to grasp, a 3D model of an environment, and an indication (e.g., 3D coordinates) of initial position of a DHM in the 3D environment.
The outputs of embodiments may include an indication of the grasp type to use and a grasp target, e.g., position and orientation of an end effector. This grasp type and grasp target can be used in a DHM posture solving method, such as an inverse kinematic method, which is an element of the SPE framework, to determine position and orientation of the upper limb end effector (i.e., the hand). According to an embodiment, an inverse kinematic solver is used so that the end effector of the DHM reaches the target on the object.
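Purely as an illustration, the outputs listed above could be grouped in a small structure such as the following; the field names are assumptions for this sketch and do not reflect the actual SPE interfaces.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class GraspOutput:
    grasp_type: str                  # e.g., "pinch", "medium_wrap", or "precision_sphere"
    target_position: np.ndarray      # grasp location on the object (3D point)
    target_orientation: np.ndarray   # hand orientation at the grasp (3x3 rotation matrix)
```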
Example Method
Bounding Box And Target Calculation
Embodiments, e.g., at step 551 of the method 550, approximate the object to be grasped using the object's minimum oriented bounding box.
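The following is only a minimal sketch of one way such a box could be obtained, assuming the object model is available as an array of mesh vertices; it approximates the principal axes from the vertex cloud (via the covariance matrix), whereas an embodiment may compute the principal axes of inertia from the solid geometry.

```python
import numpy as np

def oriented_bounding_box(vertices: np.ndarray):
    """Approximate an oriented bounding box aligned with the principal axes of the
    vertex cloud. Returns (center, axes, half_extents); axes rows are unit vectors."""
    centroid = vertices.mean(axis=0)
    centered = vertices - centroid
    # Eigenvectors of the covariance matrix approximate the principal axes of inertia.
    _, eigvecs = np.linalg.eigh(np.cov(centered.T))
    axes = eigvecs.T                      # each row is one box axis
    local = centered @ axes.T             # vertices expressed in box coordinates
    lo, hi = local.min(axis=0), local.max(axis=0)
    half_extents = (hi - lo) / 2.0
    center = centroid + ((hi + lo) / 2.0) @ axes
    return center, axes, half_extents
```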
Embodiments use the determined bounding box 663 and associate, e.g., in computer memory, a potential grasp target with each face of the bounding box 663.
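As an illustrative sketch of how a target location on a face can be obtained by projecting the object's geometrical center onto that face (as described for step 103), assuming the geometric quantities are available as numpy arrays:

```python
import numpy as np

def candidate_grasp_location(object_center, face_point, face_normal):
    """Project the object's geometrical center onto the plane of a bounding box face
    along the face normal; the intersection is the candidate grasp location."""
    n = np.asarray(face_normal, dtype=float)
    n = n / np.linalg.norm(n)
    signed_distance = np.dot(np.asarray(face_point, dtype=float)
                             - np.asarray(object_center, dtype=float), n)
    return np.asarray(object_center, dtype=float) + signed_distance * n

# Example: projecting a center at x = 0.1 onto the face of a unit cube at x = 0.5,
# with outward normal +x, gives the point [0.5, 0.0, 0.0].
```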
Embodiments also determine candidate grasp orientations for each face of the bounding box, i.e., for each candidate grasping location.
Grasp Type Determination
Feix 2015 described a taxonomy of the different grasps that a human can perform. In Feix's work, a statistical analysis of the different grasp characteristics was performed based on measurements of the object (size, weight) and on the grasp frequency for each grasp type. An embodiment leverages this statistical analysis and uses three of the most frequently used grasp types.
According to an embodiment, for each grasp type, e.g., 991a-c, an open and a closed hand configuration are created, e.g., manually by a user so as to correspond to the given grasp type, and used during hand closure on the object.
From amongst the various grasp types, e.g., 991a-c, embodiments select which grasp type to use for each face of the bounding box, e.g., each target grasp location 774a-f. An example embodiment uses dimensions of the bounding box faces to determine the grasp type for each face, e.g., each candidate grasp target location 774a-f and orientation 886a-f.
These dimensions 1101, 1102, and 1103 are then used in selection logic to choose the grasp type.
Based upon this logic, a small object is grasped with a pinch grasp 991a, and a bigger object that has a small 1103 dimension, e.g., a flat object, is grasped using a precision sphere grasp 991c (using the tips of the fingers). Otherwise, a medium wrap grasp 991b is used. The values in this logic are based upon a Feix 2014 article and have been refined based on results of testing performed on different manufacturing parts. Further, it is noted that embodiments are not limited to this logic and the specific dimensions therein; embodiments can consider different grasp types and use different tolerances, i.e., dimensions, for selecting grasp types.
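An illustrative reconstruction of such selection logic is given below. The actual threshold values used by the embodiment (derived from Feix 2014 and refined through testing) are not reproduced here, so the numbers below are placeholder assumptions.

```python
SMALL_OBJECT = 0.03  # metres; assumed "small object" threshold, not the embodiment's value
FLAT_OBJECT = 0.02   # metres; assumed "flat object" thickness threshold

def grasp_type(dim_1101, dim_1102, dim_1103):
    """dim_1101, dim_1102: lengths of the two perpendicular face edges;
    dim_1103: length of the edge normal to the face."""
    if max(dim_1101, dim_1102, dim_1103) <= SMALL_OBJECT:
        return "pinch"             # small object (grasp 991a)
    if dim_1103 <= FLAT_OBJECT:
        return "precision_sphere"  # larger but flat object, fingertip grasp (991c)
    return "medium_wrap"           # default grasp (991b)
```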
Face Labeling
An embodiment labels faces of the bounding box. According to an embodiment, the labeling enables (i) ranking of the grasps and (ii) using heuristics to determine an optimal grasping location.
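One possible labeling heuristic is sketched below, assuming face normals are expressed in world coordinates with +Z up, that the DHM is not directly above the object, and that right/left labels are taken from the DHM's point of view; the actual labeling rule of the embodiment may differ.

```python
import numpy as np

def label_faces(face_normals, box_center, dhm_position, up=np.array([0.0, 0.0, 1.0])):
    """face_normals: dict mapping a face id to its outward unit normal.
    Returns a dict mapping each face id to one of
    'top', 'bottom', 'front', 'back', 'right', 'left'."""
    view = box_center - dhm_position            # direction the DHM looks toward the object
    view = view / np.linalg.norm(view)
    right_dir = np.cross(view, up)              # DHM's right-hand side

    labels = {}
    top = max(face_normals, key=lambda f: float(np.dot(face_normals[f], up)))
    bottom = min(face_normals, key=lambda f: float(np.dot(face_normals[f], up)))
    remaining = [f for f in face_normals if f not in (top, bottom)]
    front = max(remaining, key=lambda f: float(np.dot(face_normals[f], -view)))
    back = min(remaining, key=lambda f: float(np.dot(face_normals[f], -view)))
    labels[top], labels[bottom], labels[front], labels[back] = "top", "bottom", "front", "back"

    # The two faces left over are the sides, disambiguated by the DHM's right direction.
    for f in remaining:
        if f not in (front, back):
            labels[f] = "right" if np.dot(face_normals[f], right_dir) > 0 else "left"
    return labels
```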
Graspable Faces
Embodiments determine which faces of the bounding box are graspable. In one such embodiment, checks are performed to identify graspable faces.
One such embodiment, first, evaluates accessibility of each face.
After checking accessibility, the second check when identifying graspable faces is based on face dimensions. For each of the accessible faces, dimensions, e.g., 1101 and 1102 shown in the
Grasp Ranking
After identifying the graspable faces, embodiments rank the faces to determine an optimal graspable face. Table 1 below illustrates bounding box face rankings according to an embodiment.
Table 1 shows that when the top side is graspable, it is considered the optimal grasping face. If the top side is not graspable, i.e., it is inaccessible or too big, the next face in the ranking that is graspable (right/left, bottom, front, back) is considered the optimal graspable face. If no face is graspable, the top face is chosen. In an embodiment using Table 1, the second rank is right/left, and the side selected is based on which end effector is involved in the grasping. Specifically, if the left end effector, e.g., hand, is used to grasp the object, then the second rank is the left side; if the right hand is used to grasp the object, then the second rank is the right side. In an embodiment, when grasping an object with the right hand, the left side is considered not graspable because grasping it would force the DHM into an unrealistic posture. A similar logic is applied when grasping with the left hand, i.e., the right face is considered ungraspable.
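A minimal sketch of this ranking rule follows, using the order described above (top, then the side matching the grasping hand, then bottom, front, back, with the top face as the fallback); Table 1 itself and its exact layout are not reproduced here.

```python
def optimal_face(graspable_labels, grasping_hand):
    """graspable_labels: labels of the faces that passed the graspability checks.
    grasping_hand: "right" or "left"; only the matching side face is considered,
    and the opposite side is treated as ungraspable."""
    side = "right" if grasping_hand == "right" else "left"
    ranking = ["top", side, "bottom", "front", "back"]
    for label in ranking:
        if label in graspable_labels:
            return label
    return "top"  # if no face is graspable, the top face is chosen
```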
Grasp Execution
After determining an optimal face to grasp, embodiments determine the grasp, i.e., position and orientation, of an end effector. An embodiment determines the grasp using an inverse kinematic solver, based on the determined candidate grasp location, the determined candidate grasp orientation, and the determined candidate grasp type of the determined optimal graspable face.
Example Results
Embodiments work well when grasping objects that are well represented by their oriented bounding box. More complex and bigger parts may be further segmented into multiple smaller subparts (Miller 2003) and, in turn, embodiments may be implemented on more specific locations on the object, i.e., the smaller subparts.
Computer Support
Embodiments can be implemented in the Smart Posturing Engine (SPE™) framework inside the Dassault Systèmes application "Ergonomic Workplace Design". With the Ergo4All (Bourret 2021) technology, the SPE enables assessment and minimization of ergonomic risks involved in simulated workplaces.
Moreover, embodiments may be implemented in any computer architectures known to those of skill in the art. For instance,
It should be understood that the example embodiments described herein may be implemented in many different ways. In some instances, the various methods and machines described herein may each be implemented by a physical, virtual, or hybrid general purpose computer, such as the computer system 1600, or a computer network environment such as the computer environment 1710, described herein below in relation to
Embodiments or aspects thereof may be implemented in the form of hardware, firmware, or software. If implemented in software, the software may be stored on any non-transient computer readable medium that is configured to enable a processor to load the software or subsets of instructions thereof. The processor then executes the instructions and is configured to operate or cause an apparatus to operate in a manner as described herein.
Further, firmware, software, routines, or instructions may be described herein as performing certain actions and/or functions of the data processors. However, it should be appreciated that such descriptions contained herein are merely for convenience and that such actions in fact result from computing devices, processors, controllers, or other devices executing the firmware, software, routines, instructions, etc.
It should be understood that the flow diagrams, block diagrams, and network diagrams may include more or fewer elements, be arranged differently, or be represented differently. But it further should be understood that certain implementations may dictate the block and network diagrams and the number of block and network diagrams illustrating the execution of the embodiments be implemented in a particular way.
Accordingly, further embodiments may also be implemented in a variety of computer architectures, physical, virtual, cloud computers, and/or some combination thereof, and thus, the data processors described herein are intended for purposes of illustration only and not as a limitation of the embodiments.
The teachings of all patents, published applications and references cited herein are incorporated by reference in their entirety.
While example embodiments have been particularly shown and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the embodiments encompassed by the appended claims.
This application claims the benefit of U.S. Provisional Application No. 63/312,954, filed on Feb. 23, 2022. The entire teachings of the above application are incorporated herein by reference.