Systems and methods for high accuracy fixtureless assembly

Information

  • Patent Grant
  • Patent Number
    11,449,021
  • Date Filed
    Monday, December 17, 2018
  • Date Issued
    Tuesday, September 20, 2022
Abstract
An approach to positioning one or more robotic arms in an assembly system may be described herein. For example, a system for robotic assembly may include a first robot, a second robot, and a control unit. The control unit may be configured to receive a first target location proximal to a second target location, the target locations indicating where the robots are to position features of respective subcomponents. The control unit may be configured to calculate a first calculated location of the first feature of the first subcomponent, measure a first measured location of the first feature of the first subcomponent, determine a first transformation matrix between the first calculated location and the first measured location, and reposition the first feature of the first subcomponent to the first target location using the first robot, the repositioning based on the first transformation matrix.
Description
BACKGROUND
Field

The present disclosure relates to transport structures such as automobiles, trucks, trains, boats, aircraft, motorcycles, metro systems, and the like, and more specifically to techniques for performing operations with robotic arms.


Background

A transport structure such as an automobile, truck or aircraft employs a large number of interior and exterior nodes. These nodes provide structure and support to the transport structure, responding appropriately to the many different types of forces that are generated or that result from various actions like accelerating and braking. Nodes of varying sizes and geometries may be integrated into a transport structure, for example, to provide an interface between panels, extrusions, and/or other structures. Thus, nodes are an integral part of transport structures.


Most nodes must be coupled to, or interface securely with, another part or structure in well-designed ways. In order to connect a node securely with another part or structure, the node may need to undergo one or more preparatory processes. For example, the node may be machined at an interface in order to connect with various other parts or structures. Further examples of such processes include surface preparation operations, heat treatment, electrocoating, electroplating, anodization, chemical etching, cleaning, support removal, powder removal, and so forth.


In order to produce a transport structure (e.g., a vehicle, an aircraft, a metro system, etc.), one or more assembly operations may be performed after a node is manufactured. For example, a node may be connected with a part, e.g., in order to form a portion of a transport structure (e.g., a vehicle chassis, etc.). Such assembly may involve a degree of accuracy that is within one or more tolerance thresholds of an assembly system, e.g., in order to ensure that the node is securely connected with the part and, therefore, the transport structure may be satisfactorily produced.


When robotic apparatuses (e.g., robotic end-of-arm tool center point) perform assembly operations, the robotic apparatuses are to be accurately positioned in order for the assembly operations to be accurately performed. For example, a robotic arm with which a node is engaged may be positioned so that the node is accurately connected with a part. Thus, a need exists for an approach to correctly positioning at least one robotic apparatus (e.g., a robotic end-of-arm tool center point) with a degree of precision that is within tolerance threshold(s) of an assembly system when performing various assembly operations.


SUMMARY

The present disclosure generally relates to assembly operations performed in association with the production of transport structures. Such assembly operations may include connection of nodes (e.g., additively manufactured nodes) with parts and/or other structures. Because transport structures are to be safe, reliable, and so forth, approaches to accurately performing various assembly operations associated with the production of transport structures may be beneficial. Such approaches to various assembly operations may be performed by at least one robotic arm that may be instructed via computer-generated instructions. Accordingly, a computer may implement various techniques to generate instructions for at least one robotic arm that causes the at least one robotic arm to be correctly positioned when performing various assembly operations.


In the present disclosure, systems and methods for positioning a robotic arm are described. In one aspect, a method of robotic assembly includes receiving a first target location indicating where a first robot is to position a first feature of a first subcomponent. The first target location may be proximal to a second target location indicating where a second robot is to position a second feature of a second subcomponent, such that the first subcomponent and the second subcomponent form a component when coupled together with the first feature of the first subcomponent in the first target location and the second feature of the second subcomponent in the second target location. The method of robotic assembly also includes calculating a first calculated location of the first feature of the first subcomponent and measuring a first measured location of the first feature of the first subcomponent. Additionally, the method of robotic assembly includes determining a first transformation matrix between the first calculated location and the first measured location and repositioning the first feature of the first subcomponent to the first target location using the first robot. The repositioning may be based on the first transformation matrix.
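The calculate/measure/transform/reposition sequence above can be illustrated with a short sketch. This is not the patented implementation; it assumes a feature's location is represented by a set of corresponding 3-D points, and uses the well-known Kabsch (SVD-based) point-registration method as one common way to determine a rigid transformation matrix between a calculated and a measured location:

```python
import numpy as np

def rigid_transform(calculated_pts, measured_pts):
    """Estimate the 4x4 homogeneous transformation mapping the calculated
    feature points onto the measured feature points (Kabsch / SVD method).
    Inputs are (N, 3) arrays of corresponding 3-D points."""
    P = np.asarray(calculated_pts, dtype=float)
    Q = np.asarray(measured_pts, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # guard against reflections
    t = cq - R @ cp
    T = np.eye(4)                              # homogeneous transform
    T[:3, :3], T[:3, 3] = R, t
    return T
```

The resulting 4x4 matrix can then be composed with the robot's commanded pose so that the repositioning command accounts for the discrepancy between where the feature was calculated to be and where it was actually measured.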


In one aspect, a system for robotic assembly includes a first robot, a second robot, and a control unit. The control unit may be configured to receive a first target location indicating where the first robot is to position a first feature of a first subcomponent. The first target location may be proximal to a second target location indicating where the second robot is to position a second feature of a second subcomponent, such that the first subcomponent and the second subcomponent form a component when coupled together with the first feature of the first subcomponent in the first target location and the second feature of the second subcomponent in the second target location. The control unit may also be configured to calculate a first calculated location of the first feature of the first subcomponent and measure a first measured location of the first feature of the first subcomponent. Additionally, the control unit may be configured to determine a first transformation matrix between the first calculated location and the first measured location and reposition the first feature of the first subcomponent to the first target location using the first robot. The repositioning may be based on the first transformation matrix.


In one aspect, a robotic assembly control unit includes at least one processor and a memory coupled to the at least one processor. The memory includes instructions configuring the control unit to receive a first target location indicating where a first robot is to position a first feature of a first subcomponent. The first target location is proximal to a second target location indicating where a second robot is to position a second feature of a second subcomponent, such that the first subcomponent and the second subcomponent form a component when coupled together with the first feature of the first subcomponent in the first target location and the second feature of the second subcomponent in the second target location. The memory also includes instructions configuring the control unit to calculate a first calculated location of the first feature of the first subcomponent and measure a first measured location of the first feature of the first subcomponent. Additionally, the memory includes instructions configuring the control unit to determine a first transformation matrix between the first calculated location and the first measured location and reposition the first feature of the first subcomponent to the first target location using the first robot. The repositioning is based on the first transformation matrix.


In one aspect, a computer-readable medium stores computer executable code for robotic assembly. In an aspect, the computer-readable medium may be a cloud-based computer-readable medium, such as a hard drive on a server attached to the Internet. The code, when executed by a processor, causes the processor to receive a first target location indicating where a first robot is to position a first feature of a first subcomponent. The first target location may be proximal to a second target location indicating where a second robot is to position a second feature of a second subcomponent, such that the first subcomponent and the second subcomponent form a component when coupled together with the first feature of the first subcomponent in the first target location and the second feature of the second subcomponent in the second target location. The code further causes the processor to calculate a first calculated location of the first feature of the first subcomponent and measure a first measured location of the first feature of the first subcomponent. Additionally, the code causes the processor to determine a first transformation matrix between the first calculated location and the first measured location and reposition the first feature of the first subcomponent to the first target location using the first robot. The repositioning is based on the first transformation matrix.


It will be understood that other aspects of mechanisms for realizing high accuracy fixtureless assembly with additively manufactured components, and the manufacture thereof, will become readily apparent to those skilled in the art from the following detailed description, wherein only several embodiments are shown and described by way of illustration. As will be realized by those skilled in the art, the disclosed subject matter is capable of other and different embodiments, and its several details are capable of modification in various other respects, all without departing from the invention. Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not as restrictive.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a diagram illustrating an exemplary embodiment of certain aspects of a Direct Metal Deposition (DMD) 3-D printer.



FIG. 2 is a conceptual flow diagram of a 3-D printing process using a 3-D printer.



FIGS. 3A-D are diagrams illustrating an exemplary powder bed fusion (PBF) system during different stages of operation.



FIG. 4 is a diagram illustrating a perspective of a first assembly system including a plurality of robots acting as fixtures.



FIG. 5 is a diagram illustrating a perspective of a second assembly system including a plurality of robots acting as fixtures.



FIG. 6 is a diagram illustrating a fixture point printed directly on a part.



FIG. 7 is a diagram illustrating part scanning and fitting on a fixture.



FIG. 8 is a conceptual flow diagram in accordance with the systems and methods described herein.





DETAILED DESCRIPTION

The detailed description set forth below in connection with the appended drawings is intended to provide a description of various exemplary embodiments and is not intended to represent the only embodiments in which the invention may be practiced. The terms “exemplary,” “illustrative,” and the like used throughout the present disclosure mean “serving as an example, instance, or illustration,” and should not necessarily be construed as preferred or advantageous over other embodiments presented in the present disclosure. The detailed description includes specific details for the purpose of providing a thorough and complete disclosure that fully conveys the scope of the invention to those skilled in the art. However, the invention may be practiced without these specific details. In some instances, well-known structures and components may be shown in block diagram form, or omitted entirely, in order to avoid obscuring the various concepts presented throughout the present disclosure. In addition, the figures may not be drawn to scale and instead may be drawn in a way that attempts to most effectively highlight various features relevant to the subject matter described.


Additive Manufacturing (3-D Printing). Additive manufacturing (AM) is advantageously a non-design specific manufacturing technique. AM provides the ability to create complex structures within a part. For example, nodes can be produced using AM. A node is a structural member that may include one or more interfaces used to connect to other spanning components such as tubes, extrusions, panels, other nodes, and the like. Using AM, a node may be constructed to include additional features and functions, depending on the objectives. For example, a node may be printed with one or more ports that enable the node to secure two parts by injecting an adhesive rather than welding multiple parts together, as is traditionally done in manufacturing complex products. Alternatively, some components may be connected using a brazing slurry, a thermoplastic, a thermoset, or another connection feature, any of which can be used interchangeably in place of an adhesive. Thus, while welding techniques may be suitable with respect to certain embodiments, additive manufacturing provides significant flexibility in enabling the use of alternative or additional connection techniques.


A variety of different AM techniques have been used to 3-D print components composed of various types of materials. Numerous available techniques exist, and more are being developed. For example, Directed Energy Deposition (DED) AM systems use directed energy sourced from laser or electron beams to melt metal. These systems utilize both powder and wire feeds. The wire feed systems advantageously have higher deposition rates than other prominent AM techniques. Single Pass Jetting (SPJ) combines two powder spreaders and a single print unit to spread metal powder and to print a structure in a single pass with apparently no wasted motion, and is claimed by its developers to be much quicker than conventional laser-based systems. As another illustration, electron beam additive manufacturing processes use an electron beam to deposit metal via wire feedstock or sintering on a powder bed in a vacuum chamber. Atomic Diffusion Additive Manufacturing (ADAM) is still another recently developed technology in which components are printed, layer-by-layer, using a metal powder in a plastic binder. After printing, the plastic binder is removed, and the entire part is sintered at once into a desired metal. One of several such AM techniques, as noted, is DMD. FIG. 1 illustrates an exemplary embodiment of certain aspects of a DMD 3-D printer 100. DMD printer 100 uses feed nozzle 102 moving in a predefined direction 120 to propel powder streams 104a and 104b into a laser beam 106, which is directed toward a workpiece 112 that may be supported by a substrate. Feed nozzle 102 may also include mechanisms for streaming a shield gas 116 to protect the welded area from oxygen, water vapor, or other contaminants.


The powdered metal is then fused by the laser 106 in a melt pool region 108, which may then bond to the workpiece 112 as a region of deposited material 110. The dilution area 114 may include a region of the workpiece where the deposited powder is integrated with the local material of the workpiece. The feed nozzle 102 may be supported by a computer numerical controlled (CNC) robot or a gantry, or another computer-controlled mechanism. The feed nozzle 102 may be moved under computer control multiple times along a predetermined direction of the substrate until an initial layer of the deposited material 110 is formed over a desired area of the workpiece 112. The feed nozzle 102 can then scan the region immediately above the prior layer to deposit successive layers until the desired structure is formed. In general, the feed nozzle 102 may be configured to move with respect to all three axes, and in some instances to rotate on its own axis by a predetermined amount.



FIG. 2 is a flow diagram 200 illustrating an exemplary process of 3-D printing. A data model of the desired 3-D object to be printed is rendered (operation 210). A data model is a virtual design of the 3-D object. Thus, the data model may reflect the geometrical and structural features of the 3-D object, as well as its material composition. The data model may be created using a variety of methods, including CAE-based optimization, 3D modeling, photogrammetry software, and camera imaging. CAE-based optimization may include, for example, cloud-based optimization, fatigue analysis, linear or non-linear finite element analysis (FEA), and durability analysis.


3-D modeling software, in turn, may include one of numerous commercially available 3-D modeling software applications. Data models may be rendered using a suitable computer-aided design (CAD) package, for example in an STL format. STL is one example of a file format associated with commercially available stereolithography-based CAD software. A CAD program may be used to create the data model of the 3-D object as an STL file. Thereupon, the STL file may undergo a process whereby errors in the file are identified and resolved.


Following error resolution, the data model can be “sliced” by a software application known as a slicer to thereby produce a set of instructions for 3-D printing the object, with the instructions being compatible and associated with the particular 3-D printing technology to be utilized (operation 220). Numerous slicer programs are commercially available. Generally, the slicer program converts the data model into a series of individual layers representing thin slices (e.g., 100 microns thick) of the object being printed, along with a file containing the printer-specific instructions for 3-D printing these successive individual layers to produce an actual 3-D printed representation of the data model.
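As a rough illustration of the slicing step, the following sketch (a hypothetical helper handling only fixed, planar layers) computes the z-height of each slice plane for a model of a given height at the 100-micron layer thickness mentioned above:

```python
import math

def slice_heights(model_height_mm, layer_thickness_mm=0.1):
    """Return the z-height (in mm) of each planar slice for a model of the
    given height, at a fixed layer thickness (0.1 mm = 100 microns)."""
    n_layers = math.ceil(model_height_mm / layer_thickness_mm)
    # Round to suppress floating-point noise in the accumulated heights.
    return [round((i + 1) * layer_thickness_mm, 6) for i in range(n_layers)]
```

A real slicer would additionally intersect the model geometry with each of these planes to produce per-layer toolpaths and printer-specific instructions.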


The layers associated with 3-D printers and related print instructions need not be planar or identical in thickness. For example, in some embodiments depending on factors like the technical sophistication of the 3-D printing equipment and the specific manufacturing objectives, etc., the layers in a 3-D printed structure may be non-planar and/or may vary in one or more instances with respect to their individual thicknesses.


A common type of file used for slicing data models into layers is a G-code file, which is a numerical control programming language that includes instructions for 3-D printing the object. The G-code file, or other file constituting the instructions, is uploaded to the 3-D printer (operation 230). Because the file containing these instructions is typically configured to be operable with a specific 3-D printing process, it will be appreciated that many formats of the instruction file are possible depending on the 3-D printing technology used.


In addition to the printing instructions that dictate what and how an object is to be rendered, the appropriate physical materials necessary for use by the 3-D printer in rendering the object are loaded into the 3-D printer using any of several conventional and often printer-specific methods (operation 240). In DMD techniques, for example, one or more metal powders may be selected for layering structures with such metals or metal alloys. In selective laser melting (SLM), selective laser sintering (SLS), and other PBF-based AM methods (see below), the materials may be loaded as powders into chambers that feed the powders to a build platform. Depending on the 3-D printer, other techniques for loading printing materials may be used.


The respective data slices of the 3-D object are then printed based on the provided instructions using the material(s) (operation 250). In 3-D printers that use laser sintering, a laser scans a powder bed and melts the powder together where the structure is desired and avoids scanning areas where the sliced data indicates that nothing is to be printed. This process may be repeated thousands of times until the desired structure is formed, after which the printed part is removed from a fabricator. In fused deposition modeling, as described above, parts are printed by applying successive layers of model and support materials to a substrate. In general, any suitable 3-D printing technology may be employed for purposes of the present disclosure.


Another AM technique includes powder-bed fusion (“PBF”). Like DMD, PBF creates ‘build pieces’ layer-by-layer. Each layer or ‘slice’ is formed by depositing a layer of powder and exposing portions of the powder to an energy beam. The energy beam is applied to melt areas of the powder layer that coincide with the cross-section of the build piece in the layer. The melted powder cools and fuses to form a slice of the build piece. The process can be repeated to form the next slice of the build piece, and so on. Each layer is deposited on top of the previous layer. The resulting structure is a build piece assembled slice-by-slice from the ground up.
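The deposit-fuse-lower cycle described above can be summarized as a simple control loop. The class and method names below are hypothetical stubs; a real PBF controller involves far more (scan strategies, thermal management, gas flow):

```python
class PBFController:
    """Minimal sketch of the PBF build loop: deposit a powder layer, fuse
    the slice cross-section, lower the build floor, repeat per slice."""

    def __init__(self, layer_thickness_um=50):
        self.layer_thickness_um = layer_thickness_um
        self.z_um = 0  # cumulative build height in microns

    def deposit_layer(self):
        pass  # depositor spreads and levels one layer of powder (stubbed)

    def fuse(self, cross_section):
        pass  # energy beam scans the areas coinciding with this slice (stubbed)

    def lower_floor(self):
        # Build floor drops by one layer thickness to make room for powder.
        self.z_um += self.layer_thickness_um

    def build(self, slices):
        for cross_section in slices:
            self.deposit_layer()
            self.fuse(cross_section)
            self.lower_floor()
        return self.z_um
```

For example, building 150 slices at a 50-micron layer thickness accumulates a 7.5 mm build height, matching the layer-by-layer accumulation described for FIG. 3A below.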



FIGS. 3A-D illustrate respective side views of an exemplary PBF system 300 during different stages of operation. As noted above, the particular embodiment illustrated in FIGS. 3A-D is one of many suitable examples of a PBF system employing principles of the present disclosure. It should also be noted that elements of FIGS. 3A-D and the other figures in the present disclosure are not necessarily drawn to scale but may be drawn larger or smaller for the purpose of better illustration of concepts described herein. PBF system 300 can include a depositor 301 that can deposit each layer of metal powder, an energy beam source 303 that can generate an energy beam, a deflector 305 that can apply the energy beam to fuse the powder, and a build plate 307 that can support one or more build pieces, such as a build piece 309. PBF system 300 can also include a build floor 311 positioned within a powder bed receptacle. The walls 312 of the powder bed receptacle generally define its boundaries from the side, and the receptacle abuts a portion of the build floor 311 below. Build floor 311 can progressively lower build plate 307 so that depositor 301 can deposit a next layer. The entire mechanism may reside in a chamber 313 that can enclose the other components, thereby protecting the equipment, enabling atmospheric and temperature regulation, and mitigating contamination risks. Depositor 301 can include a hopper 315 that contains a powder 317, such as a metal powder, and a leveler 319 that can level the top of each layer of deposited powder.


Referring specifically to FIG. 3A, this figure shows PBF system 300 after a slice of build piece 309 has been fused, but before the next layer of powder has been deposited. In fact, FIG. 3A illustrates a time at which PBF system 300 has already deposited and fused slices in multiple layers, e.g., 150 layers, to form the current state of build piece 309, e.g., formed of 150 slices. The multiple layers already deposited have created a powder bed 321, which includes powder that was deposited but not fused.



FIG. 3B shows PBF system 300 at a stage in which build floor 311 can lower by a powder layer thickness 323. The lowering of build floor 311 causes build piece 309 and powder bed 321 to drop by powder layer thickness 323, so that the top of the build piece and powder bed are lower than the top of powder bed receptacle wall 312 by an amount equal to the powder layer thickness. In this way, for example, a space with a consistent thickness equal to powder layer thickness 323 can be created over the tops of build piece 309 and powder bed 321.



FIG. 3C shows PBF system 300 at a stage in which depositor 301 is positioned to deposit powder 317 in a space created over the top surfaces of build piece 309 and powder bed 321 and bounded by powder bed receptacle walls 312. In this example, depositor 301 progressively moves over the defined space while releasing powder 317 from hopper 315. Leveler 319 can level the released powder to form a powder layer 325 that has a thickness substantially equal to the powder layer thickness 323 (see FIG. 3B). Thus, the powder in a PBF system can be supported by a powder support structure, which can include, for example, a build plate 307, a build floor 311, a build piece 309, walls 312, and the like. It should be noted that the illustrated thickness of powder layer 325 (i.e., powder layer thickness 323 (FIG. 3B)) is greater than an actual thickness used for the example involving 150 previously-deposited layers discussed above with reference to FIG. 3A.



FIG. 3D shows PBF system 300 at a stage in which, following the deposition of powder layer 325 (FIG. 3C), energy beam source 303 generates an energy beam 327 and deflector 305 applies the energy beam to fuse the next slice in build piece 309. In various exemplary embodiments, energy beam source 303 can be an electron beam source, in which case energy beam 327 constitutes an electron beam. Deflector 305 can include deflection plates that can generate an electric field or a magnetic field that selectively deflects the electron beam to cause the electron beam to scan across areas designated to be fused. In various embodiments, energy beam source 303 can be a laser, in which case energy beam 327 is a laser beam. Deflector 305 can include an optical system that uses reflection and/or refraction to manipulate the laser beam to scan selected areas to be fused.


In various embodiments, the deflector 305 can include one or more gimbals and actuators that can rotate and/or translate the energy beam source to position the energy beam. In various embodiments, energy beam source 303 and/or deflector 305 can modulate the energy beam, e.g., turn the energy beam on and off as the deflector scans so that the energy beam is applied only in the appropriate areas of the powder layer. For example, in various embodiments, the energy beam can be modulated by a digital signal processor (DSP).


The present disclosure presents various approaches to positioning at least one robotic arm in an assembly system. For example, an assembly system may include two robots, each of which may include a respective robotic arm. A first robotic arm may be configured to engage with a node during various operations performed with the node. For example, the first robotic arm may engage with a node that is to be connected with a part, and the part may be engaged by a second robotic arm. Various operations performed with the node (e.g., connecting the node with a part) may be performed with a relatively high degree of precision. Accordingly, at least one of the robotic arms may be positioned (e.g., repositioned) during an operation with the node in order to function in accordance with the precision commensurate with the operation.


In some aspects, the first robotic arm may engage with the node and the second robotic arm may engage with a part. An operation with the node may include connecting the node with the part. Thus, the first robotic arm may be positioned relative to the second robotic arm and/or the second robotic arm may be positioned relative to the first robotic arm. When the first and/or second robotic arms are configured to move, the first and/or second robotic arms may be positioned (e.g., repositioned) relative to the other one of the first and/or second robotic arms. Such positioning may correct the position(s) of the first and/or second robotic arms, e.g., to maintain the precision necessary for operations with a node, including connecting a node with a part by the first and second robotic arms.


The present disclosure provides various different embodiments of positioning one or more robotic arms of an assembly system for assembly processes and/or post-processing operations. It will be appreciated that various embodiments described herein may be practiced together. For example, an embodiment described with respect to one illustration of the present disclosure may be implemented in another embodiment described with respect to another illustration of the present disclosure.



FIG. 4 is a diagram illustrating a perspective of a first assembly system 400 including a plurality of robots 402, 404 acting as fixtures for two nodes 406, 408. The assembly system 400 may be employed in various operations associated with the assembly of a node-based transport structure. In one embodiment, the assembly system 400 may perform at least a portion of the assembly of the node-based transport structure without any fixtures. For example, the assembly system 400 may be implemented for connecting a first node 406 (node 1) with a second node 408 (node 2) (although other implementations are possible without departing from the scope of the present disclosure). In an aspect, at least one of the first node 406 (e.g., first subcomponent) or the second node 408 (e.g., second subcomponent) may include a complex structure such as a chassis for a transport structure.


The assembly system 400 may include a first robotic arm 410 on the first robot 402 (robot 1). The first robotic arm 410 may have a distal end 414 and a proximal end 416. The distal end 414 may be configured for movement, e.g., for operations associated with a node and/or part, e.g., the first node 406. The proximal end 416 may secure the first robotic arm 410, e.g., to a base 418.


The distal end 414 of the first robotic arm 410 may be connected with a tool flange. The tool flange may be configured to connect with one or more components (e.g., tools) so that the first robotic arm 410 may connect with the one or more components and position the one or more components as the first robotic arm 410 moves.


In the illustrated embodiment, the distal end 414 of the first robotic arm 410 may be connected with an end effector, e.g., by means of the tool flange. That is, the end effector may be connected with the tool flange, and the tool flange may be connected with the distal end 414 of the first robotic arm 410. The end effector may be a component configured to interface with various parts, nodes, and/or other structures. Illustratively, the end effector may be configured to engage with a node 406 (however, the end effector may be configured to engage with a part or other structure). Examples of an end effector may include jaws, grippers, pins, or other similar components capable of engaging a node, part, or other structure.


As illustrated, the assembly system 400 may further include a second robotic arm 412 on the second robot 404. The second robotic arm 412 may have a distal end 420 and a proximal end 422. The proximal end 422 of the second robotic arm 412 may be connected with a base 424, e.g., in order to secure the second robotic arm 412. Illustratively, the first robotic arm 410 and the second robotic arm 412 may be located in the assembly system 400 to be approximately facing one another, e.g., so that the distal end 414 of the first robotic arm 410 extends towards the distal end 420 of the second robotic arm 412 and, correspondingly, the distal end 420 of the second robotic arm 412 extends towards the distal end 414 of the first robotic arm 410. However, the first and second robotic arms 410, 412 may be differently located in the assembly system 400 in other embodiments, e.g., according to an assembly operation that is to be performed.


Similar to the first robotic arm 410, the distal end 420 of the second robotic arm 412 may be connected with a tool flange, and the tool flange may be connected with an end effector. The end effector may be configured to engage with a node, part, or other structure, such as the node 408 (node 2) that is to be connected with the node 406.


Industrial robots may produce highly repeatable movements. For example, the robots 402, 404 may be able to position each of the respective robot arms 410, 412 repeatedly, with a repeatability accurate to about 60 microns from one positioning to the next, in an example. However, placement relative to a specific position or an absolute position may suffer from lower accuracy, e.g., approximately 400 microns.


Accordingly, because positioning of a robot arm 410, 412 may be accurate relative to a previous positioning, but less accurate relative to an absolute position, e.g., a specific x, y, z location to which the robot may be directed by a control unit, the robots 402, 404 may generally not be suitable for use as high accuracy fixtures. The absolute positioning accuracy issue may be amplified when assembly hard points are driven by nominal location data. For example, part tolerances may stack in ways that negatively impact the tolerances of the assembled part when robotic positioning is used.
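The way such tolerances can stack may be illustrated with a simple worst-case versus root-sum-square calculation. The sketch below is illustrative only and is not part of the disclosed embodiments; the tolerance values are hypothetical.

```python
import math

def worst_case_stack(tolerances):
    """Worst-case stack-up: individual tolerances add directly."""
    return sum(tolerances)

def rss_stack(tolerances):
    """Root-sum-square stack-up: statistical combination of tolerances."""
    return math.sqrt(sum(t * t for t in tolerances))

# Hypothetical example: three parts each toleranced to +/-150 microns,
# plus an absolute robot positioning error of ~400 microns.
tols = [150.0, 150.0, 150.0, 400.0]
print(worst_case_stack(tols))  # 850.0 microns in the worst case
print(rss_stack(tols))         # ~477 microns statistically
```

Even the statistical estimate is dominated by the robot's absolute positioning error, which motivates the metrology guidance described below in the original text.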


Metrology is the science of measurement. Metrology guidance, i.e., using measurements to guide the robot, may be used to guide the robots 402, 404 as assembly fixtures, as is discussed in greater detail with respect to FIG. 5. For example, metrology systems may offer accuracy in the range of approximately 30 microns. Using the guidance of a metrology system, industrial robots can realize greatly improved accuracy (on the micron scale). With this improvement in accuracy, the tool center point (TCP) of an industrial robot can be used as a high accuracy flexible fixture.



FIG. 5 is a diagram illustrating a perspective of a second assembly system 500 including a plurality of robots 502, 504 acting as fixtures. The second assembly system 500 is generally similar to the assembly system 400 but includes further details of using measurements to guide the robot. Metrology guidance may be used to guide the robots 502, 504 as assembly fixtures. The second assembly system 500 includes a metrology device 526, metrology targets 528, and cell frames 530. The cell frames may define the work area and provide a frame of reference within the work area.


A metrology system's accuracy may be applied to a critical motion path segment of multiple robots. In an aspect, the fixtureless assembly process may include a cell reference frame that may be created using computer-aided design (CAD). The cell reference frame may be matched to a physical robot cell.


The metrology device may be a metrology unit such as a laser, greyscale camera, or another device capable of taking measurements based on metrology targets. The metrology targets 528 may be mounted on a robot flange and offset from the robot TCP.


Nominal target frames may be stored in the robot program, a programmable logic controller (PLC), metrology software, or another database. Nominal frames may be dynamic and driven by scan results and/or probe results, as discussed with respect to FIG. 6 below. Each robot control unit 532 may be digitally connected to the metrology unit and metrology software.


In an aspect, the metrology process, in the context of assembling a node-based structure, may include a first robot sending a signal to a metrology unit to aim/focus on a critical location. The metrology unit may aim or focus on a location including a target and lock onto the target. For example, the metrology unit may use a small diameter scan and lock.


The metrology unit measures the robot TCP location or another critical feature offset from a target. An aspect may compare a measured location value to a dynamic nominal location value. For example, a system may compare where a robot thinks a node is located to where the node is actually located. A system may then compute a transformation matrix to move from the current location to a goal location. The transformation matrix may be applied to a robot control unit/PLC 532, and the robot may move to the desired location. A confirmation measurement may be performed. Accuracy boundaries may be adjustable, e.g., gain or other values may be tuned to minimize cycle time. Additionally, a second robot may send a signal to the metrology unit to aim/focus on a location. The process may continue and be repeated.
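The measure/compare/correct/confirm cycle described above may be sketched as a simple closed loop. The following is an illustrative example only, not the disclosed robot or metrology interface; numpy 4x4 homogeneous transforms are assumed, and the toy cell model is hypothetical.

```python
import numpy as np

def correction_transform(T_measured, T_goal):
    """Transformation matrix that maps the measured TCP frame onto the
    goal frame: correction_transform(T_m, T_g) @ T_m == T_g."""
    return T_goal @ np.linalg.inv(T_measured)

def correction_loop(measure, move, T_goal, tol=30e-6, max_iters=10):
    """Measure, correct, and confirm until the TCP position is within
    `tol` (meters) of the goal position."""
    for _ in range(max_iters):
        T_meas = measure()
        if np.linalg.norm(T_meas[:3, 3] - T_goal[:3, 3]) <= tol:
            return True  # confirmation measurement within bounds
        move(correction_transform(T_meas, T_goal))
    return False

# Toy stand-in for the physical cell: the TCP starts ~400 microns
# from the goal, and each move applies the commanded correction.
T_goal = np.eye(4)
T_goal[:3, 3] = [0.5, 0.2, 0.3]
T_actual = np.eye(4)
T_actual[:3, 3] = [0.5004, 0.2, 0.3]

def measure():
    return T_actual.copy()

def move(T_corr):
    global T_actual
    T_actual = T_corr @ T_actual

print(correction_loop(measure, move, T_goal))  # True
```

In this idealized simulation the first correction lands on the goal and the confirmation measurement passes on the next iteration; a physical cell would typically need several iterations.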


In an aspect, the control unit 532 may cause a scanner 534 to scan a first subcomponent to determine a relative location of the first feature of the first subcomponent relative to the TCP. A robot 502, 504 may be configured to pick up the first subcomponent based on the scanning.


In an aspect, multiple metrology units and/or types of metrology units may be integrated to reduce cycle time. Measurements may also be taken in parallel with corrections applied in parallel. Corrections may be applied only to particular segments of the robot path. One metrology system may be used to apply corrections to any number n of robots.


The control unit 532 illustrated in FIG. 5 may be a robotic assembly control unit 532. The robotic assembly control unit 532 may include at least one processor and a memory coupled to the at least one processor. The memory may include instructions configuring the control unit 532 to receive a first target 528 location indicating where a first robot (e.g., robot 1) is to position a first feature of a first subcomponent. The first target 528 location may be proximal to a second target 528 location indicating where a second robot (e.g., robot 2) is to position a second feature of a second subcomponent such that the first subcomponent and the second subcomponent form a component when coupled together with the first feature of the first subcomponent in the first location and the second feature of the second subcomponent in the second location.


The control unit 532 may calculate a first calculated location of the first feature of the first subcomponent and measure a first measured location of the first feature of the first subcomponent. Additionally, the control unit 532 may determine a first transformation matrix between the first calculated location and the first measured location and reposition the first feature of the first subcomponent to the first target location using the first robot, the repositioning based on the first transformation matrix. In an aspect, repositioning the first feature of the first subcomponent and/or repositioning the second feature of the second subcomponent may be further based on a relative comparison of the first calculated location and the second calculated location. Accordingly, the features on the subcomponents may be located relative to each other directly, rather than relative to an absolute reference such as a cell frame.


Accordingly, the control unit 532 may be coupled to the metrology device 526 and the robots (e.g., robot 1 and robot 2). Accordingly, measuring a first measured location of a first feature of a first subcomponent and measuring a second measured location of a second feature of a second subcomponent may use the same metrology unit, e.g., the metrology device 526. Using information from the metrology device 526, the control unit 532 may calculate the first calculated location of the first feature of the first subcomponent and measure the first measured location of the first feature of the first subcomponent. The control unit 532 may then determine a first transformation matrix. The transformation matrix may be applied to adjust the position of a component held by the robot arm 510, 512. The measurements and calculations may be completed iteratively until the component or node is positioned as accurately as needed for the task. For example, two nodes may be located accurately enough for a fixture to connect the two nodes.


The control unit 532 is illustrated in FIG. 5 as an individual unit coupled to the robots (robot 1 and robot 2) and the metrology device 526. In other aspects, the control unit 532 may be made up of multiple sub-control units. These multiple sub-control units may be distributed among different devices. For example, the control unit 532 may be distributed between some combination of a separate unit, within one or more robots, and/or within one or more metrology devices 526. For example, processing functionality may be located in the metrology device 526, the first robot (robot 1), the second robot (robot 2), and an external control unit, e.g., coupled to the metrology device 526, the first robot (robot 1), the second robot (robot 2).



FIG. 6 is a diagram illustrating a fixture point 602 printed directly on a part 600. The part includes a part gripper portion 604. An end of a robot arm 510, 512 may grip the part at the part gripper portion 604. Accordingly, the part 600 may be positioned by the robotic arm 510, 512. Thus, the control unit 532 may determine target 528 locations using measurements from the metrology device 526. The control unit 532 may then control the robots (robot 1 and/or robot 2) to position components being held by the robot arms 510, 512.


The part 600 may be characterized, e.g., scanned, probed, or otherwise measured. As part of the characterization, features such as joints, bolt locations, or other features may be fit to nominal data using a CAD model, e.g., a best fit or any other fit that determines an accurate characterization between two features. The characterization may measure the part relative to a TCP frame 606. A best fit may be calculated based on the geometries of the features relative to the TCP frame 606. Once the best fit of the features is performed, a fixture point 602 on the part may be calculated as a product of the calculated best fit. The fixture point 602 may allow the physical part to determine the fixture location that will lead to the most accurate assembly. The fixture point 602 may be printed directly into the product with the robot TCP acting as the fixture, and a scanning fixture may be constructed with the robotic interface on it. Using the calculated best fit, adaptive fixture positions may be relocated in real time. The relocation of the fixture point may be driven by product geometry to maximize product accuracy and minimize overall assembly tolerance.
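A best fit of measured feature locations to nominal CAD locations is commonly computed as a rigid registration. One standard technique is the Kabsch/SVD method sketched below; this is a generic illustration under the assumption that corresponding feature points are available, not the specific fit used in the disclosed system.

```python
import numpy as np

def best_fit_rigid(nominal, measured):
    """Kabsch algorithm: least-squares rigid rotation R and translation t
    mapping nominal feature points onto measured points (Nx3 arrays)."""
    cn = nominal.mean(axis=0)
    cm = measured.mean(axis=0)
    H = (nominal - cn).T @ (measured - cm)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])               # guard against reflections
    R = Vt.T @ D @ U.T
    t = cm - R @ cn
    return R, t

# Hypothetical nominal joint/bolt features, and the same features as
# measured after a small rotation and shift of the physical part.
nominal = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
theta = np.deg2rad(2.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
measured = nominal @ Rz.T + np.array([0.001, -0.002, 0.0005])

R, t = best_fit_rigid(nominal, measured)
print(np.allclose(R @ nominal.T + t[:, None], measured.T))  # True
```

In this sketch the recovered rotation and translation reproduce the applied deviation exactly; with noisy scan data the fit would minimize the residual in a least-squares sense instead.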


For example, the part 600 may include a sphere 608. The fixture point 602 may be at the center of the sphere 608. However, the sphere 608 may be imperfect. Accordingly, the part may be characterized to select a best location for the fixture point. The fixture point 602 may be an offset relative to the TCP frame.
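Estimating the center of an imperfect sphere from scan points may be posed as a linear least-squares problem. The sketch below is a generic illustration (numpy assumed; the sphere location and radius are hypothetical), not the specific characterization used in the system.

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit. Linearizes |p|^2 = 2 c.p + (r^2 - |c|^2)
    and solves for the center c and radius r."""
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

# Synthetic scan of a sphere of radius 50 mm centered at (1, 2, 3) m.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
points = np.array([1.0, 2.0, 3.0]) + 0.05 * dirs

center, radius = fit_sphere(points)
print(np.round(center, 3), round(float(radius), 3))
```

The fitted center may then serve as the fixture point offset relative to the TCP frame, even when individual scan points deviate from a perfect sphere.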


The control unit 532 of FIG. 5 may cause a scanner 534 to scan a first subcomponent (e.g., part 600) to determine a relative location of the first feature (e.g., fixture point 602) of the first subcomponent (e.g., part 600) relative to the TCP. The robot 502, 504 may be configured to pick up the first subcomponent based on the scanning. For example, the robot 502, 504 may pick up the first subcomponent at or near the TCP based on the scanning. In an aspect, the robot 502, 504 may pick up the first subcomponent at or near the part gripper along the TCP based on the scanning. Accordingly, the system may be able to locate the first feature (e.g., fixture point 602) of the first subcomponent (e.g., part 600) relative to the TCP based on where the robot 502, 504 picks up the first subcomponent based on the scanning.



FIG. 7 is a diagram illustrating part 700 scanning and fitting on a fixture. The diagram illustrates scanning the part 700 (1), determining a fit for the part 700 (2), and calculating a delta from a frame for the part 700 (3).


In an example, a fixture may have a feature which can easily be probed or scanned (1) to represent the robot TCP, for example, a sphere 708 with two flats 710 aligned so that its center is concentric with the center of the robot gripper/TCP 712. The CAD file of each part includes the fixture 714 attached to each part 700.


When determining the fit (2), the scan 750 may be of both the part 700 and the fixture 714. The scan 750 is then overlaid with the CAD design 752 so that the features may be fit. The features may carry varying significance in the best fitting calculation. With just the main features, the fit may be used to determine a new location of the TCP 754, e.g., at the center of the fixture sphere. The new location of the TCP 754 may be recorded as a calculated delta between a frame representing a CAD representation 756 of the part and a frame 758 based on the actual physical part with the newly located TCP 754. The new TCP location 754 may be communicated in real time via a digital signal to assembly cell software. The new location of the TCP 754 becomes the goal frame 758, in reference to a cell working frame 756, to which the metrology system corrects the robot TCP 754. The goal frame 758 may be used in place of an ideal fixture location (TCP) from an idealized CAD design. The goal frame 758 may be based on product geometry and may be applied in real time to the assembly process of an actual physical part. The fit may be a best fit or any other fit that determines an accurate characterization between two features.
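The recorded delta between the CAD frame and the frame derived from the physical part can be expressed as a homogeneous transform and composed onto the cell working frame. The minimal sketch below uses numpy with hypothetical, translation-only frame values; a real frame would also carry orientation.

```python
import numpy as np

def make_frame(x, y, z):
    """4x4 homogeneous frame with identity rotation (translation only,
    for brevity)."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Frame of the idealized CAD representation and the frame derived
# from scanning the actual physical part (hypothetical values, meters).
T_cad = make_frame(0.5000, 0.2000, 0.3000)
T_actual = make_frame(0.5003, 0.1998, 0.3001)

# Delta recorded from the fit: how the physical part deviates from CAD.
T_delta = np.linalg.inv(T_cad) @ T_actual

# The goal frame for the metrology correction is the CAD goal frame
# composed with the recorded delta.
T_goal = T_cad @ T_delta
print(np.allclose(T_goal, T_actual))  # True
```

Composing the delta onto the CAD goal recovers the physically derived frame, which is what allows the goal frame to track actual product geometry rather than the idealized design.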



FIG. 8 is a conceptual flow diagram in accordance with the systems and methods described herein. At 802 a control unit 532 may receive a first target location indicating where a first robot is to position a first feature of a first subcomponent. The first target location may be proximal to a second target location indicating where a second robot is to position a second feature of a second subcomponent such that the first subcomponent and the second subcomponent form a component when coupled together with the first feature of the first subcomponent in the first location and the second feature of the second subcomponent in the second location. The first target location may be a tool center point (TCP) and/or an offset from a TCP.


Accordingly, the control unit 532 may receive target 528 location information from the metrology device 526. The location information may indicate locations for nodes held by robot arms. For example, the location of the nodes may be known relative to the targets 528. In an aspect, at least one of the first subcomponent or the second subcomponent may be a complex structure. The complex structure may be an automobile chassis.


At 804, the control unit 532 may calculate a first calculated location of the first feature of the first subcomponent. The first calculated location may include a dynamic nominal location indicating a calculated location of a moving first feature at a specific time. The specific time may coincide with the measuring of the first location of the first feature.


For example, the control unit 532 may calculate a location using the location information received from the metrology device 526. Because the location of the nodes may be known relative to the targets 528, the control unit 532 may calculate a first calculated location of the first feature of the first subcomponent.


At 806, the control unit 532 may measure a first measured location of the first feature of the first subcomponent. Measuring a first measured location of the first feature of the first subcomponent may include scanning the shape of the part. Additionally, scanning the shape of the part may include scanning the part (e.g., a first subcomponent) to determine a relative location of a first feature of the part relative to the TCP of the part. The first robot may be configured to pick up the part based on the scanning. For example, the first robot may pick up a first subcomponent on a TCP. Accordingly, the first robot may position the first feature based on the first feature's relative position to the TCP.
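Positioning the first feature based on its relative position to the TCP amounts to a single frame composition. The following is a minimal illustrative sketch (numpy; the frame and offset values are hypothetical, and rotation is omitted for brevity):

```python
import numpy as np

# TCP pose in the cell frame (hypothetical, translation only).
T_tcp = np.eye(4)
T_tcp[:3, 3] = [0.4, 0.1, 0.25]

# Feature location relative to the TCP, known from the scan
# (hypothetical homogeneous point).
p_feature_tcp = np.array([0.02, -0.01, 0.05, 1.0])

# Feature location in the cell frame follows by composition.
p_feature_cell = T_tcp @ p_feature_tcp
print(p_feature_cell[:3])
```

Because the feature-to-TCP offset is fixed once the part is gripped, commanding the TCP to a corrected pose places the feature at the desired target.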


The first robot may signal a control unit 532 causing the control unit 532 to measure the first measured location of the first feature of the first subcomponent. In an aspect, measuring the first measured location of the first feature of the first subcomponent and measuring the second measured location of the second feature of the second subcomponent use a same metrology unit.


At 808, the control unit 532 may determine a first transformation matrix between the first calculated location and the first measured location. Measuring a first measured location of the first feature of the first subcomponent may comprise measuring a fixture point printed on the first subcomponent. For example, the control unit 532 may calculate frame deltas between an idealized frame based on a CAD design and a frame based on an actual physical device that may have differences from the idealized CAD design. (By idealized CAD design, the Applicant means the model design without inclusion of tolerances. The actual CAD design will generally include tolerances. A part that is made within tolerance may then be modeled as described herein using the frame delta. Parts not made within tolerances might be discarded.)


At 810, the control unit 532 may reposition the first feature of the first subcomponent to the first target location using the first robot, the repositioning based on the first transformation matrix. Repositioning the first feature of the first subcomponent to the first target location using the first robot based on the first transformation matrix comprises sending the first transformation matrix to a control unit 532 in the first robot. Repositioning the first feature of the first subcomponent may be further based on a relative comparison of the first calculated location and a second calculated location.


In an aspect, the control unit 532 may repeat the calculating 804, measuring 806, determining 808, and repositioning 810 steps. In another aspect, the control unit 532 may repeat one or more of 802, 804, 806, 808, and 810. In an aspect, the repeating of one or more of 802, 804, 806, 808, and 810 may be relative to a second target. For example, the control unit 532 may receive a second target location indicating where the second robot is to position the second feature of the second subcomponent. The control unit 532 may calculate a second calculated location of the second feature of the second subcomponent. The control unit 532 may also measure a second measured location of the second feature of the second subcomponent. Additionally, the control unit 532 may determine a second transformation matrix between the second calculated location and the second measured location. The control unit 532 may also reposition the second feature of the second subcomponent to the second target location using the second robot. The repositioning may be based on the second transformation matrix.


At 812, the control unit 532 may adjust at least one of accuracy boundaries or gain based on at least one of the repeating of the calculating, measuring, determining, and repositioning steps.
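One way to adjust gain is to scale each commanded correction before applying it; a smaller gain damps overshoot at the cost of more iterations, while a gain of one applies the full measured correction. The following is an illustrative sketch only (numpy; the values and the translation-only simplification are hypothetical, not taken from the disclosure):

```python
import numpy as np

def damped_correction(T_meas, T_goal, gain=0.8):
    """Scale the translational correction by `gain` (0 < gain <= 1)
    to damp overshoot; gain=1 applies the full correction."""
    full = T_goal[:3, 3] - T_meas[:3, 3]
    T_corr = np.eye(4)
    T_corr[:3, 3] = gain * full
    return T_corr

# TCP measured ~400 microns from the goal (hypothetical values).
T_goal = np.eye(4)
T_goal[:3, 3] = [0.5, 0.2, 0.3]
T_meas = np.eye(4)
T_meas[:3, 3] = [0.5004, 0.2, 0.3]

# One damped step removes 80% of the positional error.
T_next = damped_correction(T_meas, T_goal) @ T_meas
print(np.linalg.norm(T_next[:3, 3] - T_goal[:3, 3]))  # residual error
```

Tuning the gain against the measured residual after each repeat of the calculate/measure/determine/reposition steps is one way the accuracy boundaries and cycle time trade-off may be managed.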


At 814, the control unit 532 may characterize at least two features on the first subcomponent, the at least two features including the first target location.


At 816, the control unit 532 may determine a fit, such as a best fit, for the at least two features. Repositioning the first feature of the first subcomponent to the first target location may use the first robot. The repositioning may be based on the first transformation matrix and may be further based on the best fit.


At 818, the control unit 532 may attach the first subcomponent to the second subcomponent. Attaching the first subcomponent to the second subcomponent may include attaching the first subcomponent to the second subcomponent using an ultra-violet (UV) adhesive.


The present disclosure is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these exemplary embodiments presented throughout the present disclosure will be readily apparent to those skilled in the art, and the concepts disclosed herein may be applied to other techniques for printing nodes and interconnects. Thus, the claims are not intended to be limited to the exemplary embodiments presented throughout the disclosure but are to be accorded the full scope consistent with the language claims. All structural and functional equivalents to the elements of the exemplary embodiments described throughout the present disclosure that are known or later come to be known to those of ordinary skill in the art are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112(f), or analogous law in applicable jurisdictions, unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for.”

Claims
  • 1. A method of robotic assembly, comprising: receiving a first target location of a first subcomponent, the first target location being a desired position of a first feature of the first subcomponent, wherein the first target location further comprises a location offset from a tool center point (TCP); calculating a first calculated location of the first feature of the first subcomponent; scanning the first subcomponent to determine a relative location of the first feature of the first subcomponent relative to the TCP, a first robot configured to pick up the first subcomponent based on the scanning; measuring a first measured location of the first feature of the first subcomponent; determining a first transformation matrix between the first calculated location and the first measured location; repositioning the first feature of the first subcomponent to the first target location using the first robot, based on the first transformation matrix; receiving a second target location indicating where a second robot is to position a second feature of a second subcomponent relative to the first calculated location of the first feature of the first subcomponent; calculating a new location for the first feature based on receiving the second target location of the second subcomponent; and repositioning the first feature based on the new location.
  • 2. The method of claim 1, further comprising: the second target location being a desired position of the second feature of the second subcomponent and proximal to the first target location; calculating a second calculated location of the second feature of the second subcomponent; measuring a second measured location of the second feature of the second subcomponent; determining a second transformation matrix between the second calculated location and the second measured location; and repositioning the second feature of the second subcomponent to the second target location using the second robot, the repositioning based on the second transformation matrix.
  • 3. The method of claim 2, wherein measuring the first measured location of the first feature of the first subcomponent and measuring the second measured location of the second feature of the second subcomponent use a same metrology unit.
  • 4. The method of claim 2, wherein at least one of repositioning the first feature of the first subcomponent or repositioning the second feature of the second subcomponent is further based on a relative comparison of the first calculated location and the second calculated location.
  • 5. The method of claim 1, wherein at least one of the first subcomponent or the second subcomponent comprise a plurality of components.
  • 6. The method of claim 5, wherein the plurality of components comprises a chassis for a transport structure.
  • 7. The method of claim 1, wherein the first calculated location comprises a dynamic nominal location indicating a calculated location of a moving first feature at a specific time, the specific time coinciding with the measuring of the first measured location of the first feature.
  • 8. The method of claim 1, further comprising repeating the calculating, measuring, determining, and repositioning steps.
  • 9. The method of claim 8, further comprising adjusting at least one of accuracy boundaries or gain based on at least one of the repeating of the calculating, measuring, determining, and repositioning steps.
  • 10. The method of claim 1, wherein measuring the first measured location of the first feature of the first subcomponent comprises measuring by a processor.
  • 11. The method of claim 1, wherein repositioning the first feature of the first subcomponent to the first target location using the first robot based on the first transformation matrix comprises repositioning based on a relative comparison of the first calculated location and a second calculated location of the second feature of the second subcomponent.
  • 12. The method of claim 1, further comprising: measuring at least two locations on the first subcomponent, the at least two locations including the first target location, determining a fit for the at least two locations, and wherein the repositioning based on the first transformation matrix is further based on the fit.
  • 13. The method of claim 1, wherein measuring the first measured location of the first feature of the first subcomponent comprises scanning a shape of a part.
  • 14. The method of claim 1, wherein measuring the first measured location of the first feature of the first subcomponent comprises measuring a fixture point printed on the first subcomponent.
  • 15. The method of claim 1, further comprising attaching the first subcomponent to the second subcomponent.
  • 16. The method of claim 15, wherein attaching the first subcomponent to the second subcomponent includes attaching the first subcomponent to the second subcomponent using a ultra-violet (UV) adhesive.
  • 17. A system for robotic assembly, comprising: a first robot; a second robot; and a control unit coupled to the first robot and the second robot, wherein the control unit includes a processor configured to: receive a first target location of a first subcomponent coupled to the first robot, the first target location being a desired position of a first feature of the first subcomponent, wherein the first target location further comprises a location offset from a tool center point (TCP); calculate a first calculated location of the first feature of the first subcomponent; scan the first subcomponent to determine a relative location of the first feature of the first subcomponent relative to the TCP, the first robot configured to pick up the first subcomponent based on the scan; measure a first measured location of the first feature of the first subcomponent; determine a first transformation matrix between the first calculated location and the first measured location; reposition the first feature of the first subcomponent to the first target location using the first robot, based on the first transformation matrix; receive a second target location indicating where the second robot is to position a second feature of a second subcomponent relative to the first calculated location of the first feature of the first subcomponent; calculate a new location for the first feature based on receiving the second target location of the second subcomponent; and reposition the first feature based on the new location.
  • 18. The system of claim 17, the second target location being a desired position of the second feature of the second subcomponent and proximal to the first target location; wherein the control unit is further configured to calculate a second calculated location of the second feature of the second subcomponent; measure a second measured location of the second feature of the second subcomponent; determine a second transformation matrix between the second calculated location and the second measured location; and reposition the second feature of the second subcomponent to the second target location using the second robot based on the second transformation matrix.
  • 19. The system of claim 18, wherein the control unit is configured to use a same metrology unit to measure the first measured location of the first feature of the first subcomponent and to measure the second measured location of the second feature of the second subcomponent.
  • 20. The system of claim 18, wherein at least one of to reposition the first feature of the first subcomponent or to reposition the second feature of the second subcomponent is further based on a relative comparison of the first calculated location and the second calculated location.
  • 21. The system of claim 17, wherein at least one of the first subcomponent or the second subcomponent comprises a plurality of components.
  • 22. The system of claim 21, wherein the plurality of components comprises a chassis for a transport structure.
  • 23. The system of claim 20, wherein the first calculated location comprises a dynamic nominal location indicating a calculated location of a moving first feature at a specific time, the specific time coinciding with the measuring of the first measured location of the first feature.
  • 24. The system of claim 17, the control unit further configured to repeat the calculating, measuring, determining, and repositioning steps.
  • 25. The system of claim 24, the control unit further configured to adjust at least one of accuracy boundaries or gain based on at least one of the repeating of the calculating, measuring, determining, and repositioning steps.
  • 26. The system of claim 20, wherein the first robot is configured to signal the control unit to cause the control unit to measure the first measured location of the first feature of the first subcomponent.
  • 27. The system of claim 20, wherein to reposition the first feature of the first subcomponent to the first target location using the first robot based on the first transformation matrix comprises to reposition based on a relative comparison of the first calculated location and a second calculated location of the second feature of the second subcomponent.
  • 28. The system of claim 17, the control unit further configured to: characterize at least two features on the first subcomponent, the at least two features including the first target location, determine a fit for the at least two features, and wherein to reposition based on the first transformation matrix is further based on the fit.
  • 29. The system of claim 17, wherein to measure the first measured location of the first feature of the first subcomponent comprises to scan a shape of a part.
  • 30. The system of claim 17, wherein to measure the first measured location of the first feature of the first subcomponent comprises to measure a fixture point printed on the first subcomponent.
  • 31. The system of claim 17, the control unit further configured to attach the first subcomponent to the second subcomponent.
  • 32. The system of claim 31, wherein to attach the first subcomponent to the second subcomponent includes to attach the first subcomponent to the second subcomponent using a ultra-violet (UV) adhesive.
  • 33. The system of claim 17, wherein the control unit comprises a distributed control unit located in at least one of the first robot or the second robot.
  • 34. A robotic assembly control unit, comprising: at least one processor; and a memory coupled to the at least one processor, the memory including instructions configuring the control unit to: receive a first target location of a first subcomponent, the first target location being a desired position of a first feature of the first subcomponent, wherein the first target location further comprises a location offset from a tool center point (TCP); calculate a first calculated location of the first feature of the first subcomponent; scan the first subcomponent to determine a relative location of the first feature of the first subcomponent relative to the TCP, a first robot configured to pick up the first subcomponent based on the scan; measure a first measured location of the first feature of the first subcomponent; determine a first transformation matrix between the first calculated location and the first measured location; reposition the first feature of the first subcomponent to the first target location using the first robot, based on the first transformation matrix; receive a second target location indicating where a second robot is to position a second feature of a second subcomponent relative to the first calculated location of the first feature of the first subcomponent; calculate a new location for the first feature based on receiving the second target location of the second subcomponent; and reposition the first feature based on the new location.
  • 35. A non-transitory computer-readable medium storing computer executable code for robotic assembly, the code, when executed by a processor, causing the processor to: receive a first target location of a first subcomponent, the first target location being a desired position of a first feature of the first subcomponent, wherein the first target location further comprises a location offset from a tool center point (TCP); calculate a first calculated location of the first feature of the first subcomponent; scan the first subcomponent to determine a relative location of the first feature of the first subcomponent relative to the TCP, a first robot configured to pick up the first subcomponent based on the scan; measure a first measured location of the first feature of the first subcomponent; determine a first transformation matrix between the first calculated location and the first measured location; reposition the first feature of the first subcomponent to the first target location using the first robot, based on the first transformation matrix; receive a second target location indicating where a second robot is to position a second feature of a second subcomponent relative to the first calculated location of the first feature of the first subcomponent; calculate a new location for the first feature based on receiving the second target location of the second subcomponent; and reposition the first feature based on the new location.
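The claims above recite determining a transformation matrix between a calculated feature location and a measured feature location, then repositioning the feature based on that matrix. The patent does not specify how the matrix is computed; one common way to estimate such a rigid correction from corresponding point sets is the Kabsch (SVD) method. The sketch below is purely illustrative — the function names `rigid_transform` and `correct_target` are assumptions, not part of the claimed system:

```python
import numpy as np

def rigid_transform(calculated, measured):
    """Estimate the rigid transform (rotation R, translation t) mapping
    calculated feature points onto measured feature points, via the
    Kabsch/SVD method. Inputs are (N, 3) arrays of corresponding points.
    Returns a 4x4 homogeneous transformation matrix."""
    c_centroid = calculated.mean(axis=0)
    m_centroid = measured.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (calculated - c_centroid).T @ (measured - m_centroid)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the fitted rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = m_centroid - R @ c_centroid
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def correct_target(transform, target):
    """Map a nominal 3-vector target through the inverse of the measured
    error transform, yielding a corrected command position."""
    T_inv = np.linalg.inv(transform)
    return T_inv[:3, :3] @ target + T_inv[:3, 3]
```

In this sketch, the calculated points play the role of the first calculated location and the measured points the first measured location; repositioning then commands the robot toward the target adjusted by the inverse of the estimated error.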
US Referenced Citations (363)
Number Name Date Kind
5203226 Hongou et al. Apr 1993 A
5742385 Champa Apr 1998 A
5990444 Costin Nov 1999 A
6010155 Rinehart Jan 2000 A
6096249 Yamaguchi Aug 2000 A
6140602 Costin Oct 2000 A
6250533 Otterbein et al. Jun 2001 B1
6252196 Costin et al. Jun 2001 B1
6318642 Goenka et al. Nov 2001 B1
6365057 Whitehurst et al. Apr 2002 B1
6391251 Keicher et al. May 2002 B1
6409930 Whitehurst et al. Jun 2002 B1
6468439 Whitehurst et al. Oct 2002 B1
6554345 Jonsson Apr 2003 B2
6585151 Ghosh Jul 2003 B1
6644721 Miskech et al. Nov 2003 B1
6811744 Keicher et al. Nov 2004 B2
6866497 Saiki Mar 2005 B2
6919035 Clough Jul 2005 B1
6926970 James et al. Aug 2005 B2
7152292 Hohmann et al. Dec 2006 B2
7344186 Hausler et al. Mar 2008 B1
7500373 Quell Mar 2009 B2
7586062 Heberer Sep 2009 B2
7637134 Burzlaff et al. Dec 2009 B2
7710347 Gentilman et al. May 2010 B2
7716802 Stern et al. May 2010 B2
7745293 Yamazaki et al. Jun 2010 B2
7766123 Sakurai et al. Aug 2010 B2
7852388 Shimizu et al. Dec 2010 B2
7908922 Zarabadi et al. Mar 2011 B2
7951324 Naruse et al. May 2011 B2
8094036 Heberer Jan 2012 B2
8163077 Eron et al. Apr 2012 B2
8286236 Jung et al. Oct 2012 B2
8289352 Vartanian et al. Oct 2012 B2
8297096 Mizumura et al. Oct 2012 B2
8354170 Henry et al. Jan 2013 B1
8383028 Lyons Feb 2013 B2
8408036 Reith et al. Apr 2013 B2
8429754 Jung et al. Apr 2013 B2
8437513 Derakhshani et al. May 2013 B1
8444903 Lyons et al. May 2013 B2
8452073 Taminger et al. May 2013 B2
8599301 Dowski, Jr. et al. Dec 2013 B2
8606540 Haisty et al. Dec 2013 B2
8610761 Haisty et al. Dec 2013 B2
8631996 Quell et al. Jan 2014 B2
8675925 Derakhshani et al. Mar 2014 B2
8678060 Dietz et al. Mar 2014 B2
8686314 Schneegans et al. Apr 2014 B2
8686997 Radet et al. Apr 2014 B2
8694284 Berard Apr 2014 B2
8720876 Reith et al. May 2014 B2
8752166 Jung et al. Jun 2014 B2
8755923 Farahani et al. Jun 2014 B2
8787628 Derakhshani et al. Jul 2014 B1
8818771 Gielis et al. Aug 2014 B2
8873238 Wilkins Oct 2014 B2
8978535 Ortiz et al. Mar 2015 B2
9006605 Schneegans et al. Apr 2015 B2
9071436 Jung et al. Jun 2015 B2
9101979 Hofmann et al. Aug 2015 B2
9104921 Derakhshani et al. Aug 2015 B2
9126365 Mark et al. Sep 2015 B1
9128476 Jung et al. Sep 2015 B2
9138924 Yen Sep 2015 B2
9149988 Mark et al. Oct 2015 B2
9156205 Mark et al. Oct 2015 B2
9186848 Mark et al. Nov 2015 B2
9244986 Karmarkar Jan 2016 B2
9248611 Divine et al. Feb 2016 B2
9254535 Buller et al. Feb 2016 B2
9266566 Kim Feb 2016 B2
9269022 Rhoads et al. Feb 2016 B2
9327452 Mark et al. May 2016 B2
9329020 Napoletano May 2016 B1
9332251 Haisty et al. May 2016 B2
9346127 Buller et al. May 2016 B2
9389315 Bruder et al. Jul 2016 B2
9399256 Buller et al. Jul 2016 B2
9403235 Buller et al. Aug 2016 B2
9418193 Dowski, Jr. et al. Aug 2016 B2
9457514 Schwärzler Oct 2016 B2
9469057 Johnson et al. Oct 2016 B2
9478063 Rhoads et al. Oct 2016 B2
9481402 Muto et al. Nov 2016 B1
9486878 Buller et al. Nov 2016 B2
9486960 Paschkewitz et al. Nov 2016 B2
9502993 Deng Nov 2016 B2
9525262 Stuart et al. Dec 2016 B2
9533526 Nevins Jan 2017 B1
9555315 Aders Jan 2017 B2
9555580 Dykstra et al. Jan 2017 B1
9557856 Send et al. Jan 2017 B2
9566742 Keating et al. Feb 2017 B2
9566758 Cheung et al. Feb 2017 B2
9573193 Buller et al. Feb 2017 B2
9573225 Buller et al. Feb 2017 B2
9586290 Buller et al. Mar 2017 B2
9595795 Lane et al. Mar 2017 B2
9597843 Stauffer et al. Mar 2017 B2
9600929 Young et al. Mar 2017 B1
9609755 Coull et al. Mar 2017 B2
9610737 Johnson et al. Apr 2017 B2
9611667 GangaRao et al. Apr 2017 B2
9616623 Johnson et al. Apr 2017 B2
9626487 Jung et al. Apr 2017 B2
9626489 Nilsson Apr 2017 B2
9643361 Liu May 2017 B2
9662840 Buller et al. May 2017 B1
9665182 Send et al. May 2017 B2
9672389 Mosterman et al. Jun 2017 B1
9672550 Apsley et al. Jun 2017 B2
9676145 Buller et al. Jun 2017 B2
9684919 Apsley et al. Jun 2017 B2
9688032 Kia et al. Jun 2017 B2
9690286 Hovsepian et al. Jun 2017 B2
9700966 Kraft et al. Jul 2017 B2
9703896 Zhang et al. Jul 2017 B2
9713903 Paschkewitz et al. Jul 2017 B2
9718302 Voung et al. Aug 2017 B2
9718434 Hector, Jr. et al. Aug 2017 B2
9724877 Flitsch et al. Aug 2017 B2
9724881 Johnson et al. Aug 2017 B2
9725178 Wang Aug 2017 B2
9731730 Stiles Aug 2017 B2
9731773 Gami et al. Aug 2017 B2
9741954 Bruder et al. Aug 2017 B2
9747352 Karmarkar Aug 2017 B2
9764415 Seufzer et al. Sep 2017 B2
9764520 Johnson et al. Sep 2017 B2
9765226 Dain Sep 2017 B2
9770760 Liu Sep 2017 B2
9773393 Velez Sep 2017 B2
9776234 Schaafhausen et al. Oct 2017 B2
9782936 Glunz et al. Oct 2017 B2
9783324 Embler et al. Oct 2017 B2
9783977 Alqasimi et al. Oct 2017 B2
9789548 Golshany et al. Oct 2017 B2
9789922 Dosenbach et al. Oct 2017 B2
9796137 Zhang et al. Oct 2017 B2
9802108 Aders Oct 2017 B2
9809977 Carney et al. Nov 2017 B2
9817922 Glunz et al. Nov 2017 B2
9818071 Jung et al. Nov 2017 B2
9821339 Paschkewitz et al. Nov 2017 B2
9821411 Buller et al. Nov 2017 B2
9823143 Twelves, Jr. et al. Nov 2017 B2
9829564 Bruder et al. Nov 2017 B2
9846933 Yuksel Dec 2017 B2
9854828 Langeland Jan 2018 B2
9858604 Apsley et al. Jan 2018 B2
9862833 Hasegawa et al. Jan 2018 B2
9862834 Hasegawa et al. Jan 2018 B2
9863885 Zaretski et al. Jan 2018 B2
9870629 Cardno et al. Jan 2018 B2
9879981 Dehghan Niri et al. Jan 2018 B1
9884663 Czinger et al. Feb 2018 B2
9898776 Apsley et al. Feb 2018 B2
9914150 Pettersson et al. Mar 2018 B2
9919360 Buller et al. Mar 2018 B2
9931697 Levin et al. Apr 2018 B2
9933031 Bracamonte et al. Apr 2018 B2
9933092 Sindelar Apr 2018 B2
9957031 Golshany et al. May 2018 B2
9958535 Send et al. May 2018 B2
9962767 Buller et al. May 2018 B2
9963978 Johnson et al. May 2018 B2
9971920 Derakhshani et al. May 2018 B2
9976063 Childers et al. May 2018 B2
9987792 Flitsch et al. Jun 2018 B2
9988136 Tiryaki et al. Jun 2018 B2
9989623 Send et al. Jun 2018 B2
9990565 Rhoads et al. Jun 2018 B2
9994339 Colson et al. Jun 2018 B2
9996890 Cinnamon et al. Jun 2018 B1
9996945 Holzer et al. Jun 2018 B1
10002215 Dowski et al. Jun 2018 B2
10006156 Kirkpatrick Jun 2018 B2
10011089 Lyons et al. Jul 2018 B2
10011685 Childers et al. Jul 2018 B2
10012532 Send et al. Jul 2018 B2
10013777 Mariampillai et al. Jul 2018 B2
10015908 Williams et al. Jul 2018 B2
10016852 Broda Jul 2018 B2
10016942 Mark et al. Jul 2018 B2
10017384 Greer et al. Jul 2018 B1
10018576 Herbsommer et al. Jul 2018 B2
10022792 Srivas et al. Jul 2018 B2
10022912 Kia et al. Jul 2018 B2
10027376 Sankaran et al. Jul 2018 B2
10029415 Swanson et al. Jul 2018 B2
10040239 Brown, Jr. Aug 2018 B2
10046412 Blackmore Aug 2018 B2
10048769 Selker et al. Aug 2018 B2
10052712 Blackmore Aug 2018 B2
10052820 Kemmer et al. Aug 2018 B2
10055536 Maes et al. Aug 2018 B2
10058764 Aders Aug 2018 B2
10058920 Buller et al. Aug 2018 B2
10061906 Nilsson Aug 2018 B2
10065270 Buller et al. Sep 2018 B2
10065361 Susnjara et al. Sep 2018 B2
10065367 Brown, Jr. Sep 2018 B2
10068316 Holzer et al. Sep 2018 B1
10071422 Buller et al. Sep 2018 B2
10071525 Susnjara et al. Sep 2018 B2
10072179 Drijfhout Sep 2018 B2
10074128 Colson et al. Sep 2018 B2
10076875 Mark et al. Sep 2018 B2
10076876 Mark et al. Sep 2018 B2
10081140 Paesano et al. Sep 2018 B2
10081431 Seack et al. Sep 2018 B2
10086568 Snyder et al. Oct 2018 B2
10087320 Simmons et al. Oct 2018 B2
10087556 Gallucci et al. Oct 2018 B2
10099427 Mark et al. Oct 2018 B2
10100542 GangaRao et al. Oct 2018 B2
10100890 Bracamonte et al. Oct 2018 B2
10107344 Bracamonte et al. Oct 2018 B2
10108766 Druckman et al. Oct 2018 B2
10113600 Bracamonte et al. Oct 2018 B2
10118347 Stauffer et al. Nov 2018 B2
10118579 Lakic Nov 2018 B2
10120078 Bruder et al. Nov 2018 B2
10124546 Johnson et al. Nov 2018 B2
10124570 Evans et al. Nov 2018 B2
10137500 Blackmore Nov 2018 B2
10138354 Groos et al. Nov 2018 B2
10144126 Krohne et al. Dec 2018 B2
10145110 Carney et al. Dec 2018 B2
10151363 Bracamonte et al. Dec 2018 B2
10152661 Kieser Dec 2018 B2
10160278 Coombs et al. Dec 2018 B2
10161021 Lin et al. Dec 2018 B2
10166752 Evans et al. Jan 2019 B2
10166753 Evans et al. Jan 2019 B2
10171578 Cook et al. Jan 2019 B1
10173255 TenHouten et al. Jan 2019 B2
10173327 Kraft et al. Jan 2019 B2
10178800 Mahalingam et al. Jan 2019 B2
10179640 Wilkerson Jan 2019 B2
10183330 Buller et al. Jan 2019 B2
10183478 Evans et al. Jan 2019 B2
10189187 Keating et al. Jan 2019 B2
10189240 Evans et al. Jan 2019 B2
10189241 Evans et al. Jan 2019 B2
10189242 Evans et al. Jan 2019 B2
10190424 Johnson et al. Jan 2019 B2
10195693 Buller et al. Feb 2019 B2
10196539 Boonen et al. Feb 2019 B2
10197338 Melsheimer Feb 2019 B2
10200677 Trevor et al. Feb 2019 B2
10201932 Flitsch et al. Feb 2019 B2
10201941 Evans et al. Feb 2019 B2
10202673 Lin et al. Feb 2019 B2
10204216 Nejati et al. Feb 2019 B2
10207454 Buller et al. Feb 2019 B2
10209065 Estevo, Jr. et al. Feb 2019 B2
10210662 Holzer et al. Feb 2019 B2
10213837 Kondoh Feb 2019 B2
10214248 Hall et al. Feb 2019 B2
10214252 Schellekens et al. Feb 2019 B2
10214275 Goehlich Feb 2019 B2
10220575 Reznar Mar 2019 B2
10220881 Tyan et al. Mar 2019 B2
10221530 Driskell et al. Mar 2019 B2
10226900 Nevins Mar 2019 B1
10232550 Evans et al. Mar 2019 B2
10234342 Moorlag et al. Mar 2019 B2
10237477 Trevor et al. Mar 2019 B2
10252335 Buller et al. Apr 2019 B2
10252336 Buller et al. Apr 2019 B2
10254499 Cohen et al. Apr 2019 B1
10257499 Hintz et al. Apr 2019 B2
10259044 Buller et al. Apr 2019 B2
10268181 Nevins Apr 2019 B1
10269225 Velez Apr 2019 B2
10272860 Mohapatra et al. Apr 2019 B2
10272862 Whitehead Apr 2019 B2
10275564 Ridgeway et al. Apr 2019 B2
10279580 Evans et al. May 2019 B2
10285219 Fetfatsidis et al. May 2019 B2
10286452 Buller et al. May 2019 B2
10286603 Buller et al. May 2019 B2
10286961 Hillebrecht et al. May 2019 B2
10289263 Troy et al. May 2019 B2
10289875 Singh et al. May 2019 B2
10291193 Dandu et al. May 2019 B2
10294552 Liu et al. May 2019 B2
10294982 Gabrys et al. May 2019 B2
10295989 Nevins May 2019 B1
10303159 Czinger et al. May 2019 B2
10307824 Kondoh Jun 2019 B2
10310197 Droz et al. Jun 2019 B1
10313651 Trevor et al. Jun 2019 B2
10315252 Mendelsberg et al. Jun 2019 B2
10336050 Susnjara Jul 2019 B2
10337542 Hesslewood et al. Jul 2019 B2
10337952 Bosetti et al. Jul 2019 B2
10339266 Urick et al. Jul 2019 B2
10343330 Evans et al. Jul 2019 B2
10343331 McCall et al. Jul 2019 B2
10343355 Evans et al. Jul 2019 B2
10343724 Polewarczyk et al. Jul 2019 B2
10343725 Martin et al. Jul 2019 B2
10350823 Rolland et al. Jul 2019 B2
10356341 Holzer et al. Jul 2019 B2
10356395 Holzer et al. Jul 2019 B2
10357829 Spink et al. Jul 2019 B2
10357957 Buller et al. Jul 2019 B2
10359756 Newell et al. Jul 2019 B2
10369629 Mendelsberg et al. Aug 2019 B2
10382739 Rusu et al. Aug 2019 B1
10384393 Xu et al. Aug 2019 B2
10384416 Cheung et al. Aug 2019 B2
10389410 Brooks et al. Aug 2019 B2
10391710 Mondesir Aug 2019 B2
10392097 Pham et al. Aug 2019 B2
10392131 Deck et al. Aug 2019 B2
10393315 Tyan Aug 2019 B2
10400080 Ramakrishnan et al. Sep 2019 B2
10401832 Snyder et al. Sep 2019 B2
10403009 Mariampillai et al. Sep 2019 B2
10406750 Barton et al. Sep 2019 B2
10412283 Send et al. Sep 2019 B2
10416095 Herbsommer et al. Sep 2019 B2
10421496 Swayne et al. Sep 2019 B2
10421863 Hasegawa et al. Sep 2019 B2
10422478 Leachman et al. Sep 2019 B2
10425793 Sankaran et al. Sep 2019 B2
10427364 Alves Oct 2019 B2
10429006 Tyan et al. Oct 2019 B2
10434573 Buller et al. Oct 2019 B2
10435185 Divine et al. Oct 2019 B2
10435773 Liu et al. Oct 2019 B2
10436038 Buhler et al. Oct 2019 B2
10438407 Pavanaskar et al. Oct 2019 B2
10440351 Holzer et al. Oct 2019 B2
10442002 Benthien et al. Oct 2019 B2
10442003 Symeonidis et al. Oct 2019 B2
10449696 Elgar et al. Oct 2019 B2
10449737 Johnson et al. Oct 2019 B2
10461810 Cook et al. Oct 2019 B2
20060108783 Ni et al. May 2006 A1
20070017081 Becker Jan 2007 A1
20090240372 Bordyn et al. Sep 2009 A1
20110022216 Andersson Jan 2011 A1
20130010081 Tenney Jan 2013 A1
20140277669 Nardi et al. Sep 2014 A1
20150336271 Spicer et al. Nov 2015 A1
20160023355 Komatsu Jan 2016 A1
20170043477 Kitayama Feb 2017 A1
20170050277 Shi Feb 2017 A1
20170052534 Ghanem Feb 2017 A1
20170057082 Grigorenko et al. Mar 2017 A1
20170095930 Warashina Apr 2017 A1
20170113344 Schonberg Apr 2017 A1
20170341309 Piepenbrock et al. Nov 2017 A1
20180299267 Durand et al. Oct 2018 A1
20180345483 Sirkett Dec 2018 A1
20190358816 Saito Nov 2019 A1
Foreign Referenced Citations (38)
Number Date Country
1996036455 Nov 1996 WO
1996036525 Nov 1996 WO
1996038260 Dec 1996 WO
2003024641 Mar 2003 WO
2004108343 Dec 2004 WO
2005093773 Oct 2005 WO
2007003375 Jan 2007 WO
2007110235 Oct 2007 WO
2007110236 Oct 2007 WO
2008019847 Feb 2008 WO
2007128586 Jun 2008 WO
2008068314 Jun 2008 WO
2008086994 Jul 2008 WO
2008087024 Jul 2008 WO
2008107130 Sep 2008 WO
2008138503 Nov 2008 WO
2008145396 Dec 2008 WO
2009083609 Jul 2009 WO
2009098285 Aug 2009 WO
2009112520 Sep 2009 WO
2009135938 Nov 2009 WO
2009140977 Nov 2009 WO
2010125057 Nov 2010 WO
2010125058 Nov 2010 WO
2010142703 Dec 2010 WO
2011032533 Mar 2011 WO
2014016437 Jan 2014 WO
2014187720 Nov 2014 WO
2014195340 Dec 2014 WO
2015193331 Dec 2015 WO
2016116414 Jul 2016 WO
2017036461 Mar 2017 WO
2019030248 Feb 2019 WO
2019042504 Mar 2019 WO
2019048010 Mar 2019 WO
2019048498 Mar 2019 WO
2019048680 Mar 2019 WO
2019048682 Mar 2019 WO
Non-Patent Literature Citations (8)
Entry
US 9,202,136 B2, 12/2015, Schmidt et al. (withdrawn)
US 9,809,265 B2, 11/2017, Kinjo (withdrawn)
US 10,449,880 B2, 10/2019, Mizobata et al. (withdrawn)
International Search Report & Written Opinion received in PCT/US2019/066759 dated Mar. 5, 2020.
Jorge Corona-Castuera et al.; “An Approach for Intelligent Fixtureless Assembly: Issues and Experiments;” A. Gelbukh, A. de Albornoz, and H. Terashima (Eds.): MICAI 2005, LNAI 3789, pp. 1052-1061, 2005. © Springer-Verlag Berlin Heidelberg 2005.
Bone, G. and Capson D., “Vision-Guided fixtureless Assembly of Automotive Components”, Robotics and Computer Integrated Manufacturing, vol. 19, pp. 79-87, 2003. DOI: 10.1016/S0736-5845(02)00064-9.
Ogun, P. et al., 2015. “3D Vision Assisted Flexible Robotic Assembly of Machine Components.” In: Proceedings of 2015 8th International Conference on Machine Vision (ICMV 2015), Barcelona, Spain, Nov. 19-21, 2015 (Proceedings of SPIE, 9878, DOI: 10.1117/12.2229053).
James K. Mills et al., “Robotic Fixtureless Assembly of Sheet Metal Parts Using Dynamic Finite Element Models: Modelling and Simulation.” Laboratory for Nonlinear Systems Control, Department of Mechanical Engineering, University of Toronto, 5 King's College Road, Toronto, Ontario, Canada M5S 1A4. IEEE International Conference on Robotics and Automation 0-7803-1965-6/95 $4.00 © 1995 IEEE, 1995.
Related Publications (1)
Number Date Country
20200192311 A1 Jun 2020 US