Embodiments herein relate to methods and systems for forming three-dimensional printed parts.
Additive manufacturing systems are used to build 3D parts from digital representations of the parts using one or more additive manufacturing techniques. Examples of commercially available additive manufacturing techniques include extrusion-based techniques, ink jetting, selective laser sintering, powder/binder jetting, electron beam melting, and stereolithographic processes. For each of these techniques, the 3D digital representation of the part is initially sliced into multiple horizontal layers. In some embodiments, a tool path is then generated for each sliced layer, which provides instructions for the particular additive manufacturing system to form the given layer.
One particularly desirable additive manufacturing method is selective thermoplastic electrophotographic process (STEP) additive manufacturing, which allows for rapid, high-quality production of 3D parts. STEP manufacturing is performed by applying layers of thermoplastic material that are carried from an electrophotography (EP) engine by a transfer medium (e.g., a rotatable belt or drum). Each layer is then transferred to a build platform to print the 3D part (or support structure) in a layer-by-layer manner, where the successive layers are transfused together to produce the 3D part (or support structure). The layers are placed down in an X-Y plane, with successive layers positioned on top of one another along a Z-axis perpendicular to the X-Y plane.
A support structure is sometimes built utilizing the same deposition techniques by which the part material is deposited. The support structures are often built underneath overhanging portions of parts under construction, or in cavities of such parts, that are not supported by the part material itself. The part material adheres to the support material during fabrication, and the support material is subsequently removable from the completed 3D part when the printing process is complete. In typical STEP processes, layers of the part material and support material are deposited next to each other in a common X-Y plane. These layers of part and support material are each built on top of one another (layers of part material built on top of other layers of part material, and layers of support material built on top of other layers of support material) along the Z-axis to create a composite part that contains both part material and support material.
Although STEP deposition can produce very high-quality parts, it is still desirable to form even better parts. For example, in some implementations it is desirable to have better control over structural properties, such as height of the build material (part material and support material) to maintain precise part sizing.
The present disclosure is directed to a method for selective thermoplastic electrophotographic process (STEP) additive manufacturing. In particular, this disclosure is directed to measuring the height of the build (the deposited part material and support material) against a reference (desired) height and making adjustments in subsequent deposited layers to correct any deficiencies in the build.
In an example embodiment the method comprises providing a height sensor for detecting the local height of a portion of an additively manufactured build relative to a known datum; building a topographical height map using data from the height sensor; generating an error matrix based upon the height map; using the error matrix to generate a corresponding correction matrix; and using the correction matrix to apply future layers of build material to maintain planarity of the top of the part. Maintaining planarity can be very important for geometric accuracy and for the viability of making large parts, since loss of planarity can result in improper deposition of new layers, compounding the problem as the build grows in height.
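The error-matrix and correction-matrix steps above can be sketched as follows. This is a minimal illustration only; the function names, the proportional gain, and the choice to clamp corrections at zero (since deposited material cannot be removed) are assumptions for the sketch, not details of the disclosed system.

```python
# Hypothetical sketch of the height-correction loop: compare a measured
# height map against a reference plane, then derive per-cell corrections.
# reference_height, gain, and the clamp-at-zero policy are illustrative.

def error_matrix(height_map, reference_height):
    """Per-cell error: positive where the build is below the target plane."""
    return [[reference_height - h for h in row] for row in height_map]

def correction_matrix(errors, gain=1.0):
    """Proportional correction: low cells get extra material (positive
    correction); high cells are clamped to zero, since material already
    deposited cannot be removed."""
    return [[max(0.0, gain * e) for e in row] for row in errors]

height_map = [[0.98, 1.00],
              [1.02, 0.99]]            # measured local heights, mm
errors = error_matrix(height_map, reference_height=1.00)
corrections = correction_matrix(errors)
```

Applying `corrections` when imaging subsequent layers deposits more material over the low cells, driving the top of the build back toward planarity.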
Various sensors can be used, but in some embodiments the height sensor or sensors are optical, pneumatic, or capacitive. Typically just one type of sensor will be used, but in some embodiments two or more different types of sensors can be used. The height sensor can operate as, for example, a line-scan, area-scan, or raster-scan. When the height sensor is a line-scan or a raster-scan sensor the line advance motion can be supplied by the motion of the build (the part being formed on a moving platen) along its process direction. Thus, the sensors themselves are typically stationary but measure a height along the build as the build travels past (typically below) the sensor or sensors. In certain embodiments the height sensor comprises an array of individual sensors that span the full build area and are triggered to sample the build height based on an external input synchronized to platen motion. The sensor or sensors can be placed immediately adjacent to the transfer, fusing, or transfusing element, such that the reciprocating cycle time of the build can be minimized.
In some embodiments a previously-printed image is used as a mask for the sensor reading and the average of the masked error matrix is used as feedback for one or more biases (vzero) in the system to control average layer thickness. It is often desirable to correct for inconsistencies and errors in the measurement process. For example, movement of the build platen along its travel path can introduce errors, such as vibrations, to the height sensor measurements. In an example implementation an error matrix is adjusted to zero out Z-axis motion synchronous with X-axis motion.
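The masked-average feedback described above can be sketched as follows. The names (`mask`, `update_bias`, `feedback_gain`) are illustrative assumptions; the point is only that cells where no material was printed are excluded before averaging, and the masked mean error nudges a global thickness bias.

```python
# Hedged sketch: the previously-printed image acts as a mask so that only
# printed cells contribute to the average error, which then adjusts a
# global bias controlling average layer thickness.

def masked_mean_error(errors, mask):
    vals = [e for erow, mrow in zip(errors, mask)
            for e, m in zip(erow, mrow) if m]
    return sum(vals) / len(vals) if vals else 0.0

def update_bias(bias, errors, mask, feedback_gain=0.5):
    return bias + feedback_gain * masked_mean_error(errors, mask)

errors = [[0.04, 0.10],
          [-0.02, 0.30]]
mask   = [[1, 0],
          [1, 0]]          # right column was never printed; ignore it
new_bias = update_bias(0.0, errors, mask)  # mean of 0.04 and -0.02 is 0.01
```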
In some implementations the height sensor has a coarser resolution than the printed image, and a correction matrix is mapped to printer image space, such as (for example without limitation) by a bilinear interpolation.
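A bilinear mapping from the coarse sensor grid to the finer printer image space can be sketched as below; the grid sizes and function names are illustrative assumptions.

```python
def bilinear_sample(grid, x, y):
    """Sample a coarse 2-D grid at fractional coordinates (x, y)."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(grid[0]) - 1)
    y1 = min(y0 + 1, len(grid) - 1)
    fx, fy = x - x0, y - y0
    top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
    bot = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def upsample(grid, out_w, out_h):
    """Map a coarse correction matrix onto a finer printer-image grid."""
    sx = (len(grid[0]) - 1) / (out_w - 1)
    sy = (len(grid) - 1) / (out_h - 1)
    return [[bilinear_sample(grid, x * sx, y * sy)
             for x in range(out_w)] for y in range(out_h)]

coarse = [[0.0, 1.0],
          [2.0, 3.0]]
fine = upsample(coarse, 3, 3)   # center pixel interpolates to 1.5
```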
A correction mask can be used to adjust for errors, and optionally the correction mask is registered to the image space manually. In the alternative, the correction mask is registered to the image space with an automated algorithm comprising template matching, preferably including rotation and scale.
The correction to be applied can be undertaken in various manners. For example, in some implementations the correction is applied as a multiplier to the exposure time of each pixel printed, thus increasing the amount of part or support material applied to that location in subsequent layers. Further, the correction can be applied as the randomized or structured insertion of non-imaged pixels at specified densities, wherein the position of any such air pixel is varied from layer to layer. Also, the error matrix is optionally calculated relative to a tare image, which is the sensed area prior to any material being deposited on the build sheet, allowing built-in cancellation of any position-dependent structure deflections in the sensed height image from the build sheet or platen. Alternatively, the correction can include using a nonlinear function or arbitrary mapping, instead of time = constant * PID output.
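Two of the correction modes above, the per-pixel exposure-time multiplier and the randomized insertion of non-imaged "air" pixels, can be sketched as follows. Seeding the random generator with the layer index is one illustrative way to vary the air-pixel positions from layer to layer; it is an assumption for the sketch, not a disclosed mechanism.

```python
import random

def exposure_times(base_time, correction):
    """Scale each pixel's exposure time by its correction multiplier."""
    return [[base_time * c for c in row] for row in correction]

def air_pixel_mask(width, height, density, layer_index):
    """True marks a pixel left non-imaged this layer; the pattern changes
    every layer because the RNG is seeded with the layer index."""
    rng = random.Random(layer_index)
    return [[rng.random() < density for _ in range(width)]
            for _ in range(height)]

mask_a = air_pixel_mask(8, 8, density=0.25, layer_index=0)
mask_b = air_pixel_mask(8, 8, density=0.25, layer_index=1)
# successive layers skip different pixels, so voids do not stack
```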
In some embodiments a correction is applied to the error matrix wherein a measured load applied to the structure supporting the build and/or sensor is used to predict and compensate for errors caused by structural deflection under said load.
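As a sketch of the load-based compensation above, a simple linear stiffness model (deflection proportional to load) can predict the sag and remove it from the error matrix. The linear model and all names and values here are illustrative assumptions.

```python
# Hedged sketch: subtract deflection predicted from a measured load, so
# frame sag is not mistaken for missing build material. A uniform linear
# model (sag = load / stiffness) is assumed for illustration.

def deflection_compensated_errors(errors, load_n, stiffness_n_per_mm):
    sag = load_n / stiffness_n_per_mm      # predicted sag, mm
    return [[e - sag for e in row] for row in errors]

errors = [[0.05, 0.05]]
corrected = deflection_compensated_errors(errors, load_n=100.0,
                                          stiffness_n_per_mm=5000.0)
# the 0.02 mm of predicted sag is removed from each cell
```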
In certain embodiments location features are added to the build to help with registration. For example, dedicated features can be embedded within the build volume, detected by means of an image processing algorithm, compared to the known locations of those same features in the sliced image data, and used as an input to an image transformation function to register the scan data to the writer data. The location features, called fiducials, can be at fixed, dedicated locations where operators are not permitted to print parts. Alternatively, the fiducials are automatically or manually located at regions where operators have not placed parts in the build volume. In some embodiments the fiducials are detected by various methods, including blob detection, template matching, and circle detection. The fiducial detection is optionally limited to a region-of-interest around each fiducial encompassing the expected variability in fiducial locations, to reduce computation time required. The fiducial locations may be selected at the time of job slicing and passed through to the printer as the nominal locations against which the measured fiducials are matched. Further, the machine may be configured to provide a “gross” alignment that is used in each job before a valid set of fiducials is detected (such an alignment may be derived from detected fiducial locations in prior print jobs).
Optionally, the system to undertake the present process may include a line-scan triangulating laser profilometer. In an example implementation, four separate sensor units are set up side-by-side to get sufficient crosstrack field of view (Y axis, in STEP machine coordinates). Optionally, a dedicated industrial computer running a defined algorithm applies a transform—based on a vendor-proprietary alignment procedure—to each of the four sensors' point cloud outputs to get the data into the same coordinate system. These “line” point clouds are built up over many sequential scans to build an “area” point cloud, then projected to a 2-dimensional space and output to a control system as an array. This array is interpreted as an image, which is subsequently processed with common image processing techniques to implement the control loop.
The line scans described above are triggered by a signal from the motion control system, based on an encoder that measures the platen X-axis motion. Example resolution is 127 μm in the crosstrack direction and 254 μm in the intrack direction. Crosstrack resolution is limited by the profilometer resolution, while intrack resolution is limited by the processing power available within the sensor subsystem (i.e. the four sensors and the computer that merges their data). In many configurations the resulting image sizes are close to the computational limits of the cores allocated to them in the system, so significantly finer resolution is impractical without significantly more computing power.
Registration between top-of-part scan data and the "writer image" to which corrections are applied is often critical, and is called "scan-to-writer alignment". Well-registered scan data results in corrections being applied to the right pixels; improperly registered scan data results in the wrong pixels receiving corrections, which can result in poor realized part flatness.
Material flow in the topmost layers of the mold block can mitigate these effects somewhat because it helps average the corrections applied in a small neighborhood. However, material flow can be detrimental to geometric accuracy, and some materials do not flow as much as others in the STEP process, so allowable registration errors are smaller for higher-precision parts made of materials with a higher viscosity under their STEP process conditions.
To automate scan-to-writer registration, at least four degrees of freedom are typically defined—X and Y translation, and X and Y scale. Additional degrees of freedom can permit compensation for skew and keystone errors as well.
One approach is to detect all the edges in the scan and printed images and compare them. This can be computationally expensive because it requires iterating through various possible transformations and identifying one that matches well across large images. Instead, it can be preferred to create dedicated fiducial features in the margins around the printed parts. For example, the fiducials can be fixed in the corners, however they can also be placed within the build where space is available between parts. Registration accuracy can be improved with widely-spaced fiducials because the resulting transform will be less sensitive to small errors in the detected fiducial locations.
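The four-degree-of-freedom fit described above (X and Y translation, X and Y scale) can be estimated from two widely-spaced fiducials, as sketched below. The function names and coordinate values are illustrative assumptions.

```python
# Hedged sketch: solve for per-axis scale and translation that map two
# detected fiducial locations onto their nominal (slicer-specified)
# locations, then apply that transform to any scan coordinate.

def fit_translate_scale(detected, nominal):
    """detected, nominal: [(x, y), (x, y)] for two fiducials.
    Returns (sx, sy, tx, ty) mapping detected -> nominal coordinates."""
    (dx0, dy0), (dx1, dy1) = detected
    (nx0, ny0), (nx1, ny1) = nominal
    sx = (nx1 - nx0) / (dx1 - dx0)
    sy = (ny1 - ny0) / (dy1 - dy0)
    tx = nx0 - sx * dx0
    ty = ny0 - sy * dy0
    return sx, sy, tx, ty

def apply_transform(t, p):
    sx, sy, tx, ty = t
    return (sx * p[0] + tx, sy * p[1] + ty)

t = fit_translate_scale(detected=[(10.0, 10.0), (110.0, 210.0)],
                        nominal=[(0.0, 0.0), (200.0, 400.0)])
# the detected fiducials map exactly onto their nominal positions
```

Wide fiducial spacing helps here because the scale terms divide by the fiducial separation, so a fixed detection error perturbs `sx` and `sy` less as the separation grows.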
Several methods exist for detecting features in images, such as various forms of edge detection, template matching, or blob detection. One option is to create and detect straight edges near the exterior of the build volume, or to make use of existing edges such as those created by a "frame" around the build volume. However, edge effects in the STEP process can sometimes cause deformation of the features and variations in gloss. Gloss variations, and very steep roll-off (e.g., close to a vertical wall), affect the accuracy of the scan data in the nearby vicinity, so edges measured in this way can have a significant bias that could affect the accuracy of the transform. Alternatively, symmetrical features of size can be used (such as pins, holes, bosses, etc.); for such features the edge errors tend to average out, leaving the center point detectable with reasonable accuracy, on the order of half of a scan pixel.
Template matching is an optional technique to detect such features. The measured size of fiducials can change somewhat as the build progresses and the edge shape changes. This makes template matching more difficult and computationally-intensive, as scale adjustments would need to be iterated over. Further, it is desirable to minimize the size of a fiducial feature, whereas template matching relies heavily on having many pixels available to match and may not produce the most accurate results.
Blob detection can also be used, but small local defects in the shape can significantly influence the output of a blob detection scheme—namely the centroid. It is important to have a detection method that is robust to the occasional local geometric or sensing error.
Circular fiducials with a circle detection algorithm can be used, employing (for example) a Hough transform. This is an established algorithm in machine vision, which first edge-finds, then iterates through possible circle center points and circle diameters, and for each combination counts the number of edge pixels that lie on such a circle. Locations and diameters with a sufficiently high number of "votes" are considered circles. Since it is known that there is only one circle within each region of interest, the circle with the highest number of votes is used.
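The voting scheme above can be sketched in miniature as follows. Production implementations accumulate votes over edge images; this toy version simply counts edge points lying near each candidate circle, and all names and the tolerance value are illustrative assumptions.

```python
import math

# Minimal sketch of Hough-style circle voting over a set of edge points:
# for each candidate (center, radius), count edge points within a small
# tolerance of that circle, and keep the candidate with the most votes.

def hough_circle(edges, centers, radii, tol=0.3):
    """Return ((center, radius), votes) with the most edge-point votes."""
    best, best_votes = None, -1
    for cx, cy in centers:
        for r in radii:
            votes = sum(1 for x, y in edges
                        if abs(math.hypot(x - cx, y - cy) - r) <= tol)
            if votes > best_votes:
                best, best_votes = ((cx, cy), r), votes
    return best, best_votes

# Edge points sampled from a circle of radius 3 centered at (5, 5)
edges = [(5 + 3 * math.cos(a), 5 + 3 * math.sin(a))
         for a in [i * math.pi / 6 for i in range(12)]]
centers = [(x, y) for x in range(4, 7) for y in range(4, 7)]
best, votes = hough_circle(edges, centers, radii=[2, 3, 4])
# best == ((5, 5), 3) with all 12 edge points voting
```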
Fiducial locations are typically established in the slicer, then passed to the control system in a parameter file that accompanies the job data. The control system can then define a Region of Interest ("ROI") around the center points to look for fiducials. By defining an ROI substantially smaller (on the order of ~50,000 pixels per ROI, vs ~15,000,000 total pixels in the scan) than the whole scan or toner image, computation time can be drastically reduced. Once the fiducial location within the ROI is detected, a transformation is applied to the scan data to bring it into "toner image space." It is possible to use an established four-point transform called a Perspective Transform, which maps any four points to any other four points, and interpolates the intermediate pixels according to the same warp matrix. This transform permits a low-order correction for build deformation. Higher-order transforms (requiring more fiducials) permit higher-order corrections for build deflection (i.e. in the X-Y plane). Other transforms are possible with fewer or more matching points. Transforms up to perspective transforms maintain straight lines through the transformation; higher-order transforms would not maintain straight lines. Once this image transformation is complete, the TOP algorithm can proceed.
The TOP system is a line-scan triangulating laser profilometer. In an example implementation, four separate sensor units are set up side-by-side to get sufficient crosstrack field of view (Y axis, in STEP machine coordinates). A dedicated industrial PC running a vendor-proprietary algorithm can be used to apply a transform, based on a vendor-proprietary alignment procedure, to each of the four sensors' point cloud outputs to get the data into the same coordinate system. These "line" point clouds are built up over many sequential scans to build an "area" point cloud, then projected to a 2-dimensional space and output to a control system ("Luminous") as an array. This array is interpreted as an image, which is subsequently processed with common image processing techniques to implement the control loop.
The line scans described above are triggered by a signal from the motion control system, based on an encoder that measures the platen X motion. Typical resolution is 127 μm in the crosstrack direction and 254 μm in the intrack direction. Crosstrack resolution is limited by the profilometer resolution, while intrack resolution is limited by the processing power available within the sensor subsystem (i.e. the four sensors and the dedicated PC that merges their data). Realistically, the resulting image sizes are close to the computational limits of the cores allocated to them in the Luminous system, so significantly finer resolution is impractical without significantly more computing power.
Unless otherwise specified, the following terms as used herein have the meanings provided below:
The term “copolymer” refers to a polymer having two or more monomer species.
The terms “preferred” and “preferably” refer to embodiments of the invention that may afford certain benefits, under certain circumstances. However, other embodiments may also be preferred, under the same or other circumstances. Furthermore, the recitation of one or more preferred embodiments does not imply that other embodiments are not useful, and is not intended to exclude other embodiments from the inventive scope of the present disclosure.
Reference to "a" chemical compound refers to one or more molecules of the chemical compound, rather than being limited to a single molecule of the chemical compound. Furthermore, the one or more molecules may or may not be identical, so long as they fall under the category of the chemical compound.
The terms “at least one” and “one or more of” an element are used interchangeably, and have the same meaning that includes a single element and a plurality of the elements, and may also be represented by the suffix “(s)” at the end of the element.
Directional orientations such as “above”, “below”, “top”, “bottom”, and the like are made with reference to a direction along a printing axis of a 3D part. In the embodiments in which the printing axis is a vertical z-axis, the layer-printing direction is the upward direction along the vertical z-axis. In these embodiments, the terms “above”, “below”, “top”, “bottom”, and the like are based on the vertical z-axis. However, in embodiments in which the layers of 3D parts are printed along a different axis, the terms “above”, “below”, “top”, “bottom”, and the like are relative to the given axis.
The terms “about” and “substantially” are used herein with respect to measurable values and ranges due to expected variations known to those skilled in the art (e.g., limitations and variabilities in measurements).
The term “providing”, such as for “providing a material” and the like, when recited in the claims, is not intended to require any particular delivery or receipt of the provided item. Rather, the term “providing” is merely used to recite items that will be referred to in subsequent elements of the claim(s), for purposes of clarity and ease of readability.
The term “selective deposition” refers to an additive manufacturing technique where one or more layers of particles are fused to previously deposited layers utilizing heat and pressure over time where the particles fuse together to form a layer of the part and also fuse to the previously printed layer.
The term “electrostatography” refers to the formation and utilization of latent electrostatic charge patterns to form an image of a layer of a part, a support structure or both on a surface. Electrostatography includes, but is not limited to, electrophotography where optical energy is used to form the latent image, ionography where ions are used to form the latent image and/or electron beam imaging where electrons are used to form the latent image.
The terms “resilient material” and “flowable material” describe distinct materials used in the printing of a 3D part and support. The resilient material has a higher viscosity and/or storage modulus relative to the flowable material.
Unless otherwise specified, pressures referred to herein are based on atmospheric pressure (i.e. one atmosphere).
Embodiments of the present disclosure relate to a selective deposition-based additive manufacturing system, such as an electrostatography-based additive manufacturing system, to print 3D parts and/or support structures with high resolution and smooth surfaces. During a printing operation, electrostatographic engines develop or otherwise image each layer of the part and support materials using the electrostatographic process. The developed layers are then transferred to a layer transfusion assembly where they are transfused (e.g., using heat and/or pressure over time) to print one or more 3D parts and support structures in a layer-by-layer manner.
The EP engines 12p and 12s are imaging engines for respectively imaging or otherwise developing layers, generally referred to as 22, of the powder-based part and support materials, where the part and support materials are each preferably engineered for use with the particular architecture of the EP engine 12p or 12s. As discussed below, the developed layers 22 are transferred to a transfer medium (such as belt 24) of the transfer assembly 14, which delivers the layers 22 to the transfusion assembly 20. The transfusion assembly 20 operates to build the 3D part 26, which may include support structures and other features, in a layer-by-layer manner by transfusing the layers 22 together on a build platform 28.
In some embodiments, the transfer medium includes a belt 24, as shown in
In some embodiments, the transfer assembly 14 includes one or more drive mechanisms that include, for example, a motor 30 and a drive roller 33, or other suitable drive mechanism, and operate to drive the transfer medium or belt 24 in a feed direction 32. In some embodiments, the transfer assembly 14 includes idler rollers 34 that provide support for the belt 24. The example transfer assembly 14 illustrated in
The EP engine 12s develops layer or image portions 22s of powder-based support material, and the EP engine 12p develops layer or image portions 22p of powder-based part/build material. In some embodiments, the EP engine 12s is positioned upstream from the EP engine 12p relative to the feed direction 32, as shown in
Example system 10 also includes controller 36, which represents one or more processors that are configured to execute instructions, which may be stored locally in memory of the system 10 or in memory that is remote to the system 10, to control components of the system 10 to perform one or more functions described herein. In some embodiments, the controller 36 includes one or more control circuits, microprocessor-based engine control systems, and/or digitally-controlled raster imaging processor systems, and is configured to operate the components of system 10 in a synchronized manner based on printing instructions received from a host computer 38 or a remote location.
In some embodiments, the host computer 38 includes one or more computer-based systems that are configured to communicate with controller 36 to provide the print instructions (and other operating information). For example, the host computer 38 may transfer information to the controller 36 that relates to the sliced layers of the 3D parts and support structures, thereby allowing the system 10 to print the 3D parts 26 and support structures in a layer-by-layer manner. The controller 36 may also use signals from one or more sensors to assist in properly registering the printing of the part or image portion 22p and/or the support structure or image portion 22s with a previously printed corresponding support structure portion 22s or part portion 22p on the belt 24 to form the individual layers 22.
The components of system 10 may be retained by one or more frame structures (not shown for simplicity). Additionally, the components of system 10 may be retained within an enclosable housing (not shown for simplicity) that prevents components of the system 10 from being exposed to ambient light during operation.
The photoconductive surface 46 can be a thin film extending around the circumferential surface of the conductive drum body 44, and is preferably derived from one or more photoconductive materials, such as amorphous silicon, selenium, zinc oxide, organic materials, and the like. As discussed below, the surface 46 is configured to receive latent-charged images of the sliced layers of a 3D part or support structure (or negative images), and to attract charged particles of the part or support material to the charged or discharged image areas, thereby creating the layers of the 3D part or support structure.
As further shown, each of the example EP engines 12p and 12s also includes a charge inducer 54, an imager 56, a development station 58, a cleaning station 60, and a discharge device 62, each of which may be in signal communication with the controller 36. The charge inducer 54, the imager 56, the development station 58, the cleaning station 60, and the discharge device 62 accordingly define an image-forming assembly for the surface 46 while the drive motor 50 and the shaft 48 rotate the photoconductor drum 42 in the direction 52.
Each of the EP engines 12 uses the powder-based material (e.g., polymeric or thermoplastic toner), generally referred to herein by reference character 66, to develop or form the layers 22. In some embodiments, the image-forming assembly for the surface 46 of the EP engine 12s is used to form support layers 22s (e.g., image portions) of powder-based support material 66s, where a supply of the support material 66s may be retained by the development station 58 (of the EP engine 12s) along with carrier particles. Similarly, the image-forming assembly for the surface 46 of the EP engine 12p is used to form part layers 22p (e.g., image portion) of powder-based part material 66p, where a supply of the part material 66p may be retained by the development station 58 (of the EP engine 12p) along with carrier particles. Additional EP engines 12 may be included that utilize other support or part materials 66.
The charge inducer 54 is configured to generate a uniform electrostatic charge on the surface 46 as the surface 46 rotates in the direction 52 past the charge inducer 54. Suitable devices for the charge inducer 54 include corotrons, scorotrons, charging rollers, and other electrostatic charging devices.
Each imager 56 is a digitally-controlled, pixel-wise light exposure apparatus configured to selectively emit electromagnetic radiation toward the uniform electrostatic charge on the surface 46 as the surface 46 rotates in the direction 52 past the imager 56. The selective exposure of the electromagnetic radiation to the surface 46 is directed by the controller 36, and causes discrete pixel-wise locations of the electrostatic charge to be removed (i.e., discharged to ground), thereby forming latent image charge patterns on the surface 46.
Suitable devices for the imager 56 include scanning laser (e.g., gas or solid-state lasers) light sources, light emitting diode (LED) array exposure devices, and other exposure devices conventionally used in 2D electrophotography systems. In alternative embodiments, suitable devices for the charge inducer 54 and the imager 56 include ion-deposition systems configured to selectively directly deposit charged ions or electrons to the surface 46 to form the latent image charge pattern.
Each development station 58 is an electrostatic and magnetic development station or cartridge that retains the supply of the part material 66p or the support material 66s, along with carrier particles. The development stations 58 may function in a similar manner to single or dual component development systems and toner cartridges used in 2D electrophotography systems. For example, each development station 58 may include an enclosure for retaining the part material 66p or the support material 66s and carrier particles. When agitated, the carrier particles generate triboelectric charges to attract the powders of the part material 66p or the support material 66s, which charges the attracted powders to a desired sign and magnitude, as discussed below.
Each development station 58 may also include one or more devices for transferring the charged part or the support material 66p or 66s to the surface 46, such as conveyors, fur brushes, paddle wheels, rollers, and/or magnetic brushes. For instance, as the surface 46 (containing the latent charged image) rotates from the imager 56 to the development station 58 in the direction 52, the charged part material 66p or the support material 66s is attracted to the appropriately charged regions of the latent image on the surface 46, utilizing either charged area development or discharged area development (depending on the electrophotography mode being utilized). This creates successive layers 22p or 22s as the photoconductor drum continues to rotate in the direction 52, where the successive layers 22p or 22s correspond to the successive sliced layers of the digital representation of the 3D part or support structure.
The successive layers 22p or 22s are then rotated with the surface 46 in the direction 52 to a transfer region in which layers 22p or 22s are successively transferred from the photoconductor drum 42 to the belt 24 or other transfer medium, as discussed below. While illustrated as a direct engagement between the photoconductor drum 42 and the belt 24, in some preferred embodiments, the EP engines 12p and 12s may also include intermediary transfer drums and/or belts, as discussed further below.
After a given layer 22p or 22s is transferred from the photoconductor drum 42 to the belt 24 (or an intermediary transfer drum or belt), the drive motor 50 and the shaft 48 continue to rotate the photoconductor drum 42 in the direction 52 such that the region of the surface 46 that previously held the layer 22p or 22s passes the cleaning station 60. The cleaning station 60 is a station configured to remove any residual, non-transferred portions of part or support material 66p or 66s. Suitable devices for the cleaning station 60 include blade cleaners, brush cleaners, electrostatic cleaners, vacuum-based cleaners, and combinations thereof.
After passing the cleaning station 60, the surface 46 continues to rotate in the direction 52 such that the cleaned regions of the surface 46 pass the discharge device 62 to remove any residual electrostatic charge on the surface 46, prior to starting the next cycle. Suitable devices for the discharge device 62 include optical systems, high-voltage alternating-current corotrons and/or scorotrons, one or more rotating dielectric rollers having conductive cores with applied high-voltage alternating-current, and combinations thereof.
The biasing mechanisms 16 are configured to induce electrical potentials through the belt 24 to electrostatically attract the layers 22p and 22s from the EP engines 12p and 12s to the belt 24. Because the layers 22p and 22s are each only a single layer increment in thickness at this point in the process, electrostatic attraction is suitable for transferring the layers 22p and 22s from the EP engines 12p and 12s to the belt 24.
The controller 36 preferably rotates the photoconductor drums of the EP engines 12p and 12s at the same rotational rates that are synchronized with the line speed of the belt 24 and/or with any intermediary transfer drums or belts. This allows the system 10 to develop and transfer the layers 22p and 22s in coordination with each other from separate developer images. In particular, as shown, each part layer 22p may be transferred to the belt 24 with proper registration with each support layer 22s to produce a combined part and support material layer or combined image layer, which is generally designated as layer 22. As can be appreciated, some of the layers 22 transferred to the layer transfusion assembly 20 may only include support material 66s or may only include part material 66p, depending on the particular support structure and 3D part geometries and layer slicing.
In an alternative embodiment, the part layers 22p and the support layers 22s may optionally be developed and transferred along the belt 24 separately, such as with alternating layers 22p and 22s. These successive, alternating layers 22p and 22s may then be transferred to layer transfusion assembly 20, where they may be transfused separately to form the layer 22 and print or build the 3D part 26 and support structure.
In a further alternative embodiment, one or both of the EP engines 12p and 12s may also include one or more intermediary transfer drums and/or belts between the photoconductor drum 42 and the transfer medium (e.g., the belt 24). For example, as shown in
The EP engine 12s may include the same arrangement of an intermediary drum 42a for carrying the developed layers 22s from the photoconductor drum 42 to the belt 24. The use of such intermediary transfer drums or belts for the EP engines 12p and 12s can be beneficial for thermally isolating the photoconductor drum 42 from the belt 24, if desired.
The build platform 28 is supported by a gantry 84 or other suitable mechanism, which can be configured to move the build platform 28 along the z-axis and the x-axis (and, optionally, also the y-axis), as illustrated schematically in
In the illustrated embodiment, the build platform 28 can be heatable with heating element 90 (e.g., an electric heater). The heating element 90 is configured to heat and maintain the build platform 28 at an elevated temperature that is greater than room temperature (25° C.), such as at a desired average part temperature of 3D part 26p and/or support structure 26s, as discussed in Comb et al., U.S. Patent Application Publication Nos. 2013/0186549 and 2013/0186558. This allows the build platform 28 to assist in maintaining 3D part 26p and/or support structure 26s at this average part temperature.
The nip roller 70 is an example heatable element or heatable layer transfusion element, which is configured to rotate around a fixed axis with the movement of the belt 24. In particular, the nip roller 70 may roll against the rear surface 24b of the belt 24 in the direction of arrow 92 while the belt 24 rotates in the feed direction 32. In the shown embodiment, the nip roller 70 is heatable with a heating element 94 (e.g., an electric heater). The heating element 94 is configured to heat and maintain the nip roller 70 at an elevated temperature that is greater than room temperature (25° C.), such as at a desired transfer temperature for the layers 22.
The pre-transfusion heater 72 includes one or more heating devices (e.g., an infrared heater and/or a heated air jet) that are configured to heat the layers 22 on the belt 24 to a selected temperature, such as up to a fusion temperature of the part material 66p and the support material 66s, prior to reaching the nip roller 70. Each layer 22 desirably passes by (or through) the heater 72 for a sufficient residence time to heat the layer 22 to the intended transfer temperature. The pre-transfusion heater 74 may function in the same manner as the heater 72, heating the top surfaces of the 3D part 26p and support structure 26s on the build platform 28 to an elevated temperature and, in one embodiment, supplying heat to the layer 22 upon contact.
The part and support materials 66p and 66s of the layers 22p and 22s may be heated together with the heater 72 to substantially the same temperature, and the part and support materials 66p and 66s at the top surfaces of the 3D part 26p and support structure 26s may be heated together with the heater 74 to substantially the same temperature. This allows the part layers 22p and the support layers 22s to be transfused together to the top surfaces of the 3D part 26p and the support structure 26s in a single transfusion step as the combined layer 22. As discussed below, a gap can be placed between the support layers 22s and the part layers 22p, and under heat and pressure the part and support materials are pressed together so as to produce an improved interface with reduced surface roughness.
An optional post-transfusion heater 76 may be provided downstream from nip roller 70 and upstream from air jets 78, and configured to heat the transfused layers 22 to an elevated temperature in a single post-fuse step.
As mentioned above, in some embodiments, prior to building the part 26 on the build platform 28, the build platform 28 and the nip roller 70 may be heated to their selected temperatures. For example, the build platform 28 may be heated to the average part temperature (e.g., bulk temperature) of 3D part 26p and support structure 26s. In comparison, the nip roller 70 may be heated to a desired transfer temperature or nip entrance temperature for the layers 22.
As further shown in
The continued rotation of the belt 24 and the movement of the build platform 28 align or register the heated layer 22 (e.g., combined image layer) with the heated top surfaces of 3D part 26p and support structure 26s with proper registration along the x-axis. The gantry 84 may continue to move the build platform 28 along the x-axis, at a rate that is synchronized with the rotational rate of the belt 24 in the feed direction 32 (i.e., the same directions and speed). This causes the rear surface 24b of the belt 24 to rotate around the nip roller 70 to nip the belt 24 and the heated layer 22 against the top surfaces of 3D part 26p and support structure 26s. This presses the heated layer 22 against the heated top surfaces of 3D part 26p and support structure 26s at the location of the nip roller 70, which at least partially transfuses the heated layer 22 to the top layers of 3D part 26p and support structure 26s.
As the transfused layer 22 passes the nip of the nip roller 70, the belt 24 wraps around the nip roller 70 to separate and disengage from the build platform 28. This assists in releasing the transfused layer 22 from the belt 24, allowing the transfused layer 22 to remain adhered to 3D part 26p and support structure 26s. Maintaining the transfusion interface temperature at a transfer temperature that is higher than its glass transition temperature, but lower than its fusion temperature, allows the heated layer 22 to be hot enough to adhere to the 3D part 26p and support structure 26s, while also being cool enough to readily release from the belt 24. Additionally, the close melt rheologies of the part and support materials allow them to be transfused in the same step. The temperature and pressures can be selected, as is discussed below, to promote flow of part material and support material into a gap between the two materials. While the melt rheologies are preferably close, the materials can, in some constructions, be transfused with glass transition temperatures that are significantly different from one another. This flow into the gap, typically accompanied by an upward movement of the part and support material, results in a smoother interface between the part and support, plus a smoother surface for the part after removal of the support.
After release, the gantry 84 continues to move the build platform 28 along the x-axis to the post-transfusion heater 76. At optional post-transfusion heater 76, the top-most layers of 3D part 26p and the support structure 26s (including the transfused layer 22) may then be heated to at least the fusion temperature of the thermoplastic-based powder in a post-fuse or heat-setting step. This optionally heats the material of the transfused layer 22 to a highly fusable state such that polymer molecules of the transfused layer 22 quickly interdiffuse (also referred to as reptate) to achieve a high level of interfacial entanglement with 3D part 26p and support structure 26s.
Additionally, as the gantry 84 continues to move the build platform 28 along the x-axis past the post-transfusion heater 76 to the air jets 78, the air jets 78 blow cooling air towards the top layers of 3D part 26p and support structure 26s. This actively cools the transfused layer 22 down to the average part temperature, as discussed in Comb et al., U.S. Patent Application Publication Nos. 2013/0186549 and 2013/0186558.
To assist in keeping the 3D part 26p and support structure 26s at the average part temperature, in some embodiments, the heater 74 and/or the heater 76 may operate to heat only the top-most layers of 3D part 26p and support structure 26s. For example, in embodiments in which heaters 72, 74, and 76 are configured to emit infrared radiation, the 3D part 26p and support structure 26s may include heat absorbers and/or other colorants configured to restrict penetration of the infrared wavelengths to within the top-most layers. Alternatively, the heaters 72, 74, and 76 may be configured to blow heated air across the top surfaces of 3D part 26p and support structure 26s. In either case, limiting the thermal penetration into 3D part 26p and support structure 26s allows the top-most layers to be sufficiently transfused, while also reducing the amount of cooling required to keep 3D part 26p and support structure 26s at the average part temperature. However, sufficient thermal penetration is generally desired to promote flow of part material and support material into gaps positioned at the interface between the part and support material.
The gantry 84 may then actuate the build platform 28 downward and move the build platform 28 back along the x-axis to a starting position along the x-axis, following the reciprocating rectangular pattern 86. The build platform 28 desirably reaches the starting position for proper registration with the next layer 22. In some embodiments, the gantry 84 may also actuate the build platform 28 and 3D part 26p/support structure 26s upward for proper registration with the next layer 22. The same process may then be repeated for each remaining layer 22 of 3D part 26p and support structure 26s.
After the transfusion operation is completed, the resulting 3D part 26p and support structure 26s may be removed from system 10 and undergo one or more post-printing operations. For example, support structure 26s may be sacrificially removed from 3D part 26p using an aqueous-based solution, such as an aqueous alkali solution. Under this technique, support structure 26s may at least partially dissolve in the solution, separating it from 3D part 26p in a hands-free manner.
In comparison, part materials are chemically resistant to aqueous alkali solutions. This allows an aqueous alkali solution to be employed for removing the sacrificial support structure 26s without degrading the shape or quality of 3D part 26p. Examples of suitable systems and techniques for removing support structure 26s in this manner include those disclosed in Swanson et al., U.S. Pat. No. 8,459,280; Hopkins et al., U.S. Pat. No. 8,246,888; and Dunn et al., U.S. Patent Application Publication No. 2011/0186081; each of which is incorporated by reference to the extent that it does not conflict with the present disclosure.
Furthermore, after support structure 26s is removed, 3D part 26p may undergo one or more additional post-printing processes, such as surface treatment processes. Examples of suitable surface treatment processes include those disclosed in Priedeman et al., U.S. Pat. No. 8,123,999; and in Zinniel, U.S. Pat. No. 8,765,045.
In an example embodiment the method comprises providing a height sensor for detecting the local height of a portion of an additively manufactured build relative to a known datum; building a topographical height map using data from the height sensor; generating an error matrix based upon the height map; using the error matrix to generate a corresponding correction matrix; and using the correction matrix to apply future layers of build material to maintain planarity of the top of the part.
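The control loop recited above (height map, error matrix, correction matrix, corrected deposition) can be sketched as follows. This is an illustrative Python sketch only; the function names, the proportional gain, and the clipping range are assumptions and are not part of the disclosure.

```python
import numpy as np

def error_matrix(height_map, target_height):
    """Per-pixel deviation of the measured build surface from the datum."""
    return height_map - target_height

def correction_matrix(error, gain=0.5):
    """Proportional correction: deposit more material where the build is low,
    less where it is high. Clipped so the multiplier stays non-negative."""
    return np.clip(1.0 - gain * error, 0.0, 2.0)

# One control iteration on a toy 3x3 height map (units: layer thicknesses).
height = np.array([[1.0, 1.1, 1.0],
                   [0.9, 1.0, 1.0],
                   [1.0, 1.0, 1.2]])
corr = correction_matrix(error_matrix(height, target_height=1.0))
```

Applying `corr` as a per-pixel multiplier to subsequent layers nudges low regions up and high regions down, maintaining planarity of the top of the part.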
Various sensors can be used, but in some embodiments the height sensor or sensors are optical, pneumatic, or capacitive. Typically, just one type of sensor will be used, but in some embodiments, two or more different types of sensors can be used. The height sensor can operate as, for example, a line-scan, area-scan, or raster-scan sensor. When the height sensor is a line-scan or a raster-scan sensor, the line advance motion can be supplied by the motion of the build (the part being formed on a moving platen) along its process direction. Thus, the sensors themselves are typically stationary but measure a height along the build as the build travels past (typically below) the sensor or sensors. In certain embodiments the height sensor comprises an array of individual sensors that span the full build area and are triggered to sample the build height based on an external input synchronized to platen motion. The sensor or sensors can be placed immediately adjacent to the transfer, fusing, or transfusing element, such that the reciprocating cycle time of the build can be minimized.
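The line-scan arrangement above, in which platen motion supplies the line advance, amounts to stacking successively triggered line scans into a two-dimensional height map. A minimal sketch (the array orientation, rows as intrack positions and columns as crosstrack positions, is an assumed convention):

```python
import numpy as np

def accumulate_scans(lines):
    """Stack encoder-triggered line scans (one per platen position) into a
    2-D topographical height map: rows = intrack, columns = crosstrack."""
    return np.vstack(lines)

# Three triggered scans of a 4-pixel-wide line sensor as the platen advances.
scan_lines = [np.full(4, z) for z in (0.0, 0.1, 0.2)]
height_map = accumulate_scans(scan_lines)
```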
In some embodiments a previously printed image is used as a mask for the sensor reading and the average of the masked error matrix is used as feedback for one or more biases (Vzero) in the system to control average layer thickness. It is often desirable to correct for inconsistencies and errors in the measurement process. For example, movement of the build platen along its travel path can introduce errors, such as vibrations, to the height sensor measurements. In an example implementation an error matrix is adjusted to zero out Z-axis motion synchronous with X-axis motion.
In some implementations the height sensor has a coarser resolution than the printed image, and a correction matrix is mapped to printer image space, such as by bilinear interpolation.
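Mapping the coarse sensor-resolution correction matrix to the finer printer image grid by bilinear interpolation can be sketched in pure NumPy as below. The function name and grid conventions are illustrative assumptions.

```python
import numpy as np

def upsample_bilinear(coarse, out_shape):
    """Resample a coarse correction matrix onto a finer printer-image grid
    by bilinear interpolation of the four surrounding coarse samples."""
    h, w = coarse.shape
    H, W = out_shape
    # Fractional source coordinates for every output pixel.
    ys = np.linspace(0, h - 1, H)
    xs = np.linspace(0, w - 1, W)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    c00 = coarse[np.ix_(y0, x0)]; c01 = coarse[np.ix_(y0, x1)]
    c10 = coarse[np.ix_(y1, x0)]; c11 = coarse[np.ix_(y1, x1)]
    return (c00 * (1 - wy) * (1 - wx) + c01 * (1 - wy) * wx
            + c10 * wy * (1 - wx) + c11 * wy * wx)

fine = upsample_bilinear(np.array([[0., 1.], [2., 3.]]), (3, 3))
```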
A correction mask can be used to adjust for errors, and optionally correction mask is registered to image space manually. In the alternative, the correction mask is registered to image space with an automated algorithm comprising template matching, preferably including rotation and scale.
The correction to be applied can be undertaken in various manners. For example, in some implementations the correction is applied as a multiplier to the exposure time of each printed pixel, thus increasing the amount of part or support material applied to that location in subsequent layers. Further, the correction can be applied as the randomized or structured insertion of non-imaged pixels at specified densities, wherein the position of any such air pixel is varied from layer to layer. Also, the error matrix is optionally calculated relative to a tare image, which is the sensed area prior to any material being deposited on the build sheet, allowing the built-in cancellation of any position-dependent structure deflections in the sensed height image.
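The two application modes above, an exposure-time multiplier and randomized air-pixel insertion, can be sketched as follows. The representation of a layer as a float image and the use of a seeded random generator are illustrative assumptions.

```python
import numpy as np

def apply_exposure_correction(image, correction):
    """Scale each printed pixel's exposure by its correction factor."""
    return image * correction

def insert_air_pixels(image, density, rng):
    """Randomly blank ('air') a fraction of printed pixels; drawing a fresh
    random pattern each layer keeps the holes from stacking along Z."""
    keep = rng.random(image.shape) >= density
    return image * keep

layer = np.ones((4, 4))
rng = np.random.default_rng(0)
solid = insert_air_pixels(layer, 0.0, rng)   # density 0: nothing removed
blank = insert_air_pixels(layer, 1.0, rng)   # density 1: all air pixels
```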
In some embodiments a correction is applied to the error matrix wherein a measured load applied to the structure supporting the build and/or sensor is used to predict and compensate for errors caused by structural deflection under said load.
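A minimal sketch of this load-based compensation, under the assumption of a linear compliance model (deflection proportional to load) and an assumed sign convention in which a sagging structure reads the build low:

```python
import numpy as np

def deflection_compensate(error_mm, load_n, compliance_mm_per_n):
    """Remove predicted structural deflection under the measured load from
    the raw height error (linear-compliance assumption)."""
    predicted_sag = load_n * compliance_mm_per_n  # mm of sag at this load
    return error_mm + predicted_sag               # sagging frame reads low

compliance = np.full((2, 2), 5e-4)  # assumed uniform compliance map, mm/N
corrected = deflection_compensate(np.zeros((2, 2)), 100.0, compliance)
```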
In certain embodiments location features are added to the build to help with registration. For example, dedicated features can be embedded within the build volume, detected by means of an image processing algorithm, compared to the known locations of those same features in the sliced image data, and used as an input to an image transformation function to register the scan data to the writer data. The location features, called fiducials, can be at fixed, dedicated locations where operators are not permitted to print parts. Alternatively, the fiducials are automatically located at regions where operators have not placed parts in the build volume. In some embodiments the fiducials are detected by various methods, including blob detection, template matching, and circle detection. The fiducial detection is optionally limited to a region-of-interest around each fiducial encompassing the expected variability in fiducial locations, to reduce computation time required. The fiducial locations may be selected at the time of job slicing and passed through to the printer as the nominal locations against which the measured fiducials are matched. Further, the machine may be configured to provide a “gross” alignment that is used in each job before a valid set of fiducials is detected.
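Limiting detection to a region of interest around each nominal fiducial location can be sketched as below. The detection step is deliberately stubbed with a simple bright-pixel centroid; the actual disclosure contemplates blob detection, template matching, or circle detection, and the threshold and ROI half-width here are arbitrary.

```python
import numpy as np

def roi(image, nominal, half=5):
    """Crop a region of interest around a nominal fiducial location
    (from the sliced job data); returns the patch and its origin."""
    r, c = nominal
    return image[r - half:r + half + 1, c - half:c + half + 1], (r - half, c - half)

def detect_centroid(patch, origin, thresh=0.5):
    """Stub detector: centroid of bright pixels, mapped back to image coords."""
    rr, cc = np.nonzero(patch > thresh)
    return (origin[0] + rr.mean(), origin[1] + cc.mean())

scan = np.zeros((20, 20))
scan[9:12, 9:12] = 1.0                 # bright 3x3 fiducial centered at (10, 10)
patch, origin = roi(scan, nominal=(9, 9))
center = detect_centroid(patch, origin)
```

The measured center minus the nominal location then feeds the image transformation that registers scan data to writer data.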
Optionally, a system to undertake the present process may include a line-scan triangulating laser profilometer. In an example implementation, four separate sensor units are set up side-by-side to get sufficient crosstrack field of view (Y axis, in STEP machine coordinates). A dedicated industrial computer running a defined algorithm applies a transform—based on a vendor-proprietary alignment procedure—to each of the four sensors' point cloud outputs to get the data into the same coordinate system. These “line” point clouds are built up over many sequential scans to build an “area” point cloud, then projected to a 2-dimensional space and output to a control system as an array. This array is interpreted as an image, which is subsequently processed with common image processing techniques to implement the control loop.
The line scans described above are triggered by a signal from the motion control system, based on an encoder that measures the platen X motion. Example resolution is 127 μm in the crosstrack direction and 254 μm in the intrack direction. Crosstrack resolution is limited by the profilometer resolution, while intrack resolution is limited by the processing power available within the sensor subsystem (i.e. the four sensors and the dedicated PC that merges their data). In many configurations the resulting image sizes are close to the computational limits of the cores allocated to them in the system, so significantly finer resolution is impractical without significantly more computing power.
Registration between top-of-part scan data and the “writer image” to which corrections are applied is often critical; it is called “scan-to-writer alignment” specifically to distinguish it from other registration controls that were established long before ToP was implemented. Well-registered scan data results in corrections being applied to the right pixels, while improperly registered scan data results in the wrong pixels receiving corrections. The latter can result in poor realized part flatness, and particularly the “Tiger-Striping” defect.
Material flow in the topmost layers of the mold block can mitigate these effects somewhat because it helps average the corrections applied in a small neighborhood. However, material flow can be detrimental to geometric accuracy, and some materials do not flow as much as others in the STEP process, so allowable registration errors are smaller for higher-precision parts made of materials with a higher viscosity under their STEP process conditions.
To automate scan-to-writer registration, at least four degrees of freedom must be defined—X and Y translation, and X and Y scale. Additional degrees of freedom can permit compensation for skew and keystone errors as well.
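The four named degrees of freedom can be recovered from matched fiducial pairs by two independent one-dimensional least-squares fits, one per axis. This is an illustrative sketch; the function name and the point data are invented for the example.

```python
import numpy as np

def fit_scale_translation(scan_pts, writer_pts):
    """Fit writer = s * scan + t independently for X and Y from matched
    fiducial pairs; returns [(sx, tx), (sy, ty)]."""
    scan = np.asarray(scan_pts, float)
    writer = np.asarray(writer_pts, float)
    params = []
    for axis in (0, 1):
        A = np.column_stack([scan[:, axis], np.ones(len(scan))])
        (s, t), *_ = np.linalg.lstsq(A, writer[:, axis], rcond=None)
        params.append((s, t))
    return params

# Scan space is scaled 2x and offset by (5, 3) relative to writer space.
scan_pts = [(0, 0), (10, 0), (0, 10), (10, 10)]
writer_pts = [(5, 3), (25, 3), (5, 23), (25, 23)]
(sx, tx), (sy, ty) = fit_scale_translation(scan_pts, writer_pts)
```

With more than two pairs per axis, the least-squares fit also averages out per-fiducial detection noise, which is one reason widely spaced fiducials improve accuracy.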
One approach may be to detect all the edges in the scan and printed images, and compare them. This can be computationally expensive, because it requires iterating through various possible transformations and identifying one that matches well across two 10- to 100-megapixel images. Instead, it can be preferred to create dedicated fiducial features in the margins around the printed parts. For example, the fiducials can be fixed in the corners; however, they can also be placed within the build where space is available between parts. Registration accuracy is improved with widely spaced fiducials.
Several methods exist for detecting features in images, such as various forms of edge detection, template matching, or blob detection. One option is to create and detect straight edges near the exterior of the build volume, or to make use of existing edges such as those created by a “frame” around the build volume. However, edge effects in the STEP process can sometimes cause deformation of the features and variations in gloss. Gloss variations, and very steep rolloff (e.g., close to a vertical wall), affect the accuracy of the scan data in the nearby vicinity. Edges measured in this way can have a significant bias that could affect the accuracy of the transform. Alternatively, if symmetrical features of size are used (such as pins, holes, or bosses), the edge errors tend to average out, leaving the center point of such features detectable with reasonable accuracy, on the order of half of a scan pixel.
Template matching is an optional technique to detect such features. The measured size of fiducials can change somewhat as the build progresses and the edge shape changes. This makes template matching more difficult and computationally-intensive, as scale adjustments would need to be iterated over. Further, it is desirable to minimize the size of a fiducial feature, whereas template matching relies heavily on having many pixels available to match, and may not produce the most accurate results.
Blob detection can also be used, but small local defects in the shape can significantly influence the output of a blob detection scheme, namely the centroid. It is important to have a detection method that is robust to the occasional local geometric or sensing error.
Circular fiducials can be used with a circle detection algorithm employing a Hough transform. This is an established algorithm in machine vision, which first edge-finds, then iterates through possible circle center points and circle diameters, and for each combination counts the number of edge pixels that lie on such a circle. Locations and diameters with a sufficiently high number of “votes” are considered circles; in this case, since there is only one circle within each region of interest, the circle with the highest number of votes is used.
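The voting step of the Hough circle search described above can be sketched in pure NumPy as follows. This is a deliberately simplified illustration: the radius is taken as known, edge detection is assumed already done, and the grid size and 90-angle discretization are arbitrary choices.

```python
import numpy as np

def hough_circle_center(edge_pts, radius, shape):
    """Each edge pixel votes for all candidate centers at the given radius;
    the accumulator cell with the most votes is the detected center."""
    votes = np.zeros(shape, dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 90, endpoint=False)
    for (r, c) in edge_pts:
        rr = np.round(r - radius * np.sin(thetas)).astype(int)
        cc = np.round(c - radius * np.cos(thetas)).astype(int)
        ok = (rr >= 0) & (rr < shape[0]) & (cc >= 0) & (cc < shape[1])
        np.add.at(votes, (rr[ok], cc[ok]), 1)
    return np.unravel_index(votes.argmax(), votes.shape)

# Synthetic fiducial: edge pixels on a radius-5 circle centered at (12, 15).
angles = np.linspace(0, 2 * np.pi, 36, endpoint=False)
edge = [(int(round(12 + 5 * np.sin(t))), int(round(15 + 5 * np.cos(t))))
        for t in angles]
center = hough_circle_center(edge, radius=5, shape=(30, 30))
```

Because every vote circle passes through (or very near) the true center, the accumulator peaks there even when individual edge pixels are noisy, which is what makes the method robust to occasional local defects.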
Fiducial locations are typically established in the slicer, then passed to the control system in a parameter file that accompanies the job data. The control system can then define a Region of Interest around the center points to look for fiducials. By defining an ROI substantially smaller than the whole scan or toner image, computation time can be drastically reduced. Once the fiducial location within the ROI is detected, a transformation is applied to the scan data to bring it into “toner image space.” It is possible to use an established four-point transform called a Perspective Transform, which maps any four points to any other four points, and interpolates the intermediate pixels according to the same warp matrix. Other transforms are possible with fewer or more matching points. Transforms up to perspective transforms maintain straight lines through the transformation; higher-order transforms would not. A higher-order transform is generally unwarranted where the scan data is sufficiently rectilinear. Once this image transformation is complete, the ToP algorithm can proceed with the subsequent steps as described above.
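The four-point perspective transform can be computed directly by solving the standard eight-unknown homography system from the four matched fiducial pairs. The sketch below is a pure-NumPy illustration (the direct linear solve is a textbook formulation; function names and the example points are invented):

```python
import numpy as np

def perspective_from_points(src, dst):
    """Solve the 3x3 homography H mapping four src points to four dst points
    (h33 fixed to 1), via the standard 8x8 linear system."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, pt):
    """Map a single point through the homography (projective division)."""
    u, v, w = H @ np.array([pt[0], pt[1], 1.0])
    return (u / w, v / w)

# Scan-space corners mapped to toner-image space: scale 2x, offset (10, 20).
src = [(0, 0), (100, 0), (100, 100), (0, 100)]
dst = [(10, 20), (210, 20), (210, 220), (10, 220)]
H = perspective_from_points(src, dst)
u, v = warp_point(H, (50, 50))
```

Intermediate pixels are interpolated through the same matrix, so straight lines in the scan remain straight in toner image space, as noted above.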
Prior to the line-scan ToP control described in the previous disclosure, a point-scan ToP system was used to measure layer thickness and slope, and apply a two-parameter correction to each engine. While that system was also called “Top of Part,” any future references to “Top of Part” or “TOP” here refer specifically to an area-scan sensor that measures the height across the full mold block and uses that as feedback for a per-pixel toner density adjustment to maintain flatness across the whole build area.
Although the present disclosure has been described with reference to preferred embodiments, workers skilled in the art will recognize that changes may be made in form and detail without departing from the spirit and scope of the disclosure.
This application is being filed as a PCT International Patent application on Dec. 29, 2022 in the name of Evolve Additive Solutions, Inc., a U.S. national corporation, applicant for the designation of all countries, and Alex J. Kossett, a U.S. Citizen, and J. Samuel Batchelder, a U.S. Citizen, inventors for the designation of all countries, and claims priority to U.S. Provisional Patent Application No. 63/295,811, filed Dec. 31, 2021, the contents of which are herein incorporated by reference in their entirety.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2022/054268 | 12/29/2022 | WO |

Number | Date | Country
---|---|---
63295811 | Dec 2021 | US