This invention relates to precision methods and systems used in additive fabrication.
Additive fabrication, also referred to as 3D printing, refers to a relatively wide class of techniques for producing parts according to a computer-controlled process, generally to match a desired 3D specification, for example, a solid model. A class of fabrication techniques jets material for deposition on a partially fabricated object using inkjet printing technologies. The jetted material is typically UV cured shortly after it is deposited, forming thin layers of cured material.
Certain additive fabrication systems use Optical Coherence Tomography (OCT) to capture volumetric data related to an object under fabrication. The captured volumetric data can be used by a reconstruction algorithm to produce a surface or depth map for a boundary of the object under fabrication (e.g., topmost surface of the object). One way of computing the surface boundary uses an image processing method such as peak detection.
Aspects described herein replace some or all of the intermediate steps in the process of going from raw OCT data to a surface or depth map (sometimes referred to as a 2.5D representation) of the scanned geometry with a machine learning model represented as a neural network. The reconstructed surface or depth map can be used for a variety of computer vision applications such as part inspection. The algorithm and system can also be used in a feedback loop in additive manufacturing systems.
In a general aspect, a method for determining estimated depth data for an object includes scanning the object to produce scan data corresponding to a surface region of the object using a first scanning process, configuring an artificial neural network with first configuration data corresponding to the first scanning process, and providing the scan data as an input to the configured artificial neural network to yield the estimated depth data as an output, the estimated depth data representing a location of a part of the object in the surface region.
Aspects may include one or more of the following features.
The object may include an object under 3D additive fabrication according to a first fabrication process. The first configuration data may correspond to the first scanning process as well as the first fabrication process. Configuring the artificial neural network with the first configuration data may include selecting said first configuration data from a set of available configuration data, each associated with a different scanning and/or fabrication process.
The method may include determining expected depth data for the surface region of the object and providing the expected depth data with the scan data to the configured artificial neural network. The expected depth data may include a range of expected depths. Scanning the object may include optically scanning the object at the location on the surface of the object. Scanning the object may include optically scanning the object over a number of locations on the surface region of the object.
Scanning the object may include scanning the object using optical coherence tomography. Scanning the object may include processing raw scan data according to one or more of (1) a linearization procedure, (2) a spectral analysis procedure, and (3) a phase correction procedure to produce the scan data. The method may include transforming the scan data from a time domain representation to a frequency domain representation prior to providing the scan data as input to the artificial neural network.
The configured artificial neural network may yield a confidence measure associated with the estimated depth data. The method may include providing scan data for a spatial neighborhood associated with the surface region of the object as input to the configured artificial neural network. The spatial neighborhood may include a number of parts of the object in the surface region.
In another general aspect, a system for determining estimated depth data for an object includes a sensor for scanning the object to produce scan data corresponding to a surface region of the object using a first scanning process, and an artificial neural network configured with first configuration data corresponding to the first scanning process, the artificial neural network being configured to determine the estimated depth data representing a location of a part of the object in the surface region. The artificial neural network has one or more inputs for receiving the scan data and an output for providing the estimated depth data.
In another general aspect, software stored on a non-transitory computer-readable medium includes instructions for causing a processor to cause a sensor to scan an object to produce scan data corresponding to a surface region of the object using a first scanning process, configure an artificial neural network with first configuration data corresponding to the first scanning process, and provide the scan data as an input to the configured artificial neural network to yield estimated depth data as an output, the estimated depth data representing a location of a part of the object in the surface region.
In another general aspect, a method for determining estimated depth data for an object includes scanning the object to produce scan data corresponding to a surface region of the object using a first scanning process, configuring a first artificial neural network with first configuration data corresponding to the first scanning process, configuring a second artificial neural network with second configuration data corresponding to the first scanning process, providing the scan data as an input to the configured first artificial neural network to yield volumetric data representing a location of a part of the object in the surface region, and providing the volumetric data to the configured second neural network to yield the estimated depth data as an output, the estimated depth data representing the location of the part of the object in the surface region.
Aspects may include one or more of the following features.
The method may include determining expected depth data for the surface region of the object and providing the expected depth data with the scan data to the configured second artificial neural network. The expected depth data may include a range of expected depths.
In another general aspect, a method for configuring an artificial neural network for determining estimated depth data for an object includes determining training data including scan data for a number of objects and corresponding reference depth data, processing the training data to form configuration data for an artificial neural network, and providing the configuration data for use in a method for determining estimated depth data.
Aspects may have one or more of the following advantages over conventional techniques. The use of an artificial neural network allows for a model that can be automatically trained for different scanning processes and materials. As a result, the parameter tuning that may be required in conventional processing pipelines may be avoided. The model may be simpler and require fewer computation steps than conventional processing pipelines (i.e., the model may increase computational efficiency). The model may produce more accurate and higher resolution results.
Other features and advantages of the invention are apparent from the following description, and from the claims.
1 Additive Manufacturing System Overview
The description below relates to additive fabrication, for example, using a jetting-based 3D printer 100 shown in the figures.
The printer 100 uses jets 120 (inkjets) to emit material for deposition of layers on a partially fabricated object. In the illustrated printer, the object under fabrication 121 is built up layer by layer on a build platform 130.
A sensor 160 is used to determine physical characteristics of the partially fabricated object, including one or more of the surface geometry (e.g., a depth map characterizing the thickness/depth of the partially fabricated object) and subsurface characteristics (e.g., in the near surface, including, for example, tens or hundreds of deposited layers). The characteristics that may be sensed include one or more of a material density, a material identification, and a curing state. While various types of sensing can be used, examples described herein relate to the use of optical coherence tomography (OCT) to determine depth and volumetric information related to the object being fabricated.
The controller 110 uses the model 190 of the object to be fabricated to control motion of the build platform 130 using a motion actuator 150 (e.g., providing three degrees of motion) and control the emission of material from the jets 120 according to the non-contact feedback of the object characteristics determined via the sensor 160. Use of the feedback arrangement can produce a precision object by compensating for inherent unpredictable aspects of jetting (e.g., clogging of jet orifices) and unpredictable material changes after deposition, including for example, flowing, mixing, absorption, and curing of the jetted materials.
The sensor 160 is positioned above the object under fabrication 121 and measures characteristics of the object 121 within a given working range (e.g., a 3D volume). The measurements are associated with a three-dimensional (i.e., x, y, z) coordinate system where the x and y axes are treated as spatial axes and the z axis is a depth axis.
In some examples, the sensor 160 measures the volume of the object under fabrication 121 in its own coordinate system. The sensor's coordinate system might be a projective space, or the lens system might have distortion, even if the system is meant to be orthographic. As such, it may be the case that the measured volume is transformed from the coordinate space of the sensor 160 to the world coordinate space (e.g., a Euclidean, metric coordinate system). A calibration process may be used to establish a mapping between these two spaces.
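As an illustration of such a transformation, the following minimal sketch (not part of the described system; the matrix values and the function name are hypothetical) applies a homogeneous affine calibration transform mapping sensor-space points into world coordinates. A real calibration may need additional terms to model projective effects or lens distortion.

```python
import numpy as np

# Hypothetical 4x4 homogeneous calibration matrix mapping sensor
# coordinates to world (build-platform) coordinates; in practice it
# would be estimated by scanning a reference target of known geometry.
T_sensor_to_world = np.array([
    [1.002, 0.000, 0.000, -0.35],
    [0.000, 0.998, 0.000,  0.12],
    [0.000, 0.000, 1.005, -2.40],
    [0.000, 0.000, 0.000,  1.00],
])

def sensor_to_world(points_sensor: np.ndarray) -> np.ndarray:
    """Map an (N, 3) array of sensor-space points into world space."""
    n = points_sensor.shape[0]
    homogeneous = np.hstack([points_sensor, np.ones((n, 1))])
    mapped = homogeneous @ T_sensor_to_world.T
    return mapped[:, :3] / mapped[:, 3:4]

# Example: a single measured point at sensor coordinates (x, y, z).
print(sensor_to_world(np.array([[10.0, 5.0, 3.2]])))
```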
2 Sensor Data Processing
In some examples, the printer 100 fabricates the object 121 in steps, or “layers.” For each layer, the controller 110 causes the platform 130 to move to a position on the z-axis. The controller 110 then causes the platform 130 to move to a number of positions on the x-y plane. At each (x,y,z) position the controller 110 causes the jets 120 to deposit an amount of material that is determined by a planner 112 based at least in part on the model 190 and a depth map of the object under fabrication 121 determined by a sensor data processor 111 of the controller 110.
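The sketch below illustrates, in simplified form, the interleaved scan/plan/print loop described above; the printer, scanner, and planner interfaces are placeholders and are not the actual APIs of the controller 110, planner 112, or sensor data processor 111.

```python
# Hypothetical sketch of the print/scan feedback loop described above;
# planner, printer, and scanner interfaces are placeholders, not the
# actual controller 110 API.
def fabricate(model, printer, scanner, planner, num_layers):
    depth_map = scanner.scan()                 # initial depth map of the platform
    for layer in range(num_layers):
        printer.move_platform_z(layer)         # position platform for this layer
        # The planner decides, per (x, y) position, how much material to jet,
        # based on the target model and the measured depth map.
        layer_plan = planner.plan_layer(model, depth_map, layer)
        for (x, y), amount in layer_plan.items():
            printer.move_platform_xy(x, y)
            printer.jet(amount)
        depth_map = scanner.scan()             # re-scan for feedback
```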
2.1 Volumetric Intensity Data Determination
Referring also to the figures, the sensor 160 produces raw OCT data 261 (i.e., an interference signal) for each scanned (x, y) position of the object under fabrication 121. The raw OCT data 261 is processed (e.g., using a fast Fourier transform (FFT) 262) to produce volumetric intensity data 263 representing the measured intensity along the depth (z) axis.
In some examples, the measured volumetric intensity data for different (x, y, z) positions of the build platform is combined to form a volumetric profile representing material occupancy in a 3D volume of the object built on the platform. For example, for each point P in a 3D volume the volumetric profile specifies whether that point contains material or not. In some examples, the volumetric profile also stores partial occupancy of material for the point, as well as the type(s) of material occupying the point. In some examples, the volumetric profile is represented as a 3D discrete data structure (e.g., 3D array of data). For example, for each point P in the (x, y, z) coordinate space, a value of 1 is stored in the data structure if the point contains material and a value of 0 is stored in the data structure if the point does not contain material. Values between 0 and 1 are stored in the data structure to represent fractional occupancy.
To distinguish between different materials, the stored value for a particular point can denote a material label. For example: 0—no material, 1—build material, 2—support material. In this case, to store fractional values of each material type, multiple volumes with fractional values may be stored. Since the data is typically associated with measurements at discrete (x, y, z) locations, continuous values can be interpolated (e.g., using tri-linear interpolation).
Furthermore, in some examples, the volumetric data in the volumetric profile is also associated with a confidence of the measurement. This is typically a value between 0 and 1. For example, 0 means no confidence in the measurement (e.g., missing data), and 1 means full confidence in the data sample; fractional values are also possible. The confidence is stored as additional volumetric data.
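As a concrete illustration of one possible data layout (an assumption, not the described implementation), the following sketch stores occupancy, material label, and confidence as parallel 3D arrays and shows tri-linear interpolation of the occupancy values at a continuous point.

```python
import numpy as np

# Minimal sketch of a volumetric profile: occupancy, material label, and
# confidence stored as parallel 3D arrays over a discrete (x, y, z) grid.
# Label convention follows the description above (0 = no material,
# 1 = build material, 2 = support material); sizes are illustrative.
nx, ny, nz = 64, 64, 128
occupancy  = np.zeros((nx, ny, nz), dtype=np.float32)  # 0..1, fractional allowed
material   = np.zeros((nx, ny, nz), dtype=np.uint8)    # material label per voxel
confidence = np.ones((nx, ny, nz), dtype=np.float32)   # 0 = missing, 1 = full confidence

# Mark a small block of build material with partial occupancy at its top.
occupancy[10:20, 10:20, 0:30] = 1.0
occupancy[10:20, 10:20, 30]   = 0.4
material[10:20, 10:20, 0:31]  = 1

def interpolate_occupancy(x: float, y: float, z: float) -> float:
    """Tri-linear interpolation of occupancy at a continuous (x, y, z) point."""
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    dx, dy, dz = x - x0, y - y0, z - z0
    value = 0.0
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                weight = ((dx if i else 1 - dx) *
                          (dy if j else 1 - dy) *
                          (dz if k else 1 - dz))
                value += weight * occupancy[x0 + i, y0 + j, z0 + k]
    return value

print(interpolate_occupancy(15.5, 15.5, 29.7))
```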
2.2 Surface Depth Determination
In some examples, a surface of an object is reconstructed from the raw OCT data 261 collected for the object under fabrication 121. For example, after computing a volumetric profile for an object, a topmost surface of the object can be reconstructed by finding a most likely depth (i.e., maximum z value) for each (x, y) position in the volumetric profile using a peak detection algorithm.
In some examples, the determined depth data for different (x, y) positions is combined to form a "2.5D" representation in which the volumetric intensity data at a given (x, y) position is replaced with a depth value. In some examples, the 2.5D representation is stored as a 2D array (or multiple 2D arrays). In some examples, the stored depth value is computed from the volumetric intensity data, as is described above. In other examples, the stored depth value is the closest occupied depth value from the volumetric intensity data (e.g., if the mapping is orthographic, for each (x, y) position the z value of the surface or the first voxel that contains material is stored). In some examples, material labels for the surface of an object are stored using an additional 2.5D array. In some examples, the reconstructed depth represented in the 2.5D representation is noisy and is filtered to remove the noise.
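The following sketch illustrates one simple way such a 2.5D depth map could be computed from volumetric intensity data, using the per-column intensity maximum as a stand-in for a peak-detection algorithm and a median filter as a stand-in for the noise filtering mentioned above; it is illustrative only.

```python
import numpy as np

# Sketch of 2.5D surface reconstruction: for each (x, y) column of
# volumetric intensity data, take the depth of the strongest intensity
# return, then median-filter the resulting depth map to suppress noise.
def depth_map_from_volume(volume: np.ndarray, z_spacing: float = 1.0) -> np.ndarray:
    """volume: (nx, ny, nz) volumetric intensity data; returns (nx, ny) depths."""
    peak_index = np.argmax(volume, axis=2)          # index of strongest return per column
    return peak_index.astype(np.float32) * z_spacing

def median_filter_2d(depth: np.ndarray, radius: int = 1) -> np.ndarray:
    """Simple median filter used to remove isolated noisy depth values."""
    padded = np.pad(depth, radius, mode="edge")
    out = np.empty_like(depth)
    for i in range(depth.shape[0]):
        for j in range(depth.shape[1]):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            out[i, j] = np.median(window)
    return out

volume = np.random.rand(8, 8, 32).astype(np.float32)   # placeholder intensity data
depth = median_filter_2d(depth_map_from_volume(volume, z_spacing=0.01))
print(depth.shape)
```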
3 Neural Network-based Depth Reconstruction
In some examples, some or all of the above-described steps in the process of determining a 2.5D representation of a scanned geometry are replaced with a machine learning model represented as a neural network. Very generally, the machine learning model is trained in a training step (described below) to generate configuration data. In a runtime configuration, the neural networks described below are configured according to that configuration data.
3.1 Full Neural Network Processing
Referring to the figures, in some examples the raw OCT data 261 (i.e., the interference signal) for a given (x, y) position is provided directly to a neural network, which processes the raw OCT data 261 and outputs a surface depth value (and optionally a confidence) for that position.
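A minimal sketch of such a network is shown below; the architecture, layer sizes, and confidence head are illustrative assumptions and not the specific network described herein.

```python
import torch
import torch.nn as nn

# Illustrative network that maps a raw interference signal for one (x, y)
# position directly to a surface depth value and a confidence.
class DepthFromInterference(nn.Module):
    def __init__(self, signal_length: int = 1024):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(32),
            nn.Flatten(),
        )
        self.head = nn.Linear(32 * 32, 2)  # outputs: [depth, confidence logit]

    def forward(self, signal: torch.Tensor):
        # signal: (batch, signal_length) raw interference samples
        h = self.features(signal.unsqueeze(1))
        depth, confidence_logit = self.head(h).unbind(dim=-1)
        return depth, torch.sigmoid(confidence_logit)

net = DepthFromInterference()
depth, confidence = net(torch.randn(4, 1024))   # placeholder interference signals
print(depth.shape, confidence.shape)
```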
3.2 FFT Followed by Neural Network Processing
Referring to the figures, in some examples the raw OCT data 261 (i.e., the interference signal) for a given (x, y) position is first processed by an FFT module 262 to generate volumetric intensity data 263. The volumetric intensity data 263 is then provided to a neural network, which outputs a surface depth value (and optionally a confidence) for that position.
3.3 Linearization and Phase Correction Prior to FFT and Neural Network Processing
Referring to the figures, in some examples the raw OCT data 261 is first linearized and phase corrected before being processed by the FFT module 262 to generate volumetric intensity data 263. The volumetric intensity data 263 is then provided to a neural network, which outputs a surface depth value (and optionally a confidence).
3.4 Linearization Prior to Neural Network Processing
Referring to the figures, in some examples the raw OCT data 261 is first linearized, and the linearized interference signal is provided directly to a neural network, which outputs a surface depth value (and optionally a confidence).
3.5 Linearization and Phase Correction Prior to Neural Network Processing
Referring to the figures, in some examples the raw OCT data 261 is first linearized and phase corrected, and the linearized, phase corrected data is provided directly to a neural network, which outputs a surface depth value (and optionally a confidence).
3.6 Neural Network Processing Algorithm with Expected Depth or Depth Range
In some examples, when an estimate of the surface depth (or depth range) is available, the estimate is provided as an additional input to guide the neural network. For example, when the system has approximate knowledge of the 3D model of the object under fabrication, that approximate knowledge is used by the neural network.
In the context of a 3D printing system with a digital feedback mechanism, the expected depth or depth range can be computed in a straightforward manner. In this additive fabrication method, scanning and printing are interleaved. The algorithm can store the depth values computed at the previous iteration, and it also has access to whether a layer was printed at a given (x, y) position and to the expected thickness of that layer.
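The sketch below illustrates one way the expected depth range could be derived from the previously scanned depth map, the per-position print mask, and the expected layer thickness; the function name and the tolerance value are illustrative assumptions.

```python
import numpy as np

# Sketch of deriving an expected depth range in the interleaved
# print/scan loop described above; values are placeholders.
def expected_depth_range(previous_depth: np.ndarray,
                         printed_mask: np.ndarray,
                         layer_thickness: float,
                         tolerance: float = 0.05):
    """previous_depth: (nx, ny) depths from the previous scan;
    printed_mask: (nx, ny) booleans, True where a layer was just printed;
    returns per-(x, y) lower and upper bounds on the expected depth."""
    expected = previous_depth + printed_mask * layer_thickness
    return expected - tolerance, expected + tolerance

prev = np.full((4, 4), 2.0)              # previous surface at z = 2.0
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                    # a layer was printed in the center
low, high = expected_depth_range(prev, mask, layer_thickness=0.03)
print(low[1, 1], high[1, 1])             # bounds around 2.03 for a printed position
```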
Referring to the figures, the raw OCT data 261 (i.e., the interference signal) for a given (x, y) position and the expected depth (or depth range) for that position are provided as inputs to a neural network, which outputs a surface depth value (and optionally a confidence).
3.7 Neural Network Processing Algorithm with Spatial Neighborhood
In some examples, rather than providing the raw OCT data 261 for a single (x, y) position to a neural network in order to compute the surface depth value for that (x, y) position, raw OCT data for a spatial neighborhood around that single (x, y) position is provided to the neural network. Doing so reduces noise and improves the quality of the computed surface depth values because the neural network has access to more information; for example, it is able to remove sudden jumps on the surface caused by over-saturated or missing data. For example, a 3×3 or 5×5 neighborhood of interference signals can be provided as input to the neural network.
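The following sketch shows one way such a neighborhood of interference signals could be assembled for input to the network; handling of edge positions by padding is an assumption.

```python
import numpy as np

# Assemble a (2r+1) x (2r+1) spatial neighborhood of interference signals
# around a given (x, y) position; edge positions are handled by edge padding.
def neighborhood_signals(raw_oct: np.ndarray, x: int, y: int, radius: int = 1) -> np.ndarray:
    """raw_oct: (nx, ny, signal_length) interference signals;
    returns ((2r+1)**2, signal_length) signals for the neighborhood of (x, y)."""
    padded = np.pad(raw_oct, ((radius, radius), (radius, radius), (0, 0)), mode="edge")
    block = padded[x:x + 2 * radius + 1, y:y + 2 * radius + 1, :]
    return block.reshape(-1, raw_oct.shape[-1])

raw = np.random.rand(16, 16, 1024).astype(np.float32)    # placeholder scan data
print(neighborhood_signals(raw, 0, 7, radius=2).shape)   # (25, 1024) for a 5x5 stencil
```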
Referring to the figures, the raw OCT data 261 for a given (x, y) position and its neighboring spatial positions is provided to a neural network, which outputs a surface depth value (and optionally a confidence) for the given (x, y) position.
3.8 Neural Network Processing Algorithm with Spatial Neighborhood and Expected Depth
In some examples, an expected surface depth or surface depth range is provided to a neural network along with the raw OCT data (i.e., the interference signal) 261 for a given (x, y) position and its neighboring spatial positions. This is possible, for example, when the 3D model of the object under fabrication is approximately known.
Referring to the figures, the raw OCT data 261 (i.e., the interference signal) for a given (x, y) position and its neighboring spatial positions, along with the expected depth (or depth range), are provided to a neural network, which outputs a surface depth value (and optionally a confidence).
This is also applicable in additive fabrication systems that include a feedback loop. Those systems have access to a 3D model of the object under fabrication, the previously scanned depth, the print data sent to the printer, and the expected layer thickness.
3.9 Linearization and Phase Correction Prior to Neural Network Processing with Spatial Neighborhood
Referring to the figures, the raw OCT data 261 for a given (x, y) position and its neighboring spatial positions is processed by a number of processing pipelines (e.g., one pipeline per spatial position).
In each processing pipeline, the raw OCT data (i.e., the interference signal) 261 is pre-processed before it is fed to the FFT module 262. For example, the raw OCT data 261 is first processed by a linearization module 668 to generate a linearized interference signal 669, which is then processed by a phase correction module 671 to generate linearized, phase corrected data 672. The linearized, phase corrected data 672 is provided to the FFT 262 to generate the volumetric intensity data 263.
The volumetric intensity data 263 for each instance of the raw OCT data (i.e., the output of each of the processing pipelines) is then provided to a neural network 1671, which processes the volumetric intensity data 263 and outputs a surface depth value 567 (and optionally a confidence). As was noted above, spatial neighborhoods of different footprints/stencils can be used (e.g., 3×3, 5×5, etc.).
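The sketch below illustrates a common spectral-domain OCT interpretation of this pre-processing pipeline, in which linearization is treated as resampling onto a uniform wavenumber grid and phase correction as applying a calibrated per-sample phase; the calibration arrays are placeholders, and the actual linearization module 668 and phase correction module 671 may differ.

```python
import numpy as np

# Pre-processing sketch for one interference signal (A-scan), under the
# assumptions stated above: linearize to a uniform wavenumber grid, apply
# a calibrated phase correction, then FFT to obtain intensity versus depth.
def preprocess_ascan(raw_signal: np.ndarray,
                     measured_k: np.ndarray,
                     phase_correction: np.ndarray) -> np.ndarray:
    """Returns a volumetric intensity profile along z for one (x, y) position."""
    # Linearization: interpolate the signal onto uniformly spaced wavenumbers.
    uniform_k = np.linspace(measured_k[0], measured_k[-1], raw_signal.size)
    linearized = np.interp(uniform_k, measured_k, raw_signal)
    # Phase correction: apply the calibrated phase before the transform.
    corrected = linearized * np.exp(1j * phase_correction)
    # FFT: magnitude of the transform gives intensity versus depth.
    return np.abs(np.fft.fft(corrected))

n = 1024
measured_k = np.sort(np.random.uniform(5.0, 6.0, n))   # non-uniform sampling (placeholder)
phase = np.zeros(n)                                    # placeholder phase calibration
profile = preprocess_ascan(np.random.rand(n), measured_k, phase)
print(profile.shape)
```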
3.10 Dual Neural Network Processing Algorithm
In some examples, a series of two (or more) neural networks is used to compute the surface depth for a given (x, y) position. Referring to the figures, the raw OCT data 261 for the given (x, y) position and its neighboring spatial positions is provided to a number of processing pipelines.
In each processing pipeline, the raw OCT data 261 is provided to a first neural network 1771a, which processes the raw OCT data 261 to determine volumetric data 263 for the given (x, y) position and its neighboring spatial positions. The volumetric data 263 generated in each of the pipelines is provided to a second neural network 1771b, which processes the volumetric intensity data 263 from each pipeline and outputs a surface depth value 567 (and optionally a confidence).
In some examples, this approach is computationally efficient: the overall/combined architecture is deeper but narrower, and the output of the first neural network 1771a can be computed once and stored for reuse. Both neural networks can be trained at the same time.
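A minimal sketch of such a two-stage arrangement is shown below; the layer choices, sizes, and confidence output are illustrative assumptions standing in for the first neural network 1771a and the second neural network 1771b.

```python
import torch
import torch.nn as nn

# Stage 1 maps each raw interference signal to a volumetric intensity
# profile; stage 2 maps the profiles from a spatial neighborhood to a
# depth and a confidence. Sizes are illustrative only.
class SignalToVolume(nn.Module):          # stands in for network 1771a
    def __init__(self, signal_length=1024, depth_bins=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(signal_length, 512), nn.ReLU(),
                                 nn.Linear(512, depth_bins))

    def forward(self, signals):           # (batch, n_positions, signal_length)
        return self.net(signals)          # (batch, n_positions, depth_bins)

class VolumeToDepth(nn.Module):           # stands in for network 1771b
    def __init__(self, n_positions=9, depth_bins=256):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(),
                                 nn.Linear(n_positions * depth_bins, 256), nn.ReLU(),
                                 nn.Linear(256, 2))

    def forward(self, volumes):           # (batch, n_positions, depth_bins)
        depth, conf_logit = self.net(volumes).unbind(dim=-1)
        return depth, torch.sigmoid(conf_logit)

stage1, stage2 = SignalToVolume(), VolumeToDepth()
signals = torch.randn(4, 9, 1024)         # 3x3 neighborhood of interference signals
depth, confidence = stage2(stage1(signals))
print(depth.shape, confidence.shape)
```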
4 Neural Network Training
Referring to the figures, the neural networks described above are trained using training data that includes scan data (e.g., raw OCT data) for a number of objects and corresponding reference depth data. The result of the training is configuration data that is used to configure the neural networks at runtime.
Standard loss functions (e.g., L0, L1, L2, Linf) can be used when training the neural networks. In some examples, the networks are trained using input, output pairs computed with a standard approach or using a temporal tracking approach. Generally, the neural network is trained using a backpropagation algorithm and a stochastic gradient descent algorithm (e.g., ADAM). In this way, the surface values for each (x, y) position are calculated. The values might be noisy, and additional processing (e.g., filtering) of the surface data can be employed (as is described above). The computed confidence values can be used in the filtering process.
In some examples, the input training data is obtained using a direct depth computation process (e.g., using the scanning methodologies described above), which may optionally be spatially smoothed to suppress noise. In some examples, the input training data is obtained from scans of an object with a known geometry (e.g., a coin with a known ground-truth geometry). In yet other examples, the input training data is obtained as either 2.5D or 3D data provided by another (possibly higher accuracy) scanner.
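The following sketch illustrates a minimal training loop of the kind described above (an L1 loss and the Adam optimizer over pairs of scan data and reference depths); the model, data, and hyperparameters are placeholders, and the trained weights correspond to the configuration data used to configure the network at runtime.

```python
import torch
import torch.nn as nn

# Minimal training sketch: pairs of raw interference signals and reference
# depths, an L1 loss, and the Adam optimizer. The synthetic data stands in
# for training data produced by direct depth computation or by a
# higher-accuracy reference scanner.
model = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

signals = torch.randn(512, 1024)              # placeholder scan data
reference_depth = torch.rand(512, 1)          # placeholder reference depths

for epoch in range(5):
    for start in range(0, signals.size(0), 64):   # simple mini-batching
        batch_x = signals[start:start + 64]
        batch_y = reference_depth[start:start + 64]
        optimizer.zero_grad()
        loss = loss_fn(model(batch_x), batch_y)
        loss.backward()                           # backpropagation
        optimizer.step()                          # stochastic gradient step
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```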
5 Implementations
The printer 100 shown in the figures is only one example of an additive manufacturing system in which the approaches described above can be used.
An additive manufacturing system typically has the following components: a controller assembly, which is typically a computer with a processor, memory, storage, network, I/O, and display. The controller assembly runs a processing program, which can also read and write data. The controller assembly effectively controls the manufacturing hardware and also has access to sensors (e.g., 3D scanners, cameras, IMUs, accelerometers, etc.).
More generally, the approaches described above can be implemented, for example, using a programmable computing system executing suitable software instructions or it can be implemented in suitable hardware such as a field-programmable gate array (FPGA) or in some hybrid form. For example, in a programmed approach the software may include procedures in one or more computer programs that execute on one or more programmed or programmable computing system (which may be of various architectures such as distributed, client/server, or grid) each including at least one processor, at least one data storage system (including volatile and/or non-volatile memory and/or storage elements), at least one user interface (for receiving input using at least one input device or port, and for providing output using at least one output device or port). The software may include one or more modules of a larger program, for example, that provides services related to the design, configuration, and execution of dataflow graphs. The modules of the program (e.g., elements of a dataflow graph) can be implemented as data structures or other organized data conforming to a data model stored in a data repository.
The software may be stored in non-transitory form, such as being embodied in a volatile or non-volatile storage medium, or any other non-transitory medium, using a physical property of the medium (e.g., surface pits and lands, magnetic domains, or electrical charge) for a period of time (e.g., the time between refresh periods of a dynamic memory device such as a dynamic RAM). In preparation for loading the instructions, the software may be provided on a tangible, non-transitory medium, such as a CD-ROM or other computer-readable medium (e.g., readable by a general or special purpose computing system or device), or may be delivered (e.g., encoded in a propagated signal) over a communication medium of a network to a tangible, non-transitory medium of a computing system where it is executed. Some or all of the processing may be performed on a special purpose computer, or using special-purpose hardware, such as coprocessors or field-programmable gate arrays (FPGAs) or dedicated, application-specific integrated circuits (ASICs). The processing may be implemented in a distributed manner in which different parts of the computation specified by the software are performed by different computing elements. Each such computer program is preferably stored on or downloaded to a computer-readable storage medium (e.g., solid state memory or media, or magnetic or optical media) of a storage device accessible by a general or special purpose programmable computer, for configuring and operating the computer when the storage device medium is read by the computer to perform the processing described herein. The inventive system may also be considered to be implemented as a tangible, non-transitory medium, configured with a computer program, where the medium so configured causes a computer to operate in a specific and predefined manner to perform one or more of the processing steps described herein.
In general, some or all of the algorithms described above can be implemented on an FPGA, a GPU, or a CPU, or on any combination of the three. The algorithms can be parallelized in a straightforward way.
A number of embodiments of the invention have been described. Nevertheless, it is to be understood that the foregoing description is intended to illustrate and not to limit the scope of the invention, which is defined by the scope of the following claims. Accordingly, other embodiments are also within the scope of the following claims. For example, various modifications may be made without departing from the scope of the invention. Additionally, some of the steps described above may be order independent, and thus can be performed in an order different from that described.
This application claims the benefit of U.S. Provisional Application No. 62/789,764 filed Jan. 8, 2019, the contents of which are incorporated herein by reference.