PORE MEASUREMENT DEVICE

Information

  • Patent Application
  • 20220404276
  • Publication Number
    20220404276
  • Date Filed
    November 12, 2020
  • Date Published
    December 22, 2022
Abstract
In one aspect, there is provided a system including at least one processor; and at least one memory including program code which when executed by the at least one processor causes operations including capturing an image of at least a portion of a surface of an object; generating, from the captured image, pixel intensity data; in response to generating the pixel intensity data, determining, based on a height error model, height error data, wherein the height error data indicates an uncertainty of at least one height measurement of the object; and determining, based on the height error data, whether the object satisfies a threshold criteria for acceptance of the object. Related systems, methods, and articles of manufacture are also disclosed.
Description
BACKGROUND

Digital fringe projection (DFP) refers to a non-contact surface measurement technique that may be used in, for example, surface height profiling, surface roughness quantification, three-dimensional point cloud creation, and/or the like. In the case of DFP, structured light, such as a fringe, is projected onto the surface of an object. A sensor captures an image and determines a height measurement. In the case of additive manufacturing, DFP may be used to measure the height of material being added to form the object. For example, after a layer of powder is applied to an object, DFP may be used to measure the height of the applied layer.


SUMMARY

In one aspect, there is provided a system including at least one processor; and at least one memory including program code which when executed by the at least one processor causes operations including capturing an image of at least a portion of a surface of an object; generating, from the captured image, pixel intensity data; in response to generating the pixel intensity data, determining, based on a height error model, height error data, wherein the height error data indicates an uncertainty of at least one height measurement of the object; and determining, based on the height error data, whether the object satisfies a threshold criteria for acceptance of the object.


In some variations, one or more of the features disclosed herein including the following features can optionally be included in any feasible combination. The system may further include an image sensor configured to capture the image and a light source configured to project structured light onto the surface of the object. The structured light may include a Moiré pattern and/or a fringe pattern. The height error model may be determined based on noise captured by the image associated with the pixel intensity data. The noise may include uncertainty caused by light projector gamma nonlinearity, light projector quantization, camera quantization, and/or pixel intensity noise caused by ambient light. In response to the generating of the pixel intensity data, at least one height measurement of the object may be determined. The determining of whether the object satisfies the threshold criteria may further comprise determining whether the object satisfies a threshold height. In response to the at least one height measurement exceeding a threshold height, an indication to reject the object may be provided based on the height error data indicating a threshold level of certainty. The indication may terminate an additive manufacturing process of the part. The indication may trigger an alert at a user interface. The indication may trigger an alert to dispose of, rather than reuse, material being used to build the object. The system may comprise, or be comprised in, an additive manufacturing device making the object. After a layer of material is applied, the processor may cause the capturing, the generating, the determining of height error data, and/or the determining of whether the object satisfies the threshold. The additive manufacturing device may use powder bed fusion, binder jetting, and/or direct energy deposition.
In response to the at least one height measurement being less than a threshold height, an indication to continue or restart additive manufacturing of the object may be provided based on the height error data indicating a threshold level of certainty. Moreover, an aggregate layer height and an aggregate height error for a plurality of layers of material applied to the object may be provided. Furthermore, feedback may be provided to adjust one or more parameters of an additive manufacturing process of the object.


Implementations of the current subject matter can include, but are not limited to, systems and methods consistent with the present description, including one or more of the features described herein, as well as articles that comprise a tangibly embodied machine-readable medium operable to cause one or more machines (e.g., computers, etc.) to result in operations described herein. Similarly, computer systems are also described that may include one or more processors and one or more memories coupled to the one or more processors. A memory, which can include a computer-readable storage medium, may include, encode, store, or the like one or more programs that cause one or more processors to perform one or more of the operations described herein. Computer implemented methods consistent with one or more implementations of the current subject matter can be implemented by one or more data processors residing in a single computing system or multiple computing systems. Such multiple computing systems can be connected and can exchange data and/or commands or other instructions or the like via one or more connections, including but not limited to a connection over a network (e.g., the Internet, a wireless wide area network, a local area network, a wide area network, a wired network, or the like), via a direct connection between one or more of the multiple computing systems, etc.


The details of one or more variations of the subject matter described herein are set forth in the accompanying drawings and the description below. Other features and advantages of the subject matter described herein will be apparent from the description and drawings, and from the claims. The claims that follow this disclosure are intended to define the scope of the protected subject matter.





DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and constitute a part of this specification, show certain aspects of the subject matter disclosed herein and, together with the description, help explain some of the principles associated with the disclosed implementations. In the drawings,



FIG. 1A depicts an example of an additive manufacturing system, in accordance with some example embodiments;



FIG. 1B depicts an example of structured light, such as a fringe, projected on an object, in accordance with some example embodiments;



FIG. 1C depicts an example of a pixel uncertainty distribution, in accordance with some example embodiments;



FIG. 1D depicts an example of a phase uncertainty distribution, in accordance with some example embodiments;



FIG. 1E depicts an example of a height uncertainty distribution, in accordance with some example embodiments;



FIG. 2 depicts an example of a process for determining a height uncertainty, in accordance with some example embodiments;



FIG. 3 depicts an example of a digital fringe projection measurement setup, in accordance with some example embodiments;



FIG. 4 depicts an example of simulation results associated with the height uncertainty, in accordance with some example embodiments;



FIG. 5 shows examples of results of a digital fringe projection height measurement process, in accordance with some example embodiments;



FIG. 6 shows examples of plots that illustrate the measured structure of pixel noise, in accordance with some example embodiments;



FIG. 7 shows example plots related to height uncertainty, in accordance with some example embodiments;



FIG. 8 depicts an example of binder jetting, in accordance with some example embodiments;



FIG. 9 depicts an example of direct energy deposition (or direct metal deposition), in accordance with some example embodiments;



FIG. 10 depicts an example of powder bed fusion including a contact scanner, in accordance with some example embodiments; and



FIG. 11 depicts an example of a processor, in accordance with some example embodiments.





DETAILED DESCRIPTION

As the commercial availability of high accuracy light projectors and digital cameras increases, DFP as well as other optical measurement technologies will continue to become more prevalent for critical engineering applications drawn from biomedical, materials science, electronics inspection domains, and/or the like. When these DFP-based measurements are used to inform decisions regarding the acceptance, maintenance, or usage of an object, such as a part or other type of object, there may also be a need for an understanding and quantification of the uncertainties in the DFP measurement process. There is a variety of sources of measurement uncertainty in the DFP measurement process, and these uncertainties represent errors in measurements. The sources of the uncertainties include light projector gamma nonlinearity, light projector and camera quantization effects, pixel intensity noise, and/or the like. Pixel intensity noise, in particular, is considered a major source of uncertainty in DFP, including in DFP height measurements.


In some example embodiments, there is provided a model that quantifies measurement uncertainty associated with DFP measurements including DFP height measurements. In some example embodiments, the model (which quantifies measurement uncertainty) may be determined based on the pixel intensity noise associated with DFP measurements including DFP height measurements.


In some example embodiments, the model takes, as an input, pixel intensity data and provides, as an output, height measurement uncertainty. For example, the pixel intensity may be captured by an image sensor, such as a camera, imaging structured light, such as a fringe, that is projected on the surface of an object being measured. The output height measurement uncertainty provides an indication of the uncertainty (e.g., noise, confidence, error, etc.) for a given DFP height measurement of the object. To illustrate further, given a height measurement on the surface of an object, the height measurement uncertainty may be used to determine whether at least a portion of a powder layer applied on the object exceeds a threshold height and the uncertainty (or confidence) of that height measurement. The confidence of the height measurement may thus be determined based on the height measurement uncertainty provided as an output of the model, in accordance with some example embodiments.


Although some of the examples described herein refer to using the model to quantify the uncertainty of DFP height measurements, the error model may be used to provide an indication of the uncertainty associated with other types of optical or light based measurement technologies as well including digital image correlation technology, contact scanner technology, coherent light imaging technology, and the like.



FIG. 1A depicts an example of an additive manufacturing system 100, in accordance with some example embodiments. The system may include an object 115 that is undergoing additive manufacturing, material 120 such as unfused additive manufacturing powder (e.g., a metal-based powder), a recoater blade 125 configured to apply layers of the material to the object, one or more devices 130A-B, a heat source such as a laser 140, and a processor 160. The first device 130A may correspond to an imaging sensor, such as a camera, configured to capture an image of a digital fringe that is projected on the object by the second device 130B, such as a light source, projector, and/or the like. Although FIG. 1A depicts the sensor 130A and projector 130B at certain locations within the additive manufacturing system, those devices 130A-B may be placed in other locations as well. Moreover, the devices 130A-B may comprise a plurality of cameras, projectors, and/or interferometers. As used herein, an additive manufacturing device (or system) refers to a device (or system) for the construction of an object using, for example, 3D printing technology, binder jetting, powder bed fusion, direct energy deposition, and/or other technologies.


In operation, the recoater blade 125 moves across the surface of the object 115 to apply a layer of the material 120 on the object 115. The light source 130B may project light on the object, while the imaging sensor 130A performs a measurement by capturing an image of at least a portion of the surface of the object (which includes a fringe projection on the surface). For example, the light source 130B may project a fringe on the surface of the object 115. FIG. 1B depicts an example of a fringe 132 projected on the object 115 as well as a reference surface on which the object sits. Although the fringe 132 depicted at FIG. 1B is a Moiré pattern, other patterns or types of fringes or structured light may be used as well.


The processor 160 may receive and process the measurement to determine height data for the powder applied to the object 115 and/or uncertainty data related to the height measurement, in accordance with some example embodiments. In some example embodiments, the processor 160 may receive from the camera 130A pixel data representative of the intensity of the pixels on at least the surface of the object (referred to as pixel intensity data 140A). The processor may output data, such as height measurement data 140B, for the height of the powder applied on the object, and/or may output uncertainty data 140C that indicates the uncertainty associated with the height measurement data 140B. In some example embodiments, the processor includes a model 199 that provides the height measurement data and/or the uncertainty data based on the pixel intensity data.


After a layer of powder is applied to the object for example, the laser 140 may heat or sinter the applied layer of powder. This process may repeat: the recoater 125 applies another layer of material, measurements are performed, the height and height uncertainty of the applied layer are determined, the layer is sintered, and so forth until the object is complete. In this example, each layer of applied powder may have a corresponding height measurement performed by capturing an image of at least a portion of the surface of the object and processing the measurement to determine the height of the powder applied to the object. In some example embodiments, there may be provided layer-by-layer measurements using a structured light-based system, such as system 100. The measurements may be used to determine a height profile of each layer, uniformity of powder distribution (e.g., the existence of a void or chunk in a powder layer), porosity of an applied layer, and/or other types of measurements. Although the previous example describes capturing an image of the layer before sintering, the image may be captured before and/or after the layer is sintered to provide height measurements and uncertainty of the layer before and after the layer is sintered.


The height measurement may represent a plurality of height measurements on the surface of the object under test, which in this example is object 115 undergoing additive manufacturing. As noted, the height measurements may be used to determine whether the object is defective if, for example, a height measurement of the object indicates that the applied material 120 resulted in too much (or too little) material being applied to at least a portion of the object. Because there are uncertainties associated with the height measurements, the system 100 may provide uncertainty data to provide an indication of the confidence (for example, certainty) of a given measurement. Referring to the previous example, if the height measurement is over a threshold height and the uncertainty data indicates the height measurement has a confidence of 99% of being accurate (or over a threshold amount of certainty), the part may be marked by system 100 as defective.
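The acceptance logic described above can be sketched as follows. This is a minimal illustration, assuming hypothetical numeric limits (`max_height_mm`, `confidence_z`); the description does not specify particular threshold values.

```python
# Sketch of the accept/reject decision described above. The numeric
# thresholds (max_height_mm, confidence_z) are hypothetical values for
# illustration only.

def assess_layer(height_mm: float, height_sigma_mm: float,
                 max_height_mm: float = 0.12,
                 confidence_z: float = 2.33) -> str:
    """Reject only when the height exceeds the limit by more than
    confidence_z standard deviations of the height error, i.e. when the
    out-of-tolerance reading itself is trusted."""
    excess = height_mm - max_height_mm
    if excess > confidence_z * height_sigma_mm:
        return "reject"        # confidently out of tolerance
    if excess > 0:
        return "remeasure"     # out of tolerance, but within measurement noise
    return "accept"

print(assess_layer(0.150, 0.005))   # clearly too high
print(assess_layer(0.125, 0.010))   # too high, but uncertain
print(assess_layer(0.100, 0.005))   # within tolerance
```

The point of the sketch is that the decision uses both the height measurement and its uncertainty: an out-of-tolerance reading with large uncertainty does not immediately reject the part.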


The model 199 may determine the height measurement uncertainty based on noise associated with the pixel intensity. This pixel intensity noise may, as noted, represent a substantial portion of the noise and, thus, uncertainty associated with DFP measurements and, in particular, DFP height measurements as well as other types of optical or light based measurement technologies. For example, when the image captures the measurement, the pixel data associated with the image captures the noise of the measurement including light projector gamma nonlinearity, light projector and camera quantization effects, and pixel intensity noise caused by ambient light, and/or the like. Moreover, the model 199 may determine the height measurement uncertainty by removing a noise free uncertainty distribution from a noisy uncertainty distribution (which results in an estimate of the noise, such as the uncertainty or error of the height measurements).


In some example embodiments, the model 199 may be configured to quantify the uncertainty, such as error, noise, and/or the like, associated with height measurements performed using a light or optical based technology, such as DFP. The model may provide the height measurement uncertainty as a distribution of uncertainty for one or more height measurements performed on the object 115.


In some example embodiments, the model, such as model 199, may receive an input including a distribution of pixel intensity data. This pixel intensity data may be received from an image source, such as a camera, performing a measurement (e.g., image capture) of the fringe or structured light on the object 115. In some example embodiments, the model outputs a distribution of the uncertainty of the measurements. FIG. 1C depicts an example of the output, which in this example is a pixel intensity uncertainty distribution. In the example of FIG. 1C, the distribution of pixel intensities represents the uncertainty of the pixel data captured by an imaging sensor, such as a camera 130A, imaging the fringe light pattern projected on the surface of the object 115. From the input distribution of pixel intensity data, the model 199 may generate a phase uncertainty distribution (or probability density function of phase uncertainty as a function of the distribution of pixel intensities). FIG. 1D depicts an example of the phase uncertainty distribution. The phase may, as described further below, map to a height measurement. The phase may also include phase noise (which includes pixel intensity noise).


In some example embodiments, the model 199 may generate the height measurement distribution (which indicates the height at one or more locations on the object 115) and/or a height uncertainty (or error) distribution that defines the height measurement uncertainty. Moreover, the model 199 may, as noted, output the height measurement distribution and/or the height uncertainty distribution. FIG. 1E depicts an example of the height uncertainty distribution. In the examples of FIGS. 1C-E, the uncertainty distributions are determined, as described further below, by removing a noise free uncertainty distribution from a noisy uncertainty distribution (which estimates the noise, such as the uncertainty).
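The pipeline behind FIGS. 1C-E can be illustrated with a Monte Carlo sketch: sample pixel-intensity noise, propagate it through N-step phase shifting and the phase-to-height relation developed later in this description (Equations 4, 5, and 8), and inspect the resulting height-error distribution. All numeric values here (geometry P, d, L, intensities A and B, the noise level, and the trial count) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4                                  # phase-shift steps
P, d, L = 10.0, 500.0, 200.0           # fringe pitch (mm), projector-reference
                                       # and projector-camera distances (mm)
A, B, sigma = 100.0, 80.0, 2.0         # intensity bias, fringe contrast, pixel noise
phi_true = 0.3                         # true differential phase (rad)
delta = 2 * np.pi * np.arange(1, N + 1) / N
trials = 100_000

def phase(I):
    # Equation 4: N-step phase-shifting estimator (arctan2 keeps the quadrant).
    return np.arctan2(-(I * np.sin(delta)).sum(axis=-1),
                      (I * np.cos(delta)).sum(axis=-1))

I_obj = A + B * np.cos(phi_true + delta) + sigma * rng.standard_normal((trials, N))
I_ref = A + B * np.cos(delta) + sigma * rng.standard_normal((trials, N))
phi = phase(I_obj) - phase(I_ref)      # noisy differential phase (Equation 5)

z = P * phi * d / (2 * np.pi * L + P * phi)                # Equation 8
z_true = P * phi_true * d / (2 * np.pi * L + P * phi_true)
height_error = z - z_true              # samples of the FIG. 1E-style distribution
print(f"mean error {height_error.mean():.4f} mm, std {height_error.std():.4f} mm")
```

Histogramming `height_error` yields the kind of height uncertainty distribution the model outputs.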



FIG. 2 depicts an example of a process 200 for determining height measurement uncertainty, in accordance with some example embodiments. The description of FIG. 2 also refers to FIGS. 1A-E.


At 205, a measurement may be performed, in accordance with some example embodiments. For example, the image sensor 130A, such as a camera, may perform a measurement of structured light (e.g., a Moiré pattern, a fringe, and/or the like) projected by a light source 130B on to at least the surface of the object 115 (which may also include projection of the fringe onto a reference surface). To illustrate further, the image sensor may capture an image of the surface of the object, when the light source projects the fringe on the object's surface. In some embodiments, the image sensor may capture an image after a layer of the material is applied by the recoater but before sintering by the laser. Alternatively, or additionally, the image sensor may capture an image after a layer of material is sintered.


At 210, a pixel intensity distribution may be generated, in accordance with some example embodiments. For example, the measurement on the surface of the object 115 may generate pixel data. This pixel data may correspond to the pixel intensity as captured by the image sensor 130A during the measurement (e.g., image capture) at 205. The pixel data captures the distribution of the pixel intensity as measured from the surface of the object at one or more points along the surface of the object. This pixel data may also include a noise component, such as pixel intensity noise, phase noise, projector gamma noise, and/or the like, in the DFP height measurements. As noted, the noise component captured by the pixel data is a source of height measurement uncertainty.


At 215, a height error distribution may be generated based on the pixel intensity distribution, in accordance with some example embodiments. In some example embodiments, the model 199 may receive the pixel intensity data and output a height error distribution (also referred to herein as height uncertainty distribution) for the object 115. As noted, the model may also generate a height distribution for the object. In some embodiments, the model 199 may generate a phase uncertainty distribution (an example of which is depicted at FIG. 1D) that captures the phase noise present in the pixel intensity data. The phase noise may be used to represent the pixel intensity noise and thus a source of error or uncertainty in DFP height measurements. In other words, the uncertainty of the height measurements may be derived based on the phase information and, in particular, the phase noise, of the pixel intensity data.


At 220, the height error distribution may be used to determine whether to reject or accept the object as part of a quality control function, in accordance with some example embodiments. To illustrate further, the height error distribution may provide an indication of the measurement error at one or more points along the surface of the object. If the height measurement value at a given point of the object exceeds a threshold height, the height error (which is associated with the height measurement value) may provide an indication of the confidence of the out of tolerance height measurement value. For example, a higher value of the height error may indicate more uncertainty (or less confidence) in the accuracy of the height measurement, when compared to a lower value of the height error.


In some example embodiments, the process 200 may be performed by a processor, such as processor 160, coupled to an additive manufacturing system fabricating an object, such as object 115. In the case of additive manufacturing, in response to a height measurement of object 115 exceeding a threshold height, the processor 160 may provide an indication to reject the object. This indication may be provided based on the height error data indicating a threshold level of certainty (e.g., the error being below a threshold amount of error or the certainty/confidence being above a threshold level of confidence). The indication may also be used to trigger termination of the additive manufacturing process of the part. Alternatively, or additionally, the indication may trigger an alert at a user interface associated with a user of the additive manufacturing system. For example, the alert may indicate the rejection of the object or termination of the additive manufacturing process.


In some example embodiments, the indication may be used to assess if the powder is spreadable or flowable. For example, unused powder may often be recycled and used in later builds. The condition of the powder degrades with each recycling. If the powder has been recycled too many times, it may clump, and a clump may cause the height measurement at that location to exceed a height threshold value. In response to the height measurement value exceeding a threshold height (with the corresponding height error data indicating a threshold level of certainty), the indication may signal that the powder may need to be replaced.


In the case of additive manufacturing, in response to a height measurement of object 115 being within a threshold height (with height error data indicating a threshold level of certainty), the processor 160 may provide an indication to continue or restart the additive manufacturing process of the object (e.g., continue adding layers to the object).


In some example embodiments, after each layer is applied to the object undergoing additive manufacturing, the height data and height uncertainty data may be stored by the processor 160. When this is the case, the processor may determine an equivalent printed material height of a plurality of layers with a corresponding height error distribution for each layer. Moreover, the processor may provide, as feedback, one or more parameters to adjust the additive manufacturing process. For example, when the exposed surface of the object is uneven after layer deposition and fusing (e.g., sintering, melting, etc.), the laser power may be too high (or the scan speed too slow), and the processor may adjust these fabrication parameters accordingly.


Before providing additional details regarding the model 199, the following provides a description regarding the DFP height measurements.


The DFP height measurements may be made by projecting patterns of light onto a flat reference plane (physical or mathematical, denoted further with subscript “r”), then placing an object, such as object 115, onto the scene, and recording how the projected patterns (e.g., the structured light or fringe) deform from the object's shape (denoted further with “o”).



FIG. 3 depicts an example of a DFP measurement setup. The projected patterns of light may be sinusoidally varying computer generated images, such as fringes, with pixel value f assigned according to












fi(x) = R(cos(2πx/P + δi) + 1),  (1)







where R is a scaling factor related to fringe brightness, P is the fringe pitch, and N phase-shifted images are generated and projected onto both reference and object with (equipartitioned) spatial shifts of δi = 2πi/N, i = 1 . . . N (further referred to as "projections," where i is the projection index). Equation 1 creates a fringe pattern with varying intensity as a function of x.
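A short sketch of Equation 1, generating the N phase-shifted fringe images; R, P, N, and the row width are illustrative values, not taken from the description.

```python
import numpy as np

R, P, N = 127.5, 32.0, 4          # brightness scale, fringe pitch (pixels), shifts
x = np.arange(640)                # one row of projector pixels

# Equation 1 with equipartitioned shifts delta_i = 2*pi*i/N, i = 1..N.
fringes = [R * (np.cos(2 * np.pi * x / P + 2 * np.pi * i / N) + 1)
           for i in range(1, N + 1)]

# Each image varies sinusoidally between 0 and 2R (here the 8-bit range).
print(fringes[0].min(), fringes[0].max())
```

With R = 127.5 the fringe spans the 8-bit intensity range of a typical projector image.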


The image sensor 130A, such as a camera, may measure the fringe by capturing or recording images of the incident fringes on both reference 300 (dashed lines) and object 115 surfaces (solid lines), where the intensity I(r,o) of any fringe incident on a surface (either the reference r or object o) at any given measurement point x(r,o) is given by






I(r,o) = A(r,o) + B(r,o) cos(2πx(r,o)/P),  (2)


where φ(r,o) = 2πx(r,o)/P is the phase, and A(r,o) and B(r,o) are the background intensity due to projector bias (combined with ambient light intensity) and the projected fringe contrast, respectively, at the arbitrary point x(r,o). In order to recover differential information between object and reference phases, which is functionally related to the object height, the measurement point's "i-th" projection intensities in Equation 2 may each be written as






I(r,o),i = A(r,o),i + B(r,o),i cos(φ(r,o) + δi + φc),  (3)


where φc accounts for the phase offset of the point in relation to the carrier phase (undeformed phase of the projected fringe pattern), and subscript "i" is the projection index.


Once images are captured or recorded of the projected fringes on the reference and object, the phase φ(r,o) (either on the reference or object) is found by











φ(r,o) = arctan( −(Σ_{i=1}^{N} I(r,o),i sin δi) / (Σ_{i=1}^{N} I(r,o),i cos δi) ),  (4)







where it is assumed that nonlinear projector gamma issues are negligible or appropriately corrected/calibrated.
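Equation 4 can be exercised on synthetic projections built from Equation 3; A, B, and the true phase below are assumed values, and `arctan2` is used so the recovered phase lands in the correct quadrant.

```python
import numpy as np

N = 4
delta = 2 * np.pi * np.arange(1, N + 1) / N
A, B, phi_true = 120.0, 90.0, 0.85     # illustrative bias, contrast, phase (rad)

I = A + B * np.cos(phi_true + delta)   # ideal projections (Equation 3, no noise)

# Equation 4: the bias A cancels over the equipartitioned shifts.
phi = np.arctan2(-np.sum(I * np.sin(delta)),
                 np.sum(I * np.cos(delta)))
print(round(float(phi), 6))            # recovers phi_true
```

Because the background A sums to zero against sin δi and cos δi over a full set of equipartitioned shifts, the estimator depends only on the fringe term.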


To determine the height measurement value, the differential phase measurement φ between object and reference (which is used to determine the measurement point's height above the reference surface) is given by












φ = φo − φr
  = arctan( −(Σ_{i=1}^{N} Io,i sin δi) / (Σ_{i=1}^{N} Io,i cos δi) ) − arctan( −(Σ_{i=1}^{N} Ir,i sin δi) / (Σ_{i=1}^{N} Ir,i cos δi) )  (5)
  = 2π(xo − xr)/P.  (6)







The height measurement value, z, of a measurement point 310 from a reference point on the reference surface may be represented as a function of the geometries of the DFP and the scalar quantities xr and xo. This relationship may be written as










z = d(xo − xr) / (L + xo − xr),  (7)







where d and L are the distance between the projector and the reference plane, and the distance between the projector and the camera, respectively.


Combining Equations 6 and 7, the height, z, may be expressed as a function of the measured differential phase φ and geometrical properties of the DFP system as










z(φ) = Pφd / (2πL + Pφ).  (8)







The linearized phase-to-height measurement model may be used if it is assumed (which is normally the case) that the spacing between the projector 130B and camera 130A is much larger than the geometric distance between reference and object ray projections xr and xo (e.g., L>>(xo−xr)). Effectively, this assumption is valid for small height measurements of objects. This assumption simplifies the principal height measurement relationship to a linearized version, so height, zl, is given by













zl = d(xo − xr)/L = Pdφ / (2πL),  (9)







where P is the fringe pitch, d is the distance between the projector and the reference plane, L is the distance between the projector and the camera, and φ is the differential phase measurement.


Comparing Equation 9 to Equation 8, the linearized phase-to-height measurement model of Equation 9 expresses a proportional relationship between estimated height zl and the differential phase measurement φ. This relationship may be used to model the height of an object. In other words, the model 199 may provide one or more height measurements for the object 115 based on Equation 8 or 9. The model 199 may also provide the uncertainty (or error) of those height measurements, in accordance with some example embodiments.
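The exact (Equation 8) and linearized (Equation 9) phase-to-height relations can be compared numerically; the geometry values below are illustrative assumptions.

```python
import numpy as np

P, d, L = 10.0, 500.0, 200.0   # fringe pitch, projector-reference and
                               # projector-camera distances (mm)

def z_exact(phi):
    # Equation 8: exact phase-to-height relation.
    return P * phi * d / (2 * np.pi * L + P * phi)

def z_linear(phi):
    # Equation 9: linearized relation, valid when L >> (x_o - x_r).
    return P * d * phi / (2 * np.pi * L)

for phi in (0.05, 0.5):
    print(f"phi={phi}: exact={z_exact(phi):.5f} mm, linear={z_linear(phi):.5f} mm")
# The two agree closely for small phi; the gap grows as phi increases.
```

This mirrors the small-height assumption in the text: for small measured phases the linearized model is an accurate proxy for Equation 8.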


In some example embodiments, there may be provided a model, such as model 199, configured to model height uncertainty. The following provides an example of the model 199 configured to provide a height uncertainty (or error) model, in accordance with some example embodiments.


In some example embodiments, the height uncertainty model (also referred to herein as a height error model or error model) may correspond to a non-linear height model, which does not assume a linear relationship between phase differential and height. When this is the case, the measured differential phase φ inherently contains noise, κ, which arises from a variety of sources. This noise propagates through the phase-to-height transformation (which is described above with respect to Equation 8, for example) yielding uncertainty in the estimated height measurement, z.


The error, such as the residual error χ, of a height measurement may be defined by adding the noise κ to the differential phase measurement φ and then subtracting the true height value as follows: χ=z(φ+κ)−z(φ). As noted above, the model 199 may determine the height measurement uncertainty by removing the noise-free height z(φ) from the noisy height z(φ+κ). The noisy differential phase φ+κ is then substituted into Equation 8, yielding Equation 10 below, which provides an estimate of the height uncertainty error, χ, as follows:












χ = z(φ+κ) − z(φ)
  = P(φ+κ)d/(2πL + P(φ+κ)) − Pφd/(2πL + Pφ)
  = κP(d−z)²/(2πLd + κP(d−z)),   (10)







where φ has been substituted using the inverse of Equation 8 to arrive at the form that relates the height uncertainty χ to the phase noise κ, the true height z, and the DFP measurement system parameters d (the distance between the projector and the reference plane), P (the fringe pitch), and L (the distance between the projector and the camera).


In some example embodiments, the model 199 may be configured based on Equation 10 to provide an indication of height uncertainty (e.g., χ) of a height measurement based on the determined phase noise κ, true height z, and/or DFP measurement system parameters d, P, L.
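As a check, the closed form of Equation 10 can be compared against the direct definition χ=z(φ+κ)−z(φ). A short sketch with illustrative parameter values (the geometry numbers are assumptions, not from the specification):

```python
import math

def height_full(phi, P, d, L):
    # z(phi) = P*phi*d / (2*pi*L + P*phi), the Equation 8 form used in Equation 10
    return P * phi * d / (2 * math.pi * L + P * phi)

def height_error(phi, kappa, P, d, L):
    """Residual height error by definition: chi = z(phi + kappa) - z(phi)."""
    return height_full(phi + kappa, P, d, L) - height_full(phi, P, d, L)

def height_error_closed_form(kappa, z, P, d, L):
    """Closed form of Equation 10: chi = kappa*P*(d-z)^2 / (2*pi*L*d + kappa*P*(d-z))."""
    return kappa * P * (d - z) ** 2 / (2 * math.pi * L * d + kappa * P * (d - z))

P, d, L = 8.5e-3, 0.30, 0.05   # illustrative DFP geometry
phi, kappa = 0.4, 0.01         # illustrative phase and phase noise [rad]
z = height_full(phi, P, d, L)
chi_def = height_error(phi, kappa, P, d, L)
chi_cf = height_error_closed_form(kappa, z, P, d, L)
print(chi_def, chi_cf)   # the two forms agree
```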


Assuming the probability density function of the phase noise, p(κ), is known or may be modeled, the probability density function associated with the single point height measurement uncertainty, p(χ), may be determined as














p(χ) = p(κ)/|∂χ/∂κ|
     = p(κ)·(2πLd + κP(d−z))²/(2πLP(d−z)²d)
     = 2πLd·p(2πLdχ/(P(d−z)(d−z−χ)))/(P(χ+z−d)²).   (11)







Equation 11 is a general form that holds for any phase noise PDF model p(·). Combining the PDF for phase noise arising from pixel intensity noise with Equation 11 and substituting for κ using the inverse of Equation 10 yields











p(χ) = e^(−z3)·sec²(θ)·(1 + √π·z2·e^(z2²)·(erf(z2) ± 1))·Ld / (P(−d+χ+z)²·z1·√(1−ρXY²)),   (12)

where θ = 2πLχd/(P(d−z)(d−χ−z)), and

z1 = √(σY² − 2ρXY·σX·σY·tan θ + σX²·tan²θ) / (σX·σY·(1−ρXY²)),

z2 = (μY·σX·(ρXY·σY − σX·tan θ) + μX·σY·(σX·ρXY·tan θ − σY)) / (√2·σX·σY·√(1−ρXY²)·√(σY² − 2ρXY·σX·σY·tan θ + σX²·tan²θ)),

z3 = (μY²·σX² + μX²·σY² − 2μX·μY·σX·σY·ρXY) / (2σX²·σY²·(1−ρXY²)),   (13)







and where erf(·) is the standard error function, and ρ, μ, and σ correspond to features of the pixel intensity noise (see, e.g., below an example derivation of ρ, μ, and σ; see also "A model for describing phase-converted image intensity noise in digital fringe projection techniques," Niall M. O'Dowd et al., Optics and Lasers in Engineering, Jul. 2, 2020). The quantities σY, σX, and ρXY are functions of measured pixel intensity noise. These quantities are intrinsically related to the standard deviation and correlation of random variables functionally related to ensembles of measured image intensities. Correlation in the noise structure may arise from periodic lighting fluctuations such as overhead fluorescent lights, projector gamma error, dust particle patterns on either optical lens, or other sources of noise.


To illustrate by way of an example, after the recoater blade coats the object with a layer of material such as powder, the system 100 may detect, as a defect, a clump of powder (which exceeds a height measurement threshold for the object at a given location or point on the object). The height uncertainty (which is provided as an output by the model 199 in accordance with, for example, Equation 12) indicates the certainty of that height measurement and, as such, of the existence of the clump. For example, the model may provide one or more height measurements corresponding to the clump on the object along with one or more uncertainty values for each height measurement. As such, the uncertainty model 199 may provide the uncertainty of measurements from a captured image of the fringe on the object, rather than having to conduct repeated height measurement experiments to quantify the uncertainty.
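A minimal sketch of such a threshold acceptance check follows; the nominal height, tolerance, and k-sigma margin are illustrative assumptions, not values from the specification:

```python
def accept_layer(height, height_sigma, nominal, tolerance, k=3.0):
    """Hedged sketch of an acceptance criterion: flag a point as a defect
    (e.g., a powder clump) only when its deviation from the nominal layer
    height exceeds the tolerance by more than k standard deviations of the
    height measurement uncertainty. `nominal`, `tolerance`, and `k` are
    illustrative parameters."""
    deviation = abs(height - nominal)
    return deviation <= tolerance + k * height_sigma

# A 60-micron excess over a 50-micron nominal layer, 2-micron measurement sigma:
print(accept_layer(110e-6, 2e-6, 50e-6, 10e-6))   # clump: rejected
print(accept_layer(55e-6, 2e-6, 50e-6, 10e-6))    # within tolerance: accepted
```

Folding the uncertainty into the margin makes the decision robust to measurement noise: a point is only rejected when its deviation is too large to be explained by the reported uncertainty.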


Equation 12 provides the single-point probability density function of height measurement uncertainty for any arbitrarily-correlated Gaussian pixel intensity noise. A special case may occur when the pixel intensity noise structure has no inter-projection correlations, no correlations between reference and object data, and the noise is zero-mean with the same standard deviation in each object and each reference image for all projections. When this is the case, the height uncertainty distribution may be as follows:











p(χ) = e^(−N/(2(σo²+σr²)))·Ld·(1 + e^(N·cos²θ/(2(σo²+σr²)))·√(πN·cos²θ/(2(σo²+σr²)))·(erf(√(N·cos²θ/(2(σo²+σr²)))) + 1)) / (P(−d+χ+z)²),   (14)

where θ = 2πLχd/(P(d−z)(d−χ−z)),







where σr and σo are the noise standard deviations in the reference and object images scaled by fringe contrast, respectively.
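For this special case, the density can be evaluated numerically. The sketch below assumes the standard additive-Gaussian phase-noise PDF form with effective SNR γ = N/(2(σo²+σr²)), and applies the change of variables of Equation 11; all parameter values are illustrative:

```python
import math

def p_kappa(kappa, N, sig_o, sig_r):
    """Phase-noise PDF for the special case (zero-mean, uncorrelated,
    equal-variance pixel noise), assuming the standard additive-Gaussian
    arctangent form with effective SNR gamma = N / (2*(sig_o^2 + sig_r^2))."""
    g = N / (2.0 * (sig_o ** 2 + sig_r ** 2))
    c = math.cos(kappa)
    return (math.exp(-g) / (2 * math.pi)) * (
        1 + math.sqrt(math.pi * g) * c * math.exp(g * c * c)
        * (1 + math.erf(math.sqrt(g) * c)))

def p_chi(chi, z, N, sig_o, sig_r, P, d, L):
    """Equation 11 applied numerically: kappa(chi) from the inverse of
    Equation 10, and Jacobian |dkappa/dchi| = 2*pi*L*d / (P*(d - z - chi)^2)."""
    kappa = 2 * math.pi * L * d * chi / (P * (d - z) * (d - z - chi))
    jac = 2 * math.pi * L * d / (P * (d - z - chi) ** 2)
    return p_kappa(kappa, N, sig_o, sig_r) * jac

# Midpoint-rule check that the assumed phase-noise density is normalized:
N, so, sr = 4, 0.2, 0.2
M, total = 20000, 0.0
for i in range(M):
    k = -math.pi + (i + 0.5) * (2 * math.pi / M)
    total += p_kappa(k, N, so, sr) * (2 * math.pi / M)
print(round(total, 4))
```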


Proceeding with the above derivation, a height uncertainty model may be further derived for the linearized height measurement case. The linearized residual height measurement noise may be defined as χl=zl(φ+κ)−zl(φ), and combining with Equation 9 provides










χl = P(φ+κ)d/(2πL) − Pφd/(2πL) = κPd/(2πL).   (15)







The change of variables technique gives the height uncertainty distribution as













p(χl) = p(κ)/|∂χl/∂κ|
      = 2πL·p(κ)/(Pd)
      = 2πL·p(2πLχl/(Pd))/(Pd).   (16)







For the same specific phase noise model described above, Equation 16 may be further modified to provide













p(χl) = e^(−z3)·sec²(2πLχl/(Pd))·(1 + √π·z2·e^(z2²)·(erf(z2) ± 1))·L / (z1·√(1−ρXY²)·Pd),   (17)

where
















z1 = √(σY² − 2ρXY·σX·σY·tan(2πLχl/(Pd)) + σX²·tan²(2πLχl/(Pd))) / (σX·σY·(1−ρXY²)),

z2 = (μY·σX·(ρXY·σY − σX·tan(2πLχl/(Pd))) + μX·σY·(σX·ρXY·tan(2πLχl/(Pd)) − σY)) / (√2·σX·σY·√(1−ρXY²)·√(σY² − 2ρXY·σX·σY·tan(2πLχl/(Pd)) + σX²·tan²(2πLχl/(Pd)))),

z3 = (μY²·σX² + μX²·σY² − 2μX·μY·σX·σY·ρXY) / (2σX²·σY²·(1−ρXY²)),   (18)







and the quantities σY, σX, and ρXY are as noted above. This derivation may require updating the conditions of the previously derived p(κ) distribution, using Equation 15. This results in using the minus sign (−) in Equation 17 when

|χl| < Pd/(4L),





and the plus sign (+) when







Pd/(2L) > |χl| > Pd/(4L).





Equation 17 is the linearized height measurement model analogue to Equation 12: it provides the same height uncertainty output as Equation 12 but assumes a linear relationship in the phase-to-height conversion.
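The difference between the full and linearized error models can be illustrated numerically: the linear error of Equation 15 is independent of the true height z, while the full error of Equation 10 shrinks as z grows. A sketch with illustrative values:

```python
import math

def chi_full(kappa, z, P, d, L):
    # Closed form of Equation 10 (full phase-to-height model)
    return kappa * P * (d - z) ** 2 / (2 * math.pi * L * d + kappa * P * (d - z))

def chi_linear(kappa, P, d, L):
    # Equation 15: the linearized error does not depend on the true height z
    return kappa * P * d / (2 * math.pi * L)

P, d, L, kappa = 8.5e-3, 0.30, 0.05, 0.01   # illustrative values
for z in (0.0, 0.01, 0.02):
    print(z, chi_full(kappa, z, P, d, L), chi_linear(kappa, P, d, L))
```

At z=0 the two errors nearly coincide; as z increases, the full-model error decreases while the linear estimate stays fixed, which is one source of the distinctions between the two height distributions discussed with respect to FIG. 4.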


Finally, for the special case of the pixel intensity noise structure described above, the linearized model analogous to Equation 14 may be represented as follows










p(χl) = e^(−N/(2(σo²+σr²)))·L·(1 + e^(N·cos²(2πLχl/(Pd))/(2(σo²+σr²)))·√(πN·cos²(2πLχl/(Pd))/(2(σo²+σr²)))·(erf(√(N·cos²(2πLχl/(Pd))/(2(σo²+σr²)))) + 1)) / (Pd).   (19)







Equation 19 is similar to Equation 14 in some respects but assumes a linear relationship in the phase-to-height conversion. To verify the height error model, ensembles of Monte-Carlo simulated phase measurements φMC may be used to verify the derived height uncertainty models, with varying levels of pixel noise correlation and standard deviation, each using 98304 samples, for example. The Monte-Carlo ensembles may be generated by adding pixel intensity noise ϵr,o to the Ir,o terms in Equation 5 and combining with Equations 8 and 9 for the full MC model and the linear MC model, respectively, giving













zMC = Pd·(arctan(−Σi=1..N (Io,i + ϵo,i)·sin δi / Σi=1..N (Io,i + ϵo,i)·cos δi) − arctan(−Σi=1..N (Ir,i + ϵr,i)·sin δi / Σi=1..N (Ir,i + ϵr,i)·cos δi)) / (2πL + P·(arctan(−Σi=1..N (Io,i + ϵo,i)·sin δi / Σi=1..N (Io,i + ϵo,i)·cos δi) − arctan(−Σi=1..N (Ir,i + ϵr,i)·sin δi / Σi=1..N (Ir,i + ϵr,i)·cos δi))),   (20)















zMC,l = Pd·(arctan(−Σi=1..N (Io,i + ϵo,i)·sin δi / Σi=1..N (Io,i + ϵo,i)·cos δi) − arctan(−Σi=1..N (Ir,i + ϵr,i)·sin δi / Σi=1..N (Ir,i + ϵr,i)·cos δi)) / (2πL).   (21)







Pixel intensity noise may be defined for the reference and the object images with jointly normal distributions, allowing arbitrary image-to-image correlation, i.e., ϵr,i˜N (μr,i, σr,i, Σr,ij) and ϵo,i˜N (μo,i, σo,i, Σo,ij), i=1 . . . N, where μ is the pixel intensity noise mean, σ is the pixel intensity noise standard deviation, and Σ is the image-to-image pixel intensity noise correlation. The j subscript allows for the cross correlation between noise statistics at different projection indexes. The reference and object images may also be correlated with correlation matrix Σor,ij.


This allows for the general possibility that the i-th projection object image noise could be correlated with the j-th projection reference image noise (in addition to the earlier allowance that individual reference and object images may be intra-correlated). In the Monte-Carlo simulations, a single measurement point was modeled with a height z=2.09 cm and the simulated DFP geometries L and d were selected as 5 cm and 30 cm, respectively, deliberately chosen to provide clear distinctions between the full model and linear model height distributions in FIG. 4. Fringe pitch P was selected as 8.5 mm and 4 projections were used. Table 1 shows the specific statistics of the pixel intensity noise used for model verification.
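A Monte-Carlo ensemble of this kind can be sketched as follows. The fringe model I_i = a + b·cos(phase + δi) and all numeric values are illustrative assumptions (Equation 5 is not reproduced here); the correlated pixel noise is drawn jointly normal across projections, in the spirit of Table 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-step fringe model for a single pixel (Equation 5 is not
# reproduced here): I_i = a + b*cos(phase + delta_i).
N, a, b = 4, 0.5, 0.4
delta = 2 * np.pi * np.arange(N) / N
P, d, L = 8.5e-3, 0.30, 0.05          # illustrative DFP geometry
phi_r, phi_o = 0.3, 1.5               # reference/object phases (illustrative)
I_r = a + b * np.cos(phi_r + delta)
I_o = a + b * np.cos(phi_o + delta)

# Jointly normal pixel noise across the N projections with a uniform
# off-diagonal correlation rho:
sigma, rho = 0.02, 0.08
C = sigma ** 2 * ((1 - rho) * np.eye(N) + rho * np.ones((N, N)))

def phase(I):
    # N-step phase retrieval: arctangent of the sine/cosine weighted sums
    return np.arctan2(-(I * np.sin(delta)).sum(axis=-1),
                      (I * np.cos(delta)).sum(axis=-1))

samples = 20000
eps_r = rng.multivariate_normal(np.zeros(N), C, size=samples)
eps_o = rng.multivariate_normal(np.zeros(N), C, size=samples)
dphi = phase(I_o + eps_o) - phase(I_r + eps_r)       # noisy differential phase
z_mc = P * d * dphi / (2 * np.pi * L + P * dphi)     # full model (Equation 20)
z_lin = P * d * dphi / (2 * np.pi * L)               # linear model (Equation 21)
print(z_mc.std(), z_lin.std())
```

The spread of the resulting z_mc and z_lin ensembles is what the derived height uncertainty distributions are compared against.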












TABLE 1

Parameter                        Variable   Quantity (case 1)              Quantity (case 2)
Reference image pixel noise std. σr         {0.175, 0.22, 0.145, 0.05}     {0.175, 0.22, 0.145, 0.05},
                                                                           {0.525, 0.66, 0.435, 0.465}
Object image pixel noise std.    σo         {0.54, 0.36, 0.45, 0.36}       {0.18, 0.12, 0.15, 0.12},
                                                                           {0.54, 0.36, 0.45, 0.36}

Reference image pixel noise correlation matrices (ρr):

  (1.     0.203  0.143  0.1  )      (1.     0.601  0.493  0.594)      (1.     0.691  0.774  0.86 )
  (0.203  1.     0.043  0.302)      (0.601  1.     0.54   0.4  )      (0.691  1.     0.855  0.696)
  (0.143  0.043  1.     0.078)      (0.493  0.54   1.     0.768)      (0.774  0.855  1.     0.769)
  (0.1    0.302  0.078  1.   )      (0.594  0.4    0.768  1.   )      (0.86   0.696  0.769  1.   )

Object image pixel noise correlation matrices (ρo):

  (1.     0.297  0.25   0.099)      (1.     0.597  0.501  0.599)      (1.     0.886  0.751  0.785)
  (0.297  1.     0.401  0.202)      (0.597  1.     0.401  0.5  )      (0.886  1.     0.774  0.895)
  (0.25   0.401  1.     0.292)      (0.501  0.401  1.     0.503)      (0.751  0.774  1.     0.65 )
  (0.099  0.202  0.292  1.   )      (0.599  0.5    0.503  1.   )      (0.785  0.895  0.65   1.   )

(Some column labels of the original table are illegible in the filed document.)







To further assess the validity of the disclosed models, the linearized height uncertainty model was compared to experimental height measurement maps from a prototype DFP system. The height uncertainty model was evaluated under assumed ergodic conditions (e.g., that the distribution of phase values over many iterations of a measurement point approaches the inherent uncertainty of a single phase measurement point). A DFP experiment was conducted 85 times (referred to onward as 85 iterations) using N=10, 14, and 18 projections, measuring an unmoved, identical surface for the object and reference plane. This experiment was conducted using the standard 8-bit imaging capabilities of the camera system, and then repeated using the "BaslerBG12" setting, which captures three-channel color images at 12-bit depth, for N=10 and 14 projections; these images were converted to mono-color 16-bit depth prior to image processing. The experiment was performed at two different camera bit depths to observe the effects of pixel value quantization on phase error relative to pixel intensity noise.


Using a single, unmoved surface ensured two things: first, that the true height value z for each measurement pixel is known to be 0, and second, that the distribution of height noise across all iterations of the height map provided by the DFP system was not caused by mismatched subtleties in the height of the measured object. The known height map of 0 allowed the use of the linearized height uncertainty models from Equations 17 and 19 (special case). The 85 iterations recorded for each phase-shifted projection were used for statistical computation of the inputs to the model (σr, σo, ρr, ρo, and ρro). A height map was created from the phase map for each iteration using a calibration routine; this resulted in a measured value of L/(Pd)=1.54 mm/rad. The number of iterations was chosen as a middle ground between the ability to accurately estimate noise statistics and ambient pixel intensity fluctuation between captures. A full-field map of the carrier phase (incorporated in the terms σY, σX, and ρXY) was constructed from the unwrapped reference phase measurement. Using the unwrapped phase measurement to account for the carrier phase, as opposed to simply using the pixel index, allowed adaptation to a possibly mismatched viewing angle of the fringes (not perfectly horizontal or vertical). Finally, the distribution of all 85 height values at each pixel in the measured area was compared to the derived linearized height uncertainty model p(χl) to assess the accuracy of the derived model.
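The statistical computation of the model inputs from the iteration stack can be sketched as follows; this is an illustrative routine, not the exact procedure used in the experiment:

```python
import numpy as np

def noise_statistics(stack):
    """Estimate per-projection pixel noise statistics from repeated captures.
    `stack` has shape (iterations, projections, H, W), e.g. (85, 14, H, W).
    Returns the per-projection noise standard deviation (averaged over pixels)
    and the projection-to-projection correlation matrix (averaged over pixels).
    A sketch of the statistics described in the text."""
    its, n_proj, h, w = stack.shape
    resid = stack - stack.mean(axis=0, keepdims=True)   # remove per-pixel mean
    sigma = resid.std(axis=0).mean(axis=(1, 2))         # std per projection
    flat = resid.reshape(its, n_proj, -1)
    corr = np.zeros((n_proj, n_proj))
    for p in range(flat.shape[2]):
        corr += np.corrcoef(flat[:, :, p].T)            # per-pixel correlation
    corr /= flat.shape[2]
    return sigma, corr

# Synthetic check: independent noise should give a near-identity correlation.
rng = np.random.default_rng(1)
stack = rng.normal(0.0, 0.1, size=(85, 4, 8, 8))
sigma, corr = noise_statistics(stack)
print(sigma.round(3), corr.round(2))
```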



FIG. 5 shows example results of a DFP height measurement process using images with 8-bit depth and 14 projections. The examples of the measurement process in the 8-bit depth range provide results that may be more representative of common DFP setups. FIG. 5 at A-C shows a sample of the captured incident fringes on the reference surface, each image having a unique phase shift δi. FIG. 5 at D shows an example wrapped phase map of the reference surface, before the spatial unwrapping procedure is applied to remove 2π discontinuities. FIG. 5 at E shows an example height map of a single measurement iteration. FIG. 5 at F shows a sample distribution of raw pixel noise with a fitted normal distribution, verifying the model of pixel intensity noise as a normally distributed random variable in the phase uncertainty derivation for p(κ), and thus p(χ). In FIG. 5, the pixel noise is shown in units of intensity not yet normalized by the fringe contrast. Pixel intensity values are by nature integers quantized by the camera; they are shown as the difference from their mean, which is a decimal number, allowing for non-integer components in the histogram presented.


The images collected during the 85-iteration DFP height measurement procedure were used to determine the noise statistics that are the input to the uncertainty model. FIG. 6 shows a collection of plots that illustrate the measured structure of pixel noise for the 14-projection DFP experiment. FIG. 6 at A shows the pixel standard deviation, with the primary y-axis showing the pixel intensity standard deviation (before normalizing to the fringe contrast), and the secondary y-axis showing the pixel intensity standard deviation σr,o normalized by fringe contrast, found by dividing the raw standard deviation by the measured fringe contrast for each pixel. It can be observed that the standard deviation was positively related to the pixel intensity, which is significant due to its impact on the assumptions required for the special case described above. FIG. 6 at B shows a single pixel's intensity value across all 14 fringe projections, for each iteration, on the object image. For each pixel, ρr,ij was estimated by finding the correlation of intensity values of projection i and projection j on the reference plane, and similarly for ρo,ij on the object plane. For each pixel, ρro,ij was estimated by finding the correlation of intensity values of projection i on the reference plane and projection j on the object plane. Each off-diagonal term in ρo and ρr was approximately normally distributed, allowing their means and standard deviations to accurately summarize their distributions. FIG. 6 at C-E shows the mean of ρr, ρo, and ρro averaged across all pixels in the measurement scene, respectively. For this 8-bit, 14-projection experiment, a trend of low, positive correlation of pixel intensity noise was observed, with nearest-to-diagonal terms approximately 0.07-0.08.
A slight structure can be observed in the off-diagonal terms of the correlation matrices; near-diagonal terms (representing images captured temporally close together), as well as the farthest-from-diagonal (outermost) terms, exhibit the highest correlation. The non-identity correlation structures measured in the experiment are significant due to their impact on the assumptions required for the special case described above.


Results comparing the experimental distribution of measured height values to the estimation of height uncertainty, p(χl), are shown in FIG. 7, with a summary of the results in Table 2. FIG. 7 at A-D shows results from the DFP experiment using 8-bit images, while FIG. 7 at E-F shows results using 12-bit images. FIG. 7 at A shows a comparison of an example pixel's height distribution across all experiment iterations to p(χl) estimated using the pixel's image intensity statistics σr, σo, ρr, ρo, and ρro measured from the experiment. In order to report the accuracy of the height uncertainty model for every pixel in the measured area, a performance metric λ was created that represents the normalized difference of the estimated height uncertainty and the distribution of height measurements across all iterations. Each pixel in λ is calculated using λ=(σm−σe)/σm, where σm is the standard deviation of the measured heights of each pixel, and σe is the standard deviation of p(χl). The error map is constructed to provide positive values when p(χl) underestimates the standard deviation of the height distribution. FIG. 7 at B shows the normalized difference map λ for every pixel in the measured area, using the linearized height uncertainty model p(χl) from Equation 17 and the measured noise correlation statistics. FIG. 7 at C shows the normalized difference map λ when using the simplified model from Equation 19 for the special case noted above (with no pixel noise correlation and the average pixel noise standard deviation across all projections). The example measurement noise statistics provided in FIG. 7 show that there was in fact noise correlation in the experiment, so the special-case assumptions of Equation 19 will induce error. This causes the normalized difference map λ in FIG. 7 at C to differ from FIG. 7 at B, shifting from a positive to a negative mean.
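The λ metric can be computed per pixel as described; a minimal sketch with synthetic data (the array shapes and values are illustrative):

```python
import numpy as np

def normalized_difference_map(measured_heights, sigma_est):
    """Lambda metric from the text: lambda = (sigma_m - sigma_e) / sigma_m per
    pixel, where sigma_m is the std of measured heights across iterations and
    sigma_e is the std of the estimated uncertainty distribution p(chi_l).
    Positive values mean the model underestimates the measured spread."""
    sigma_m = measured_heights.std(axis=0)   # (iterations, H, W) -> (H, W)
    return (sigma_m - sigma_est) / sigma_m

rng = np.random.default_rng(2)
heights = rng.normal(0.0, 2e-6, size=(85, 16, 16))   # 85 iterations, 2 um spread
lam = normalized_difference_map(heights, sigma_est=2e-6 * np.ones((16, 16)))
print(lam.mean().round(3))   # near 0 when the model matches the measured spread
```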



FIG. 7 at D shows the distributions of normalized difference maps λ for the full and special cases, for all three 8-bit experiments using N=10, 14, and 18 projections. It can be observed that the height error prediction performed better in cases with higher N; a reason for this trend may be that projector gamma error is reduced when using more projections. The distributions of λ created from p(χl) using correlation statistics were positively biased by 2.2%, 1.9%, and 3.7%, with standard deviations of 8.2%, 8.0%, and 7.8%, for the experiments using 10, 14, and 18 projections, respectively. The distributions of λ created from p(χl) thus predicted a tighter height uncertainty than measured in the experiment (which may be due to other sources of error entering the height calculation, such as external vibrations, subtle environmental lighting fluctuations, gamma error, and projector draw lines).


The λ distributions created by ignoring noise correlation (special case) for the 8-bit experiments using 10 and 14 projections were slightly negatively biased by −1.9% and −1.4%, with standard deviations of 10.5% and 8.6%, respectively. This may be due in part to the fact that introducing correlation reduces height uncertainty, as shown in FIG. 4 at A, so assuming no correlation exists increases the height uncertainty estimated by the model. The λ distribution from the 18-projection experiment created by ignoring noise correlation was positively biased by 3.6%, with a standard deviation of 7.4%, and was considerably closer to the full model's error distribution. This may be due to the very low levels of correlation during the 18-projection experiment (averages of the near-diagonal terms of the ρ matrices were approximately 0.02 during the 18-projection experiment, as opposed to 0.08 for both the 10- and 14-projection experiments).












TABLE 2

Bit depth  Model              Projections  Bias    Standard deviation  Mean off-diagonal ρr,o,ro value
8-bit      Full linear model  10           +2.2%   8.2%                ρ ≈ 0.08-0.09
8-bit      Full linear model  14           +1.9%   8.0%                ρ ≈ 0.07-0.08
8-bit      Full linear model  18           +3.7%   7.8%                ρ ≈ 0.02-0.03
8-bit      Special case       10           −1.9%   10.5%               ρ ≈ 0.08-0.09
8-bit      Special case       14           −1.4%   8.6%                ρ ≈ 0.07-0.08
8-bit      Special case       18           +3.6%   7.4%                ρ ≈ 0.02-0.03
12-bit     Full linear model  10           +0.9%   9.3%                ρ ≈ 0.13-0.20
12-bit     Full linear model  14           +2.5%   8.1%                ρ ≈ 0.11-0.22
12-bit     Special case       10           −8.7%   14.5%               ρ ≈ 0.13-0.20
12-bit     Special case       14           −0.5%   10.7%               ρ ≈ 0.11-0.22










FIG. 7 at E and F shows the effects of using 12-bit imaging. Similar model height estimation error was observed for these experiments compared to the 8-bit image capture experiments, suggesting that the uncertainty model's performance is independent of pixel quantization effects. A difference in this experiment was that the observed pixel noise correlation was much higher when using 12-bit images (see Table 2). A formal analysis of the quantization occurring within the camera was considered but deemed out of scope. FIG. 7 at E is positively biased, similar to its 8-bit counterpart in FIG. 7 at B, and FIG. 7 at F shows distributions similar to its 8-bit counterpart in FIG. 7 at D. The model estimation accuracy biases for the 12-bit full cases for N=10 and 14 are 0.9% and 2.5%, with standard deviations of 9.3% and 8.1%, respectively. For the special case, estimation biases were recorded as −8.7% and −0.5%, with standard deviations of 14.5% and 10.7%, for N=10 and 14, respectively. The large negative bias for the 10-projection special case may be attributed to the large amount of correlation observed throughout the experiment (off-diagonal terms averaging around 0.15).


In some example embodiments, there may be provided a model 199 configured to provide a height-converted image intensity noise uncertainty model. The model 199 may be based on, for example, Equation 12, derived as a continuation of the phase-converted image intensity noise uncertainty model form. As the DFP height profiling process is often linearized in practice, a linearized model, as in Equation 17, may also be used as the model 199. Additionally, the model 199 may be based on the simplified versions of the full and linear height uncertainty models shown in Equations 14 and 19.


Although some of the examples described herein refer to using the error model to quantify uncertainty of a light-based optical measurement using DFP, the error model may provide an uncertainty indication for other types of light or optical based measurement technologies as well. For example, the error model may be used in a measurement system using digital image correlation technology to perform measurements, such as height measurements. With digital image correlation, one or more cameras capture images of a surface of an object (which may be undergoing additive manufacturing). An algorithm finds matching features in each image and a height map is created. Specifically, the locations of these features in each image, and the spatial relationship of the cameras, may serve as inputs to a height equation to provide a height measurement value. The error uncertainty model disclosed herein may be used to provide an indication of the uncertainty, such as the error induced by light (which is captured in the image as pixel noise or pixel intensity noise). Likewise, an image contact scanner technology may include an image bar to capture an image of a surface of an object (which may be undergoing additive manufacturing). The image bar may be fixed to the recoater blade (e.g., in a powder bed fusion system), or coupled to a lamp, print head, or recoater (e.g., in binder jetting). As the image bar moves across the object's surface, images are taken extremely close to the surface of the object. The amount of "blurriness" of the imaged surface is quantified and used to determine the distance of these blurry areas to the focal plane. This distance is formatted as a height map. Here again, the error uncertainty model disclosed herein may be used to provide an indication of the uncertainty, such as the error induced by light (which is captured in the image as pixel noise or pixel intensity noise). And, coherent light imaging technology may be used to determine the height of the object.
In the case of coherent light imaging, an interferometer/laser is diffused to produce an area of illumination, often known as a speckle pattern. One or more cameras take images of the build area of the object while the laser illumination changes phase. The cameras may be linked with part of the beam of the interferometer. These images of the surface, illuminated with the interferometer, are combined to form a height map. The error uncertainty model disclosed herein may also be used to provide an indication of the uncertainty, such as the error induced by light (which is captured in the image as pixel noise or pixel intensity noise).



FIG. 8 depicts another example of an additive manufacturing system 800, in accordance with some example embodiments. The system 800 is similar to the system 100 in some respects but uses binder jetting technology. In the example of FIG. 8, the print head 810 selectively deposits a liquid binding agent onto a thin layer of material (e.g., powder particles such as metal, sand, ceramics, composites, and/or the like) to build the object 115. Rather than using a laser to sinter, the lamp evaporates the solvent from the binder to form a layer on the object 115. In operation, one or more image sensors 812A-B may be configured as image contact scanners. In this example, as the image sensors 812A-B move across the object, they capture an image of a surface of the object (which may be undergoing additive manufacturing). For example, the image sensor 812B may be fixed to the recoater blade (e.g., in a powder bed fusion system). As noted, the image sensors 812A-B move across the object's surface and capture images of the surface of the object. The amount of "blurriness" of the imaged surface is quantified and used to determine the distance of these blurry areas to the focal plane. This distance is mapped to a height. For example, the blurriness may correspond to a layer of applied material, a clump of material, etc. The model 199 may provide an indication of the uncertainty associated with the height measurement.


As noted above, the processor 160 may receive and process the measurement to determine height data for the powder applied to the object 115 and/or uncertainty data related to the height measurement, in accordance with some example embodiments. In some example embodiments, the processor 160 may receive pixel intensity data 140A from image sensors 812A-B. The processor may output height measurement data 140B for the powder applied on the object, and/or may output uncertainty data 140C that indicates the uncertainty associated with the height measurement data 140B. In some example embodiments, the processor includes the model 199 that provides the height measurement data and/or the uncertainty data based on the pixel intensity data.



FIG. 9 depicts another example of an additive manufacturing system 900, in accordance with some example embodiments. In the example of FIG. 9, direct energy deposition (or direct metal deposition) is depicted. In FIG. 9, a feedstock material may be pushed through a feed nozzle 902. The feedstock may be a material (e.g., powder particles such as metal, sand, ceramics, composites, and/or the like) to build the object 115. The feedstock is then melted by a heat source 904, such as a laser, electron beam, or arc, to sinter each layer of material applied. In operation, the light source 130B may project light on the object, while the imaging sensor 130A performs a measurement by capturing an image of at least a portion of the surface of the object (which includes a fringe projection on the surface). As noted above, the processor 160 may receive and process the measurement to determine height data for the powder applied to the object 115 and/or uncertainty data related to the height measurement, in accordance with some example embodiments. In some example embodiments, the processor 160 may receive pixel intensity data 140A from the camera 130A. The processor may output height measurement data 140B for the powder applied on the object, and/or may output uncertainty data 140C that indicates the uncertainty associated with the height measurement data 140B. In some example embodiments, the processor includes the model 199 that provides the height measurement data and/or the uncertainty data based on the pixel intensity data.



FIG. 10 depicts another example of an additive manufacturing system 1000, in accordance with some example embodiments. In the example of FIG. 10, a powder bed fusion technology is depicted. As in FIG. 1A, the recoater 901 may be used to apply the material, but the recoater 901 includes an optical contact sensor 910. This optical sensor may be used as a contact image scanner as noted above with respect to FIG. 8.


In some implementations, the processor 160 may perform one or more operations disclosed herein (e.g., process 200 and the like) in order to determine a height uncertainty based on a model, such as model 199.



FIG. 11 depicts an example of a system 1100 including a processor 1110, a memory 1120, a storage device 1130, and an input/output device 1140. Each of the components 1110, 1120, 1130, and 1140 can be interconnected using a system bus 1150. The processor 1110 can be configured to process instructions for execution within the system 1100. In some implementations, the processor 1110 can be a single-threaded processor. In alternate implementations, the processor 1110 can be a multi-threaded processor. The processor 1110 can be further configured to process instructions stored in the memory 1120 or on the storage device 1130, including receiving or sending information through the input/output device 1140. The memory 1120 can store information within the system 1100. In some implementations, the memory 1120 can be a computer-readable medium. In alternate implementations, the memory 1120 can be a volatile memory unit. In yet some implementations, the memory 1120 can be a non-volatile memory unit. The storage device 1130 can be capable of providing mass storage for the system 1100. In some implementations, the storage device 1130 can be a computer-readable medium. In alternate implementations, the storage device 1130 can be a floppy disk device, a hard disk device, an optical disk device, a tape device, non-volatile solid-state memory, or any other type of storage device. The input/output device 1140 can be configured to provide input/output operations for the system 1100. In some implementations, the input/output device 1140 can include a keyboard and/or pointing device. In alternate implementations, the input/output device 1140 can include a display unit for displaying graphical user interfaces.


The systems and methods disclosed herein can be embodied in various forms including, for example, a data processor, such as a computer that also includes a database, digital electronic circuitry, firmware, software, or in combinations of them. Moreover, the above-noted features and other aspects and principles of the present disclosed implementations can be implemented in various environments. Such environments and related applications can be specially constructed for performing the various processes and operations according to the disclosed implementations or they can include a general-purpose computer or computing platform selectively activated or reconfigured by code to provide the necessary functionality. The processes disclosed herein are not inherently related to any particular computer, network, architecture, environment, or other apparatus, and can be implemented by a suitable combination of hardware, software, and/or firmware. For example, various general-purpose machines can be used with programs written in accordance with teachings of the disclosed implementations, or it can be more convenient to construct a specialized apparatus or system to perform the required methods and techniques.


The systems and methods disclosed herein can be implemented as a computer program product, i.e., a computer program tangibly embodied in an information carrier, e.g., in a machine readable storage device or in a propagated signal, for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.


As used herein, the term “user” can refer to any entity including a person or a computer. Although ordinal numbers such as first, second, and the like can, in some situations, relate to an order, as used in this document ordinal numbers do not necessarily imply an order. For example, ordinal numbers can be used merely to distinguish one item from another (e.g., to distinguish a first event from a second event) and need not imply any chronological ordering or a fixed reference system (such that a first event in one paragraph of the description can be different from a first event in another paragraph of the description). The foregoing description is intended to illustrate but not to limit the scope of the invention, which is defined by the scope of the appended claims. Other implementations are within the scope of the following claims.


These computer programs, which can also be referred to as programs, software, software applications, applications, components, or code, include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the term “machine-readable medium” refers to any computer program product, apparatus and/or device, such as for example magnetic discs, optical disks, memory, and Programmable Logic Devices (PLDs), used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor. The machine-readable medium can store such machine instructions non-transitorily, such as for example as would a non-transient solid-state memory or a magnetic hard drive or any equivalent storage medium. The machine-readable medium can alternatively or additionally store such machine instructions in a transient manner, such as for example as would a processor cache or other random access memory associated with one or more physical processor cores.


To provide for interaction with a user, the subject matter described herein can be implemented on a computer having a display device, such as for example a cathode ray tube (CRT) or a liquid crystal display (LCD) monitor for displaying information to the user and a keyboard and a pointing device, such as for example a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well. For example, feedback provided to the user can be any form of sensory feedback, such as for example visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including, but not limited to, acoustic, speech, or tactile input.


Derivation of ρ, μ, and σ


Equation 10 below is the transfer function that transforms the noise statistics (the ϵ terms) into the phase noise term κ. Equation 10 may be determined based on the relationship between the measurement inputs (e.g., pixel intensity) and the measurement outputs (e.g., height values). The ϕc terms are the carrier phase, which is related to the projected fringes. ϕ is the true phase value related to the height of the object at each specific point.









$$\kappa=\arctan\left(\frac{\dfrac{2}{N}\sum_{i=1}^{N}\left[\sin\!\left(\dfrac{2\pi i}{N}+\phi_c\right)\bar{\epsilon}_{r,i}-\sin\!\left(\dfrac{2\pi i}{N}+\phi+\phi_c\right)\bar{\epsilon}_{o,i}\right]}{1+\dfrac{2}{N}\sum_{i=1}^{N}\left[\cos\!\left(\dfrac{2\pi i}{N}+\phi_c\right)\bar{\epsilon}_{r,i}+\cos\!\left(\dfrac{2\pi i}{N}+\phi+\phi_c\right)\bar{\epsilon}_{o,i}\right]}\right).\tag{10}$$
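As an illustrative sketch (not part of the disclosed implementation; the function name and NumPy formulation are assumptions of this example), Equation 10 can be evaluated numerically by mapping sampled per-image pixel-intensity noise into the phase noise term κ:

```python
import numpy as np

def phase_noise_kappa(eps_r, eps_o, phi, phi_c):
    """Evaluate the Equation 10 transfer function: map pixel-intensity
    noise samples for the N reference images (eps_r) and the N object
    images (eps_o) into the phase noise term kappa."""
    N = len(eps_r)
    i = np.arange(1, N + 1)
    carrier = 2 * np.pi * i / N + phi_c        # reference-fringe phase
    shifted = 2 * np.pi * i / N + phi + phi_c  # object-fringe phase
    # Numerator and denominator of Equation 10.
    Y = (2 / N) * np.sum(np.sin(carrier) * eps_r - np.sin(shifted) * eps_o)
    X = 1 + (2 / N) * np.sum(np.cos(carrier) * eps_r + np.cos(shifted) * eps_o)
    return np.arctan2(Y, X)
```

With zero noise the numerator vanishes and κ = 0; drawing eps_r and eps_o from a measured noise model gives a Monte Carlo estimate of the phase-noise distribution that the derivation below treats analytically.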







To model the probability distribution of κ, statistics are defined based on Equation 10, and correlated noise structures are included in the model. The following assumptions are made: pixel intensity noise for the reference and the object images is each jointly normally distributed, allowing arbitrary image-to-image correlation (e.g., ϵr,i˜N(μr,i, σr,i, Σr,ij) and ϵo,i˜N(μo,i, σo,i, Σo,ij), i=1 . . . N, where μ is the pixel intensity noise mean, σ is the pixel intensity noise standard deviation, and Σ is the image-to-image pixel intensity noise correlation). Moreover, it is assumed that the reference and object images may be correlated with correlation matrix Σor,ij; in other words, this allows for the possibility that the ith object image noise may be correlated with the jth reference image noise (in addition to the initial assumption that individual reference and object images may be intra-correlated). Thus, a global correlation matrix Σij may be constructed as











$$\Sigma_{ij}=\begin{bmatrix}\Sigma_{o,ij}&\Sigma_{or,ij}\\ \Sigma_{ro,ij}&\Sigma_{r,ij}\end{bmatrix}.\tag{11}$$







The upper left and lower right square sub-matrices describe the intra-image correlation structure in the object and reference images, respectively, while the upper right and lower left sub-matrices describe any correlation structure between object and reference images. The following expectation operations, where E[*] is the expectation operator, are defined as






$$\begin{aligned}
E[\epsilon_{r,i}]&=\mu_{r,i}\\
E[\epsilon_{o,i}]&=\mu_{o,i}\\
E[\epsilon_{o,i}\epsilon_{o,j}]&=\mu_{o,i}\mu_{o,j}+\rho_{o,ij}\sigma_{o,i}\sigma_{o,j}\\
E[\epsilon_{r,i}\epsilon_{r,j}]&=\mu_{r,i}\mu_{r,j}+\rho_{r,ij}\sigma_{r,i}\sigma_{r,j}\\
E[\epsilon_{o,i}\epsilon_{r,j}]&=\mu_{o,i}\mu_{r,j}+\rho_{or,ij}\sigma_{o,i}\sigma_{r,j},
\end{aligned}\tag{12}$$


where Σr,ij=ρr,ijσr,iσr,j, Σo,ij=ρo,ijσo,iσo,j, and Σor,ij=ρor,ijσo,iσr,j, while ρ∗ is a correlation coefficient and σ∗ is a standard deviation. The ρ, σ, and μ terms here are all measured quantities of the pixel intensity noise, estimated over numerous captured images.
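The block structure of Equation 11 can be assembled directly. The sketch below (function name and example parameters are illustrative assumptions, not part of the disclosure) builds the 2N×2N global matrix from measured per-block correlations and standard deviations, taking Σro as the transpose of Σor so the result stays symmetric:

```python
import numpy as np

def global_covariance(Sigma_o, Sigma_or, Sigma_r):
    """Assemble the Equation 11 global matrix: object block upper-left,
    reference block lower-right, and the object-to-reference cross
    block on the off-diagonals (transposed on the lower left)."""
    return np.block([[Sigma_o, Sigma_or],
                     [Sigma_or.T, Sigma_r]])

# Illustrative blocks for N = 3 phase-shifted images: unit variances,
# a common intra-image correlation of 0.2, and no cross correlation.
N = 3
intra = 0.2 * np.ones((N, N)) + 0.8 * np.eye(N)
Sigma = global_covariance(intra, np.zeros((N, N)), intra)
```

A valid global matrix must be symmetric and positive definite; checking those properties on the measured blocks is a useful sanity test before using them in the moment formulas that follow.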


Next, the form of Equation 10 may be written as κ=arctan(Y/X), where the numerator Y and denominator X are given by










$$\begin{aligned}
Y&=\frac{2}{N}\sum_{i=1}^{N}\left[\sin\!\left(\frac{2\pi i}{N}+\phi_c\right)\bar{\epsilon}_{r,i}-\sin\!\left(\frac{2\pi i}{N}+\phi+\phi_c\right)\bar{\epsilon}_{o,i}\right]\\
X&=1+\frac{2}{N}\sum_{i=1}^{N}\left[\cos\!\left(\frac{2\pi i}{N}+\phi_c\right)\bar{\epsilon}_{r,i}+\cos\!\left(\frac{2\pi i}{N}+\phi+\phi_c\right)\bar{\epsilon}_{o,i}\right].
\end{aligned}\tag{13}$$
















$$\begin{aligned}
\mu_Y=E[Y]&=\frac{2}{N}\sum_{i=1}^{N}\left[\sin\!\left(\frac{2\pi i}{N}+\phi_c\right)E[\bar{\epsilon}_{r,i}]-\sin\!\left(\frac{2\pi i}{N}+\phi+\phi_c\right)E[\bar{\epsilon}_{o,i}]\right]\\
&=\frac{2}{N}\sum_{i=1}^{N}\left[\sin\!\left(\frac{2\pi i}{N}+\phi_c\right)\mu_{r,i}-\sin\!\left(\frac{2\pi i}{N}+\phi+\phi_c\right)\mu_{o,i}\right]
\end{aligned}\tag{14}$$
















$$\begin{aligned}
\mu_X=E[X]&=1+\frac{2}{N}\sum_{i=1}^{N}\left[\cos\!\left(\frac{2\pi i}{N}+\phi_c\right)E[\bar{\epsilon}_{r,i}]+\cos\!\left(\frac{2\pi i}{N}+\phi+\phi_c\right)E[\bar{\epsilon}_{o,i}]\right]\\
&=1+\frac{2}{N}\sum_{i=1}^{N}\left[\cos\!\left(\frac{2\pi i}{N}+\phi_c\right)\mu_{r,i}+\cos\!\left(\frac{2\pi i}{N}+\phi+\phi_c\right)\mu_{o,i}\right].
\end{aligned}\tag{15}$$







Similarly, the variances of Y and X, σY2=E[Y2]−E2[Y] and σX2=E[X2]−E2[X], are computed by taking appropriate expectations:










$$\begin{aligned}
\sigma_Y^2&=\frac{4}{N^2}\sum_{i,j=1}^{N}\Big[\sin\!\left(\tfrac{2\pi i}{N}+\phi+\phi_c\right)\sin\!\left(\tfrac{2\pi j}{N}+\phi+\phi_c\right)E[\bar{\epsilon}_{o,i}\bar{\epsilon}_{o,j}]\\
&\qquad-2\sin\!\left(\tfrac{2\pi i}{N}+\phi+\phi_c\right)\sin\!\left(\tfrac{2\pi j}{N}+\phi_c\right)E[\bar{\epsilon}_{o,i}\bar{\epsilon}_{r,j}]\\
&\qquad+\sin\!\left(\tfrac{2\pi i}{N}+\phi_c\right)\sin\!\left(\tfrac{2\pi j}{N}+\phi_c\right)E[\bar{\epsilon}_{r,i}\bar{\epsilon}_{r,j}]\Big]-\mu_Y^2\\
&=\frac{4}{N^2}\sum_{i,j=1}^{N}\Big[\sin\!\left(\tfrac{2\pi i}{N}+\phi+\phi_c\right)\sin\!\left(\tfrac{2\pi j}{N}+\phi+\phi_c\right)\rho_{o,ij}\sigma_{o,i}\sigma_{o,j}\\
&\qquad-2\sin\!\left(\tfrac{2\pi i}{N}+\phi+\phi_c\right)\sin\!\left(\tfrac{2\pi j}{N}+\phi_c\right)\rho_{or,ij}\sigma_{o,i}\sigma_{r,j}\\
&\qquad+\sin\!\left(\tfrac{2\pi i}{N}+\phi_c\right)\sin\!\left(\tfrac{2\pi j}{N}+\phi_c\right)\rho_{r,ij}\sigma_{r,i}\sigma_{r,j}\Big].
\end{aligned}\tag{16}$$







The derivation of the terms E[Y2] and E2[Y] employed a double sum across indices i, j in Σi,j=1N to indicate the product of two series.


Similarly,










$$\begin{aligned}
\sigma_X^2&=\frac{4}{N^2}\sum_{i,j=1}^{N}\Big[\cos\!\left(\tfrac{2\pi i}{N}+\phi+\phi_c\right)\cos\!\left(\tfrac{2\pi j}{N}+\phi+\phi_c\right)\rho_{o,ij}\sigma_{o,i}\sigma_{o,j}\\
&\qquad+2\cos\!\left(\tfrac{2\pi i}{N}+\phi+\phi_c\right)\cos\!\left(\tfrac{2\pi j}{N}+\phi_c\right)\rho_{or,ij}\sigma_{o,i}\sigma_{r,j}\\
&\qquad+\cos\!\left(\tfrac{2\pi i}{N}+\phi_c\right)\cos\!\left(\tfrac{2\pi j}{N}+\phi_c\right)\rho_{r,ij}\sigma_{r,i}\sigma_{r,j}\Big].
\end{aligned}\tag{17}$$







Finally, since X and Y are generally correlated, the covariance cov[X, Y]=E[XY]−E[X]E[Y] is computed as










$$\begin{aligned}
\operatorname{cov}(X,Y)&=-\frac{4}{N^2}\sum_{i,j=1}^{N}\Big[\sin\!\left(\tfrac{2\pi i}{N}+\phi+\phi_c\right)\cos\!\left(\tfrac{2\pi j}{N}+\phi+\phi_c\right)\rho_{o,ij}\sigma_{o,i}\sigma_{o,j}\\
&\qquad+\sin\!\left(\tfrac{2\pi(i-j)}{N}+\phi\right)\rho_{or,ij}\sigma_{o,i}\sigma_{r,j}\\
&\qquad-\sin\!\left(\tfrac{2\pi i}{N}+\phi_c\right)\cos\!\left(\tfrac{2\pi j}{N}+\phi_c\right)\rho_{r,ij}\sigma_{r,i}\sigma_{r,j}\Big].
\end{aligned}\tag{18}$$







The covariance between X and Y is composed of contributions from possible intra-image correlation within the reference and object images (ρo,ij and ρr,ij), as well as reference-to-object image correlation (ρor,ij). In general, all the order statistics of X and Y depend on the input noise statistical parameters as well as the true phase ϕ.
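The moment formulas of Equations 14-18 can be organized as quadratic forms over the covariance blocks, which is compact to implement. In the sketch below (function and variable names are illustrative assumptions), rho_* are N×N correlation matrices, sig_* and mu_* are length-N vectors:

```python
import numpy as np

def xy_moments(mu_r, mu_o, sig_r, sig_o, rho_r, rho_o, rho_or, phi, phi_c):
    """Moments of the numerator Y and denominator X of Equation 10,
    per Equations 14-18: means from the noise means; variances and
    covariance as quadratic forms over the covariance blocks."""
    N = len(mu_r)
    i = np.arange(1, N + 1)
    s_c = np.sin(2 * np.pi * i / N + phi_c)        # carrier-only sine
    s_p = np.sin(2 * np.pi * i / N + phi + phi_c)  # carrier-plus-phase sine
    c_c = np.cos(2 * np.pi * i / N + phi_c)
    c_p = np.cos(2 * np.pi * i / N + phi + phi_c)

    mu_Y = (2 / N) * np.sum(s_c * mu_r - s_p * mu_o)      # Eq. 14
    mu_X = 1 + (2 / N) * np.sum(c_c * mu_r + c_p * mu_o)  # Eq. 15

    S_o = rho_o * np.outer(sig_o, sig_o)    # object covariance block
    S_r = rho_r * np.outer(sig_r, sig_r)    # reference covariance block
    S_or = rho_or * np.outer(sig_o, sig_r)  # cross-covariance block

    var_Y = (4 / N**2) * (s_p @ S_o @ s_p - 2 * s_p @ S_or @ s_c
                          + s_c @ S_r @ s_c)              # Eq. 16
    var_X = (4 / N**2) * (c_p @ S_o @ c_p + 2 * c_p @ S_or @ c_c
                          + c_c @ S_r @ c_c)              # Eq. 17
    diff = np.sin(2 * np.pi * np.subtract.outer(i, i) / N + phi)
    cov_XY = -(4 / N**2) * (s_p @ S_o @ c_p + np.sum(diff * S_or)
                            - s_c @ S_r @ c_c)            # Eq. 18
    return mu_X, mu_Y, var_X, var_Y, cov_XY
```

For independent unit-variance, zero-mean noise (identity intra-image correlation, no cross correlation), the sine and cosine sums over a full fringe period give μY=0, μX=1 and equal variances 4/N, which is a quick consistency check on an implementation.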


X and Y are jointly normally distributed, with individual means μX and μY, variances σX2 and σY2, and correlation coefficient ρXY=cov(X, Y)/(σXσY), all given by Eqs. (14)-(18), such that their joint probability density function may be given by










$$p(X,Y)=\frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1-\rho_{XY}^2}}\,e^{-\frac{1}{2(1-\rho_{XY}^2)}\left(\frac{(X-\mu_X)^2}{\sigma_X^2}+\frac{(Y-\mu_Y)^2}{\sigma_Y^2}-\frac{2\rho_{XY}(X-\mu_X)(Y-\mu_Y)}{\sigma_X\sigma_Y}\right)}.\tag{19}$$







With the joint density for X and Y, the change-of-variables technique may be used to make a coordinate transformation X=X and κ=arctan(Y/X) to obtain the probability density function of κ explicitly as













$$\begin{aligned}
p(\kappa)&=\int_{-\infty}^{\infty}\frac{p(X,Y)}{\left|\partial\kappa/\partial Y\right|}\,dX\\
&=\int_{-\infty}^{\infty}\frac{p(X,Y)}{\left|X/(X^2+Y^2)\right|}\,dX\\
&=\int_{-\infty}^{\infty}p(X,X\tan\kappa)\,\lvert X\rvert\sec^2\kappa\,dX,
\end{aligned}\tag{20}$$







since Y=X tan κ. The integrand and integration range in Equation 20 require separation into two regions: one for the case X>0 (|κ|<π/2) and one for the case X<0 (π>|κ|>π/2). Both integrations admit closed-form solutions, given by











$$p(\kappa)=\frac{e^{-z_3}\sec^2\kappa\left(1+\sqrt{\pi}\,z_2\,e^{z_2^2}\left(\operatorname{erf}(z_2)\pm 1\right)\right)}{2\pi z_1\sqrt{1-\rho_{XY}^2}},\quad\text{where}\tag{21}$$














$$\begin{aligned}
z_1&=\frac{\sigma_Y^2-2\rho_{XY}\sigma_X\sigma_Y\tan\kappa+\sigma_X^2\tan^2\kappa}{\sigma_X\sigma_Y\left(1-\rho_{XY}^2\right)},\\[4pt]
z_2&=\frac{\mu_Y\sigma_X\left(\rho_{XY}\sigma_Y-\sigma_X\tan\kappa\right)+\mu_X\sigma_Y\left(\sigma_X\rho_{XY}\tan\kappa-\sigma_Y\right)}{\sqrt{2}\,\sigma_X\sigma_Y\sqrt{1-\rho_{XY}^2}\,\sqrt{\sigma_Y^2-2\rho_{XY}\sigma_X\sigma_Y\tan\kappa+\sigma_X^2\tan^2\kappa}},\\[4pt]
z_3&=\frac{\mu_Y^2\sigma_X^2+\mu_X^2\sigma_Y^2-2\mu_X\mu_Y\sigma_X\sigma_Y\rho_{XY}}{2\sigma_X^2\sigma_Y^2\left(1-\rho_{XY}^2\right)},
\end{aligned}\tag{22}$$







and erf(·) is the standard error function. The minus (−) sign is taken in Eq. (21) when |κ|<π/2 (when X>0), while the plus (+) sign is taken when π>|κ|>π/2 (when X<0).
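As a hedged sketch (the function name is illustrative; the body is a direct transcription of Equations 21-22 under the stated sign convention), the closed-form phase-noise density can be evaluated pointwise:

```python
import math

def p_kappa(kappa, mu_X, mu_Y, sig_X, sig_Y, rho_XY):
    """Closed-form phase-noise density of Equations 21-22. The sign is
    resolved per the text: minus for |kappa| < pi/2 (X > 0), plus for
    pi/2 < |kappa| < pi (X < 0)."""
    t = math.tan(kappa)
    one_m = 1.0 - rho_XY**2
    quad = sig_Y**2 - 2 * rho_XY * sig_X * sig_Y * t + sig_X**2 * t**2
    z1 = quad / (sig_X * sig_Y * one_m)
    z2 = (mu_Y * sig_X * (rho_XY * sig_Y - sig_X * t)
          + mu_X * sig_Y * (sig_X * rho_XY * t - sig_Y)) / (
         math.sqrt(2) * sig_X * sig_Y * math.sqrt(one_m) * math.sqrt(quad))
    z3 = (mu_Y**2 * sig_X**2 + mu_X**2 * sig_Y**2
          - 2 * mu_X * mu_Y * sig_X * sig_Y * rho_XY) / (
         2 * sig_X**2 * sig_Y**2 * one_m)
    sign = -1.0 if abs(kappa) < math.pi / 2 else 1.0
    return (math.exp(-z3) * (1 / math.cos(kappa)**2)
            * (1 + math.sqrt(math.pi) * z2 * math.exp(z2**2)
               * (math.erf(z2) + sign))
            / (2 * math.pi * z1 * math.sqrt(one_m)))
```

Two useful sanity checks: in the uncorrelated, zero-μY case with small noise, the peak height approaches μX/(√(2π)σ), the value of a normal density with standard deviation σ/μX; and numerically integrating p(κ) over (−π, π) should return approximately 1. Note that exp(z2²) can overflow for very small noise levels, where a log-domain evaluation would be the safer implementation choice.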


The subject matter described herein can be embodied in systems, apparatus, methods, and/or articles depending on the desired configuration. The implementations set forth in the foregoing description do not represent all implementations consistent with the subject matter described herein. Instead, they are merely some examples consistent with aspects related to the described subject matter. Although a few variations have been described in detail above, other modifications or additions are possible. In particular, further features and/or variations can be provided in addition to those set forth herein. For example, the implementations described above can be directed to various combinations and subcombinations of the disclosed features and/or combinations and subcombinations of several further features disclosed above. In addition, the logic flows depicted in the accompanying figures and/or described herein do not necessarily require the particular order shown, or sequential order, to achieve desirable results. Other implementations may be within the scope of the following claims.

Claims
  • 1. A system comprising: at least one processor; andat least one memory including program code which when executed by the at least one processor causes operations comprising:capturing an image of at least a portion of a surface of an object;generating, from the captured image, pixel intensity data;in response to generating the pixel intensity data, determining, based on a height error model, height error data, wherein the height error data indicates an uncertainty of at least one height measurement of the object; anddetermining, based on the height error data, whether the object satisfies a threshold criteria for acceptance of the object.
  • 2. The system of claim 1 further comprising: an image sensor configured to capture the image; anda light source configured to project structured light on to the surface of the object.
  • 3. The system of claim 2, wherein the structured light comprises a Moiré pattern and/or a fringe pattern.
  • 4. The system of claim 1, wherein the height error model is determined based on noise captured by the image associated with the pixel intensity data.
  • 5. The system of claim 4, wherein the noise includes uncertainty caused by light projector gamma nonlinearity, light projector quantization, camera quantization, and/or pixel intensity noise caused by ambient light.
  • 6. The system of claim 1, further comprising: in response to the generating of the pixel intensity data, determining the at least one height measurement of the object, and wherein the determining whether the object satisfies the threshold criteria further comprises determining whether the object satisfies a threshold height.
  • 7. The system of claim 6, further comprising: in response to the at least one height measurement exceeding a threshold height, providing, based on the height error data indicating a threshold level of certainty, an indication to reject the object.
  • 8. The system of claim 7, wherein the indication terminates an additive manufacturing process of the part.
  • 9. The system of claim 7, wherein the indication triggers an alert at a user interface.
  • 10. The system of claim 7, wherein the indication triggers an alert to dispose of, rather than reuse, material being used to build the object.
  • 11. The system of claim 1, wherein the system comprises, or is comprised in, an additive manufacturing device making the object.
  • 12. The system of claim 11, wherein after a layer of material is applied, the processor causes the capturing, the generating, the determining height error data, and/or determining whether the object satisfies the threshold.
  • 13. The system of claim 11, wherein the additive manufacturing device comprises powder fusion, binder jetting, and/or direct energy deposition.
  • 14. The system of claim 1, further comprising: in response to the at least one height measurement being less than a threshold height, providing, based on the height error data indicating a threshold level of certainty, an indication to continue or restart additive manufacturing of the object.
  • 15. The system of claim 1 further comprising: providing an aggregate layer height and an aggregate height error for a plurality of layers of material applied to the object.
  • 16. The system of claim 1 further comprising: providing feedback to adjust one or more parameters of an additive manufacturing process of the object.
  • 17. A method comprising: capturing an image of at least a portion of a surface of an object;generating, from the captured image, pixel intensity data;in response to generating the pixel intensity data, determining, based on a height error model, height error data, wherein the height error data indicates an uncertainty of at least one height measurement of the object; anddetermining, based on the height error data, whether the object satisfies a threshold criteria for acceptance of the object.
  • 18. The method of claim 17, further comprising: in response to the generating of the pixel intensity data, determining the at least one height measurement of the object, and wherein the determining whether the object satisfies the threshold criteria further comprises determining whether the object satisfies a threshold height.
  • 19. The method of claim 17, further comprising: in response to the at least one height measurement exceeding a threshold height, providing, based on the height error data indicating a threshold level of certainty, an indication to reject the object.
  • 20. A non-transitory computer-readable storage medium including program code, which when executed by at least one processor, causes operations comprising: capturing an image of at least a portion of a surface of an object;generating, from the captured image, pixel intensity data;in response to generating the pixel intensity data, determining, based on a height error model, height error data, wherein the height error data indicates an uncertainty of at least one height measurement of the object; anddetermining, based on the height error data, whether the object satisfies a threshold criteria for acceptance of the object.
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims priority to U.S. Provisional Patent Application No. 62/934,885, filed on Nov. 13, 2019, and entitled “PORE MEASUREMENT DEVICE,” and also claims priority to U.S. Provisional Patent Application No. 63/045,699, filed on Jun. 29, 2020, and entitled “PORE MEASUREMENT DEVICE,” and claims priority to U.S. Provisional Patent Application No. 63/092,285, filed on Oct. 15, 2020, and entitled “PORE MEASUREMENT DEVICE,” the disclosures of all three applications are incorporated herein by reference in their entirety.

STATEMENT OF GOVERNMENT SUPPORT

This invention was made with government support under Los Alamos National Lab, Managed by Triad National Security, LLC for the U.S. Department of Energy's National Nuclear Security Administration (NNSA). The government has certain rights in the invention.

PCT Information
Filing Document Filing Date Country Kind
PCT/US20/60291 11/12/2020 WO
Provisional Applications (3)
Number Date Country
62934885 Nov 2019 US
63045699 Jun 2020 US
63092285 Oct 2020 US