The disclosure pertains to measurement of a change of a shape of a surface using interference of sheared speckle patterns.
Embodiments provide an optical system that includes (1) an irradiation unit containing at least first and second radiation sources (here, the at least first and second radiation sources are configured to generate radiation in a form of corresponding optical outputs containing, respectively, the first and second output beams of radiation; and one or more of spectral and geometrical parameters characterizing the first and second output beams during the propagation of these first and second output beams from the first and second radiation sources is changeable in response to an input applied to the irradiation unit); (2) a measurement unit including: a radiation detector; an aperture stop having an aperture stop axis; (3) an optical-wavefront-multiplier system; and (4) a data-acquisition system operably cooperated with the radiation detector and configured to determine a change of an object's shape, based on a determination of Fourier Transforms of only respectively-corresponding subportions of the first and second images formed by/at the detector. The measurement system is configured i) to receive an input radiation wavefront, formed as a result of propagation of the first and second output beams that have been modified by interaction of these beams with an object under test, through the aperture stop, ii) to form at least first and second radiation wavefronts by duplicating the input radiation wavefront, and iii) to direct the at least first and second radiation wavefronts onto the radiation detector in order to form (a) a first set of respectively-corresponding first and second images representing Fourier Transforms of distributions of the radiation at the aperture at a first moment in time, and (b) a second set of respectively-corresponding first and second images representing Fourier Transforms of distributions of the radiation at the aperture at a second moment in time.
These and other features will be more fully understood by referring to the following Detailed Description in conjunction with the Drawings, of which:
The disclosed methods and apparatus can be used to measure various shapes and deformations of surfaces including chuck-induced deformations of semiconductor wafers, parts fabricated using so-called 3D printing such as laser powder bed fusion (L-PBF), e-beam powder bed fusion (e-PBF), or laser metal deposition (LMD), powder surface shape or powder surface unevenness in PBF printing, and riblet structures formed using laser processing. Examples of shaping methods and apparatus are described in, for example, U.S. Patent Application Publications 2017/0304946 and 2017/0304947, PCT Patent Publication WO2019/133553, and European Application No. 17930076.9, all of which are incorporated herein by reference in their entireties. In evaluation of chuck-induced deformations, a roughened wafer can be used to produce speckle patterns used to assess in-plane and out-of-plane shape changes. Such roughening can be associated with a predetermined spatial frequency of interest which can be selected to enhance light collection efficiency. Generally, surfaces having arbitrary shapes can be assessed with the disclosed approaches, although for convenient illustration, shapes of interest are depicted as being planar.
As disclosed herein, a determination of a change of a shape of a surface of an object (or a complex surface of the workpiece, interchangeably referred to as surface under test, or SUT, or object under test, or workpiece under test) at any point within the bounds of such SUT is effectuated by analyzing a portion of interest (POI) of a light distribution defined by the use of two overlapping optical wavefronts. Each of these two overlapping wavefronts contains information about the light-scattering characteristics of the SUT and each of these wavefronts is a spatial copy (also referred to as a duplicate) of the other and is formed from a beam of radiation that has interacted with the SUT. Notably, as discussed below, the determination of the change of the object's shape can be carried out across the whole surface of the object in a single measurement process as long as light can be received at an aperture of a measurement optical system from every point of the surface in a straight line without obscuration.
As used herein, the expression “complex surface” refers to a surface of substantially any and/or every shape, for example, whether planar, or curved, or containing surface portions connected to one another at points at which a function specifying the surface shape of the SUT is not fully differentiable and having light-scattering properties at an operational light wavelength of choice that generally include any of being specularly reflecting, optically diffusive, optically scattering, or any combination thereof. Surfaces that are at least partially optically diffusive or scattering are generally preferred.
In some examples, the disclosed approaches cure the inability of existing systems to measure shape changes on large, complex surfaces. Such conventional methods typically use an optical beam having a beam width that is much smaller than a surface area of interest. Such beams can have dimensions that are at least one or two orders of magnitude smaller than the extent of the SUT.
A portion (or subset) of interest (POI) of a light distribution, acquired with the optical detection system for the purposes of the determination of the change in the shape of the object, is defined by a Fourier Transform of the light distribution, formed by the beam of light at a pre-determined plane of the measurement portion of the optical system, at one or more spatial frequencies characterizing differences in angular propagation characteristics of the two wavefronts. Notably, the carrier frequency may also depend on the location of the tilted mirror relative to the pupil plane and on the focal length of the lens, but those parameters are generally fixed in practical implementations of the system. In particular, the pre-determined plane may include a pupil or an aperture stop through which light that has interacted with the SUT is collected by the measurement portions of the optical system, while such light distribution at an optical detector of the optical detection system contains a speckle pattern.
Also addressed is the problem of improvement of accuracy of the so-carried-out determination of a change of a shape of the SUT. This particular problem is solved by performing the process of the determination multiple times, under differing measurement conditions, each of which can be selected to provide repeatable and pre-determined change(s) in the speckle pattern to reduce the contribution of low-light areas of any speckle to the overall measurement error. In a specific case, the measurement-accuracy improvement factor is substantially equal to a square root of the number of measurement conditions.
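The square-root scaling of the accuracy improvement can be illustrated with a small simulation; the noise level, phase value, and number of measurement conditions below are arbitrary illustrative choices, not values from the disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)
true_phase = 1.25     # arbitrary "true" phase value (radians), for illustration
sigma = 0.3           # per-measurement random noise level
n_conditions = 16     # number of distinct measurement conditions averaged
n_trials = 20000      # repetitions used to estimate the residual error

# Error of a single measurement vs. the mean over n_conditions measurements
single = true_phase + sigma * rng.normal(size=n_trials)
averaged = true_phase + sigma * rng.normal(size=(n_trials, n_conditions)).mean(axis=1)

err_single = single.std()
err_avg = averaged.std()
improvement = err_single / err_avg   # expected to approach sqrt(16) = 4
```

With 16 independent conditions the measured improvement factor clusters around 4, consistent with the square-root relationship stated above.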
Referring to
A lens 110 is situated to image the surface 104 into a shearing interferometer 112. The shearing interferometer 112 is situated to receive each of the beams 106A, 106B as processed by the lens 110, divide the beams 106A, 106B, and produce two sheared beam portions from each of the beams 106A, 106B. The sheared beam portions are directed to a detector 114 which produces interference patterns corresponding to each pair of sheared beam portions. A fringe period (or carrier frequency) can be adjusted by selection of a relative tilt between the two beam portions at the detector 114. The two sheared images exhibiting interference and speckle are directed to an image processing/control system (“controller”) 116 that is operable to perform Fourier transforms and inverse Fourier transforms, frequency-shift the Fourier-transformed images, and determine and smooth phase differences (Δ1, Δ2). Processing spatially filtered, sheared speckle images associated with each of the beams 102A, 102B for each measurement time of interest permits assessing in-plane surface changes. Typically, smoothed phase differences (Δ1, Δ2) are obtained at measurement times t1, t2, . . . as needed.
The controller 116 is also coupled to change irradiation conditions. As shown in
In the example of
Referring to
The optical workpiece-observation system 200 includes a combination of lenses configured to relay an image of the workpiece 222 to a camera 244 while transmitting radiation through an optical-wavefront duplicator device: in the specific embodiment 200 as shown, the radiation field producing the speckle and originating at the workpiece 222 is split into two sibling radiation fields (optical wavefronts, each of which is the duplicate of the other) using a 50/50 or other beamsplitter. Each of the two wavefronts is reflected by a corresponding reflector (either 236B or 236C). If one of these reflectors is tilted about the y-axis, an angular tilt is introduced between the two wavefronts; when the two wavefronts are relayed to the CCD 244, they arrive with a translational separation and a tilt, the amount of each of which depends on the optics design, the tilt angle, and related parameters. The two beams representing these two optical wavefronts are spatially recombined, and the resulting spatial distribution of radiation is acquired and measured with the CCD 244.
A person of skill will readily appreciate that, while the difference between the angles of propagation (alternatively referred to as shear angle) of the duplicated optical wavefronts, generated at the optical-wavefront duplicator 236, remains substantially constant in the object space, the separation between these two duplicated wavefronts expressed in terms of distance (which can be referred to as shear distance) in the object space depends on the distance to the object at hand.
It will be appreciated by a skilled person that one advantage of this optical system is that the aperture stop 232 (disposed between the optical lenses 224, 228 and configured, for example, as a slit at the plane of the pupil of the optical system that extends in a direction perpendicular to the direction of the measurement) practically limits the working f-number of the optical system (and accordingly, the spatial frequency at which the speckle-related radiation propagates) in the direction of shear—in the example of
The aperture stop 232 can be alternatively disposed between the test object 222 and the lens 224. However, in this case the period of the interference fringes, formed by the two duplicates of the wavefront arriving from the object 222 at the detector, is increased severalfold (by up to an order of magnitude) for the same difference between the angles of propagation of the two duplicated wavefronts towards the detector, as compared to the situation depicted in
According to a specific embodiment, the input flux of radiation 220 (dimensioned as a beam with a numerical aperture NA of about 0.19 in one non-limiting implementation) was injected into the optical system 200 at an axial location between the lenses 224 and 228. The presence of the negative-power optical element 224 serves to expand the field of view (up to 15 degrees, half-FOV in object space) to completely fill the test part 222. The aperture stop is dimensioned as a rectangular slit and disposed at a pupil plane to control the size of the speckle and to limit the NA of the whole system 200. The wavefront duplicator 236 is preferably located closer to the image plane (defined at the surface of the single optical/radiation detector 244, to which the radiation is delivered through the combination of positive lenses 246, 248) to allow for the sought-after large difference of angles of propagation between the duplicated wavefronts and a small shear distance. “Large” difference angles produce frequency components that can be fully separated when the image is Fourier transformed.
In choosing the optical characteristics of the components of the optical system of
In one example, the opto-geometrical parameters of the constituent components used in the optical system 200 were as follows: a focal length of lens 224: f224=−25 mm; a width of the (slit) aperture stop 232: W232=3 mm; a focal length of the lens 228: f228=60 mm; a focal length of the lens 246: f246=125 mm; a focal length of the lens 248: f248=80 mm; pixel size of the camera 244: 4-by-4 μm2; beamsplitter 236A: 50/50. The object under test 222, in one embodiment, was configured as a 300 mm diameter wafer and, with the distance of about 800 mm separating the lens 224 from the object 222, the FOV of the radiation incident onto the object was sufficiently large to irradiate all of the surface of the object 222 at once.
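Using only the numbers quoted above, one can check that the stated half-FOV of 15 degrees indeed covers the 300 mm wafer at the stated working distance; this is a sanity-check sketch, not a full first-order layout of the system.

```python
import math

wafer_diameter_mm = 300.0      # 300 mm diameter wafer, per the example above
working_distance_mm = 800.0    # distance from lens 224 to the object, per the example
half_fov_deg = 15.0            # stated half field of view in object space

# Half-angle required to see the wafer edge from the lens position
required_half_angle_deg = math.degrees(
    math.atan((wafer_diameter_mm / 2) / working_distance_mm))

# Roughly 10.6 degrees: the full wafer fits within the 15-degree half-FOV
fits = required_half_angle_deg < half_fov_deg
```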
In other examples, the wavefront-duplicator device 236 can be alternatively implemented as an optical diffractive component configured to generate, in diffraction of radiation incident onto such component, two beams, each defining a corresponding one of the two duplicated wavefronts. For example, the wavefront duplicator device may include a diffraction grating, disposed at a plane of a reflector of the unit 236 and equipped to generate only the zeroth and +1st (or, alternatively, the zeroth and −1st) orders of diffraction. The geometrical parameters of such a diffraction grating are judiciously chosen to keep the fringe pitch constant across the optical field. In a related embodiment, the optical diffractive component can be structured as a reconfigurable spatial light modulator (SLM). In yet another implementation, the wavefront-duplicator portion of the optical system may include a Wollaston prism or Savart plates. Once the shear angle has been chosen, the implemented optical-wavefront-duplicator unit is generally repositionable along the optical axis 240, with its axial position constrained by its properties and by the desired relationship between the shear distance and the shear angle (it cannot be placed arbitrarily along the axis). In any implementation, the exit pupil can be controlled to be substantially at infinity so that the optical system 200 remains telecentric in image space.
To perform in-plane measurements of the workpiece 222 (that is, the deformations of the workpiece occurring, as a result of, for example, chucking the workpiece, in the plane substantially perpendicular to the optical axis of the optical system 200) multiple sources of input radiation (multiple source points) are required.
Referring to
Referring to
To carry out both the in-plane measurement of the distortions of the object surface along the two in-plane axes (both the x-axis and y-axis) and the measurement of the workpiece distortion along the axis transverse to the surface of the object (that is, the z-axis as shown in
Referring to
In practice, either the single-source embodiment of
As discussed above, an optical measurement system as shown in
Only two sources S1, S2 are shown, but in practice, to measure deformations along two mutually transverse axes disposed in the SUT of an object, at least three radiation sources may be used (beams along three axes defining two different planes). Multiple sources of radiation can be chosen to generate/emit corresponding portions of radiation with the same spectral content (for example, at the same wavelength λ1=λ2) or with different spectral content (for example, at different wavelengths λ1≠λ2). When the multiple sources operate at different wavelengths, their operation may be subjected to strobing at the detector or, alternatively, arrangements can be made to perform the measurements simultaneously at these different wavelengths. When the multiple sources operate at substantially equal wavelengths, strobing or time-multiplexing of the measurement is generally preferred.
It is understood that, in operation, the camera 244 and other detectors discussed above register a spatial distribution of radiation that includes interference fringes, caused by interference between the two duplicate wavefronts that reach the detector from the rough surface of the object through the wavefront-duplicator, and that these fringes are heavily modulated by speckle arising from reflection and scatter of the radiation at the surface of the workpiece. Such a registered spatial distribution of radiation at the detector is referred to as a speckle interferogram.
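A speckle interferogram of this kind can be sketched numerically: a random complex field stands in for the speckle wavefront from the rough surface, and its duplicate carries a linear phase ramp representing the tilt. For simplicity the shear distance is set to zero here (only the tilt is modeled), and the grid size, carrier frequency, and speckle bandwidth are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256

# Random complex field standing in for the speckle wavefront, low-pass
# filtered so the speckle grains span several pixels (aperture-limited speckle)
fx, fy = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n))
raw = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
field = np.fft.ifft2(np.fft.fft2(raw) * (np.hypot(fx, fy) < 0.04))

# The duplicate wavefront is the same field with an angular tilt (linear
# phase ramp), i.e. a carrier of 0.25 cycles/pixel between the two copies
carrier = 0.25
x = np.arange(n)
duplicate = field * np.exp(2j * np.pi * carrier * x)[None, :]

# The camera registers intensity: carrier fringes heavily modulated by speckle
interferogram = np.abs(field + duplicate) ** 2

# The fringe carrier shows up as a sideband at the expected spatial frequency
profile = interferogram.mean(axis=0)
spectrum = np.abs(np.fft.rfft(profile))
```

Averaging over rows and Fourier transforming reveals a spectral peak at the carrier frequency, which is the feature exploited by the sideband-filtering analysis discussed below.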
The optical measurement systems as disclosed provide the change in the spatial derivative of the height of the measured surface of the workpiece (dw/dx) between image frames N and N+1. If the height change between two frames is desired, the result must be integrated spatially (either along the x-axis, or along the y-axis, or along both axes). If there are multiple image frame pairs recorded as a function of time, then the signal needs to be integrated with respect to time to determine the surface height change over the entire measurement.
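The spatial-integration step can be sketched as a cumulative trapezoidal integration of the measured derivative map along x, assuming the height change at one edge is known (here it is taken from the synthetic ground truth; the surface shape and grid are illustrative assumptions).

```python
import numpy as np

# Synthetic ground truth: a smooth out-of-plane height change w(x, y)
n = 128
x = np.linspace(0.0, 1.0, n)
y = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, y)
w = 1e-6 * np.sin(np.pi * X) * np.cos(np.pi * Y)   # metres, illustrative shape

# What the instrument delivers: the spatial derivative dw/dx between two frames
dw_dx = np.gradient(w, x, axis=1)

# Recover the height change by cumulative trapezoidal integration along x,
# anchoring the result at the left edge (a boundary value that must be
# known or assumed in a real measurement)
dx = x[1] - x[0]
w_rec = np.cumsum((dw_dx[:, :-1] + dw_dx[:, 1:]) * 0.5 * dx, axis=1)
w_rec = np.hstack([np.zeros((n, 1)), w_rec]) + w[:, :1]
```

The same cumulative step, applied frame-pair by frame-pair, corresponds to the integration with respect to time mentioned above.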
The following discussion illustrates elements of the measurement procedure performed with the embodiment of
A single speckle interferogram, registered by a detector such as the detector 244, would not contain useful information about the distortions of the object. Two measurements of the surface of the workpiece are taken—before an external input is applied to the workpiece, and after the external input is applied to the workpiece, or between any two or more times at which the surface is in different states. Typical examples A and B of such raw speckle images are shown in
First, the FT of the data of a given speckle interferogram is determined. For example,
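The sideband-extraction step can be sketched in the style of Fourier-transform fringe analysis (Takeda's method): transform along the fringe direction, window the positive carrier sideband, shift it to DC, inverse transform, and take the argument. The carrier bin, window width, and the noise-free synthetic fringes below are illustrative assumptions, not parameters from the disclosure.

```python
import numpy as np

def carrier_phase(interferogram, carrier_bin, halfwidth):
    """Extract the wrapped phase of the carrier fringes in one interferogram.

    FT along x, keep +/- halfwidth bins around the positive carrier
    sideband, inverse FT, remove the carrier ramp, take the argument.
    """
    n = interferogram.shape[1]
    spec = np.fft.fft(interferogram, axis=1)
    window = np.zeros(n)
    window[carrier_bin - halfwidth:carrier_bin + halfwidth + 1] = 1.0
    analytic = np.fft.ifft(spec * window[None, :], axis=1)
    # Remove the carrier ramp so only the phase of interest remains
    x = np.arange(n)
    ramp = np.exp(-2j * np.pi * (carrier_bin / n) * x)[None, :]
    return np.angle(analytic * ramp)

# Illustration on noise-free fringes with a known slowly varying phase
n = 256
x = np.arange(n)
phi = 0.5 * np.sin(2 * np.pi * x / n)              # "true" phase, radians
fringes = 1.0 + np.cos(2 * np.pi * 32 * x / n + phi)
phase = carrier_phase(np.tile(fringes, (4, 1)), carrier_bin=32, halfwidth=10)
```

On these clean fringes the recovered phase matches the known phase to numerical precision; on real speckle interferograms the same step yields the heavily modulated wrapped phase that is subsequently smoothed.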
The data representing Δ1 may be appropriately spatially filtered to reduce the impact of the speckle modulation (generally due to phase wrapping) on the interference fringes, for example by smoothing the data representing the sine and/or cosine functions of Δ1 using convolution with a boxcar (or any other known spatial filtering method), represented by the term Low Pass Filter (LPF) in the equation below. Some of the modulation in
(As a result of the related measurement, in which the spatial distribution of radiation 220 used to irradiate the object 222 was substantially more uniform across the object's surface, the resulting spatially-filtered first derivative of the phase difference data is also substantially more uniform—as seen in
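The sine/cosine smoothing route mentioned above can be sketched as follows: boxcar-filter sin Δ and cos Δ separately and re-take the arctangent, which avoids averaging across the 2π wrap discontinuities. The boxcar width and the synthetic wrapped ramp are illustrative assumptions.

```python
import numpy as np

def smooth_wrapped(delta, k=5):
    """Boxcar-smooth a wrapped phase-difference map via its sine and cosine.

    Averaging sin and cos separately (then re-taking the arctangent) is
    insensitive to the 2*pi wrap jumps that corrupt a direct average.
    """
    box = np.ones(k) / k
    def lpf(a):   # separable 2-D boxcar filter
        a = np.apply_along_axis(lambda r: np.convolve(r, box, mode="same"), 1, a)
        return np.apply_along_axis(lambda c: np.convolve(c, box, mode="same"), 0, a)
    return np.arctan2(lpf(np.sin(delta)), lpf(np.cos(delta)))

# A wrapped linear ramp: sin and cos are continuous across the wrap jumps,
# so the smoothed result reproduces the wrapped phase away from the borders
n = 64
ramp = np.linspace(0, 6 * np.pi, n)
delta = np.angle(np.exp(1j * (ramp[None, :] + np.zeros((n, 1)))))
sm = smooth_wrapped(delta, k=5)
```

A direct boxcar on the wrapped map itself would smear each jump from +π to −π into meaningless intermediate values; filtering the sine and cosine sidesteps this entirely.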
Notably, when multiple sources of radiation are used in the optical measurement system (for example, two sources of radiation), both of the results contain information about all types of distortion of the workpiece's surface (an out-of-plane distortion along the z-axis and in-plane distortions along the x-axis and y-axis). This provides a clear operational advantage over conventional interferometer-based methodologies of measuring the properties of the wavefront arriving at the optical detector. Indeed, when a Fizeau interferometer is used for similar measurements, for example, the optical wavefront arriving from the workpiece at the detector has to be compared with the wavefront from a reference surface (whether the reference surface is spatially non-uniform, such as a grating, or flat). Accordingly, the interferogram produced with the traditional methodologies only contains information about the difference between the measured and reference surfaces; therefore, if the reference surface itself is changed or modified in some unknown fashion, that change is likely to be interpreted as a change of the shape of the workpiece-under-test and to introduce an unknown measurement error. In other words, the embodiment of the invention does not require, and is devoid of, any reference surface in addition to the surface to be measured.
Another advantage of the disclosed measurement methodology over other measurement techniques stems from the fact that no workpiece motion is required to carry out phase shifting. This is because the tilt (seen in the Fourier Transform data of
The determination of the approximate value of the out-of-plane distortion (shown below as a derivative value along the x-axis, for example) may be estimated according to
while the in-plane distortion of the workpiece (along the x-axis, in this example) can be assessed according to
Here, θ is the angle at which the workpiece is irradiated, λ is the wavelength of the radiation used to irradiate the workpiece, and Δx is the “shear” distance between the two duplicated wavefronts formed at the duplicator unit such as the unit 236 of
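The disclosure's exact expressions appear in its figures and are not reproduced here. As an illustrative sketch only, the standard symmetric dual-illumination shearography relations from the general literature (an assumption, not a quotation of the disclosure's equations) show how the sum of the two phase-difference maps isolates the out-of-plane derivative while the difference isolates the in-plane derivative; all numeric values below are hypothetical.

```python
import numpy as np

# Assumed standard dual-illumination shearography model (illustrative only):
#   delta1 = (2*pi/lam) * shear * ((1 + cos(theta)) * dw_dx + sin(theta) * du_dx)
#   delta2 = (2*pi/lam) * shear * ((1 + cos(theta)) * dw_dx - sin(theta) * du_dx)
# Sum isolates the out-of-plane derivative dw/dx; difference isolates du/dx.

lam = 633e-9               # wavelength (m), hypothetical
theta = np.deg2rad(30.0)   # illumination angle, hypothetical
shear = 5e-3               # shear distance between duplicated wavefronts (m), hypothetical

def forward(dw_dx, du_dx):
    k = 2 * np.pi / lam * shear
    d1 = k * ((1 + np.cos(theta)) * dw_dx + np.sin(theta) * du_dx)
    d2 = k * ((1 + np.cos(theta)) * dw_dx - np.sin(theta) * du_dx)
    return d1, d2

def invert(d1, d2):
    k = 2 * np.pi / lam * shear
    dw_dx = (d1 + d2) / (2 * k * (1 + np.cos(theta)))   # out-of-plane slope change
    du_dx = (d1 - d2) / (2 * k * np.sin(theta))         # in-plane strain component
    return dw_dx, du_dx

d1, d2 = forward(dw_dx=2e-6, du_dx=-1e-6)
dw, du = invert(d1, d2)
```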
In practice, even considering the speckle pattern optical noise that limits the imaging resolution of a system both in the xy-plane and along the z-axis (defined as shown in
The Table of
are unknown values to be determined, and the remaining values pertain to system geometry and/or calibration values. Variables u, v, w correspond to in-plane changes (x and y directions) and out-of-plane changes (z-direction), wherein the z-direction is perpendicular to the SUT. Making some reasonable assumptions allows simplified calculations using the expressions 2a) through 2d). The penalty for using equations 2a) through 2d) instead of 1a) through 1d) is shown in
If some more limiting assumptions are made about the geometry of the measurement, then equations 3a) through 3d) can be used (compare with equations 1 and 2 above). The penalty for using these approximations in comparison with the exact expressions 1a) through 1d) is illustrated in
The physical meaning of various terms is defined in
It is quite clear that as long as everything in the optical system remains unchanged (including but not limited to the polarization and/or wavelength of radiation used to irradiate the SUT, the irradiation angle(s), the position of the object, the object shape, and the position of the radiation source(s)), the speckle pattern(s) formed at the image/detection plane will remain fixed.
It will also be appreciated that there exist two types of noise limiting the repeatability of the above-described measurements: random noise (such as shot noise in the camera 244 and other random fluctuations), and systematic noise from the fixed speckle pattern. The random noise can be reduced somewhat by averaging multiple frames. It can also be reduced by increasing the light level at the camera, either by increasing source power or by engineering the test surface such that it sends more irradiance into the imaging system shown in
The systematic noise from speckle, however, cannot be reduced by this kind of averaging, since it remains fixed as long as the optical imaging system remains fixed. In general, the phase of speckle is uniformly distributed between 0 and 2π, so if it were possible to change the phase of the speckle in discrete steps from 0 to 2π, then it would be possible to average out the speckle contrast substantially everywhere in the image formed at the detector to the same value, so that the speckle contrast did not cause errors. During the measurements, however, changing the phase of the speckle in this manner remains impractical, if achievable at all. Indeed,
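The claim that uniformly stepped phase averages out the interference contrast can be verified numerically: for two interfering fields, averaging the detected intensity over phase steps spaced uniformly around the circle cancels the cross term exactly, leaving the incoherent sum of intensities. The fields and step count below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.normal(size=100) + 1j * rng.normal(size=100)   # one speckle field sample
b = rng.normal(size=100) + 1j * rng.normal(size=100)   # the interfering field sample

n_steps = 16
phases = 2 * np.pi * np.arange(n_steps) / n_steps

# Average the detected intensity over uniformly stepped relative phase
avg = np.mean([np.abs(a + b * np.exp(1j * p)) ** 2 for p in phases], axis=0)

# The cross (interference) term sums to zero, leaving |a|^2 + |b|^2
incoherent = np.abs(a) ** 2 + np.abs(b) ** 2
```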
In further reference to the equations of
Example A: consider changing the source wavelength (that is, the wavelength of radiation used to irradiate the object 222). This could be done with a discrete set of lasers configured as one particular source of radiation, or with a tunable source such as a laser diode. The phase distribution of the speckle pattern comes from the height differences in the roughness of the test surface as a fraction of the source wavelength. Accordingly, if the source wavelength is changed, the phase distribution will also change because the same surface roughness will now produce a different phase delay, and therefore a different speckle pattern. In advantageous comparison with the situation in which the measurements are carried out at a single, unchanged wavelength, where at least four images have to be captured for the proper assessment of the change in the spatial profile of the object's surface (those before and after the distortion change at more than 2 illumination angles), measurements employing a variable-wavelength source of radiation allow dozens or more sets of data to be collected at various wavelengths. Averaging these results together understandably facilitates the reduction of the phase errors induced by the speckle. The optical system used for the measurements that employ a change of the radiation wavelength should preferably (but not necessarily) be designed to be substantially achromatic to handle the range of wavelengths of radiation produced by the source.
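The wavelength-dependence of the speckle phase can be sketched as follows, assuming a round-trip, near-normal-incidence geometry (an assumption made for illustration): the same surface-height roughness h produces phase 4πh/λ, so two wavelengths yield two different, largely uncorrelated wrapped phase patterns. The roughness span and wavelength pair below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
h = 20e-6 * rng.random(1000)   # surface roughness heights (m), hypothetical span

def speckle_phase(lam):
    # Round-trip phase at near-normal incidence (assumed geometry), wrapped
    return np.angle(np.exp(1j * 4 * np.pi * h / lam))

p1 = speckle_phase(633e-9)   # first source wavelength (hypothetical)
p2 = speckle_phase(660e-9)   # second source wavelength (hypothetical)

# Circular correlation of the two wrapped phase patterns: near 1 for identical
# patterns, small when the patterns have decorrelated
corr = np.abs(np.mean(np.exp(1j * (p1 - p2))))
```

A pattern compared with itself gives a correlation of 1, while the two-wavelength comparison gives a much smaller value, which is the decorrelation that makes wavelength averaging effective.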
Example B: changing the polarization of radiation used to irradiate the object under test (which, in one case, could be implemented prior to sending the light into a polarization maintaining (PM) fiber, see
Example C: Changing a position of the source with respect to the object under test (this corresponds to varying the value(s) of R, x, y and possibly z parameters of the equations of
(Analogous to changing the source position is changing the source angle. This could be done with a rotating wedge prism for high stability. In this alternative scenario, instead of tilting a plane-parallel plate, one can use a wedge-shaped optically-transparent window between the source and the SUT. The beam passes through the window, and the window is caused to rotate about the optical axis of the beam; because of its wedge, the rotating window changes the angle of the beam upon propagation.)
To various degrees, all of these proposed system perturbations will cause a change in the speckle distribution at the detector plane, which in turn will change the noise characteristics observed in the first derivative of the phase (Δ) maps (that is, the quality of a map such as the map of
While simple averaging can be used to reduce measurement errors, other approaches can provide superior results. For example, weighted averaging of multiple Δ maps obtained at various wavelengths can be used. From each Δ map, the relative brightness of the speckle at each location is determined and used to create a weighted average of the Δ maps. For example, if Δ map 1 at a given location has 10% of the maximum power, but Δ map 2 at that location has 90% of the maximum power, 90% of the value from Δ map 2 and 10% of the value from Δ map 1 are combined. Relative speckle brightness can be based on one speckle or a group of pixels in a neighborhood about a location of interest, for example 2, 3, 5, 10, or 15 nearby pixels.
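The brightness-weighted combination described above can be sketched as follows. Combining through sine and cosine (rather than averaging the raw phase values) is an implementation choice added here so that the result respects phase wrapping; it is not stated in the text. The 2x2 maps and brightness values are illustrative, chosen to reproduce the 10%/90% example.

```python
import numpy as np

def weighted_delta(maps, brightness):
    """Combine wrapped phase-difference (delta) maps, weighting each pixel of
    each map by that map's relative speckle brightness at that pixel."""
    maps = np.asarray(maps, dtype=float)
    w = np.asarray(brightness, dtype=float)
    w = w / w.sum(axis=0, keepdims=True)          # per-pixel weights summing to 1
    # Average through sin/cos so the result respects phase wrapping
    z = (w * np.exp(1j * maps)).sum(axis=0)
    return np.angle(z)

# Two 2x2 example maps: at pixel (0, 0) map 1 carries 10% of the power and
# map 2 carries 90%, matching the example in the text; pixel (0, 1) is 50/50
d = [np.full((2, 2), 0.2), np.full((2, 2), 0.4)]
b = [np.array([[0.1, 0.5], [1.0, 1.0]]), np.array([[0.9, 0.5], [1.0, 1.0]])]
combined = weighted_delta(d, b)
```

At the 10%/90% pixel the result sits very close to 0.1·0.2 + 0.9·0.4 = 0.38 rad, and at the 50/50 pixel it is exactly the midpoint 0.3 rad.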
Some notes with respect to the object's surface may be in order. Preferably, the surface is chosen or configured such that a roughness figure (the rms value, for example) is larger than the wavelength of light incident upon it (for example, by a factor of 2 to 5), such that a variety of values of phase difference are present in the light scattered back at the test system. Alternatively or in addition, a surface that does not transmit light (an opaque surface) may be preferred, because light scattered from multiple depth points and returned to the test system all at once will complicate the measurement: in such a case, the coherence among light scattered by different points on the SUT becomes too significant, and a good measurement signal is unlikely to be generated. Measurements performed at multiple wavelengths may be preferred, as such measurements can be carried out at the same time with separation of the results in Fourier space. The reduction of systematic errors in this case is substantially proportional to the square root of the number of wavelengths used to perform the measurement.
With reference to
The exemplary PC 1700 further includes one or more storage devices 1730 such as a hard disk drive for reading from and writing to a hard disk, a magnetic disk drive for reading from or writing to a removable magnetic disk, and an optical disk drive for reading from or writing to a removable optical disk (such as a CD-ROM or other optical media). Such storage devices can be connected to the system bus 1706 by a hard disk drive interface, a magnetic disk drive interface, and an optical drive interface, respectively. The drives and their associated computer-readable media provide nonvolatile storage of computer-readable instructions, data structures, program modules, and other data for the PC 1700. Other types of computer-readable media which can store data that is accessible by a PC, such as magnetic cassettes, flash memory cards, digital video disks, CDs, DVDs, RAMs, ROMs, and the like, may also be used in the exemplary operating environment.
A number of program modules may be stored in the storage devices 1730 including an operating system, one or more application programs, other program modules, and program data. A user may enter commands and information into the PC 1700 through one or more input devices 1740 such as a keyboard and a pointing device such as a mouse. Other input devices may include a digital camera, microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the one or more processing units 1702 through a serial port interface that is coupled to the system bus 1706, but may be connected by other interfaces such as a parallel port, game port, or universal serial bus (USB). A monitor 1746 or other type of display device is also connected to the system bus 1706 via an interface, such as a video adapter. Other peripheral output devices, such as speakers and printers (not shown), may be included.
The PC 1700 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 1760. In some examples, one or more network or communication connections 1750 are included. The remote computer 1760 may be another PC, a server, a router, a network PC, or a peer device or other common network node, and typically includes many or all of the elements described above relative to the PC 1700, although only a memory storage device 1762 has been illustrated in
When used in a LAN networking environment, the PC 1700 is connected to the LAN through a network interface. When used in a WAN networking environment, the PC 1700 typically includes a modem or other means for establishing communications over the WAN, such as the Internet. In a networked environment, program modules depicted relative to the personal computer 1700, or portions thereof, may be stored in the remote memory storage device or other locations on the LAN or WAN. The network connections shown are exemplary, and other means of establishing a communications link between the computers may be used.
As used in this application and in the claims, the singular forms “a,” “an,” and “the” include the plural forms unless the context clearly dictates otherwise. Additionally, the term “includes” means “comprises.” Further, the term “coupled” does not necessarily exclude the presence of intermediate elements between the coupled items.
The systems, apparatus, and methods described herein should not be construed as limiting in any way. Instead, the present disclosure is directed toward all novel and non-obvious features and aspects of the various disclosed embodiments, alone and in various combinations and sub-combinations with one another. The disclosed systems, methods, and apparatus are not limited to any specific aspect or feature or combinations thereof, nor do the disclosed systems, methods, and apparatus require that any one or more specific advantages be present or problems be solved. Any theories of operation are to facilitate explanation, but the disclosed systems, methods, and apparatus are not limited to such theories of operation.
Although the operations of some of the disclosed methods are described in a particular, sequential order for convenient presentation, it should be understood that this manner of description encompasses rearrangement, unless a particular ordering is required by specific language set forth below. For example, operations described sequentially may in some cases be rearranged or performed concurrently. Moreover, for the sake of simplicity, the attached figures may not show the various ways in which the disclosed systems, methods, and apparatus can be used in conjunction with other systems, methods, and apparatus. Additionally, the description sometimes uses terms like “produce” and “provide” to describe the disclosed methods. These terms are high-level abstractions of the actual operations that are performed. The actual operations that correspond to these terms will vary depending on the particular implementation and are readily discernible by one of ordinary skill in the art.
In some examples, values, procedures, or apparatuses are referred to as “lowest,” “best,” “minimum,” or the like. It will be appreciated that such descriptions are intended to indicate that a selection among many functional alternatives can be made, and such selections need not be better, smaller, or otherwise preferable to other selections.
Examples are described with reference to directions indicated as “above,” “below,” “upper,” “lower,” and the like. These terms are used for convenient description, but do not imply any particular spatial orientation.
Terms such as beams, optical beams, and irradiation are used to describe propagating electromagnetic radiation, in most examples at wavelengths between about 200 nm and 2000 nm, although other wavelengths can be used. Such beams are not necessarily collimated, and in typical examples a diverging beam, such as that emitted from an optical fiber, can be used to irradiate an area of a surface. For convenience, beams are referred to as producing or being associated with speckle or speckle patterns. Such speckle patterns result from the summation, in imaging processes, of radiation portions received from target areas having random or quasi-random phase differences.
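The speckle formation described above can be illustrated numerically. The following sketch (all names and parameters are illustrative, not part of the disclosed embodiments) models a rough surface as an aperture with uniformly random phase and propagates it to the far field with a Fourier transform; the detected intensity is a fully developed speckle pattern whose contrast (standard deviation divided by mean) is close to unity:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256

# Circular aperture with random phase at each point, standing in for a
# rough surface illuminated by coherent radiation.
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
aperture = (x**2 + y**2) < (n // 8) ** 2
field = aperture * np.exp(1j * 2 * np.pi * rng.random((n, n)))

# Far-field (imaging) propagation: the detected intensity is the squared
# magnitude of the summed complex contributions, i.e. a speckle pattern.
speckle = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2

# Fully developed speckle has contrast (std/mean) near 1.
contrast = speckle.std() / speckle.mean()
```

This is only a schematic of the random-phase summation; it does not reproduce the shearing or wavefront-duplication geometry of the disclosed system.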
Terms such as wavefront duplicator or divider refer to optical systems that receive an optical beam and produce output beam portions having substantially the same wavefront shape. Examples include Wollaston prisms, Savart plates, diffraction gratings, and beamsplitter plates and cubes. As used herein, an image refers to a viewable image or data arranged so that a viewable image can be produced. Such data can be two-dimensional data of various types, including images of objects, wavefront phases, wavefront amplitudes, interference patterns, and speckle patterns, including speckle interferograms. A “map” generally refers to values of phase or another quantity at a plurality of locations, typically arranged as an image. In some examples, differences between so-called “initial” and “final” surface shapes are determined, but differences at arbitrary times can be obtained, with or without a force or other perturbation applied. A time sequence of such maps can be produced and displayed as a video.
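A difference between an “initial” and a “final” phase map can be sketched as follows (the function name and array shapes are illustrative). Because measured phase is only known modulo 2π, the difference is wrapped back into (−π, π] via the complex exponential:

```python
import numpy as np

def phase_difference_map(phase_initial, phase_final):
    """Wrapped difference between two phase maps, in (-pi, pi].

    Each map is a 2-D array of phase values at detector locations;
    wrapping the difference removes 2*pi ambiguities in each map.
    """
    d = np.asarray(phase_final) - np.asarray(phase_initial)
    return np.angle(np.exp(1j * d))

# Example: a uniform 3*pi change wraps to pi.
diff = phase_difference_map(np.zeros((4, 4)), np.full((4, 4), 3 * np.pi))
```

A time sequence of such difference maps, computed frame by frame, could then be rendered as the video mentioned above.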
Having described and illustrated the principles of the disclosed subject matter with reference to the illustrated embodiments, it will be recognized that the illustrated embodiments can be modified in arrangement and detail without departing from such principles. For instance, elements of the illustrated embodiment shown in software may be implemented in hardware and vice-versa. Also, the technologies from any example can be combined with the technologies described in any one or more of the other examples. In view of the many possible embodiments to which these principles may be applied, it should be recognized that the illustrated embodiments are examples and should not be taken as a limitation on the scope of the disclosure. We therefore claim all subject matter that comes within the scope and spirit of these claims. Alternatives specifically addressed in these sections are merely exemplary and do not constitute all possible alternatives to the embodiments described herein.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2020/032234 | 5/8/2020 | WO | 00
Number | Date | Country
---|---|---
62846113 | May 2019 | US