Physical structures and, in particular, thin structures in aerospace, electronics, and other industries are often subjected to thermo-mechanical stresses and/or other loads that cause the surface of those structures to change. In other words, applied loads cause the surface topography and therefore the curvature of the structures to change. Such applied loads may be the result of a process over time (e.g., the drying of an epoxy or coating on the surface of the structure) or an instantaneously applied thermal or mechanical load (e.g., from another object applying a mechanical force to the structure over a particular region). Oftentimes, it is critical to a particular application to know whether the surface of the structure is flat or otherwise.
In the past, optical metrology tools such as shearing interferometry and moiré-based methods have been employed to quantify surface slopes and curvatures of a structure's surface. However, those methods generally require special conditions and/or destructive or contact testing of the structure to make such a determination. For example, depending on the particular implementation, the method may require Ronchi rulings or grid patterns, monochromatic coherent (e.g., laser) light, transparent structures, and/or coating the structure with a thin metallic film or other substance (e.g., having a specific pattern) in order to deduce the surface geometry.
According to one aspect of the disclosure, a method for determining characteristics of transparent materials comprises illuminating a target structure, wherein the target structure comprises a plurality of light regions and a plurality of dark regions; capturing, by a camera, one or more images of the target structure using light from the target structure that passes through a transparent specimen a first time, reflects off of a reflective surface, and passes through the transparent specimen a second time; analyzing a position of each of the plurality of light regions and each of the plurality of dark regions in the one or more images; and determining, based on the analysis of the position of each of the plurality of light regions and each of the plurality of dark regions in the one or more images, an angular deflection of light through one or more regions of the transparent specimen.
In some embodiments, analyzing the one or more images of the target structure comprises correlating the plurality of light regions and the plurality of dark regions of a first image of the one or more images of the target structure with the plurality of light regions and the plurality of dark regions of a second image of the one or more images.
In some embodiments, the transparent specimen has a reflective coating on a surface of the transparent specimen, wherein the reflective coating is the reflective surface.
In some embodiments, a substrate comprising the reflective surface is adjacent the transparent specimen.
In some embodiments, the method may further include striking the specimen with a striker; wherein capturing the one or more images of the target structure comprises capturing a plurality of images of the target structure while stress waves caused by the striking of the specimen travel through the specimen.
In some embodiments, capturing the one or more images of the target structure further comprises capturing at least one image of the target structure before stress waves caused by the striking of the specimen travel through the specimen, the method further comprising comparing the at least one image of the target structure captured before the stress waves travel through the specimen with the plurality of images of the target structure captured while the stress waves travel through the specimen to determine stresses caused by the stress waves traveling through the specimen.
In some embodiments, the specimen has a crack prior to being struck by the striker.
In some embodiments, capturing the one or more images of the target structure comprises capturing the one or more images of the target structure using light from the target structure that reflects off of a beamsplitter, passes through the transparent specimen a first time, reflects off of a reflective surface, passes through the transparent specimen a second time, and is transmitted through the beamsplitter.
According to one aspect of the disclosure, a system for determining characteristics of transparent materials comprises a transparent specimen comprising a front surface and a back surface; a substrate with a reflective surface, wherein the reflective surface of the substrate is adjacent the back surface of the specimen; a target structure comprising a surface with a plurality of light regions and a plurality of dark regions; and a camera configured to capture an image of the target structure using light from the target structure that passes through the transparent specimen and reflects off of the reflective surface.
In some embodiments, the system may further include a beamsplitter configured to direct light (i) from the target structure to the transparent specimen and (ii) from the transparent specimen to the camera.
In some embodiments, the system may further include a broad spectrum white light source configured to illuminate the target structure, wherein the camera is configured to capture the image using light from the broad spectrum white light source.
In some embodiments, the system may further include a striker configured to apply a force to the specimen.
In some embodiments, the camera is configured to capture the one or more images after the striker applies the force to the specimen.
In some embodiments, the specimen has a crack.
According to one aspect of the disclosure, a system for determining characteristics of transparent materials comprises a transparent specimen comprising a front surface and a back surface, wherein the back surface has a reflective coating; a target structure comprising a surface with a plurality of light regions and a plurality of dark regions; and a camera configured to capture an image of the target structure using light from the target structure that passes through the transparent specimen and reflects off of the reflective surface.
In some embodiments, the system may further include a beamsplitter configured to direct light (i) from the target structure to the transparent specimen and (ii) from the transparent specimen to the camera.
In some embodiments, the system may further include a broad spectrum white light source configured to illuminate the target structure, wherein the camera is configured to capture the image using light from the broad spectrum white light source.
In some embodiments, the system may further include a striker configured to apply a force to the specimen.
In some embodiments, the camera is configured to capture the one or more images after the striker applies the force to the specimen.
In some embodiments, the specimen has a crack.
The concepts described in the present disclosure are illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. For example, the dimensions of some elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements. The detailed description particularly refers to the accompanying figures in which:
While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific exemplary embodiments thereof have been shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure.
References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etcetera, indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
Referring now to
In some embodiments, the system 100 may determine the amount by which the surface 110 of the specimen 102 has deformed relative to a flat surface, whereas in other embodiments, the system 100 may determine the amount by which the surface 110 has deformed relative to some other reference shape's surface (e.g., a previous shape of the specimen 102 prior to undergoing a process). In the illustrative embodiment, the system 100 makes those calculations and/or determinations using an imaging system 200 (see
It should be appreciated that, in the illustrative embodiment, the system 100 determines the geometric characteristics of the reflective surface 110 of the specimen 102 without applying a coating (e.g., grid pattern) or otherwise damaging the specimen 102. As such, in such embodiments, the specimen 102 may be used for its designed purpose subsequent to the system 100 determining its geometric characteristics.
The specimen 102 may be embodied as any structure having a reflective surface 110 capable of reflecting an information carrier 112 (e.g., light) and that is otherwise suitable for use with the system 100 as described herein. As indicated above, the system 100 determines (with the imaging system 200) the geometric characteristics of the reflective surface 110 of the specimen 102. Accordingly, the specimen 102 may be any physical structure having a reflective surface 110 for which one may be interested in determining its shape. For example, in various embodiments, the specimen 102 may be embodied as a silicon wafer, a mirror, a solar panel, an antenna, or another reflective structure. In some embodiments, the reflective surface 110 of the specimen 102 is generally flat when a load (e.g., thermodynamic or mechanical) is not applied to the specimen 102. Additionally, the reflective surface 110 may be nominally reflective relative to the wavelength(s) of light 112 directed from the target structure 104 to the specimen 102 and to the camera 108. For example, in some embodiments, the “roughness” of the reflective surface 110 is less than those wavelengths of light 112.
Although the system 100 is generally described herein as involving reflected light 112, in other embodiments, the information carrier may be an electromagnetic wave 112 of any combination of wavelengths (e.g., a singleton or linear combination of wavelengths) provided that the reflective surface 110 (e.g., by itself or by means of coating it with a reflective material) can reflect that combination of electromagnetic waves 112 and its intensity pattern can be recorded by the camera 108 or another suitable sensor 210. In such embodiments, it should be appreciated that the system 100 includes a target structure 104 having corresponding features that can be reflected in the reflective surface 110 by those particular electromagnetic waves 112 (e.g., at those combinations of wavelengths) and a sensor 210 that is configured to sense and process (e.g., digitize) those electromagnetic waves 112. Additionally, a surface that appears visually to be a matte or non-reflective surface may be reflective at other wavelengths. For example, a black matte finished surface may not reflect “white light” but likely does reflect infrared waves. Accordingly, the system 100 may be configured for use with different electromagnetic waves 112 and corresponding target structures 104 (e.g., depending on the particular specimen 102). For ease of discussion, the system 100 is described herein primarily with respect to light 112 and corresponding light reflections; however, it should be appreciated that the techniques described herein may be similarly employed with different electromagnetic waves 112 and corresponding sensors 210 and target structures 104.
The illustrative target structure 104 includes a plurality of distinguishable feature points that may be captured in an image (by the camera 108) such that the captured image/data may be compared to a reference image/data to determine the displacement of the feature point locations in the captured image relative to their corresponding locations in the reference image. The target structure 104 may be illuminated by a broad-band white light source. In some embodiments, the target structure 104 is embodied as a target plate or subplane that is coated with a random black and white speckle “pattern” on a surface 114 facing the beam splitter 106 (see, for example,
The beam splitter 106 may be embodied as any structure configured to direct light 112 partially from the target structure 104 to the reflective surface 110 of the specimen 102 and from the reflective surface 110 of the specimen 102 to the camera 108. That is, the beam splitter 106 is configured to allow a portion of light 112 (e.g., half) to pass through the beam splitter 106 and to reflect another portion of light 112 off the beam splitter 106. For example, in some embodiments, the beam splitter 106 may be embodied as a partial mirror. The system 100 is configured to permit the camera 108 to capture images of the target surface 114 (e.g., the speckle pattern) reflected in the reflective surface 110 of the specimen 102 by virtue of light 112 passing through the beam splitter 106. In embodiments of the system 100 in which other electromagnetic waves 112 are utilized, the beam splitter 106 is configured to perform similar functions. In the illustrative embodiment, the beam splitter 106 is positioned at a forty-five degree angle relative to each of the reflective surface 110 of the specimen 102 (in an unloaded state) and the surface 114 of the target structure 104 as shown in
The camera 108 may be embodied as any peripheral or integrated device suitable for capturing images, such as a still camera, a video camera, or other device capable of capturing images. For example, in embodiments involving IR light 112, the camera 108 may be embodied as an IR camera or be otherwise configured to capture the IR light 112. Further, in embodiments in which other electromagnetic waves 112 are utilized, the camera 108 may be replaced or supplemented with one or more sensors 210 configured to sense those waves 112. In the illustrative embodiment, the beam splitter 106 is positioned at a forty-five degree angle relative to an optical axis 116 of the camera 108 as shown in
As shown in
It should be appreciated that a system 150 having the configuration shown in
As indicated above, the system 100 may determine various geometric characteristics of the reflective surface 110 of the specimen 102 (e.g., slopes, curvatures, twists, topography, etc.) with the imaging system 200. Referring now to
The processor 202 of the imaging system 200 may be embodied as any type of processor(s) capable of performing the functions described herein. For example, the processor 202 may be embodied as one or more single or multi-core processors, digital signal processors, microcontrollers, or other processors or processing/controlling circuits. Similarly, the memory 206 may be embodied as any type(s) of volatile or non-volatile memory or data storage device capable of performing the functions described herein. The memory 206 stores various data and software used during operation of the imaging system 200, such as operating systems, applications, programs, libraries, and drivers. For instance, the memory 206 may store instructions in the form of a software routine (or routines) which, when executed by the processor 202, allows the imaging system 200 to control operation of the imaging system 200 (e.g., to capture images with the camera 108) and process the images to compute the various surface characteristics of the specimen.
The memory 206 is communicatively coupled to the processor 202 via the I/O subsystem 204, which may be embodied as circuitry and/or components to facilitate I/O operations of the imaging system 200. For example, the I/O subsystem 204 may be embodied as, or otherwise include, memory controller hubs, I/O control hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the I/O operations. In the illustrative embodiment, the I/O subsystem 204 includes an analog-to-digital (“A/D”) converter, or the like, that converts analog signals from the camera 108 and the sensors 210 of the imaging system 200 into digital signals for use by the processor 202 (i.e., to digitize the sensed data). It should be appreciated that, if any one or more of the camera 108 and/or the sensors 210 associated with the imaging system 200 generate a digital output signal, the A/D converter may be bypassed. Similarly, the I/O subsystem 204 may include a digital-to-analog (“D/A”) converter, or the like, that converts digital signals from the processor 202 into analog signals for use by various components of the imaging system 200.
In some embodiments, the data captured by the camera 108 and/or the sensors 210 may be transmitted to a remote computing device (e.g., a cloud computing device) for analysis. In other words, the determination of the geometric characteristics of the reflective surface 110 may be determined by a remote computing device based on the sensed data. Accordingly, the imaging system 200 may include communication circuitry 208, which may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications between the imaging system 200 and remote devices. The communication circuitry 208 may be configured to use any one or more communication technology (e.g., wireless or wired communications) and associated protocols (e.g., Ethernet, Bluetooth®, Wi-Fi®, WiMAX, etc.) to effect such communication.
As indicated above, the system 100 may be configured to use any electromagnetic wave 112 that may be reflected off the reflective surface 110 of the specimen 102. In such embodiments, the system 100 may utilize sensors 210, different from the camera 108, to capture the electromagnetic waves 112. As such, the sensors 210 may be embodied as any type of sensors suitable for capturing such electromagnetic waves 112.
Referring now to
When the reflective surface 110 undergoes an out-of-plane deformation, w, (e.g., a “bulge” due to an applied load), the light ray
As indicated above, the surface 114 of the target structure 104 includes a plurality of feature points that are, for example, stochastically distributed. Accordingly, in the illustrative embodiment, the imaging system 200 applies a digital image correlation (DIC) algorithm to determine the displacements δy and/or δx, which are the relative displacements associated with the deflections by the angles ϕy and ϕx discussed above. More specifically, in order to determine those displacements δy and/or δx, the imaging system 200 compares an image 402 captured by the camera 108 of the target surface 114 during (or after) the applied load (see
It should be appreciated that the feature points represented in the image 402 are shifted relative to the feature points in the reference image 404 due to the deformation of the reflective surface 110 of the specimen 102. In some embodiments, the reference image 404 may be captured by the camera 108 prior to the reflective surface 110 enduring the applied load. In other embodiments, the specimen 102 may be temporarily replaced with, for example, an optical trial of the general shape (e.g., flat) against which the shape of the reflective surface 110 of the specimen 102 is to be compared, and the camera 108 captures the reference image 404 of the target surface 114 reflected in the optical trial rather than the reflective surface 110. In yet other embodiments, the reference image 404 may be otherwise generated or provided (e.g., as a standalone reference image associated with the target surface 114). Of course, in embodiments in which images are not used in the system 100, 150, other suitable reference data may be used.
It should be appreciated that, in some embodiments, each of the two images 402, 404 is stored and/or processed as a two-dimensional array of light intensities. For example, in a monochromatic (i.e., grayscale) digital image, each pixel represents an intensity value of the captured light 112 (e.g., between 0 and 255 for an 8-bit image). In the illustrative embodiment, the imaging system 200 analyzes the images 402, 404 (e.g., using DIC) to determine a distance that each point has been displaced (e.g., in the x and/or y direction(s)) in the captured image 402 relative to the reference image 404. In other words, the imaging system 200 may determine how much each point has been displaced relative to its location with the specimen 102 in its original, undeformed state. In some embodiments, the displacements δy and δx are generated by the imaging system 200 as distance values in a two-dimensional array. As indicated above, such displacements are a result of light 112 incident on the reflective surface 110 of the specimen 102 being deflected proportionally to the curvature or slope of the surface 110.
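By way of illustration only, the following sketch shows one way such a subset-based correlation might be implemented; the function names, subset size, and search range are illustrative assumptions and do not represent the particular DIC algorithm used by the imaging system 200. Practical DIC implementations typically add sub-pixel interpolation and subset shape functions.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equally sized subsets."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def dic_displacements(ref, cur, subset=21, step=10, search=10):
    """Integer-pixel displacement field of `cur` relative to `ref`.

    ref, cur : 2D grayscale arrays (reference and deformed speckle images).
    Returns (dx, dy) arrays, in pixels, on a coarse grid of subset centers.
    """
    half = subset // 2
    rows, cols = ref.shape
    ys = list(range(half + search, rows - half - search, step))
    xs = list(range(half + search, cols - half - search, step))
    dx = np.zeros((len(ys), len(xs)))
    dy = np.zeros((len(ys), len(xs)))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            template = ref[y - half:y + half + 1, x - half:x + half + 1]
            best, best_uv = -2.0, (0, 0)
            # Exhaustive integer search around the reference location.
            for v in range(-search, search + 1):
                for u in range(-search, search + 1):
                    cand = cur[y + v - half:y + v + half + 1,
                               x + u - half:x + u + half + 1]
                    score = zncc(template, cand)
                    if score > best:
                        best, best_uv = score, (u, v)
            dx[i, j], dy[i, j] = best_uv
    return dx, dy
```

Dividing the resulting pixel displacements by the imaging magnification yields the displacements δx and δy in physical units on the target plane.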
In the illustrative embodiment, the imaging system 200 calculates the local surface slopes ∂w/∂x and ∂w/∂y according to ∂w/∂x = ½ tan⁻¹(δx/Δ) and ∂w/∂y = ½ tan⁻¹(δy/Δ). However, as discussed above, the system 100, 150 is configured to determine microscale changes in the slope of the reflective surface 110, and therefore tan⁻¹(δx/Δ) ≈ δx/Δ and tan⁻¹(δy/Δ) ≈ δy/Δ due to the small angles. Accordingly, the imaging system 200 may calculate the local surface slopes ∂w/∂x and ∂w/∂y according to the small-angle approximations based on the displacements δy and δx and the distance Δ between the reflective surface 110 and the target surface 114. That is, the imaging system 200 may calculate the slope ∂w/∂x of the reflective surface 110 in the x-direction as ∂w/∂x ≈ δx/(2Δ) and the slope ∂w/∂y of the reflective surface 110 in the y-direction as ∂w/∂y ≈ δy/(2Δ). Similarly, the deflection angles can be expressed as ϕx ≈ δx/Δ ≈ 2(∂w/∂x) and ϕy ≈ δy/Δ ≈ 2(∂w/∂y).
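Continuing the illustrative sketch above (again an assumption, not the system's actual code), the small-angle relations map the measured displacement fields directly to slope and deflection-angle maps:

```python
def slopes_from_displacements(delta_x, delta_y, Delta):
    """Small-angle r-DGS relations: slope ~ displacement / (2 * Delta).

    delta_x, delta_y : displacement fields on the target plane (same units as Delta).
    Delta            : distance between the reflective surface and the target surface.
    """
    dw_dx = delta_x / (2.0 * Delta)                    # slope in the x-direction
    dw_dy = delta_y / (2.0 * Delta)                    # slope in the y-direction
    phi_x, phi_y = delta_x / Delta, delta_y / Delta    # angular deflections
    return dw_dx, dw_dy, phi_x, phi_y
```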
In the illustrative embodiment, coordinates on a plane coincident with the reflective surface 110 of the specimen 102 (in an unloaded state) are utilized to determine the slopes, but the displacements are determined based on coordinates of the surface 114 of the target structure 104. Accordingly, in such embodiments, the imaging system 200 may utilize a linear mapping of coordinates between the planes of the reflective surface 110 and the target surface 114 to account for this. At least one technique for doing so is described in Periasamy, et al., “A Full-Field Digital Gradient Sensing Method for Evaluating Stress Gradients in Transparent Solids,” 51 Appl. Opt. 2088-97 (2012). Further, in some embodiments, the determined slopes ∂w/∂x and ∂w/∂y may be stored by the imaging system 200 as, for example, two-dimensional arrays of slope values and may be represented visually by contour diagrams 502, 504, respectively, as shown in FIG. 5.
It should be appreciated that the diagrams 502, 504 of
As discussed above, the imaging system 200 may utilize the slopes ∂w/∂x and ∂w/∂y to determine various other geometric characteristics of the reflective surface 110 of the specimen 102. For example, in the illustrative embodiment, the imaging system 200 utilizes numerical differentiation and/or another suitable algorithm or technique to determine various curvatures of the reflective surface 110 (e.g., directional curvatures and/or twist curvatures) based on the determined slopes. It should be appreciated that it may be desirable to determine the curvatures for any number of reasons including, for example, to determine the imposed or residual stress in a reflective substrate (e.g., a silicon wafer) which, if too high, may damage the substrate over time (e.g., by cracking). In particular, the imaging system 200 may calculate the directional curvature ∂²w/∂x² according to ∂²w/∂x² = ∂(∂w/∂x)/∂x, which is represented by the contour diagram 602 of FIG. 6. Similarly, the imaging system 200 may calculate the directional curvature ∂²w/∂y² according to ∂²w/∂y² = ∂(∂w/∂y)/∂y, which is represented by the contour diagram 604 of FIG. 6.
In some embodiments, the imaging system 200 may also utilize cross-partial differentiation to determine the twist curvature of the reflective surface 110 of the specimen 102. Specifically, the imaging system 200 may determine the twist curvature ∂²w/∂x∂y according to ∂²w/∂x∂y = ∂(∂w/∂x)/∂y, which is represented by the contour diagram 702 of FIG. 7, and the twist curvature ∂²w/∂y∂x according to ∂²w/∂y∂x = ∂(∂w/∂y)/∂x, which is represented by the contour diagram 704 of FIG. 7.
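By way of example, the numerical differentiation may be sketched with central finite differences on uniformly sampled slope maps; the helper below is illustrative only and assumes the grid spacings dx and dy are known:

```python
import numpy as np

def curvatures(dw_dx, dw_dy, dx, dy):
    """Directional and twist curvatures from slope maps via central differences.

    dw_dx, dw_dy : 2D slope arrays sampled on a uniform grid.
    dx, dy       : grid spacing in the x- and y-directions.
    Arrays are indexed [row, col] ~ [y, x], so axis 0 is y and axis 1 is x.
    """
    d2w_dx2 = np.gradient(dw_dx, dx, axis=1)    # directional curvature in x
    d2w_dy2 = np.gradient(dw_dy, dy, axis=0)    # directional curvature in y
    d2w_dxdy = np.gradient(dw_dx, dy, axis=0)   # twist curvature d(dw/dx)/dy
    d2w_dydx = np.gradient(dw_dy, dx, axis=1)   # twist curvature d(dw/dy)/dx
    return d2w_dx2, d2w_dy2, d2w_dxdy, d2w_dydx
```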
In the illustrative embodiment, the imaging system 200 also utilizes numerical integration and/or another suitable algorithm or technique to determine the surface topography of the reflective surface 110 (i.e., its shape) based on the determined slopes. For example, the imaging system 200 may integrate the determined x-directional slope ∂w/∂x over x in a region of interest between limits a and b (e.g., the entire surface 110 of the specimen 102) to determine the surface topography of the reflective surface 110. In particular, the imaging system 200 may determine the surface topography as w = ∫ₐᵇ (∂w/∂x) dx. The imaging system 200 may also determine the surface topography of the reflective surface 110 in terms of the y-directional slope ∂w/∂y as w = ∫ₐᵇ (∂w/∂y) dy.
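An illustrative numerical-integration sketch is shown below; it assumes a uniformly sampled slope map and sets w = 0 along the starting edge of the region of interest, whereas a practical implementation might instead perform a least-squares reconstruction using both slope maps:

```python
import numpy as np

def topography_from_slope_x(dw_dx, dx):
    """Integrate the x-directional slope along x to recover w(x, y) up to a constant per row."""
    # Cumulative trapezoidal integration along axis 1 (the x-direction).
    w = np.zeros_like(dw_dx)
    w[:, 1:] = np.cumsum(0.5 * (dw_dx[:, 1:] + dw_dx[:, :-1]) * dx, axis=1)
    return w
```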
As shown in
In some embodiments, the imaging system 200 may determine various geometric characteristics of the reflective surface 110 of the specimen 102 over time (e.g., during a process). For example, in an embodiment, a polymer film (e.g., an epoxy film) may be applied to a silicon wafer, and the imaging system 200 may determine the surface slopes of the silicon wafer at various points in time as the polymer film cures on the wafer. Referring now to FIGS. 9 and 10, a contour diagram 902 represents one of the local slopes of the reflective surface 110 of the silicon wafer at a first point in time (e.g., after 25 minutes from application of the polymer film). Similarly, a contour diagram 904 represents the local slope of the reflective surface 110 of the silicon wafer at a second, later point in time (e.g., 35 minutes). Further, a contour diagram 906 represents the local slope of the reflective surface 110 of the silicon wafer at a third, even later point in time (e.g., 45 minutes), and a contour diagram 908 represents the local slope of the reflective surface 110 at a fourth, even later point in time (e.g., 55 minutes). Similarly, contour diagrams 1002, 1004, 1006, 1008 represent the orthogonal local slope of the reflective surface 110 of the silicon wafer at the first point in time, at the second point in time, at the third point in time, and at the fourth point in time, respectively. It will be appreciated that the slope contours initially appear somewhat irregular but converge to distinct slope values over time, because the magnitudes of the slopes increase with time as the curing epoxy bends the silicon wafer.
Referring now to
The illustrative specimen 102 may be embodied as any structure that is transparent at one or more wavelengths that the camera 108 is sensitive to. Each other illustrative component of
Referring now to
In order to analyze the angular deflection caused by a double-pass through the specimen 1102, an analysis of single-transmission Digital Gradient Sensing (t-DGS) will first be considered. Referring now to
The optical path change, δS, between the original light ray OP and the deflected light ray OQ caused by the deformation of the specimen 1302 can be expressed as: δS(x,y) = 2(n−1)∫₀^(B/2) εzz dz + 2∫₀^(B/2) δn dz. Eq. (1)
The two integrals on the right hand side of the above equation represent the contributions of the normal strain in the thickness direction, εzz, and the change in the refractive index, δn, to the overall optical path, respectively. The Maxwell-Neumann relationship states that the refractive index change is proportional to the local state of normal stresses in the specimen. The strain, εzz, can be related to the normal stresses using the generalized Hooke's law for an isotropic, linear elastic solid. Thus, for plane stress conditions, Eq. (1) reduces to:
δS(x,y)=CσB(σxx+σyy), Eq. (2)
where Cσ = D1 − (ν/E)(n−1) is the elasto-optic constant of the specimen material, D1 is the stress-optic coefficient, ν is Poisson's ratio, E is the elastic modulus, and n is the refractive index of the specimen material.
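For completeness, the reduction from Eq. (1) to Eq. (2) can be sketched as follows, assuming (as is conventional) that δn = D1(σxx + σyy) per the Maxwell-Neumann relationship and that the plane-stress fields are uniform through the thickness B:

```latex
\begin{aligned}
\delta S(x,y) &= 2(n-1)\int_{0}^{B/2}\varepsilon_{zz}\,dz \;+\; 2\int_{0}^{B/2}\delta n\,dz\\
              &= 2(n-1)\Bigl(-\tfrac{\nu}{E}\Bigr)(\sigma_{xx}+\sigma_{yy})\tfrac{B}{2}
                 \;+\; 2\,D_{1}(\sigma_{xx}+\sigma_{yy})\tfrac{B}{2}\\
              &= \Bigl[D_{1}-\tfrac{\nu}{E}(n-1)\Bigr] B\,(\sigma_{xx}+\sigma_{yy})
               \;=\; C_{\sigma} B\,(\sigma_{xx}+\sigma_{yy}).
\end{aligned}
```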
The deflected light ray OQ makes solid angles θx and θy with the x- and y-axes, respectively, as shown in
Referring to the planes defined by points OAQ and OCQ shown in
where R (= √(Δ² + δx² + δy²)) is the distance between points O and Q and Δ is the distance between the specimen 1302 and the target structure 1304. For small angular deflections, for which δx,y ≪ Δ, the two angular deflections of light rays, ϕx and ϕy, are related to the in-plane stress gradients as ϕx ≈ δx/Δ ≈ CσB ∂(σxx+σyy)/∂x and ϕy ≈ δy/Δ ≈ CσB ∂(σxx+σyy)/∂y.
A pin-hole camera mapping function can be used to transfer the coordinates of the target plane to the specimen plane.
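The mapping follows from similar triangles for a pinhole camera viewing two parallel planes; the sketch below is illustrative only, and the distances L (pinhole to specimen plane) and L + Δ (pinhole to target plane) are assumed symbols not defined elsewhere in this description:

```python
def target_to_specimen_coords(x_t, y_t, L, Delta):
    """Scale target-plane coordinates onto the specimen plane for a pinhole camera.

    A point (x_t, y_t) on the target plane, a distance L + Delta from the pinhole,
    lies on the same ray as the point (x_s, y_s) on the specimen plane at distance L.
    """
    scale = L / (L + Delta)
    return x_t * scale, y_t * scale
```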
Referring now to
δSt2-DGS=2(δSt-DGS). Eq. (6)
The two angular deflections of light rays of t2-DGS, (ϕx)t2-DGS and (ϕy)t2-DGS, which are related to the in-plane stress gradients, can then be expressed as (ϕx)t2-DGS ≈ 2CσB ∂(σxx+σyy)/∂x and (ϕy)t2-DGS ≈ 2CσB ∂(σxx+σyy)/∂y.
From the above, it is evident that the sensitivity of t2-DGS is twice that of t-DGS.
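By way of illustration, converting measured deflection maps into maps of the in-plane stress gradients is then a pointwise scaling; the helper below is an illustrative sketch (not the disclosed implementation) and assumes the elasto-optic constant Cσ and thickness B of the specimen material are known:

```python
def stress_gradients_from_deflections(phi_x, phi_y, C_sigma, B, double_pass=True):
    """Gradient maps of (sigma_xx + sigma_yy) from angular deflection fields.

    phi_x, phi_y : angular deflection maps (radians).
    C_sigma      : elasto-optic constant of the specimen material.
    B            : undeformed specimen thickness.
    double_pass  : True for t2-DGS (two transmissions), False for single-pass t-DGS.
    """
    factor = 2.0 if double_pass else 1.0
    dsig_dx = phi_x / (factor * C_sigma * B)   # gradient of (sigma_xx + sigma_yy) in x
    dsig_dy = phi_y / (factor * C_sigma * B)   # gradient of (sigma_xx + sigma_yy) in y
    return dsig_dx, dsig_dy
```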
A 2D ray diagram of the t2-DGS methodology is shown in
Referring back to
In r-DGS, described above in regard to
As noted earlier,
and for plane stress,
and hence
where ν is Poisson's ratio, B is the undeformed thickness of the specimen 1102, and E is the elastic modulus of the specimen 1102. Hence, Eq. (8) can be written as,
A 2D ray diagram of the tr-DGS methodology is shown in
It should be appreciated that the techniques described above can be used to measure dynamic stresses using a high-speed camera. For example, referring now to
Referring now to
The illustrative specimen 102 may be embodied as any structure that is transparent at one or more wavelengths that the camera 108 is sensitive to. Each other illustrative component of
The angular deflections of light rays are shown in
where α is the angle between the y and y′ axes. In the embodiment shown in
as
Hence, {ϕy′, ϕx′} is related to
as
where ΔP is the local gap (distance) between the specimen and target planes at point P. Next,
needs to be transformed to obtain the specimen surface slopes,
using the equations above as:
It should be noted that the experimental setup shown here has the specimen rotated about the x-axis, which leads to x = x′, y = √2 y′, but
Also, it can be concluded that when the setup is rotated about the y-axis instead, the governing equations will be x = √2 x′, y = y′, and
As noted earlier, in this simplified r-DGS method, the angle α between the specimen and target planes is selected to be 45°. Any other convenient angle (0° < α < 90°) is theoretically acceptable, although 45° is relatively straightforward and often more suitable. It is important to note that the coordinates of the specimen plane are utilized for describing the governing equations and the camera 1706 is focused on the target plane during photography. Therefore, a coordinate mapping is needed to transfer the target plane locations to the specimen plane.
While certain illustrative embodiments have been described in detail in the figures and the foregoing description, such an illustration and description is to be considered as exemplary and not restrictive in character, it being understood that only illustrative embodiments have been shown and described and that all changes and modifications that come within the spirit of the disclosure are desired to be protected. There are a plurality of advantages of the present disclosure arising from the various features of the apparatus, systems, and methods described herein. It will be noted that alternative embodiments of the apparatus, systems, and methods of the present disclosure may not include all of the features described yet still benefit from at least some of the advantages of such features. Those of ordinary skill in the art may readily devise their own implementations of the apparatus, systems, and methods that incorporate one or more of the features of the present disclosure.
This application is a continuation-in-part of U.S. patent application Ser. No. 15/675,075, filed on Aug. 11, 2017, and entitled “Determining Geometric Characteristics of Reflective Surfaces,” which is a divisional of U.S. patent application Ser. No. 14/326,856, filed on Jul. 9, 2014, and entitled “Determining Geometric Characteristics of Reflective Surfaces,” which claims the benefit of U.S. Provisional Patent Application Ser. No. 61/844,157, filed on Jul. 9, 2013, and entitled “A Full-Field Digital Gradient Sensing Method for Optically Measuring Slopes and Curvatures of Thin Reflective Structures,” the entire disclosures of which are incorporated herein by reference. This application claims the benefit of U.S. Provisional Patent Application Ser. No. 62/840,866, filed on Apr. 30, 2019, and entitled “Ultrahigh Sensitivity Vision-Based Whole-Field Optical Sensor for Metrology of Transparent Substrates,” by Tippur V. Hareesh and Miao Chengyun, the entire disclosure of which is incorporated herein by reference. This application claims the benefit of U.S. Provisional Patent Application Ser. No. 62/896,328, filed on Sep. 5, 2019, and entitled “Simplified Digital Gradient Sensing Technique,” by Tippur V. Hareesh and Miao Chengyun, the entire disclosure of which is incorporated herein by reference. This invention was made with government support under W911NF-16-1-0093 awarded by the Army Research Office. The government has certain rights in the invention.