The present disclosure relates generally to shearography. More particularly, in one example the present disclosure relates to shearography methods for generating one or more specklegrams with different shear values from a single set of input data.
Shearography, or speckle pattern shearing interferometry as it is sometimes called, is a non-destructive measuring and testing method utilizing coherent light or sound waves to provide information about the quality of different materials. Generally speaking, shearography uses comparative images, known as specklegrams, of a surface or object both with and without a load applied to the target surface or object to create an interference pattern. The interference pattern is created by using a reference image of the test object and shearing that image to create a double image. Superimposing those two images upon each other provides an interference image (specklegram) representing the surface of the test object in a first state, which is typically an unloaded state. A load is then applied to the surface or test object to cause a minor deformation therein. From this, a second specklegram is generated and is compared with the first, producing a shearogram that reveals inconsistencies between the two, which in turn may represent a flaw in the surface or the presence of an unknown or unseen object within or below the surface.
One common use of shearography is to detect buried objects in a substrate, wherein the surface of the substrate is the subject surface for shearography and the comparison between shear images may reveal the presence of an object buried in that substrate. The buried objects produce a time-dependent distortion in the surface of the substrate, which can be induced by causing the buried objects to vibrate, for example, through ensonification by an external sound source. When used in such an application, standard shearography methods generate shearograms with a single fixed shear. The optimal shear length for an object buried within a substrate is often related to the diameter of the object type. Specifically, buried objects tend to show maximum response when detected at a shear length that is approximately one-half of the object's diameter. Thus, when searching for multiple objects of varying sizes, multiple shear lengths are needed to ensure detection of all objects within the substrate, as well as detection of those objects at or near their optimum shear. In current shearography systems, if a shearogram with a different shear is desired, the physical hardware of the system must be adjusted and a second or subsequent data set must be collected. When shearography is used for remote detection of buried objects, the need for multiple shear lengths requires multiple passes over the substrate surface, with varying adjustments made to the hardware for each pass. This limits the capability of current shearography systems, as each desired shear length first requires a physical adjustment to the system hardware and then requires an additional pass over the target surface. In some cases, multiple passes over the target surface are not feasible, as these systems are commonly employed in combat zones and multiple passes over the target surface may pose a threat to the operator of the shearography system. Alternatively, time constraints or cost constraints may also limit the number of passes a shearography system can make over a target surface. Thus, when searching for buried objects of different sizes with a current shearography system, many objects may not be detected at their optimum shear and some objects may not be detected at all.
The present disclosure addresses these and other issues by providing a method and apparatus for performing shearography where the shear length and direction can be set in image processing, thus allowing all shear sizes to be computed and tested from a single data set, which can be collected in a single pass over a test surface or test object. The present process ensures that a single data set can be processed at the optimal shear length for multiple target types, thus reducing or eliminating the chance of missing a target detection while additionally enhancing target shape analysis by allowing the calculation of target response versus shear length and shear direction.
In one aspect, an exemplary embodiment of the present disclosure may provide a method of performing shearography comprising: reflecting a target illumination beam off of a target surface via a transmitter optical component of a shearography system; directing a reference beam from the transmitter optical component to a receiving optical component of the shearography system; receiving a reflected beam from the target surface with the receiving optical component; communicating a data set relating to the reflected beam relative to the reference beam from the receiving optical component to a processor; and processing the data set to generate at least two shear image sets having different shear lengths for each image set.
In another aspect, an exemplary embodiment of the present disclosure may provide a system for detecting objects beneath a target surface, the system comprising: a transmitter optical component operable to generate and reflect a target illumination beam off of the target surface; a receiver optical component operable to receive a reflected beam from the target surface illuminated by the target illumination beam; and a processor in operative communication with the receiver optical component operable to generate at least two shear image sets having different shear lengths for each of the at least two shear image sets from a single data set collected in a single pass of the system over the target surface.
In yet another aspect, an exemplary embodiment of the present disclosure may provide a computer program product including one or more non-transitory machine-readable storage mediums encoding instructions that when executed by one or more processors cause a process to be carried out for generating multiple shear image sets with each image set of the multiple shear image sets having a different shear length, the process comprising: receiving a reflected beam from a target surface that is illuminated by a target illumination beam; collecting a single data set from the received beam relative to a reference beam; and generating at least two shear image sets from the single data set with each image set from the at least two image sets having different shear lengths.
In yet another aspect, another exemplary embodiment of the present disclosure may provide a method of producing a pair of simultaneous artificial specklegram images. The method includes steps of reflecting a target illumination beam off of a target surface via a transmitter optical component of a shearography system; directing a first reference beam from the transmitter optical component to a receiving optical component of the shearography system; directing a second reference beam from the transmitter optical component to the receiving optical component; receiving a reflected beam from the target surface with the receiving optical component; interfering the reflected beam with the first reference beam and the second reference beam; communicating a first data set relating to the pair of simultaneous artificial specklegram images from the receiving optical component to a processor; and processing the first data set to generate the pair of simultaneous artificial specklegram images.
This exemplary embodiment or another exemplary embodiment may further include that the first reference beam is defined at a zero degree phase shift; and wherein the second reference beam is defined at a 90 degree phase shift. This exemplary embodiment or another exemplary embodiment may further include that the target surface is positioned in a military environment that includes hostile objects. This exemplary embodiment or another exemplary embodiment may further include steps of detecting a first object beneath the target surface with a first optimal shear length in one artificial specklegram image of the pair of simultaneous artificial specklegram images; and detecting a second object beneath the target surface with a second optimal shear length in another artificial specklegram image of the pair of simultaneous artificial specklegram images. This exemplary embodiment or another exemplary embodiment may further include a step of calculating a response of at least one of the first and second objects relative to a shear length and a shear direction of the shearography system. This exemplary embodiment or another exemplary embodiment may further include steps of moving the transmitter optical component and the receiver optical component from a first location relative to the target surface to a second location relative to the target surface; reflecting the target illumination beam off of the target surface in the second location; directing the first reference beam from the transmitter optical component to the receiving optical component; directing the second reference beam from the transmitter optical component to the receiving optical component; receiving the reflected beam from the target surface in the second location; interfering the reflected beam with the first reference beam and the second reference beam; communicating a second data set relating to the reflected beam from the receiving optical component to the processor; and processing the second data set to generate a second pair of simultaneous artificial specklegram images. This exemplary embodiment or another exemplary embodiment may further include that the first reference beam is defined at a zero degree phase shift; and wherein the second reference beam is defined at a 90 degree phase shift. This exemplary embodiment or another exemplary embodiment may further include steps of detecting a third object beneath the target surface with a third optimal shear length in one artificial specklegram image of the second pair of simultaneous artificial specklegram images; and detecting a fourth object beneath the target surface with a fourth optimal shear length in another artificial specklegram image of the second pair of simultaneous artificial specklegram images. This exemplary embodiment or another exemplary embodiment may further include a step of calculating a response of at least one of the third and fourth objects relative to a second shear length and a second shear direction of the shearography system. This exemplary embodiment or another exemplary embodiment may further include a step of combining the pair of simultaneous artificial specklegram images and the second pair of simultaneous artificial specklegram images together to form a shearogram.
In yet another aspect, another exemplary embodiment of the present disclosure may provide a computer program product stored on a computer readable medium of a shearography system and executable by a processor of the shearography system that, when executed by the processor, causes a process to be carried out for generating at least one pair of simultaneous artificial specklegram images with each artificial specklegram image having a different shear length. The process includes receiving a reflected beam from a target surface that is illuminated by a target illumination beam, wherein the reflected beam interferes with a first reference beam and a second reference beam; collecting a data set received from a receiving optical component relative to the first and second reference beams; and generating a pair of simultaneous artificial specklegram images from the data set with each artificial specklegram image from the pair of simultaneous artificial specklegram images having different shear lengths.
This exemplary embodiment or another exemplary embodiment may further include that the first reference beam is defined at a zero degree phase shift; and wherein the second reference beam is defined at a 90 degree phase shift. This exemplary embodiment or another exemplary embodiment may further include that the target surface is positioned in a military environment that includes hostile objects. This exemplary embodiment or another exemplary embodiment may further include steps of identifying a first object beneath the target surface with a first optimal shear length in one of the pair of simultaneous artificial specklegram images; and identifying a second object beneath the target surface with a second optimal shear length in another of the pair of simultaneous artificial specklegram images. This exemplary embodiment or another exemplary embodiment may further include a step of calculating a response of at least one of the first and second objects relative to a shear length and a shear direction of the shearography system. This exemplary embodiment or another exemplary embodiment may further include that the process further comprises: moving the target illumination beam from a first location to a second location on the target surface; receiving the reflected beam from the target surface that is illuminated by the target illumination beam, wherein the reflected beam interferes with the first reference beam and the second reference beam; collecting a second data set received from the receiving optical component relative to the first and second reference beams; and generating a second pair of simultaneous artificial specklegram images from the second data set with each artificial specklegram image from the second pair of simultaneous artificial specklegram images having different shear lengths. This exemplary embodiment or another exemplary embodiment may further include that the first reference beam is defined at a zero degree phase shift; and wherein the second reference beam is defined at a 90 degree phase shift. This exemplary embodiment or another exemplary embodiment may further include that the process further comprises: detecting a third object beneath the target surface with a third optimal shear length in one of the second pair of simultaneous artificial specklegram images; and detecting a fourth object beneath the target surface with a fourth optimal shear length in another of the second pair of simultaneous artificial specklegram images. This exemplary embodiment or another exemplary embodiment may further include a step of calculating a response of at least one of the third and fourth objects relative to a second shear length and a second shear direction of the shearography system. This exemplary embodiment or another exemplary embodiment may further include a step of combining the pair of simultaneous artificial specklegram images and the second pair of simultaneous artificial specklegram images together to form a shearogram.
In yet another aspect, another exemplary embodiment of the present disclosure may provide a phase stepping speckle holography system. The system includes a transmitter, a target illumination beam transmitted by the transmitter to a target surface, a first reference beam transmitted by the transmitter, and a second reference beam transmitted by the transmitter, wherein the second reference beam is phase shifted to 90 degrees. The system also includes a receiver that is operable to receive a reflected beam from the target surface that has been illuminated by the target illumination beam and to receive the first reference beam and the second reference beam from the transmitter. The system also includes a processor that is operable to communicate with the receiver. The system also includes a computer program product having instructions stored on a non-transitory machine-readable medium and executable by the processor that, when executed by the processor, cause a process to be carried out for generating at least one pair of simultaneous artificial specklegram images with each artificial specklegram image having a different shear length.
In yet another aspect, another exemplary embodiment of the present disclosure may provide a phase stepping speckle holography system. The system includes a transmitter, a target illumination beam transmitted by the transmitter to a target surface, a first reference beam transmitted by the transmitter, and a second reference beam transmitted by the transmitter, wherein the second reference beam is phase shifted to 90 degrees. The system also includes a receiver that is operable to receive a reflected beam from the target surface that has been illuminated by the target illumination beam and to receive the first reference beam and the second reference beam from the transmitter. The system also includes a processor that is operable to communicate with the receiver. The system also includes a computer program product having instructions stored on a non-transitory machine-readable medium and executable by the processor that, when executed by the processor, cause a process to be carried out for generating at least one pair of simultaneous artificial specklegram images with each artificial specklegram image having a different shear length. The instructions of the computer program product comprise instructions to: receive the reflected beam from the target surface that is illuminated by the target illumination beam, wherein the reflected beam interferes with the first reference beam and the second reference beam; collect a data set received from the receiver relative to the first and second reference beams; and generate the at least one pair of simultaneous artificial specklegram images from the data set with each artificial specklegram image from the at least one pair of simultaneous artificial specklegram images having different shear lengths.
Sample embodiments of the present disclosure are set forth in the following description, are shown in the drawings and are particularly and distinctly pointed out and set forth in the appended claims.
Similar numbers refer to similar parts throughout the drawings.
With reference to
Transmitter optics 12 may include a light input 16 and a first beamsplitter or first splitter 18 having a first splitting surface 20 immersed in a first medium 22. Transmitter optics 12 may further include a mirror 24 and a diverger lens 26. Transmitter optics 12, as discussed further herein, may be operable to direct a light beam 28 into the first splitter 18, where at least a first portion of the light beam 28 may be directed to the mirror 24 and at least a second portion of the light beam 28 may be directed through the diverger lens 26 and out towards a target surface 54 in the form of a target illumination beam 30. The first portion of light beam 28 may reflect off of mirror 24 back through first splitter 18 and into the receiver optics 14 as a reference beam 32. The operation of transmitter optics 12 will be further discussed below.
Receiver optics 14 may include a lens 34, a second splitter 36 having a second splitting surface 38 immersed in a second medium 40, an objective lens 42, an image plane 44, and a beam dump 46. Reference beam 32 may enter into the receiver optics 14 through lens 34 and into second splitter 36, where it may be divided into a first arm 48 and a second arm 50, which may travel out of the second splitter 36 to the image plane 44 and the beam dump 46, respectively. Receiver optics 14 may also receive a reflection of the target illumination beam 30, indicated and shown in the figures as reflected beam 52. Reflected beam 52 may reflect off of the target surface 54 and into the second splitter 36, where it may be recombined with the reference beam 32, with at least a portion of the reflected beam 52 being directed to the image plane 44 while at least another portion of reflected beam 52 may be directed to the beam dump 46.
Receiver optics 14, or more particularly, image plane 44 and/or beam dump 46 may further include one or more outputs 56 connecting to a processor 58 as discussed further herein.
Light input 16 may be a beam generator such as a laser beam generator operable to produce a monochromatic and/or coherent laser light that can be used to measure surface displacements on the target surface 54. According to another aspect, light input 16 may be any device operable to produce a light beam suitable for use in shearography to measure surface displacement and/or surface irregularities as discussed further herein. According to a few non-limiting examples, light input 16 may be a laser transmitter, the aforementioned laser beam generator, or may be an input source from a remote beam generator or beam director assembly utilizing additional optical components such as mirrors, collimators, divergers or the like to generate and/or direct light beam 28 into and through transmitter optics 12 as discussed further herein.
Both first splitter 18 and second splitter 36 may be substantially similar in that they may be beam splitting devices that are commonly used in shearography applications as well as in other beam splitting applications. According to one example, first and second splitters 18, 36 may be cube beam splitters having first splitting surface 20 and second splitting surface 38, respectively, immersed in a medium such as first medium 22 and/or second medium 40, which may be optical glass or another suitable medium as dictated by the desired implementation. Splitting surfaces 20, 38 may be optical components operable to split a beam with at least a portion of the beam being directed 90 degrees from the input direction while at least another portion of the beam may travel straight through splitting surfaces 20, 38. Splitting surfaces 20, 38 may consist of an optical coating or a separate splitting structure embedded in or otherwise immersed in first and/or second medium 22, 40. The main recognized difference between first splitter 18 and second splitter 36 may be their position within system 10, such that first splitter 18 may be disposed within or as part of the transmitter optics 12 while second splitter 36 may be disposed within and/or as part of the receiver optics 14. Second splitter 36 may be oriented within receiver optics 14 to serve as a beam recombining optic, as discussed below. The orientation and operation of first and second splitters 18, 36 will be discussed further herein.
According to one aspect, in place of first splitter 18, second splitter 36, or both first and second splitters 18 and 36, system 10 may be configured to employ a window with an anti-reflective coating to divide the light beam 28 into the target illumination beam 30 and reference beam 32. This implementation may reduce the difference in signal between test and reference wavefronts at the image plane 44.
Mirror 24 may be a standard pellicle or optic mirror, which may reflect the portion of light beam 28 back towards first splitter 18, as discussed further herein. According to one aspect, mirror 24 may be a Piezo mirror or a tilting mirror, which may move or otherwise be movable on a two or three-axis basis as dictated by the desired implementation.
Diverger lens 26 may be a single primary diverger lens in that it may be primarily responsible for all or substantially all of the divergence of light beam 28 as it travels therethrough and spreads to become target illumination beam 30. According to one aspect, diverger lens 26 may be a standard optical component configured and operable to produce the target illumination beam 30 from light beam 28 according to the desired implementation of system 10. According to another aspect, diverger lens 26 may be a spherical optical component configured and operable to produce the target illumination beam 30 from light beam 28.
Lens 34 may similarly be a standard optical component or optical lens and may be formed of any suitable optical quality material including, but not limited to, optical glass. Lens 34 may have beam shaping attributes such that lens 34 may be used to shape reference beam 32 as it passes therethrough while entering into receiver optics 14 as discussed further herein. According to another aspect, lens 34 may be omitted from system 10, such as is shown in
Objective lens 42 may similarly be a standard optical component or optical lens and may be formed of any suitable optical quality material including, but not limited to, optical glass. Objective lens 42 may have beam shaping attributes such that objective lens 42 may be used to shape reflected beam 52 as it passes therethrough before encountering image plane 44 (
Image plane 44 may be an optical detector of any type suitable for the desired implementation and dependent upon the beam properties being measured. According to one aspect, image plane 44 may be a focal plane array (FPA). In applications utilizing an FPA, image plane 44 may have a series of light sensing pixels arranged in a square or rectangular pattern and operable to detect and/or measure beam properties such as wavelength, phase, and the like, of both reference beam 32 and/or reflected beam 52. According to another aspect, image plane 44 may be any other optical detector, such as a camera or the like, as dictated by the desired implementation. Image plane 44 may further include one or more filters operable to filter out specific wavelengths or other specific properties of reference beam 32 and/or reflected beam 52.
Beam dump 46 may be any suitable device designed to absorb the energy of reference beam 32 and/or reflected beam 52. According to one aspect, beam dump 46 may instead be a second detector, such as a second image plane, camera, FPA, or the like, and may be utilized to measure different qualities of reference beam 32 and/or reflected beam 52. Where beam dump 46 is a second detector, it may alternatively be used to measure like qualities of reference beam 32 and/or reflected beam 52 as a backup or redundant measurement, as dictated by the desired implementation.
Depending on the specific application of system 10, target surface 54 may be a ground surface, i.e., the surface forming the ground beneath or otherwise opposite from system 10. According to another aspect, target surface 54 may be the surface of a target object such as a machine or structure being tested using shearography techniques such as those described below. For purposes of consistency and clarity in this disclosure only, target surface 54 will hereinafter be referred to as a ground surface comprising a substrate having one or more objects buried therein, as discussed further below. This exemplary use of system 10 as discussed below is understood to be a representative example of use, and not a limiting use thereof.
Processor 58 may be a computer, a processor, a logic, a logic controller, a series of logics, or the like, which may include or be in further communication with one or more non-transitory storage mediums and may be operable to both encode and/or carry out a set of encoded instructions contained thereon. Processor 58 may control system 10, including transmitter optics 12 and/or receiver optics 14, to dictate or otherwise oversee the operations thereof as discussed further herein. Processor 58 may be in further communication with other systems or processors such as other computers or systems carried alongside or along with system 10 as discussed further below. According to one non-limiting example, where system 10 is carried by a vehicle 62 as discussed below, processor 58 may be in further communication with other systems on the vehicle 62 such as onboard navigational computers and the like.
With reference to
Components of system 10 are illustrated throughout the figures in both specific and generalized configurations and positions; however, it will be understood that each individual component may be placed and/or located at any position within system 10, or within or on vehicle 62. Accordingly, it will be understood that the particulars of the configuration and/or installation of system 10, including as a standalone system or in/on vehicle 62, (or other structure) with which system 10 is carried or otherwise installed, may dictate the positioning and/or placement of individual components. According to another aspect, the components of system 10 may be moved or moveable between multiple positions depending upon the desired use for a specific implementation or as dictated by the particulars of the vehicle 62 being used, as discussed further herein. The specific configuration and placement of system 10 and the components thereof, is therefore considered to be the architecture of system 10 and may be specifically and carefully planned to meet the needs of any particular system 10. The architecture thereof may also be changed or upgraded as needed.
Further, according to one aspect, the processes and systems described herein may be adapted for use with legacy systems, i.e., existing architecture, without a need to change or upgrade such systems. According to another aspect, certain components may be legacy components while other components may be retrofitted for compatibility with legacy components to complete or otherwise enhance system 10, as discussed further herein.
Having thus described the general configurations and components of system 10, the operation and methods of use thereof will now be discussed.
While the operation of system 10 will be described in further detail below, at its most basic, as illustrated in
With reference to
As depicted in
By way of this example, as vehicle 62 moves over the target surface 54, system 10 may generate and record several shear images and data sets relating to one or more specific locations on target surface 54. When used for detection of objects within or under the target surface 54, each position of vehicle 62 may be viewed as a separate set of images and data, and may be analyzed according to the processes herein for the presence of such buried objects. Each individual image set and data set may be generated and recorded utilizing a single fixed shear, as discussed below. In other words, as vehicle 62 moves between positions, no adjustment to the position or configuration of system 10 components is necessary, and multiple passes over the same location are likewise unnecessary, as discussed further herein.
A shear or shearing of an image, at its most basic, is the process of changing the wave front signal to induce interference patterns into the signal, which can give data relating to the surface that reflects that signal. When used in shearography, these interference patterns can indicate what is happening on the target surface 54. Normal operation of shearography equipment typically has a single fixed shear which is set by the angle of mirror 24 relative to the components of the receiver optics 14. When it is desired to obtain a shearogram having a different shear, the hardware itself must be adjusted and a new data set must be collected. In other words, to change the shear, the mirror 24, or more specifically the angle of the mirror 24, must be physically and manually adjusted and the imaging process must be repeated to collect a new data set relating to the target surface 54. Current shearography methods typically generate shearograms utilizing a single fixed shear that is linear and is approximately constant across the image. Where different shear lengths are needed, the hardware must be adjusted for each desired shear length, and a new set of shear images and a new data set must be collected. When performing shearography according to the example above, each location of vehicle 62 may require multiple shear lengths, resulting in multiple image sets and multiple data sets taken at every position of vehicle 62, with manual adjustments to the hardware between the collection of each image and data set.
In the case of remote detection of buried objects, the single fixed shear operation limits single-pass system performance, as discussed above, because buried objects tend to show a higher response level when detected at a shear length that is approximately one-half of the object's diameter. Utilizing systems having a fixed shear length in single-pass operations seeking objects of varying size means many targets will not be interrogated at optimum shear and some targets may not be detected at all. Thus, the tradeoff for this application of shearography is that the benefit of having a single-pass system is often outweighed by the reduced accuracy in object detection. In certain applications, such as the detection of buried threats, including land mines, IEDs and the like, even a single missed object could have devastating consequences. Thus, the reduced accuracy is further magnified in these scenarios, and current single fixed shear length systems often require additional time to perform multiple shearography processes over each location to maintain accuracy. Even in current systems where the shear length can change across the focal plane, such as rotational shearing interferometers, the shear at any given pixel is fixed. Thus, to properly detect objects of varying size, the shear length would still require adjustment at each pixel and multiple image and data sets to maintain accuracy.
Accordingly, the processes described herein may utilize the system 10, as discussed above, to enable single-pass system performance while utilizing back-end image processing techniques to process the data collected in a single pass for the optimal shear length for each class of buried object within target surface 54. These processes may further enhance target surface 54 analysis by allowing the calculation of the object's response relative to the shear length and shear direction of system 10.
In a traditional shearing interferometer using a single fixed shear, the observed intensity at a given pixel may take the standard two-beam interference form, reconstructed here from the definitions that follow:
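$$I = I_1 + I_2 + 2\sqrt{I_1 I_2}\,\cos\theta_{12} \qquad (1.1)$$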
Where I1 and I2 are the intensities from the un-sheared location r and sheared location r+Δr (where Δr is the shear) and θ12 is the phase angle between the rays from r and r+Δr.
When computed for every pixel at time t1, equation ("Eq.") (1.1) represents the intensity of a specklegram image:
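$$S(i,j,t_1) = I_1(i,j) + I_2(i,j) + 2\sqrt{I_1(i,j)\,I_2(i,j)}\,\cos\theta_{12}(i,j,t_1)$$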
Shearograms are computed as a function of the difference between two specklegrams, ΔS (a non-phase-resolved shearogram, for example, is simply |ΔS|²). Dropping pixel indices, and assuming I1 and I2 are constant over the time interval, provides:
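$$\Delta S = S(t_2) - S(t_1) = 2\sqrt{I_1 I_2}\,\bigl[\cos\theta_{12}(t_2) - \cos\theta_{12}(t_1)\bigr] \qquad (1.2)$$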
That is, only the term:
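$$X_{12}(t) = 2\sqrt{I_1 I_2}\,\cos\theta_{12}(t) \qquad (1.3)$$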
in Eq. (1.2) is important for shearogram generation.
The objective, then, is to collect a set of interference images W that will allow the computation of X12 for an arbitrary shear Δr (X12(Δr)). For purposes of the present analysis, the shear Δr can be taken to be restricted to a shift in pixel locations from i,j to i′,j′ on W. That is:
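$$X_{12}(\Delta r) = X(i,j,i',j'), \qquad (i',j') = (i+\Delta i,\; j+\Delta j)$$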
Where Δi=i′−i and Δj=j′−j.
Thus, in this notation Eq. (1.4) becomes:
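$$X(i,j,i',j') = 2\sqrt{I(i,j)\,I(i',j')}\,\cos\theta(i,j,i',j') \qquad (1.6)$$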
Thus, a solution to the post processing shear computation problem is achieved by defining a set of images W from which Eq. (1.6) can be computed.
Below, it is shown that if the interferometric images W are obtained with a speckle holography system that supports global phase stepping of the reference wave, then, for a suitable choice of phase steps, the desired quantity X(i,j,i′,j′) can be computed.
Let W1 be a speckle holographic image with a 0° phase step applied to the reference wave:
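$$W_1(i,j) = I_A(i,j) + I_R + 2\sqrt{I_A(i,j)\,I_R}\,\cos\theta_{AR}(i,j) \qquad (1.7)$$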
Where IA and IR are the reflected and reference intensities and θAR is the phase angle between the reflected and reference waves.
Similarly, let W2 be a speckle holographic image with a 90° phase step applied to the reference wave:
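$$W_2(i,j) = I_A(i,j) + I_R + 2\sqrt{I_A(i,j)\,I_R}\,\cos\bigl(\theta_{AR}(i,j) + 90^\circ\bigr) = I_A(i,j) + I_R - 2\sqrt{I_A(i,j)\,I_R}\,\sin\theta_{AR}(i,j) \qquad (1.8)$$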
It is relatively straightforward to estimate the intensities IA(i,j) and IR from the intensity image W1 (or W2). The reference plane wave intensity IR is constant across the image, and W1(i,j) ≈ IR for any pixel where IA(i,j) ≈ 0. The darkest pixel in W1(i,j) therefore provides an estimate for IR:
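$$\hat{I}_R = \min_{i,j} W_1(i,j) \qquad (1.9)$$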
The reflected intensity IA(i,j) is expected to vary slowly across the surface being imaged; by contrast, the cos(θAR(i,j)) term is expected to oscillate rapidly from pixel to pixel.
Applying a low-pass filter to W1 (for example, a boxcar filter of kernel size k) yields:
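$$\langle W_1 \rangle_k (i,j) \approx I_A(i,j) + I_R \qquad (1.10)$$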
Using Eq. (1.9) provides:
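$$\hat{I}_A(i,j) = \langle W_1 \rangle_k (i,j) - \hat{I}_R \qquad (1.11)$$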
The interference terms in Eqs. (1.7) and (1.8) can now be written in terms of measurable quantities:
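$$Y_1(i,j) \equiv W_1(i,j) - \hat{I}_A(i,j) - \hat{I}_R = 2\sqrt{I_A(i,j)\,I_R}\,\cos\theta_{AR}(i,j) \qquad (1.12)$$

$$Y_2(i,j) \equiv W_2(i,j) - \hat{I}_A(i,j) - \hat{I}_R = -2\sqrt{I_A(i,j)\,I_R}\,\sin\theta_{AR}(i,j) \qquad (1.13)$$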
Where the notation Y1 and Y2 is introduced for convenience.
At this point, all interference terms are given in terms of the reference wave phase θAR(i,j); the objective is to express the interference in terms of θ(i,j,i′,j′). These are related by:
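$$\theta(i,j,i',j') = \theta_{AR}(i,j) - \theta_{AR}(i',j') \qquad (1.14)$$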
Using Eq. (1.14) and simple trigonometric identities:
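$$X(i,j,i',j') = \frac{Y_1(i,j)\,Y_1(i',j') + Y_2(i,j)\,Y_2(i',j')}{2\,\hat{I}_R} \qquad (1.15)$$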
The objective of expressing Eq. (1.6) in terms of measurable quantities is achieved.
The generation of a non-phase-resolved (NPR) shearogram with shear Δi,Δj can be tested as follows using a speckle holography system, such as system 10:
With reference wave at 0° phase step collect image W1(t1)
With reference wave at 90° phase step collect image W2(t1)
With reference wave at 0° phase step collect image W1(t2)
With reference wave at 90° phase step collect image W2(t2)
Here i′=i+Δi and j′=j+Δj.
Using W1(t1) and W2(t1) compute X(i,j,i′,j′,t1) from Eq. (1.15)
Using W1(t2) and W2(t2) compute X(i,j,i′,j′,t2) from Eq. (1.15)
Compute the NPR Shearogram as:
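$$\text{NPR}(i,j) = \bigl|\,X(i,j,i',j',t_2) - X(i,j,i',j',t_1)\,\bigr|^{2}$$

By way of illustration only, the four-image procedure above reduces directly to post-processing code. The following Python sketch is a minimal example, not a prescribed implementation: it assumes the four phase-stepped images are available as floating-point NumPy arrays, stands in for the low-pass filter of Eq. (1.10) with SciPy's uniform_filter (a boxcar), and implements the pixel shift i′ = i + Δi, j′ = j + Δj with np.roll (which wraps at the image edges). The function names and the default kernel size are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def interference_terms(w1, w2, k=15):
    """Estimate Y1 and Y2 (Eqs. (1.12)-(1.13)) from the 0 degree and 90 degree
    phase-stepped images w1 and w2 (floating-point 2-D arrays)."""
    i_r = w1.min()                          # Eq. (1.9): darkest pixel estimates the reference intensity
    i_a = uniform_filter(w1, size=k) - i_r  # Eqs. (1.10)-(1.11): boxcar average removes the oscillating term
    y1 = w1 - i_a - i_r                     # Eq. (1.12):  2*sqrt(IA*IR)*cos(theta_AR)
    y2 = w2 - i_a - i_r                     # Eq. (1.13): -2*sqrt(IA*IR)*sin(theta_AR)
    return y1, y2, i_r

def x_sheared(y1, y2, i_r, di, dj):
    """Eq. (1.15): X(i,j,i',j') for an arbitrary shear (di, dj) in pixels,
    where i' = i + di and j' = j + dj; assumes i_r > 0."""
    y1s = np.roll(y1, shift=(-di, -dj), axis=(0, 1))  # y1s[i, j] = y1[i + di, j + dj]
    y2s = np.roll(y2, shift=(-di, -dj), axis=(0, 1))
    return (y1 * y1s + y2 * y2s) / (2.0 * i_r)

def npr_shearogram(w1_t1, w2_t1, w1_t2, w2_t2, di, dj):
    """Non-phase-resolved shearogram |X(t2) - X(t1)|**2, computed for any
    shear length and direction from the single four-image data set."""
    y1a, y2a, i_r = interference_terms(w1_t1, w2_t1)
    y1b, y2b, _ = interference_terms(w1_t2, w2_t2)
    return np.abs(x_sheared(y1b, y2b, i_r, di, dj)
                  - x_sheared(y1a, y2a, i_r, di, dj)) ** 2
```

Because the shear (di, dj) enters only in the final products, sweeping it over candidate values yields one shearogram per shear length and direction from the same single-pass collection, which is the post-processing capability described throughout this disclosure.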
With reference to
Process 100 is an exemplary process of performing shearography to detect buried threats under a ground surface such as target surface 54. Although described for use for this particular purpose, it will be understood that process 100 may be utilized for any desired shearography applications as well as other similar imaging techniques and applications.
The first step in process 100 is to generate a light beam 28 from the light input 16 and direct the light beam 28 into the first splitter 18. The generation of light beam 28 and direction thereof into first splitter 18 is generally indicated as step 102 in process 100. As beam 28 moves into first splitter 18 and encounters first splitting surface 20, it is split, with approximately half of beam 28 redirected to mirror 24, where it is reflected therefrom towards the receiver optics 14 as reference beam 32, while the remainder of the light passes through the first splitting surface 20 towards diverger lens 26. The splitting of beam 28 is indicated as step 104 in process 100.
Once light beam 28 is split, the portion encountering diverger lens 26 is diverged and projected outwards towards target surface 54 as the target illumination beam 30. This projection of target illumination beam 30 towards target surface 54 is indicated as step 106. As mentioned above, the other portion of beam 28 is split and directed 90 degrees towards mirror 24 before being reflected therefrom towards receiver optics 14 as reference beam 32; the reflection and direction of reference beam 32 is indicated as step 108.
While target illumination beam 30 is travelling towards target surface 54, the reference beam 32 may enter receiver optics 14 and may pass through lens 34 to collimate, resize, or otherwise organize reference beam 32 as it enters into the second splitter 36, which may function as a recombining optical component, as discussed above. The action of lens 34 on reference beam 32 is shown and indicated as step 110 in process 100. However, step 110 is illustrated as a dashed line box as step 110 is optional, depending on the specific configuration of system 10. For example, when using system 10 as illustrated in
Simultaneously or in rapid succession with reference beam 32 entering the second splitter 36, the target illumination beam 30 may reflect off of the target surface 54 and may enter second splitter 36 as reflected beam 52. Second splitter 36, now functioning as a recombining optical component, may recombine or otherwise direct both reference beam 32 and reflected beam 52 into a first arm 48 towards the image plane 44 and a second arm 50 towards the beam dump 46. The recombination and direction of the reference beam 32 and reflected beam 52 is indicated as step 112 in process 100.
Next, indicated as step 114, the first arm 48 of the recombined reference beam 32 and reflected beam 52 may pass through the objective lens 42 before reaching the image plane 44. In this approach, the objective lens 42 may function to recollimate the reference beam onto the image plane 44. This step 114 is illustrated using a dash-dot line pattern box as step 114 may be performed in a different manner, depending on the particular implementation. For example, as discussed with reference to
Once the reference beam 32 and reflected beam 52 are recombined and directed to the image plane 44 and/or beam dump 46, data may be collected via the image plane 44 relating to the interference patterns created by reflecting target illumination beam 30 off of target surface 54. The collection of data is indicated as step 116 in process 100. Once the appropriate data is collected, it may be communicated via output(s) 56 to processor 58 for further processing according to the methods and formulas provided herein. The data may be processed according to the methods and formulas herein to generate multiple shear image sets having different shear lengths for each image set, all from the single data set collected in step 116. The communication of collected data to the processor is indicated as step 118 in process 100 while the processing of that data is indicated as step 120.
Process 100 may be repeated as system 10 is moved across an area to be tested, for example, by vehicle 62 as discussed previously herein. Each data set collected at each specific position may be collected using a fixed shear with no need or necessity of moving any physical components such as mirror 24 within system 10. Instead, the processing of the collected data in step 120 may be done according to the methods and formulas described herein, to allow for extrapolating the optimal shear length for each object class buried or contained within target surface 54 from each single data set at each position. Thus, the accuracy and detectability of objects in this particular implementation may be maintained at a high level while performing a single pass over the target surface 54.
Although described herein as being used for buried object detection, it will be understood that process 100, as well as the processing formulas and methods described herein, may be readily adapted for other similar shearography or imaging applications as needed.
System 210 may include two main components, namely a transmitter optical component, hereinafter referred to as transmitter optics 212, and a receiver optical component, hereinafter referred to as receiver optics 214. It should be understood that the following descriptions and illustrations relate specifically to the transmitter optics 212 and the receiver optics 214 of system 210.
As best seen in
Still referring to
As discussed previously, system 210 includes receiver optics 214 that is operable with the transmitter optics 212. As best seen in
At a first stage, receiver optics 214 is operable to recombine and/or interfere the reflected beam 252 with the first reference beam 232. At this stage, a first device or beam sensor (not illustrated herein) of receiver optics 214 detects and receives the recombination and/or interference of the reflected beam 252 with the first reference beam 232; such detection of this recombination by the first beam sensor is formed into first collected data. At a second stage, receiver optics 214 is also operable to recombine and/or interfere the reflected beam 252 with the second reference beam 232′. At this stage, a second device or beam sensor (not illustrated herein) of receiver optics 214 detects and receives the recombination and/or interference of the reflected beam 252 with the second reference beam 232′; such detection of this recombination by the second beam sensor is formed into second collected data that is different than the first collected data due to the difference in phases between the first reference beam 232 and the second reference beam 232′.
The first and second beam sensors that are included with receiver optics 214 may be any suitable device designed to absorb the energy of first and second reference beams 232, 232′ and/or reflected beam 252. According to one aspect, each of the first and second beam sensors of receiver optics 214 may instead be detectors, such as an image plane, camera, FPA, or the like, and may be utilized to measure different qualities of first and second reference beams 232, 232′ and/or reflected beam 252. Where first and second beam sensors of receiver optics 214 are each a detector, one of the first and second beam sensors of receiver optics 214 may alternatively be used to measure like qualities of first and second reference beams 232, 232′ and/or reflected beam 252 as a backup or redundant measurement, as dictated by the desired implementation. Although this example describes the recombination and/or interference of the reflected beam 252 with the second reference beam 232′ occurring in a second device or beam sensor, it may be possible to perform this operation in the first device or beam sensor should an application specific need or configuration be warranted.
System 210 also includes a processor 258 that is in operative communication with the receiver optics 214 via an output 256 of receiver optics 214 (see
In system 210, processor 258 may be a computer, a processor, a logic, a logic controller, a series of logics, or the like, which may include or be in further communication with one or more non-transitory storage mediums and may be operable to both encode and/or carry out a set of encoded instructions contained thereon. Processor 258 may control system 210, including transmitter optics 212 and/or receiver optics 214, to dictate or otherwise oversee the operations thereof as discussed further herein. Processor 258 may be in further communication with other systems or processors such as other computers or systems carried alongside or along with system 210 as discussed further below. According to one non-limiting example, where system 210 is carried by a vehicle 262 as discussed below, processor 258 may be in further communication with other systems on the vehicle 262 such as onboard navigational computers and the like.
Components of system 210 are illustrated throughout the figures in both specific and generalized configurations and positions; however, it will be understood that each individual component may be placed and/or located at any position within system 210, or within or on vehicle 262. Accordingly, it will be understood that the particulars of the configuration and/or installation of system 210, including as a standalone system or in/on vehicle 262, (or other structure) with which system 210 is carried or otherwise installed, may dictate the positioning and/or placement of individual components. According to another aspect, the components of system 210 may be moved or moveable between multiple positions depending upon the desired use for a specific implementation or as dictated by the particulars of the vehicle 262 being used, as discussed further herein. The specific configuration and placement of system 210 and the components thereof, is therefore considered to be the architecture of system 210 and may be specifically and carefully planned to meet the needs of any particular system 210. The architecture thereof may also be changed or upgraded as needed.
Further, according to one aspect, the processes and systems described herein may be adapted for use with legacy systems, i.e., existing architecture, without a need to change or upgrade such systems. According to another aspect, certain components may be legacy components while other components may be retrofitted for compatibility with legacy components to complete or otherwise enhance system 210, as discussed further herein.
Having thus described the general configurations and components of system 210, the operation and methods of use thereof will now be discussed.
While the operation of system 210 will be described in further detail below, at its most basic, as illustrated in
With reference to
As depicted in
By way of this example, as vehicle 262 moves over the target surface 254, system 210 may generate and record several specklegram images and data sets relating to one or more specific locations on target surface 254. When used for detection of objects 270 within or under the target surface 254, each position of vehicle 262 may be viewed as a separate set of images and data, and may be analyzed according to the processes herein for the presence of such buried objects. Each individual image set and data set may be generated and recorded utilizing a single fixed shear. In other words, as vehicle 262 moves between positions, no adjustment to the position or configuration of system 210 components is necessary, and multiple passes over the same location are likewise unnecessary, as discussed further herein. It should be noted that the induced vibration applied to the buried target or objects 270 creates a deformation in target surface 254 as the vehicle 262 moves over the target surface 254 between two positions. The deformation of the target surface 254 is detected in the shearogram generated by system 210, from which the presence of such buried object 270 is deduced; the generation of the shearogram by system 210 is discussed in greater detail below.
A shear or shearing of an image, at its most basic, is the process of changing the wave front signal to induce interference patterns into the signal, which can give data relating to the surface that reflects that signal, or of interfering an image with a shifted, sheared version of itself. When used in shearography, these interference patterns may provide information as to what is happening on the target surface 254. Normal operation of shearography equipment typically has a single fixed shear which is set by the angle of a mirror of the transmitter optics 212 relative to the components of the receiver optics 214. When it is desired to obtain a shearogram having a different shear, the hardware itself must be adjusted and a new data set must be collected. In other words, to change the shear, the mirror, or more specifically the angle of the mirror, must be physically and manually adjusted and the imaging process must be repeated to collect a new data set relating to the target surface 254. Current shearography methods typically generate shearograms utilizing a single fixed shear that is linear and is approximately constant across the image. Where different shear lengths are needed, the hardware must be adjusted for each desired shear length, and a new set of specklegram images and a new data set must be collected. When performing shearography according to the example above, each location of vehicle 262 may require multiple shear lengths, resulting in multiple image sets and multiple data sets taken at every position of vehicle 262, with manual adjustments to the hardware between the collection of each image and data set.
In the case of remote detection of buried objects, such as objects 270, the single fixed shear operation limits single-pass system performance, as discussed above, because buried objects tend to show a higher response level when detected at a shear length that is approximately one-half of the object's diameter. Utilizing systems having a fixed shear length in single-pass operations seeking objects of varying size means many targets will not be interrogated at optimum shear and some targets may not be detected at all. Thus, the tradeoff for this application of shearography is that the benefit of having a single-pass system is often outweighed by the reduced accuracy in object detection. In certain applications, such as the detection of buried threats, including land mines, IEDs and the like, even a single missed object could have devastating consequences. Thus, the reduced accuracy is further magnified in these scenarios, and current single fixed shear length systems often require additional time to perform multiple shearography processes over each location to maintain accuracy. Even in current systems where the shear length can change across the focal plane, such as rotational shearing interferometers, the shear at any given pixel is fixed. Thus, to properly detect objects of varying size, the shear length would still require adjustment at each pixel and multiple image and data sets to maintain accuracy.
It should be noted that such objects 270 discussed and illustrated herein may be buried and/or positioned underneath or below the target surface 254. In one particular example, and as best seen in
Accordingly, the processes described herein may utilize the system 210, as discussed above, to enable single-pass system performance while utilizing back-end image processing techniques to process the data collected in a single pass for the optimal shear length for each class of buried object 270 within target surface 254. These processes may further enhance target surface 254 analysis by allowing the calculation of the object's response relative to the shear length and shear direction of system 210.
With reference to
Process 300 is an exemplary process of performing shearography to detect buried threats under a ground surface such as target surface 254. Although described for use for this particular purpose, it will be understood that process 300 may be utilized for any desired shearography applications as well as other similar imaging techniques and applications.
The first step in process 300 is to generate a light beam 228 from the transmitter optics 212. The generation of light beam 228 by the transmitter optics 212 is generally indicated as step 302 in process 300. Upon generation of beam 228, the beam 228 is then split and/or divided into three portions based on the configuration of the transmitter optics 212 (e.g., one or more beam splitters equipped to transmitter optics 212). In this step, a first portion of the beam 228 is split inside of the transmitter optics 212 as target illumination beam 230, a second portion of the beam 228 is split inside of the transmitter optics 212 as first reference beam 232, and a third portion of the beam 228 is split inside of the transmitter optics 212 as second reference beam 232′. The splitting of beam 228 is indicated as step 304 in process 300.
Once light beam 228 is split, the transmitter optics 212 is operable to diverge and project the target illumination beam 230 towards target surface 254. This projection of target illumination beam 230 towards target surface 254 is indicated as step 306.
As mentioned above, the second portion of beam 228 is transmitted towards receiver optics 214 as first reference beam 232, where the first reference beam 232 is raw and is free from any phase shift. The transmission and direction of first reference beam 232 is indicated as step 308A. As also mentioned above, the third portion of beam 228 is transmitted towards receiver optics 214 as second reference beam 232′, where the second reference beam 232′ is altered and is set to a 90 degree phase shift. The transmission and direction of second reference beam 232′ is indicated as step 308B. It should be understood that steps 308A, 308B are performed concurrently with one another upon such generation of the light beam 228 by transmitter optics 212.
While target illumination beam 230 is travelling towards target surface 254, the first and second reference beams 232, 232′ may enter receiver optics 214 to collimate, resize, or otherwise organize first and second reference beams 232, 232′ for recombination processes. Simultaneously or in rapid succession with first and second reference beams 232, 232′ entering the receiver optics 214, the target illumination beam 230 may reflect off of the target surface 254 and may enter receiver optics 214 as reflected beam 252. The receiver optics 214 may recombine or otherwise direct both the first reference beam 232 and reflected beam 252 towards the first beam sensor of receiver optics 214; such recombination and direction of the first reference beam 232 and reflected beam 252 is indicated as step 310A in process 300. The receiver optics 214 may also simultaneously or in rapid succession recombine or otherwise direct both the second reference beam 232′ and reflected beam 252 towards the second beam sensor of receiver optics 214; such recombination and direction of the second reference beam 232′ and reflected beam 252 is indicated as step 310B in process 300.
Once the first reference beam 232 and reflected beam 252 are recombined and directed to the first beam sensor of receiver optics 214, data may be collected relating to the interference patterns created by reflecting target illumination beam 230 off of target surface 254. The collection of data or first collected data is indicated as step 312A in process 300. Simultaneously or in rapid succession, data may also be collected relating to the interference patterns created by reflecting target illumination beam 230 off of target surface 254 once the second reference beam 232′ and reflected beam 252 are recombined and directed to the second beam sensor of receiver optics 214. The collection of data or second collected data is indicated as step 312B in process 300. It should be understood that the first collected data relating to the first reference beam 232 and reflected beam 252 is different from the second collected data relating to the second reference beam 232′ and reflected beam 252 given the phase shift of the second reference beam 232′.
Once the appropriate data is collected, it may be communicated via output(s) 256 to processor 258 for further processing according to the methods and formulas provided herein. The data may be processed according to the methods and formulas herein to generate multiple specklegram images having different shear lengths, all from the single data set collected in steps 312A, 312B. The communication of the first collected data to the processor 258 relating to the first reference beam 232 and reflected beam 252 is indicated as step 314A in process 300, while the processing of the first collected data is indicated as step 316. Similarly, communication of the second collected data to the processor 258 relating to the second reference beam 232′ and reflected beam 252 is indicated as step 314B in process 300, while the processing of the second collected data is likewise indicated as step 316. Upon completion of step 316, a first artificial specklegram image of target surface 254 is generated by processor 258 relative to the first collected data. Similarly, upon completion of step 316, a second artificial specklegram image of target surface 254 is generated by processor 258 relative to the second collected data. As such, a first pair of simultaneous artificial specklegram or shear images is generated and is used with another or second pair of artificial specklegram or shear images generated at a second time interval to acquire any shear length needed, which is discussed in greater detail below.
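As one non-limiting sketch of how multiple shear lengths may be obtained from the single collected data set, the recovered complex field may be sheared numerically in software, with the shear distance chosen after collection rather than by adjusting hardware. The function name and pixel offsets below are hypothetical and purely illustrative.

```python
import numpy as np

def specklegram_at_shear(field: np.ndarray, shear_px: int) -> np.ndarray:
    """Interfere a complex field with a copy of itself shifted by
    shear_px pixels along one axis, yielding a sheared intensity image."""
    sheared = np.roll(field, shear_px, axis=1)
    return np.abs(field + sheared) ** 2

# A single recorded complex field can yield specklegrams at any number of
# software-selected shear lengths, with no change to the optical hardware.
rng = np.random.default_rng(1)
field = np.exp(1j * rng.uniform(-np.pi, np.pi, size=(64, 64)))
specklegrams = {s: specklegram_at_shear(field, s) for s in (2, 4, 8, 16)}
```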
Process 300 may be repeated as system 210 is moved across an area to be tested, for example, by vehicle 262 as discussed previously herein. As discussed previously, the first and second collected data were taken at a first time interval at a first position (denoted by vehicle 262 shown in dashed lines in the figures).
Each data set collected at each specific position may be collected using a fixed shear, with no need to move any physical components within system 210. Instead, the processing of the collected data in step 316 may be done according to the methods and formulas described herein to allow for extrapolating the optimal shear length for each object class buried or contained within target surface 254 from each single data set at each position. Thus, the accuracy and detectability of objects in this particular implementation may be maintained at a high level while performing a single pass over the target surface 254.
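As a purely arithmetic illustration of the foregoing, where each object class of interest has a characteristic diameter, a shear length of roughly one-half that diameter may be targeted for each class from the same data set. The class names and diameters below are hypothetical.

```python
# Hypothetical object classes and characteristic diameters, expressed in
# pixels at the imaging resolution; optimal shear is about half the diameter.
object_diameters_px = {"small": 8, "medium": 20, "large": 48}

optimal_shears_px = {name: d // 2 for name, d in object_diameters_px.items()}
# {'small': 4, 'medium': 10, 'large': 24} -- all derivable from one data set
```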
Although described herein as used for buried object detection, it will be understood that process 300, as well as the processing formulas and methods described herein, may be readily adapted for other similar shearography or imaging applications as needed.
The system of the present disclosure may additionally include one or more sensors to sense or gather data pertaining to the surrounding environment or operation of the system. Some exemplary sensors capable of being electronically coupled with the system of the present disclosure (either directly connected to the system of the present disclosure or remotely connected thereto) may include, but are not limited to: accelerometers sensing accelerations experienced during rotation, translation, velocity/speed, location traveled, and elevation gained; gyroscopes sensing angular orientation and/or rotation; altimeters sensing barometric pressure, altitude change, terrain climbed, local pressure changes, and submersion in liquid; impellers measuring the amount of fluid passing thereby; global positioning sensors sensing location, elevation, distance traveled, and velocity/speed; audio sensors sensing local environmental sound levels or voice detection; photo/light sensors sensing ambient light intensity, day/night conditions, and UV exposure; TV/IR sensors sensing light wavelength; temperature sensors sensing machine or motor temperature, ambient air temperature, and environmental temperature; radar sensors; lidar sensors; ultrasonic sensors; magnetic sensors; image sensors; and moisture sensors sensing surrounding moisture levels.
If sensors are utilized to gather data relating to the system of the present disclosure, then the sensed data may be evaluated and processed with artificial intelligence (AI). Analyzing sensor data with artificial intelligence involves extracting meaningful insights and patterns from raw sensor data to produce refined and actionable results. Raw data is gathered from various sensors, for example those identified herein or others, capturing relevant information based on the intended analysis. This data is then preprocessed to clean, organize, and structure it for effective analysis. Features that represent key characteristics or attributes of the data are extracted. These features serve as inputs for AI algorithms, encapsulating relevant information essential for the analysis. A suitable AI model, such as a machine learning or deep learning model (whether supervised or unsupervised), is chosen based on the nature of the data and the desired analysis outcome. The model is then trained using labeled or unlabeled data to learn the underlying patterns and relationships. The model is fine-tuned and optimized to enhance its performance and accuracy; this process involves adjusting parameters, architectures, and algorithms to achieve better results. The trained model is used to make predictions or inferences on new, unseen data. The model processes the extracted features and generates refined output based on the patterns it has learned during training. The results produced by the AI model are refined through post-processing techniques to ensure accuracy and relevance. These refined results are then interpreted to extract meaningful insights and derive actionable conclusions. Feedback from the refined results is used to improve the AI model iteratively; this involves incorporating new data, adjusting the model, and enhancing the analysis based on real-world feedback and evolving requirements. Further, AI results can be used to alter the operation of the device, assembly, or system of the present disclosure based on feedback. For example, AI feedback can be used to improve the efficiency of the device, assembly, or system of the present disclosure by responding to predicted changes in the environment, or predicted changes to the device, assembly, or system itself, more quickly than if such changes were only sensed by one or more of the sensors.
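By way of a non-limiting sketch of the pipeline described above, the following example preprocesses hypothetical sensor readings, trains a supervised model, and makes predictions on unseen data. The data, labels, and model choice are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler

# Hypothetical raw sensor readings: rows are samples, columns are channels.
rng = np.random.default_rng(42)
raw = rng.normal(size=(200, 6))
labels = (raw[:, 0] + raw[:, 3] > 0).astype(int)  # stand-in ground truth

# Preprocess: clean, organize, and structure the data (here, standardize).
features = StandardScaler().fit_transform(raw)

# Train a suitable supervised model on the extracted features.
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(features[:150], labels[:150])

# Inference on new, unseen data; refined results could feed back into
# operation of the device, assembly, or system.
predictions = model.predict(features[150:])
```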
A sensor model may be employed, once trained, in the system of the present disclosure. In one embodiment, the system of the present disclosure can be used to teach a sensor model to predict sensor data for a specific scenario. Alternatively, sensor models can be utilized to generate the data to train the AI. The sensor model can be trained for any type of sensor, such as those types of sensors described above, and/or other sensor types. The elements described herein may be implemented as discrete or distributed components in any suitable combination and location. The various functions described herein may be conducted by hardware, firmware, and/or software. For example, a processor may perform various functions by executing instructions stored in memory.
The AI model and/or sensor model can include a deep neural network (DNN), convolutional neural network (CNN), another neural network (NN) or the like and can support generative learning. For example, the sensor model can include a generative adversarial network (GAN), a variational autoencoder (VAE), and/or another type of DNN, CNN, NN or machine learning model (e.g., natural language processing (NLP)). Generally, the sensor model can accept some encoded representation of a scene as input using any number of data structures and/or channels (e.g., concatenated vectors, matrices, tensors, images, etc.).
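As a non-limiting sketch only, a minimal convolutional sensor model might accept an encoded scene as a multi-channel tensor and output predicted sensor readings; the layer sizes and channel counts below are assumptions, not a prescribed architecture.

```python
import torch
import torch.nn as nn

class SensorModel(nn.Module):
    """Hypothetical minimal CNN mapping an encoded scene representation
    (a multi-channel, image-like tensor) to predicted sensor readings."""
    def __init__(self, in_channels: int = 3, out_dim: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, out_dim)

    def forward(self, scene: torch.Tensor) -> torch.Tensor:
        x = self.features(scene).flatten(1)
        return self.head(x)

# Example: one encoded scene (3 channels, 32x32) -> 8 predicted readings.
model = SensorModel()
predicted = model(torch.zeros(1, 3, 32, 32))
```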
In a particular embodiment, the system of the present disclosure can use the sensors to acquire a representation of the real-world environment (e.g., a physical environment) at a given point in time. Data from these sensors may be used to generate a representation of a scene or scenario, which may then be used to teach a sensor model. For example, a representation of a scene can be derived from sensor data, properties of objects in the scene or surrounding environment such as positions or dimensions (e.g., depth maps), classification data identifying objects in the scene or surrounding environment, properties or classification data of components of the system of the present disclosure, or some combination thereof. Generally, the sensor model learns to predict sensor data from a representation of the scene, environment or operation of the system of the present disclosure.
The sensor model architecture can be selected to fit the shape of the desired input and output data. Examples of architectures (e.g., DNNs) include, but are not limited to, perceptron, feed-forward, radial basis, deep feed-forward, recurrent, long/short term memory, gated recurrent unit, autoencoder, variational autoencoder, convolutional, deconvolutional, and generative adversarial. Some DNN architectures, such as a GAN, can include a convolutional neural network (CNN) that accepts and evaluates an input image and may include multiple input channels, which may be used to accept and evaluate multiple input images and/or input vectors.
In one embodiment, training data for the sensor model may be generated using real-world (e.g., physical environment) data. To collect real-world training data, the system of the present disclosure may collect sensor data by fusing data from multiple sensors as the vehicle traverses a real-world environment. The sensors of the system of the present disclosure may include, for example, one or more global navigation satellite system sensors (e.g., Global Positioning System (GPS) sensors), RADAR sensors, ultrasonic sensors, LIDAR sensors, inertial measurement unit (IMU) sensors (e.g., accelerometer(s), gyroscope(s), magnetic compass(es), magnetometer(s), etc.), ego-motion sensors, microphones, stereo cameras, wide-view cameras (e.g., fisheye cameras), infrared cameras, surround cameras (e.g., 360 degree cameras), long-range and/or mid-range cameras, speed sensors (e.g., for measuring the speed of the vehicle), vibration sensors, steering sensors, brake sensors (e.g., as part of the brake sensor system), and/or other sensor types.
In another embodiment, training data for the sensor model is generated based on simulated or virtual environments. The training data may then be used to train the sensor model for use in real-world autonomous applications, e.g., to control the operation of the system of the present disclosure. The training data may be derived to fit the shape of the input and output data for the sensor model, which may depend on the architecture of the sensor model. For example, sensor data may be used to encode an input scene, input parameters, and/or ground truth sensor data using different data structures and/or channels (e.g., concatenated vectors, matrices, tensors, images, etc.).
The system of the present disclosure may include hardware, software and/or firmware responsible for managing the sensor data generated by the sensors. The autonomous hardware, software, and/or firmware being executed may manage different environments using one or more maps (e.g., 3D maps), positioning component(s), and the like. The autonomous hardware, software, and/or firmware may also include components to plan, control, and generally manage the system of the present disclosure. In one example, the autonomous hardware, software, and/or firmware can be installed in and used to control the system of the present disclosure through the environment based on the sensor data, one or more machine learning models (e.g., neural networks), and the like. A training system may use the training data to train the sensor model to predict virtual sensor data for a given scene, environment, or operation of a component.
The training system can include one or more servers (e.g., a graphics processing unit server) and data stores and may use a cloud-based deep learning infrastructure with artificial intelligence to analyze the sensor data received from the system of the present disclosure and/or stored in the data store. The training system can also incorporate or train up-to-date, real-time neural networks (and/or other machine learning models) for one or more sensor models.
The system of the present disclosure may include wireless communication logic coupled to sensors on the system. The sensors gather data and provide the data to the wireless communication logic. Then, the wireless communication logic may transmit the data gathered from the sensors to a remote device. Thus, the wireless communication logic may be part of a broader communication system, in which one or several devices, assemblies, or systems of the present disclosure may be networked together to report alerts and, more generally, to be accessed and controlled remotely. Depending on the types of transceivers installed in the device, assembly, or system of the present disclosure, the system may use a variety of protocols (e.g., Wi-Fi®, ZigBee®, MIWI, BLUETOOTH®) for communication. In one example, each of the devices, assemblies, or systems of the present disclosure may have its own IP address and may communicate directly with a router or gateway. This would typically be the case if the communication protocol is Wi-Fi®. (Wi-Fi® is a registered trademark of Wi-Fi Alliance of Austin, TX, USA; ZigBee® is a registered trademark of ZigBee Alliance of Davis, CA, USA; and BLUETOOTH® is a registered trademark of Bluetooth Sig, Inc. of Kirkland, WA, USA).
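As a non-limiting sketch of the reporting path described above, sensor readings might be serialized and pushed to a remote device over a TCP/IP connection such as a Wi-Fi® link would carry; the host, port, and payload below are assumptions for illustration.

```python
import json
import socket

def report_sensor_data(readings: dict, host: str, port: int) -> None:
    """Hypothetical sketch: serialize sensor readings and transmit them
    to a remote device over a TCP/IP connection."""
    payload = json.dumps(readings).encode("utf-8")
    with socket.create_connection((host, port), timeout=5.0) as conn:
        conn.sendall(payload)

# Example usage with stand-in values; the address and port are assumptions.
# report_sensor_data({"temp_c": 21.4, "alt_m": 102.0}, "192.168.1.10", 5000)
```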
In another example, a point-to-point communication protocol like MiWi or ZigBee® is used. One or more of the systems of the present disclosure may serve as a repeater, or the systems of the present disclosure may be connected together in a mesh network to relay signals from one system to the next. However, the individual systems in this scheme typically would not have IP addresses of their own. Instead, one or more of the systems of the present disclosure communicates with a repeater that does have an IP address, or another type of address, identifier, or credential needed to communicate with an outside network. The repeater communicates with the router or gateway.
In either communication scheme, the router or gateway communicates with a communication network, such as the Internet, although in some embodiments, the communication network may be a private network that uses transmission control protocol/internet protocol (TCP/IP) and other common Internet protocols but does not interface with the broader Internet, or does so only selectively through a firewall.
The system also allows individuals to access the system of the present disclosure for configuration and diagnostic purposes. In that case, the individual processors or microcontrollers of the system of the present disclosure may be configured to act as Web servers that use a protocol like hypertext transfer protocol (HTTP) to provide an online interface that can be used to configure the system. In some embodiments, the interface may be used to configure several systems of the present disclosure at once. For example, if several systems are of the same model and are similarly situated in the same location, it may not be necessary to configure the systems individually. Instead, an individual may provide configuration information, including baseline operational parameters, for several systems at once.
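As a non-limiting sketch of such an online configuration interface, a processor acting as a Web server might expose baseline operational parameters over HTTP; the parameters and port below are hypothetical.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical baseline operational parameters exposed for configuration.
CONFIG = {"shear_lengths_px": [2, 4, 8], "report_interval_s": 10}

class ConfigHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(CONFIG).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# Illustrative only: serve the configuration on port 8080.
# HTTPServer(("", 8080), ConfigHandler).serve_forever()
```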
Various inventive concepts may be embodied as one or more methods, of which an example has been provided. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.
Any flowchart and/or block diagrams in the Figures illustrate some exemplary architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
While various inventive embodiments have been described and illustrated herein, those of ordinary skill in the art will readily envision a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein, and each of such variations and/or modifications is deemed to be within the scope of the inventive embodiments described herein. More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the inventive teachings is/are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific inventive embodiments described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, inventive embodiments may be practiced otherwise than as specifically described and claimed. Inventive embodiments of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the inventive scope of the present disclosure.
The above-described embodiments can be implemented in any of numerous ways. For example, embodiments of technology disclosed herein may be implemented using hardware, software, firmware or a combination thereof. When implemented in software, the software code or instructions can be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers or in firmware. Furthermore, the instructions or software code can be stored in at least one non-transitory computer readable storage medium.
Also, a computer or smartphone utilized to execute the software code or instructions via its processors may have one or more input and output devices. These devices can be used, among other things, to present a user interface. Examples of output devices that can be used to provide a user interface include printers or display screens for visual presentation of output and speakers or other sound generating devices for audible presentation of output. Examples of input devices that can be used for a user interface include keyboards and pointing devices, such as mice, touch pads, and digitizing tablets. As another example, a computer may receive input information through speech recognition or in other audible formats.
Such computers or smartphones may be interconnected by one or more networks in any suitable form, including a local area network or a wide area network, such as an enterprise network, an intelligent network (IN), or the Internet. Such networks may be based on any suitable technology, may operate according to any suitable protocol, and may include wireless networks, wired networks, or fiber optic networks.
The various methods or processes outlined herein may be coded as software/instructions that are executable on one or more processors that employ any one of a variety of operating systems or platforms. Additionally, such software may be written using any of a number of suitable programming languages and/or programming or scripting tools, and may be compiled as executable machine language code or intermediate code that is executed on a framework or virtual machine.
In this respect, various inventive concepts may be embodied as a computer readable storage medium (or multiple computer readable storage media) (e.g., a computer memory, one or more floppy discs, compact discs, optical discs, magnetic tapes, flash memories, USB flash drives, SD cards, circuit configurations in Field Programmable Gate Arrays or other semiconductor devices, or other non-transitory medium or tangible computer storage medium) encoded with one or more programs that, when executed on one or more computers or other processors, perform methods that implement the various embodiments of the disclosure discussed above. The computer readable medium or media can be transportable, such that the program or programs stored thereon can be loaded onto one or more different computers or other processors to implement various aspects of the present disclosure as discussed above.
The terms “program” or “software” or “instructions” are used herein in a generic sense to refer to any type of computer code or set of computer-executable instructions that can be employed to program a computer or other processor to implement various aspects of embodiments as discussed above. Additionally, it should be appreciated that according to one aspect, one or more computer programs that when executed perform methods of the present disclosure need not reside on a single computer or processor, but may be distributed in a modular fashion amongst a number of different computers or processors to implement various aspects of the present disclosure.
Computer-executable instructions may be in many forms, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments. As such, one aspect or embodiment of the present disclosure may be a computer program product including at least one non-transitory computer readable storage medium in operative communication with a processor, the storage medium having instructions stored thereon that, when executed by the processor, implement a method or process described herein, wherein the instructions comprise the steps to perform the method(s) or process(es) detailed herein.
Also, data structures may be stored in computer-readable media in any suitable form. For simplicity of illustration, data structures may be shown to have fields that are related through location in the data structure. Such relationships may likewise be achieved by assigning storage for the fields with locations in a computer-readable medium that convey relationship between the fields. However, any suitable mechanism may be used to establish a relationship between information in fields of a data structure, including through the use of pointers, tags or other mechanisms that establish relationship between data elements.
All definitions, as defined and used herein, should be understood to control over dictionary definitions, definitions in documents incorporated by reference, and/or ordinary meanings of the defined terms.
“Logic”, as used herein, includes but is not limited to hardware, firmware, software, and/or combinations of each to perform a function(s) or an action(s), and/or to cause a function or action from another logic, method, and/or system. For example, based on a desired application or needs, logic may include a software controlled microprocessor, discrete logic like a processor (e.g., microprocessor), an application specific integrated circuit (ASIC), a programmed logic device, a memory device containing instructions, an electric device having a memory, or the like. Logic may include one or more gates, combinations of gates, or other circuit components. Logic may also be fully embodied as software. Where multiple logics are described, it may be possible to incorporate the multiple logics into one physical logic. Similarly, where a single logic is described, it may be possible to distribute that single logic between multiple physical logics.
Furthermore, the logic(s) presented herein for accomplishing various methods of this system may be directed towards improvements in existing computer-centric or internet-centric technology that may not have previous analog versions. The logic(s) may provide specific functionality directly related to structure that addresses and resolves some problems identified herein. The logic(s) may also provide significantly more advantages to solve these problems by providing an exemplary inventive concept as specific logic structure and concordant functionality of the method and system. Furthermore, the logic(s) may also provide specific computer implemented rules that improve on existing technological processes. The logic(s) provided herein extend beyond merely gathering data, analyzing the information, and displaying the results. Further, portions or all of the present disclosure may rely on underlying equations that are derived from the specific arrangement of the equipment or components as recited herein. Thus, portions of the present disclosure as they relate to the specific arrangement of the components are not directed to abstract ideas. Furthermore, the present disclosure and the appended claims present teachings that involve more than performance of well-understood, routine, and conventional activities previously known to the industry. In some of the methods or processes of the present disclosure, which may incorporate some aspects of natural phenomena, the process or method steps are additional features that are new and useful.
The articles “a” and “an,” as used herein in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean “at least one.” The phrase “and/or,” as used herein in the specification and in the claims (if at all), should be understood to mean “either or both” of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with “and/or” should be construed in the same fashion, i.e., “one or more” of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the “and/or” clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to “A and/or B”, when used in conjunction with open-ended language such as “comprising” can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc. As used herein in the specification and in the claims, “or” should be understood to have the same meaning as “and/or” as defined above. For example, when separating items in a list, “or” or “and/or” shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as “only one of” or “exactly one of,” or, when used in the claims, “consisting of,” will refer to the inclusion of exactly one element of a number or list of elements. In general, the term “or” as used herein shall only be interpreted as indicating exclusive alternatives (i.e. “one or the other but not both”) when preceded by terms of exclusivity, such as “either,” “one of,” “only one of,” or “exactly one of.” “Consisting essentially of,” when used in the claims, shall have its ordinary meaning as used in the field of patent law.
As used herein in the specification and in the claims, the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, “at least one of A and B” (or, equivalently, “at least one of A or B,” or, equivalently “at least one of A and/or B”) can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.
While components of the present disclosure are described herein in relation to each other, it is possible for one of the components disclosed herein to include inventive subject matter, if claimed alone or used alone. In keeping with the above example, if the disclosed embodiments teach the features of components A and B, then there may be inventive subject matter in the combination of A and B, A alone, or B alone, unless otherwise stated herein.
As used herein in the specification and in the claims, the term “effecting” or a phrase or claim element beginning with the term “effecting” should be understood to mean to cause something to happen or to bring something about. For example, effecting an event to occur may be caused by actions of a first party even though a second party actually performed the event or had the event occur to the second party. Stated otherwise, effecting refers to one party giving another party the tools, objects, or resources to cause an event to occur. Thus, in this example a claim element of “effecting an event to occur” would mean that a first party is giving a second party the tools or resources needed for the second party to perform the event, however the affirmative single action is the responsibility of the first party to provide the tools or resources to cause said event to occur.
When a feature or element is herein referred to as being “on” another feature or element, it can be directly on the other feature or element or intervening features and/or elements may also be present. In contrast, when a feature or element is referred to as being “directly on” another feature or element, there are no intervening features or elements present. It will also be understood that, when a feature or element is referred to as being “connected”, “attached” or “coupled” to another feature or element, it can be directly connected, attached or coupled to the other feature or element or intervening features or elements may be present. In contrast, when a feature or element is referred to as being “directly connected”, “directly attached” or “directly coupled” to another feature or element, there are no intervening features or elements present. Although described or shown with respect to one embodiment, the features and elements so described or shown can apply to other embodiments. It will also be appreciated by those of skill in the art that references to a structure or feature that is disposed “adjacent” another feature may have portions that overlap or underlie the adjacent feature.
Spatially relative terms, such as “under”, “below”, “lower”, “over”, “upper”, “above”, “behind”, “in front of”, and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if a device in the figures is inverted, elements described as “under” or “beneath” other elements or features would then be oriented “over” the other elements or features. Thus, the exemplary term “under” can encompass both an orientation of over and under. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly. Similarly, the terms “upwardly”, “downwardly”, “vertical”, “horizontal”, “lateral”, “transverse”, “longitudinal”, and the like are used herein for the purpose of explanation only unless specifically indicated otherwise.
Although the terms “first” and “second” may be used herein to describe various features/elements, these features/elements should not be limited by these terms, unless the context indicates otherwise. These terms may be used to distinguish one feature/element from another feature/element. Thus, a first feature/element discussed herein could be termed a second feature/element, and similarly, a second feature/element discussed herein could be termed a first feature/element without departing from the teachings of the present invention.
An embodiment is an implementation or example of the present disclosure. Reference in the specification to “an embodiment,” “one embodiment,” “some embodiments,” “one particular embodiment,” “an exemplary embodiment,” or “other embodiments,” or the like, means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments, of the invention. The various appearances “an embodiment,” “one embodiment,” “some embodiments,” “one particular embodiment,” “an exemplary embodiment,” or “other embodiments,” or the like, are not necessarily all referring to the same embodiments.
If this specification states a component, feature, structure, or characteristic “may”, “might”, or “could” be included, that particular component, feature, structure, or characteristic is not required to be included. If the specification or claim refers to “a” or “an” element, that does not mean there is only one of the element. If the specification or claims refer to “an additional” element, that does not preclude there being more than one of the additional element.
As used herein in the specification and claims, including as used in the examples and unless otherwise expressly specified, all numbers may be read as if prefaced by the word “about” or “approximately,” even if the term does not expressly appear. The phrase “about” or “approximately” may be used when describing magnitude and/or position to indicate that the value and/or position described is within a reasonable expected range of values and/or positions. For example, a numeric value may have a value that is +/−0.1% of the stated value (or range of values), +/−1% of the stated value (or range of values), +/−2% of the stated value (or range of values), +/−5% of the stated value (or range of values), +/−10% of the stated value (or range of values), etc. Any numerical range recited herein is intended to include all sub-ranges subsumed therein.
Additionally, the method of performing the present disclosure may occur in a sequence different than those described herein. Accordingly, no sequence of the method should be read as a limitation unless explicitly stated. It is recognizable that performing some of the steps of the method in a different order could achieve a similar result.
In the claims, as well as in the specification above, all transitional phrases such as “comprising,” “including,” “carrying,” “having,” “containing,” “involving,” “holding,” “composed of,” and the like are to be understood to be open-ended, i.e., to mean including but not limited to. Only the transitional phrases “consisting of” and “consisting essentially of” shall be closed or semi-closed transitional phrases, respectively, as set forth in the United States Patent Office Manual of Patent Examining Procedures.
To the extent that the present disclosure has utilized the term “invention” in various titles or sections of this specification, this term was included as required by the formatting requirements of word document submissions pursuant to the guidelines/requirements of the United States Patent and Trademark Office and shall not, in any manner, be considered a disavowal of any subject matter.
In the foregoing description, certain terms have been used for brevity, clearness, and understanding. No unnecessary limitations are to be implied therefrom beyond the requirement of the prior art because such terms are used for descriptive purposes and are intended to be broadly construed.
Moreover, the description and illustration of various embodiments of the disclosure are examples and the disclosure is not limited to the exact details shown or described.
This application is a continuation-in-part of U.S. patent application Ser. No. 16/925,420, filed on Jul. 10, 2020; the disclosure of which is incorporated herein by reference.
Relationship | Application Number | Date | Country
Parent | 16/925,420 | Jul. 10, 2020 | US
Child | 18/613,522 | - | US