This disclosure relates generally to data processing systems and methods, and more particularly, but without limitation, to systems and methods related to detection, imaging, and/or other processing associated with ultrasonic energy.
A practical application for the detection and imaging of ultrasonic energy is to locate, by inference, ultrasonic energy sources. Such a source may be, for instance, a gas or fluid escaping from a pressurized container (in other words, a leak). Alternatively, ultrasonic energy may be produced by a mechanical vibration, such as that of an excessively worn bearing or of a gear drive assembly with missing teeth.
Piezoelectric and other detectors are known for detecting ultrasonic energy emissions. Known systems and methods utilizing such detectors have many disadvantages, however. For instance, because of the signal frequencies involved, known systems may utilize very high-speed sampling rates that increase the expense of data acquisition hardware. Furthermore, known detection systems do not provide user-friendly outputs. For example, such systems may not support imaging at all. And known systems that do provide imaging of the ultrasonic energy may not sufficiently relate the detected source of ultrasonic energy to the surrounding environment in a way that allows for a targeted response to the detection event. Moreover, known detection systems and methods may be limited to a narrow Field-Of-View (FOV) without a structured way to fully screen a Unit Under Test (UUT) that occupies an area that is many times the size of the detector's FOV. Known hand-held detection systems and methods are exemplary of this latter problem, relying on an operator to wave the hand-held detection system with respect to the UUT in an effort to provide an effective screen.
What is needed are systems and methods for detecting ultrasonic energy that reduce the cost of data acquisition, provide more useful outputs to a test operator, and enable more complete and repeatable ultrasonic energy detection over a broad target area.
Embodiments of the invention seek to overcome one or more of the disadvantages described above. For example, embodiments of the invention utilize an array of ultrasonic sensors in which data from each sensor in the array are processed by RMS-to-DC conversion. An advantage of this approach is that it may eliminate the need for high-speed analog-to-digital conversion (ADC) hardware as part of the data collection channel. In addition, embodiments of the invention output a contour map based on detected ultrasonic energy and blend at least one feature of the contour map with a feature of a visible or other image so that a blended image can be displayed to an operator. Such a system and method may be more intuitive and useful to a user than a system that merely outputs an image based upon the ultrasonic energy alone. Furthermore, embodiments of the invention provide a system and method for repositioning an array of ultrasonic sensors with respect to a target area or Unit Under Test (UUT) to facilitate a thorough and repeatable test. As used herein, ultrasonic energy refers generally to vibrations in the ultrasonic frequency range, for example at frequencies greater than about 20 kHz.
Embodiments of the invention provide a system configured to detect ultrasonic energy including: an ultrasonic sensor array, the ultrasonic sensor array including a plurality of ultrasonic sensors; a processor operably coupled to the ultrasonic sensor array; and a visible image detector module operably coupled to the processor, the system configured to calculate a Root-Mean-Square (RMS) value associated with each of the plurality of ultrasonic sensors.
Embodiments of the invention provide a method for graphically displaying ultrasonic energy including: receiving data from each of a plurality of ultrasonic sensors, the data based on Root-Mean-Square (RMS) calculation; building a contour map based on the received data; receiving a camera image; and blending at least one feature of the contour map with at least one feature of the received camera image.
Embodiments of the invention provide a processor-readable medium having stored thereon instructions for a method of generating a graphical user interface (GUI), the method including: receiving data from each of a plurality of ultrasonic sensors, the data based on Root-Mean-Square (RMS) calculation; building a contour map based on the received data; receiving a visual camera image; blending at least one feature of the contour map with at least one feature associated with the received visual camera image to create a blended image; and displaying the blended image in a first portion of a display screen.
Embodiments of the invention provide a method for testing including: selecting a first view of a unit under test, the first view associated with a relative position between a first ultrasonic sensor array and the unit under test; selecting a first region, the first region being associated with a portion of the first view; calculating a contour map based on the first region; and recognizing at least one feature of the contour map.
The invention will now be described with respect to exemplary embodiments illustrated in the drawings and discussed in the detailed description.
Variations to the illustrated functional architecture are possible. For example, as indicated by dashed lines, the ultrasonic transmitter 135 and the motor controller 140 are optional. In addition, in some embodiments, the video camera 130 may be coupled to the data I/O module 115. Further, the video camera 130 could be adapted to capture still images (“snapshots”) instead of, or in addition to, video images. Moreover, the functional architecture illustrated in
In operation, the ultrasonic sensor array 105 detects ultrasonic energy in response to ultrasonic energy sources (not shown) from a target area or Unit Under Test (UUT). The ultrasonic sensor array 105 may include signal processing modules (not shown) powered by the power supply 110. Processed ultrasonic data is coupled from the ultrasonic sensor array 105 to the processor 120 via the data I/O module 115. The video camera 130 may be positioned to have a view of the ultrasonic source (not shown) similar to that of the ultrasonic sensor array 105. The processor 120 may be configured to further process the data received from the ultrasonic sensor array 105 and/or the video camera 130. The processor 120 may be adapted to display data derived from the ultrasonic sensor array 105, the video camera 130, or some combination or combinations thereof via the display 125.
In embodiments lacking a natural ultrasonic source, the ultrasonic transmitter 135 may be employed. For example, to facilitate the detection of defects, such as cracks or holes in the UUT, the ultrasonic transmitter 135 may be positioned inside of, or behind, the UUT so that ultrasonic energy from the ultrasonic transmitter 135 passing through the defect is detected by the ultrasonic sensor array 105.
In some instances, it may be desirable to change the relative position between the UUT and the ultrasonic sensor array 105. In some embodiments, it may be advantageous to move the UUT; in other embodiments, it may be advantageous to move the ultrasonic sensor array 105. To reposition either the UUT or the ultrasonic sensor array 105, the motor controller 140 may receive commands from the processor 120, for instance to drive a stepper motor associated with tooling in the test environment. Exemplary embodiments of such repositioning will be described in further detail with respect to
The number of ultrasonic sensor assemblies 205, and their relative positioning with respect to each other, may be varied according to design choice. For example, in an alternative embodiment, a circular pattern or other two- or three-dimensional arrangement of sensor assemblies 205 could be utilized.
In alternative embodiments, one or more sensor assemblies 205 in the ultrasonic sensor array 105 are sensitive to energy at other than ultrasonic frequencies.
The CCA 320 may include an amplifier 310 coupled to a true RMS-to-DC converter 315. The output 325 of the true RMS-to-DC converter 315 may be coupled to the processor 120 via the data I/O module 115.
In operation, the detector 305 detects ultrasonic energy, providing an analog signal (a varying voltage) to the amplifier 310. The amplifier 310 amplifies the analog signal from the detector 305 and provides the amplified signal to the true RMS-to-DC converter 315.
Roughly stated, Root Mean Square (RMS) transformation (“transformation” being used interchangeably herein with “conversion” or “calculation”) produces a Direct Current (DC) equivalent of an Alternating Current (AC) voltage. The true RMS value is a statistical measure (the quadratic mean) of the magnitude of a varying quantity. For N measured voltages (v1, v2, . . . vN), the true RMS voltage (vRMS) is given by:
vRMS = √((v1² + v2² + . . . + vN²)/N).
In practical application, the N measurements must be taken rapidly enough relative to the frequency of the signal (that is, at a sampling rate well above the highest frequency of interest) and averaged over a window long enough to span many periods of the signal.
In alternative embodiments, approximations of the true RMS may be used. For example, a common approximation of the RMS value for a sinusoidal signal is:
vRMS = (0.707)(vpeak), where vpeak is the peak voltage of the AC signal and 0.707 approximates 1/√2.
Other RMS approximations, such as the arithmetic mean or arithmetic median of the N measurements could also be used, although such approximated RMS-to-DC transformations would likely lead to less accurate results than the true RMS-to-DC conversion described above.
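By way of illustration only (this sketch is not part of the original disclosure), the following Python fragment contrasts the true RMS calculation with the two classes of approximation discussed above; the sample signal, sampling rate, and function names are hypothetical:

```python
import math

def true_rms(samples):
    """Quadratic mean: square root of the average of the squared samples."""
    return math.sqrt(sum(v * v for v in samples) / len(samples))

def sine_peak_rms(v_peak):
    """Approximation valid only for a pure sinusoid: vRMS = 0.707 * vpeak."""
    return 0.707 * v_peak

def rectified_mean(samples):
    """Cruder approximation: arithmetic mean of the absolute sample values."""
    return sum(abs(v) for v in samples) / len(samples)

# Hypothetical 40 kHz sinusoid sampled at 1 MHz over exactly one period.
N, f, fs, v_peak = 25, 40e3, 1e6, 1.0
samples = [v_peak * math.sin(2 * math.pi * f * n / fs) for n in range(N)]

print(true_rms(samples))        # ~0.707, i.e., vpeak / sqrt(2)
print(sine_peak_rms(v_peak))    # 0.707
print(rectified_mean(samples))  # ~0.64 (close to 2/pi), illustrating lower accuracy
```

As the last line suggests, a mean-based approximation systematically underestimates the true RMS of a sinusoid, which is one reason the true conversion is preferred where accuracy matters.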
Preferably, RMS-to-DC conversions are performed for the signals associated with each of the ultrasonic sensor assemblies over the same time interval. An output 325 associated with each of the ultrasonic sensor assemblies 205 provides a DC equivalent voltage to the data I/O module 115. The Analog Devices AD536A is one suitable true RMS-to-DC converter.
Other embodiments of the CCA 320 may include multiple amplifier stages. For example, the amplifier 310 may include a front-end preamplifier (not shown) and a variable-gain amplifier (not shown). One or more National Semiconductor LM386 low-voltage power amplifiers are suitable for the amplifier 310.
The CCA 320 may include additional signal conditioning devices according to design choice. Furthermore, the system could be partitioned so that the true RMS-to-DC converter 315 and/or the amplifier 310 is included with the data I/O module 115 instead of with the ultrasonic sensor assemblies 205.
Other form factors for the shroud 405, chassis 410, and CCA 320 are possible. For example, the shroud 405 may be of a hyperbolic, conical, hemispherical, or other shape that would facilitate focusing, filtering, or other conditioning of ultrasonic signals received by the detector 305. The use of a shroud 405 is optional.
In an alternative embodiment, a single cylindrical, hyperbolic, conical, hemispherical, or other shroud (not shown) is adapted to focus, filter, or otherwise condition ultrasonic energy with respect to the entire ultrasonic sensor array 105 rather than being present on each of the ultrasonic sensor assemblies 205.
After receiving the ultrasonic sensor data in step 510, the process advances to step 515 to smooth the data. The purpose of smoothing step 515 is to reduce the effect of transient noise in the detection frequency range. In one embodiment, smoothing step 515 is performed via a simple moving average calculation on the received data values. For example, smoothing step 515 may add the five most recent data values and then divide by five. Alternatively, a low-pass filter (LPF) (not shown), which may be implemented in hardware or software, could be utilized to perform smoothing step 515.
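A minimal sketch of smoothing step 515 as described follows (illustrative only; the five-value window follows the example in the text, and all names are hypothetical):

```python
from collections import deque

class MovingAverageSmoother:
    """Simple moving average over the most recent `window` readings (here, five)."""

    def __init__(self, window=5):
        self.values = deque(maxlen=window)

    def update(self, value):
        """Add the newest RMS reading and return the smoothed value."""
        self.values.append(value)
        return sum(self.values) / len(self.values)

# Example: one smoother per sensor channel; a transient spike is diluted.
smoother = MovingAverageSmoother(window=5)
for reading in [0.10, 0.12, 0.90, 0.11, 0.13]:  # 0.90 is transient noise
    smoothed = smoother.update(reading)
print(round(smoothed, 3))  # 0.272 rather than 0.90
```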
Next, the process advances to step 520 to build a contour map. An embodiment of step 520 is described below with reference to
Meanwhile, subsequent to receiving the video camera signal in step 525, the process advances to step 530 to zoom or scale (zoom/scale) an image associated with the video camera. The purpose of zooming/scaling step 530 is to scale the Field Of View (FOV) of the video camera to the contour map so that corresponding pixel locations on the video camera image and the contour map relate to the same physical locations on the UUT. The FOV of the video camera may be adjusted optically or via software. In addition, zooming/scaling step 530 may be performed manually or automatically. In automatic operation, step 530 may be informed of the distance between the video camera and the UUT by an ultrasonic range-finder, a laser range-finder, or other range-finder. A transfer function may then be used to convert the distance data to a video camera zoom value.
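As one hypothetical illustration of such a transfer function (the linear FOV model and all constants below are assumptions, not taken from the disclosure):

```python
def zoom_from_distance(distance_m, fov_width_at_1m=0.5, target_width_m=1.0):
    """Map a range-finder distance to a camera zoom factor.

    Assumes (hypothetically) that the camera's horizontal FOV grows
    linearly with distance, covering `fov_width_at_1m` meters at 1 m
    with a zoom factor of 1.0; the returned factor frames a target
    area `target_width_m` meters wide, matching the contour map.
    """
    fov_width_m = fov_width_at_1m * distance_m  # unzoomed FOV at this distance
    return fov_width_m / target_width_m         # zoom needed to frame the target

print(zoom_from_distance(4.0))  # 2.0: the unzoomed FOV is 2 m wide, so zoom in 2x
```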
In step 535, the process blends data output from steps 520 and 530. For example, one or more features of the contour map resulting from step 520 may be overlaid with one or more features of the scaled image resulting from step 530. An example of such a blended image is provided in the blended display window 615 of
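One straightforward way to implement blending step 535 is per-pixel alpha compositing; the sketch below (an assumed implementation using NumPy, not necessarily the disclosure's method) overlays only those contour pixels whose normalized RMS value exceeds a threshold, leaving the rest of the camera scene untouched:

```python
import numpy as np

def blend(camera_rgb, contour_rgb, contour_z, threshold=0.5, alpha=0.6):
    """Overlay contour-map features onto the scaled camera image.

    camera_rgb, contour_rgb: float arrays of shape (H, W, 3) in [0, 1],
    already zoomed/scaled so that pixels correspond to the same UUT
    locations. contour_z: (H, W) array of normalized RMS values.
    """
    mask = (contour_z > threshold)[..., np.newaxis]  # (H, W, 1), broadcast over RGB
    return np.where(mask,
                    alpha * contour_rgb + (1 - alpha) * camera_rgb,
                    camera_rgb)

# Hypothetical usage: a dark frame, a red contour layer, one hot pixel.
camera = np.zeros((4, 4, 3))
overlay = np.ones((4, 4, 3)) * [1.0, 0.0, 0.0]
z = np.zeros((4, 4)); z[1, 2] = 0.9
blended = blend(camera, overlay, z)  # red shows through only at (1, 2)
```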
Variations to the process illustrated in
Next, in step 550, known data (z values) from the true RMS-to-DC conversion (or approximation thereof) for each sensor assembly are assigned to corresponding positions in the initialized 2D matrix. Then, in step 555, the process determines unknown values, for example by mathematical interpolation or extrapolation, and adds them to the matrix. Interpolation/extrapolation step 555 can take into account the fact that the FOVs of individual sensor assemblies may overlap at the UUT. Next, in step 560, the matrix is scaled based on a desired range of data in each dimension of the three-dimensional matrix. For example, the scaling may be set for one or more of the x, y, and z axes.
In step 565, a color ramp is created for data ranges of the z axis in the matrix. For instance, where the z-axis scale varies from 0.0 to 1.0, a color ramp may specify that values between 0.0 and 0.7 are blue, values at 0.8 and 0.9 are yellow and orange, respectively, and values at 1.0 are red. In step 570, the matrix is colored according to the color ramp created in step 565 and output to a display.
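The matrix-building steps might be sketched as follows (illustrative only: SciPy's griddata stands in for interpolation step 555, the sensor layout is hypothetical, and the color bands extend the example values above into continuous ranges):

```python
import numpy as np
from scipy.interpolate import griddata

# Known RMS values (z) at sensor positions (x, y) -- a hypothetical 3x3 array.
sensor_xy = np.array([(x, y) for y in (0, 1, 2) for x in (0, 1, 2)], dtype=float)
sensor_z = np.array([0.1, 0.2, 0.1, 0.2, 1.0, 0.3, 0.1, 0.3, 0.2])

# Steps 545/550/555: initialize a finer 2D grid and interpolate unknown values.
gx, gy = np.meshgrid(np.linspace(0, 2, 50), np.linspace(0, 2, 50))
z = griddata(sensor_xy, sensor_z, (gx, gy), method='cubic')

# Step 560: scale z into the desired 0.0-1.0 range.
z = (z - np.nanmin(z)) / (np.nanmax(z) - np.nanmin(z))

# Steps 565/570: apply a color ramp (blue low, yellow/orange mid, red at top).
def color_ramp(value):
    if value <= 0.7:
        return (0.0, 0.0, 1.0)   # blue
    if value <= 0.85:
        return (1.0, 1.0, 0.0)   # yellow
    if value <= 0.95:
        return (1.0, 0.5, 0.0)   # orange
    return (1.0, 0.0, 0.0)       # red

rgb = np.array([[color_ramp(v) for v in row] for row in z])  # (50, 50, 3) image
```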
Variations to the illustrated process are possible. For example, in some embodiments, the determining unknown values step 555 may be omitted where the required resolution does not call for additional matrix data values. Scaling step 560 may likewise be omitted if not required by the application. Moreover, in some embodiments, pre-existing color ramps may be used, eliminating the need to create a color ramp in step 565.
In an alternative embodiment (not shown) a blended window could include a blended image that includes a complete contour map (optionally at least partially transparent) overlaid onto a video camera image. Other blended variants are also possible.
In embodiments of the invention, a laser or other range finder (not shown) may also be mounted to the sensor array 105 to measure a distance to the UUT 705. Resulting distance information may be used, for example, to facilitate focusing the video camera 130 or performing zooming/scaling step 530 as discussed with reference to
The systems and methods described above with reference to
As described above with reference to
In embodiments of the invention, it may be advantageous to change the position of the UUT 705 with respect to the ultrasonic sensor array 105. Exemplary material handling devices for accomplishing this are illustrated in
The methods discussed next with reference to
Next, the process selects a first view (for example a predetermined plan or perspective view of the UUT) in step 1115, then selects a first region (portion of the view) in step 1120. The process then calculates at least one contour map in step 1125, for example using the process described above with reference to
Where the result of conditional step 1140 is in the affirmative, the process advances to conditional step 1145 to determine whether all predetermined views have been considered. Where the result of conditional step 1140 is in the negative, the process selects a next region in step 1120.
Where the result of conditional step 1145 is in the negative, the process selects a next predetermined view in step 1115. Where the result of conditional step 1145 is in the affirmative, the process terminates in step 1150. Upon completion of the process illustrated in
Variations to the process illustrated in
Next, the process selects a first view (for example a predetermined plan or perspective view associated with the UUT part number) in step 1174, then selects a first region (a portion of the view) in step 1176. Views and regions selected in steps 1174 and 1176 correspond to views and regions selected in steps 1115 and 1120, respectively. The process then calculates at least one contour map in step 1178, for example using the process described above with reference to
Then, in step 1180, the process selects a first pattern (which may include reading the first pattern from memory) and performs a recognition task in step 1182. Recognition task 1182 may be or include artificial intelligence and/or neural network approaches for analyzing data derived from the sensors. Such data may be analyzed on its own (in which case selection step 1180 is not needed) or by comparison to one or more stored patterns. In a simple form, recognition step 1182 may compare one or more peak measured values to a predetermined threshold value. The result of recognition step 1182 may be or include, for example, the identification of a leak, the precise location of a leak (by coordinates or with reference to a descriptive feature), an assessment of the flow rate and/or direction of a leak, and/or an estimate of the size of a hole in a pressurized UUT.
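In that simple form, recognition step 1182 might resemble the following sketch (purely illustrative; the threshold, function name, and return structure are hypothetical):

```python
import numpy as np

def recognize_leak(contour_z, threshold=0.8):
    """Flag a leak where the peak contour value exceeds a predetermined
    threshold, reporting the peak's grid coordinates and magnitude."""
    peak_idx = np.unravel_index(np.argmax(contour_z), contour_z.shape)
    peak_value = float(contour_z[peak_idx])
    if peak_value > threshold:
        return {"leak": True, "location": peak_idx, "peak": peak_value}
    return {"leak": False}

# Example: a contour matrix with a single hot spot at row 3, column 7.
z = np.zeros((10, 10))
z[3, 7] = 0.95
print(recognize_leak(z))  # {'leak': True, 'location': (3, 7), 'peak': 0.95}
```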
The process then advances to conditional step 1184 to determine whether a response is required. Where the result of conditional step 1184 is in the affirmative, the process advances to response step 1186, which may include, for example, logging or saving the results of recognition step 1182 by UUT part number and serial number, notifying an operator, and/or facilitating indicated rework or repair operations (not shown). Subsequent to step 1186, and where the result of conditional step 1184 is in the negative, the process proceeds to conditional step 1188 to determine whether all relevant patterns have been considered.
Where the result of conditional step 1188 is in the negative, the process selects a next pattern in step 1180. Where the result of conditional step 1188 is in the affirmative, the process advances to conditional step 1190 to determine whether all regions have been considered. Where the result of conditional step 1190 is in the negative, the process selects a next region in step 1176. Where the result of conditional step 1190 is in the affirmative, the process advances to conditional step 1192 to determine whether all views have been considered. Where the result of conditional step 1192 is in the negative, the process selects a next predetermined view in step 1174. Where the result of conditional step 1192 is in the affirmative, the process terminates in step 1194.
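The nested iteration just described (patterns within regions within views) can be summarized in skeleton form; in the sketch below, View, calculate_contour, recognize, and respond are hypothetical stand-ins for steps the disclosure leaves abstract:

```python
from dataclasses import dataclass, field

@dataclass
class View:
    """A predetermined view of the UUT and the regions it is divided into."""
    name: str
    regions: list = field(default_factory=list)

def screen_uut(views, patterns, calculate_contour, recognize, respond):
    """Skeleton of steps 1174-1192: check every pattern against every
    region of every predetermined view, responding where indicated."""
    for view in views:                                  # step 1174
        for region in view.regions:                     # step 1176
            contour = calculate_contour(view, region)   # step 1178
            for pattern in patterns:                    # steps 1180/1182
                result = recognize(contour, pattern)
                if result is not None:                  # step 1184
                    respond(view, region, result)       # step 1186

# Hypothetical usage with trivial stand-ins for the abstract steps.
views = [View("front", ["upper", "lower"]), View("top", ["all"])]
screen_uut(views, ["leak"],
           calculate_contour=lambda v, r: 0.9,
           recognize=lambda c, p: {"peak": c} if c > 0.8 else None,
           respond=lambda v, r, res: print(v.name, r, res))
```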
Variations to the process illustrated in
The system described with reference to
The disclosed systems and methods may be applicable to a wide variety of applications in which it may be advantageous to detect, display, or otherwise process data associated with ultrasonic energy. As described above, an ultrasonic signature may be associated with certain types of defects. Moreover, an ultrasonic source may be used in a testing or diagnostic mode to uncover certain holes, cracks, voids, or other defects, which may be indicated by mechanical vibrations of mechanical components or systems in the ultrasonic frequency range.
Accordingly, the systems and/or methods described herein may be applicable for testing or diagnostics associated with, for instance: cylinders, transmissions, engine blocks, fuel tanks, fittings, valves, flanges, vehicle cabs, pump cavitation, gear boxes with missing teeth, line blockages, steam traps, compressors, motors, pipes, flow direction, underground leaks, vacuum leaks, welds, substations, heat exchangers, seals, pump tanks, air brakes, gaskets, pressure leaks, electrical arcs, caulking, and/or junction boxes.
It will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed systems and methods. For instance, systems and/or methods could be implemented using an arithmetic mean, arithmetic median, or other approximation instead of the true RMS-to-DC conversions described herein. In addition, combinations of the disclosed embodiments not specifically described are also possible, for example those that utilize other sensor types instead of, or in combination with, the sensor types described herein. Accordingly, other embodiments will be apparent to those skilled in the art from consideration of the specification and practice of the disclosed systems and methods. It is intended that the specification and examples be considered as exemplary only, with a true scope being indicated by the following claims and their equivalents.