The present invention relates to test and measurement instruments, and more particularly to the processing of acquired signals.
Real-time spectrum analyzers such as the RSA6100 and RSA3400 families available from Tektronix, Inc. of Beaverton, Oreg. trigger on, capture, and analyze RF signals in real-time. These test and measurement instruments seamlessly capture RF signals so that, unlike conventional swept spectrum analyzers and vector signal analyzers, no data is missed within a specified bandwidth.
An experienced user can look at a display of a real-time spectrum analyzer and recognize different types of signals based on their visual appearance. For example, an experienced user can observe a signal's bandwidth, duration, amplitude or power, the number and shape of its spectral lobes, and other visual cues, and based on those observations determine whether the signal is modulated according to a particular modulation standard, whether the signal was transmitted by a particular transmitter, and so on. The user may then use that information to select an appropriate measurement for the signal.
However, requiring a user to recognize a signal can be time-consuming and inconvenient for the user. Furthermore, in some cases, an inexperienced user may need to identify a signal but lack the expertise to do so.
The inventors of the present invention have recognized that, in order to overcome the deficiencies of the prior art discussed above, what is needed is a test and measurement instrument that is capable of automatically recognizing signals without any user intervention.
Accordingly, embodiments of the present invention provide a test and measurement instrument that processes digital data that represents an input signal to produce a target image, and then uses a computer vision technique to recognize a signal depicted within the target image. In some embodiments, the location of the signal within the target image is identified on a display device. In other embodiments, the location of the signal within the target image is used to perform a measurement. In other embodiments, when the signal is recognized, a trigger signal is generated that causes digital data that represents the input signal to be stored in a memory.
The objects, advantages, and other novel features of the present invention are apparent from the following detailed description when read in conjunction with the appended claims and attached drawings.
Referring now to
An “image” is a data structure that depicts any one of various visualizations of the input signal. One such visualization is a “frequency spectrum.” A frequency spectrum is formed by transforming a frame of the digital data into the frequency domain using a frequency transform such as a fast Fourier transform, a chirp-Z transform, or the like.
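The transformation of a frame of digital data into a frequency spectrum can be sketched as follows (a minimal illustration assuming NumPy; the function name, the Hann window choice, and the logarithmic power scale are assumptions, not details specified above):

```python
import numpy as np

def frame_to_spectrum(frame, window=np.hanning):
    """Transform one frame of digitized samples into a power spectrum (dB).

    A window function reduces spectral leakage before the FFT; the
    magnitude-squared FFT output is converted to a logarithmic power scale.
    """
    w = window(len(frame))
    spectrum = np.fft.rfft(frame * w)
    power = np.abs(spectrum) ** 2
    return 10 * np.log10(power + 1e-12)  # small offset avoids log(0)

# Example: a 1 kHz tone sampled at 8 kHz peaks in the expected FFT bin
fs, n = 8000, 1024
t = np.arange(n) / fs
frame = np.sin(2 * np.pi * 1000 * t)
spec = frame_to_spectrum(frame)
peak_bin = int(np.argmax(spec))  # bin spacing is fs/n = 7.8125 Hz
```

A chirp-Z transform could be substituted for the FFT where a zoomed frequency range is desired.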
Tektronix real-time spectrum analyzers use a technology referred to as “Digital Phosphor” or alternatively as “DPX®” to produce a specialized frequency spectrum referred to as a “DPX spectrum.” A DPX spectrum is formed by transforming the continuous stream of digital data into a series of frequency spectra in real-time, and then accumulating the frequency spectra in a database. The database provides a precise measurement of the percentage of time during the measurement period that the input signal occupied particular locations in the amplitude or power versus frequency space, also referred to as “density.” A DPX spectrum is commonly displayed with the x-axis being frequency, the y-axis being amplitude or power, and the z-axis being density, represented by the color or intensity of each point in the power versus frequency space. DPX acquisition and display technology reveals signal details such as short-duration or infrequent events that are completely missed by conventional spectrum analyzers and vector signal analyzers. For more information on DPX, see Tektronix document number 37W-19638 titled “DPX® Acquisition Technology for Spectrum Analyzers Fundamentals” available at http://www.tek.com/.
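The accumulation of spectra into a density database can be sketched as an occupancy histogram over the power-versus-frequency plane (an illustrative simplification, not Tektronix's actual DPX implementation; the function name and bin choices are assumptions):

```python
import numpy as np

def accumulate_density(spectra, power_bins):
    """Accumulate a series of spectra into a density map.

    density[p, f] is the fraction of spectra whose value at frequency
    bin f fell into power bin p, i.e. the fraction of the measurement
    period that the signal occupied that point of the
    power-versus-frequency plane.
    """
    n_freq = spectra.shape[1]
    counts = np.zeros((len(power_bins) - 1, n_freq))
    for spectrum in spectra:
        idx = np.digitize(spectrum, power_bins) - 1  # power bin per freq bin
        idx = np.clip(idx, 0, len(power_bins) - 2)
        counts[idx, np.arange(n_freq)] += 1
    return counts / len(spectra)

# Three toy spectra: a persistent tone at frequency bin 1 and a
# transient that appears only once at frequency bin 3
spectra = np.array([
    [0.0, 9.0, 0.0, 0.0],
    [0.0, 9.0, 0.0, 9.0],   # transient appears in this spectrum only
    [0.0, 9.0, 0.0, 0.0],
])
power_bins = np.array([-1.0, 5.0, 11.0])  # "low" and "high" power bins
density = accumulate_density(spectra, power_bins)
```

The persistent tone yields a density of 1.0 at its location, while the transient yields 1/3, which is how infrequent events remain visible in the density map.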
Another such visualization is a “spectrogram.” A spectrogram is formed by transforming frames of the digital data into a series of colored lines representing frequency spectra and then laying each line “side-by-side” to form an image, with each “slice” of the image corresponding to one frequency spectrum. A spectrogram is commonly displayed with the x-axis being frequency, the y-axis being time, and different amplitude or power values within the time versus frequency space being indicated by different colors or intensities. A spectrogram provides an intuitive visualization of how frequency and amplitude or power behavior change over time.
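The construction described above can be sketched as follows (a minimal illustration assuming NumPy; frame length, hop size, and windowing are illustrative choices):

```python
import numpy as np

def spectrogram(samples, frame_len, hop):
    """Build a spectrogram: one power spectrum per frame, stacked in order.

    Rows are time slices and columns are frequency bins, matching the
    time-versus-frequency display described above.
    """
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len + 1, hop)]
    window = np.hanning(frame_len)
    return np.array([np.abs(np.fft.rfft(f * window)) ** 2 for f in frames])

fs = 8000
t = np.arange(2 * fs) / fs
# A tone that hops from 500 Hz to 2000 Hz halfway through the capture
samples = np.where(t < 1.0,
                   np.sin(2 * np.pi * 500 * t),
                   np.sin(2 * np.pi * 2000 * t))
sg = spectrogram(samples, frame_len=256, hop=256)
early_peak = int(np.argmax(sg[0]))   # bin spacing is 8000/256 = 31.25 Hz
late_peak = int(np.argmax(sg[-1]))
```

The frequency hop appears as a shift of the peak column partway down the image, which is exactly the kind of time-evolving behavior a spectrogram makes visible.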
Referring now to
“Computer vision” refers to the theory of artificial systems that extract information from images, or in other words, computers that “see.” One particular application of computer vision is “object recognition,” which is the task of determining whether an image depicts a specified object. As an example, a digital camera such as the EOS 50D available from Canon Inc. of Ōta, Tokyo, Japan uses object recognition to automatically determine whether an image formed by the camera's field of view depicts a human face, a feature referred to as “face detection.”
Various object recognition techniques exist. In one object recognition technique referred to as “template matching,” a first image referred to as a “target image” is searched for regions that are similar to a second image referred to as a “reference image.” The search is typically performed using two-dimensional correlation.
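A minimal template-matching search using normalized two-dimensional correlation might look like the following (an illustrative brute-force sketch; practical implementations typically use FFT-based correlation for speed):

```python
import numpy as np

def match_template(target, reference):
    """Slide the reference image over the target and score each position
    with normalized cross-correlation; the best score marks the region
    most similar to the reference.  Returns (best_score, (row, col)) for
    the top-left corner of the best match.
    """
    rh, rw = reference.shape
    ref = reference - reference.mean()
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(target.shape[0] - rh + 1):
        for c in range(target.shape[1] - rw + 1):
            patch = target[r:r + rh, c:c + rw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum() * (ref ** 2).sum())
            score = (p * ref).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_score, best_pos

# Toy target: a 3x3 "signal" shape embedded at row 4, col 6 of a flat image
target = np.zeros((10, 12))
shape = np.array([[0, 1, 0],
                  [1, 2, 1],
                  [0, 1, 0]], dtype=float)
target[4:7, 6:9] = shape
score, pos = match_template(target, shape)
```

A score near 1.0 indicates a close match; a threshold on the score decides whether the signal is considered recognized at that location.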
The following example illustrates how template matching can be used to recognize a signal depicted within a target image.
Correlating the reference image 400 over the entire target image 300 using two-dimensional correlation yields the result 500 shown in
In some embodiments, the location of a recognized signal is identified visually on the display device 130. For example, as shown in
In some embodiments, the accuracy of the computer vision technique is enhanced by applying any one of various image processing techniques to the reference image and/or the target image before the computer vision technique. One such image processing technique, “edge detection,” may be used to detect the features of the reference image and the target image, thereby filtering out less relevant information while preserving the important structural properties. For example, after edge detection, the frequency spectrum 300 shown in
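One common edge-detection approach is the Sobel operator, sketched below (an illustrative NumPy implementation; the kernel choice and gradient-magnitude combination are standard but assumed here, not specified above):

```python
import numpy as np

def sobel_edges(image):
    """Approximate edge detection with the Sobel operator: apply
    horizontal and vertical gradient kernels at each interior pixel and
    combine the magnitudes, so outlines survive while uniform regions
    are suppressed.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = image.shape
    edges = np.zeros((h, w))
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            patch = image[r - 1:r + 2, c - 1:c + 2]
            gx = (patch * kx).sum()
            gy = (patch * ky).sum()
            edges[r, c] = np.hypot(gx, gy)
    return edges

# A bright rectangle on a dark background: its border survives while
# the flat interior and background go to zero
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
e = sobel_edges(img)
```

Running both the reference image and the target image through the same edge detector before correlation makes the match depend on shape rather than absolute brightness.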
In some embodiments, the target image depicts a spectrogram as shown in
It will be appreciated that the target image is not limited to depicting a frequency spectrum or a spectrogram as described above, but in general may depict any visualization of the input signal. The visualization may depict the input signal in any domain such as the frequency domain, the time domain, the modulation domain, the code domain, and the statistical domain. Frequency domain visualizations include frequency spectra, spectrograms, and the like. Time domain visualizations include graphs of frequency versus time, amplitude or power versus time, phase versus time, I/Q versus time, eye diagrams, and the like. Modulation domain visualizations include constellation diagrams and the like. Code domain visualizations include “codograms” and the like. Statistical domain visualizations include graphs of complementary cumulative distribution functions and the like.
In some embodiments, the location of a recognized signal within a target image is used to perform a measurement. That is, the location of the recognized signal within the target image may be input to a measurement and used to identify a signal automatically rather than requiring the user to manually identify it by, for example, placing a cursor on it. For example, after recognizing the HD signals 320, 330, and 335 of
In the embodiments described above, the processor 125 recognizes a signal based on one reference image. Alternatively, in other embodiments, the processor 125 recognizes a signal based on a plurality of reference images, each of which corresponds to a portion of the signal.
In some embodiments, the processor 125 converts the digital data into a plurality of sequential target images and recognizes a signal within each target image. For example, the processor 125 may recognize a first signal within a first target image, a second signal within a second target image, and so on. Alternatively, the processor 125 may recognize a first signal within a specified number of target images, a second signal within the next specified number of target images, and so on. In this manner, the processor 125 can recognize the time-evolution of a signal such as a sequential test pattern.
In some embodiments, the processor 125 only searches within a specified region of the target image. For example, in the case where the target image is a spectrogram, constraining the search to a specified region of the target image is equivalent to constraining the search to a particular frequency range, a particular time range, or a particular region in the time versus frequency space.
In the embodiments described above, the processor 125 recognizes a signal by determining that a portion of the target image is similar to a reference image. In other embodiments, the processor 125 recognizes a signal by determining that a specified portion of the target image is not similar to a reference image, or in other words, determining that the specified portion of the target image does not appear as expected.
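This dissimilarity test can be sketched as the inverse of the matching test (the correlation measure and threshold below are illustrative assumptions):

```python
import numpy as np

def deviates_from_expected(region, reference, threshold=0.8):
    """Flag a signal when a specified region of the target image is NOT
    similar to the reference, i.e. the region no longer looks as expected.
    """
    a = region - region.mean()
    b = reference - reference.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    score = (a * b).sum() / denom if denom > 0 else 0.0
    return score < threshold

expected = np.array([[0., 1., 0.], [0., 1., 0.]])
normal = np.array([[0.05, 0.95, 0.0], [0.0, 1.05, 0.1]])    # noisy but similar
anomaly = np.array([[1.0, 0.0, 1.0], [1.0, 0.1, 0.9]])      # inverted shape
```

With this formulation, small acquisition noise still correlates well with the reference, while a structural change in the region drives the score below the threshold and flags the deviation.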
In some embodiments, the search is constrained to signals having a specified amplitude or power. For example, in the case where the target image is a spectrogram, where different amplitude or power values are indicated by different colors, the search may be constrained to regions of the target image having a particular color.
In various embodiments, the reference image may be an image of a previously recognized signal, an image supplied by the user, or an image defined by a standard. In various embodiments, the reference image may be specified by a user, defined by a standard, or automatically determined by the processor 125.
In some embodiments, a command or a button is provided to the user that causes the processor 125 to store a portion of an image for use as a reference image.
In some embodiments, the processor 125 periodically updates the reference image by replacing it with a portion of the current or a previous target image. For example, the user may specify that with each new target image, the processor 125 is to replace the reference image with a specified portion of the previous target image, or the target image before that, and so on. For another example, the user may specify that the processor 125 is to replace the reference image every five seconds with a specified portion of the then-current target image. In this manner, the processor 125 may track slowly-evolving changes in an input signal, thereby allowing it to recognize only sudden changes.
In some embodiments, the reference image is determined automatically by the processor 125 based on a library of stored reference images. In this case, the processor 125 tests the portion of the target image that contains a signal against the library of stored reference images to determine whether the signal is similar to one of the stored reference images. In this manner, the real-time spectrum analyzer 100 may automatically identify a signal that is unknown to a user.
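A library lookup of this kind might be sketched as follows (the library structure, similarity measure, and threshold are illustrative assumptions):

```python
import numpy as np

def identify_signal(region, library, threshold=0.9):
    """Compare a region of the target image against a library of stored
    reference images and return the name of the best match, or None if
    nothing in the library is similar enough.

    Similarity here is normalized correlation over same-sized images.
    """
    best_name, best_score = None, threshold
    for name, ref in library.items():
        a = region - region.mean()
        b = ref - ref.mean()
        denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
        score = (a * b).sum() / denom if denom > 0 else 0.0
        if score > best_score:
            best_name, best_score = name, score
    return best_name

library = {
    "pulse": np.array([[0., 1., 0.], [0., 1., 0.]]),
    "ramp":  np.array([[0., 1., 2.], [0., 1., 2.]]),
}
unknown = np.array([[0.1, 1.0, 2.1], [0.0, 0.9, 2.0]])  # a noisy ramp
match = identify_signal(unknown, library)
```

Returning the name of the best-scoring library entry is how the instrument can report an identification for a signal that is unknown to the user.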
In some embodiments, the trigger detector 140 processes the digital data by performing the steps of (1) converting the digital data into a target image, and (2) using a computer vision technique to recognize a signal depicted within the target image. When the trigger detector 140 recognizes the signal, it generates a trigger signal. As described above, the trigger signal causes the memory 135 to store a block of digital data. The stored digital data are then analyzed by the processor 125, and the results may be displayed on the display device 130 or stored in a storage device (not shown). In recognizing the signal, the trigger detector 140 may use any of the techniques described herein that the processor 125 uses. It will be appreciated that a user may use the trigger detector 140 to trigger on any characteristic of interest of a signal that can be depicted by an image rather than having to specify trigger criteria parametrically (e.g., trigger on a particular amplitude, frequency, or other property) as with conventional trigger detectors.
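The trigger loop can be sketched as follows (an illustrative simplification: the target image here is a single power spectrum, and the recognizer is a pluggable function standing in for the computer vision step; all names and parameters are assumptions):

```python
import numpy as np

def image_trigger(data_stream, reference, frame_len, recognize):
    """Sketch of an image-recognition trigger: each frame of acquired
    data is converted into a target image (here, a power spectrum) and
    tested with a recognition function; the trigger fires on the first
    recognized frame.  Returns the triggering frame index, or None.
    """
    for i in range(0, len(data_stream) - frame_len + 1, frame_len):
        frame = data_stream[i:i + frame_len]
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(frame_len))) ** 2
        if recognize(spectrum, reference):
            return i // frame_len  # frame number that fired the trigger
    return None

# Stand-in recognizer: fire when spectral energy peaks near the
# reference frequency bin
def near_bin(spectrum, ref_bin, tol=2):
    return abs(int(np.argmax(spectrum)) - ref_bin) <= tol

fs, frame_len = 8000, 256
t = np.arange(4 * frame_len) / fs
# The signal of interest (a 2 kHz tone, FFT bin 64) appears only in the
# third frame of an otherwise 500 Hz stream
stream = np.sin(2 * np.pi * 500 * t)
stream[2 * frame_len:3 * frame_len] = np.sin(
    2 * np.pi * 2000 * t[2 * frame_len:3 * frame_len])
fired = image_trigger(stream, 64, frame_len, near_bin)
```

In an actual embodiment, `recognize` would be the template-matching step against a reference image, and a fired trigger would cause the memory to store the surrounding block of digital data.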
In various embodiments, the processor 125 and the trigger detector 140 may be implemented in hardware, software, or a combination of the two, and may comprise a general purpose microprocessor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or the like.
Although the embodiments described above are described in the context of a real-time spectrum analyzer, it will be appreciated that the principles described herein are equally applicable to any test and measurement instrument that processes digital data that represents an input signal by converting it into an image, such as a swept spectrum analyzer, a vector signal analyzer, or an oscilloscope.
It will be appreciated that a test and measurement instrument according to an embodiment of the present invention is not only capable of recognizing signals without any user intervention, but is also capable of recognizing signals more quickly, accurately, precisely, and consistently than a human user is capable of. For example, a test and measurement instrument according to an embodiment of the present invention is capable of recognizing a signal depicted within a DPX spectrum that has a density distribution, also referred to as a “density profile,” that is specified more precisely than a human eye is capable of reliably discerning.
It will be appreciated from the foregoing discussion that the present invention represents a significant advance in the field of test and measurement equipment. Although specific embodiments of the invention have been illustrated and described for purposes of illustration, it will be understood that various modifications may be made without departing from the spirit and scope of the invention. Accordingly, the invention should not be limited except as by the appended claims.
Number | Date | Country
---|---|---
20110280489 A1 | Nov 2011 | US