This disclosure relates generally to distributed fiber optic sensing (DFOS) systems, methods, and structures. More specifically, it pertains to the detection and localization of acoustic events across a city-scale environment using DFOS.
Distributed fiber optic sensing (DFOS) systems, methods, and structures have shown great utility in a number of unique sensing applications due to their intrinsic advantages over conventional technologies. They can be integrated into normally inaccessible areas and can function in very harsh environments. They are immune to radio frequency interference and electromagnetic interference and can provide continuous, real-time measurements along entire lengths of fiber optic cable(s).
Recent advances in DFOS technologies have been shown to allow for continuous, long-distance sensing over existing telecommunications networks, enabling telecommunications carriers to provide not only communications services but also a variety of sensing services including, but not limited to, traffic/road condition monitoring, infrastructure monitoring, and intrusion detection, using the same network. When used in this manner, an entire telecommunications network may now act as a large-scale sensor enabling—for example—constant monitoring of an environment including one spanning an entire city or other large community.
An advance in the art is made according to aspects of the present disclosure directed to distributed fiber optic sensing (DFOS) systems, methods, and structures that monitor an entire community including a city or other urban environment(s) using acoustic DFOS techniques. At the heart of our disclosure is our inventive method that analyzes acoustic events and localizes their source(s).
In sharp contrast to the prior art, systems, methods, and structures according to aspects of the present disclosure effectively transform fiber optic cables—that may already be deployed in an environment such as telecommunications cables—into a “microphone array” that advantageously permits detecting and locating acoustic events while discriminating acoustic events of interest from normal, everyday acoustic events that occur in such a setting.
Of particular advantage—and in further contrast to the prior art—systems, methods, and structures according to aspects of the present disclosure only require a DFOS distributed acoustic sensing (DAS) system that may be conveniently centrally located, a fiber optic cable—preferably one(s) already deployed—that is/are used as a microphone array, and our inventive method that as we have noted analyzes acoustic events and localizes their source(s).
As we shall show and describe, particular distinguishing aspects of systems, methods, and structures according to the present disclosure include—but are not limited to—the use of existing, deployed fiber optic cable, thereby eliminating any additional deployment cost(s); a city-wide/community-wide surveillance area that is scalable to larger area(s) by adding more fiber route(s); and an ability to adaptively “move” or change (add/delete) listening points (i.e., fiber “microphones”) without physically/mechanically moving anything. Our inventive methods and systems have been evaluated and demonstrate distributed acoustic detection and localization of acoustic events using standard, live aerial telecommunications optical fiber cables while exhibiting an error of less than 1.22 m.
A more complete understanding of the present disclosure may be realized by reference to the accompanying drawing in which:
The following merely illustrates the principles of the disclosure. It will thus be appreciated that those skilled in the art will be able to devise various arrangements which, although not explicitly described or shown herein, embody the principles of the disclosure and are included within its spirit and scope.
Furthermore, all examples and conditional language recited herein are intended to be only for pedagogical purposes to aid the reader in understanding the principles of the disclosure and the concepts contributed by the inventor(s) to furthering the art and are to be construed as being without limitation to such specifically recited examples and conditions.
Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosure, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.
Thus, for example, it will be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the disclosure.
Unless otherwise explicitly specified herein, the FIGs comprising the drawing are not drawn to scale.
By way of some additional background—and with reference to
As will be appreciated, a contemporary DFOS system includes an interrogator that periodically generates optical pulses (or any coded signal) and injects them into an optical fiber. The injected optical pulse signal is conveyed along the optical fiber.
At locations along the length of the fiber, a small portion of the signal is reflected and conveyed back to the interrogator. The reflected signal carries information the interrogator uses to detect events, such as a power level change that indicates—for example—a mechanical vibration.
The reflected signal is converted to the electrical domain and processed inside the interrogator. Based on the pulse injection time and the time at which the reflected signal is detected, the interrogator determines the location along the fiber from which the signal is coming, and is thus able to sense the activity at each location along the fiber.
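By way of a non-limiting illustration of this time-to-location mapping, the following sketch converts the round-trip delay of detected backscatter into a distance along the fiber. The group index value and the function name are assumptions made for the example; an actual interrogator calibrates the optical propagation speed for the specific cable.

```python
# A minimal sketch (not the interrogator's actual implementation): convert the
# round-trip delay of detected backscatter into a position along the fiber.

C_VACUUM = 2.99792458e8            # speed of light in vacuum, m/s
GROUP_INDEX = 1.468                # assumed group index of standard single-mode fiber
V_FIBER = C_VACUUM / GROUP_INDEX   # propagation speed of light inside the fiber, m/s


def fiber_position(round_trip_seconds: float) -> float:
    """Distance (m) from the interrogator to the scattering point.

    The pulse travels out and the backscatter travels back, so the one-way
    distance is half of the total optical path implied by the delay.
    """
    return V_FIBER * round_trip_seconds / 2.0


if __name__ == "__main__":
    # Backscatter detected 5 microseconds after pulse injection corresponds
    # to a point roughly 510 m down the fiber.
    print(f"{fiber_position(5e-6):.1f} m")
```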
Generally, such an acoustic event produces an acoustic vibration in the air which is then detected by the fiber optic cable. Such vibrations may advantageously be detected by a DAS system—including an interrogator and an analysis system and/or AI-based system—which is/are located in a central office—or other location including cloud systems—away from the actual acoustic event. As noted previously, and as will be described in greater detail, detected signals resulting from the acoustic event(s) are analyzed using our inventive method(s), including both spatial-domain and temporal-domain analysis.
As those skilled in the art will understand and appreciate, a spatial-domain analysis—according to aspects of the present disclosure—determines which point(s) along a sensing fiber optic cable have detected an acoustic disturbance/signal, and those points are selected as our virtual microphones. In a next step, our inventive method determines a time of arrival of the signal(s) for each virtual microphone. Once a time signature is determined for each virtual microphone, the location (i.e., the coordinates) of the acoustic event is determined as a probability distribution on an actual map, based on the physical location(s) of the virtual microphones.
Operationally, when an acoustic event occurs in an environment in which the sensing fiber optic cable is deployed—for example, at an unknown location in an urban environment—acoustic vibrations due to this event create a traveling vibration pattern in three dimensions (3D) that subsequently interacts with the fiber optic cable, generating strain changes at multiple locations of the fiber optic cable at different times. These strains (vibration patterns) are detected in both the time and space domains by the DAS system at the central office and analyzed.
Operationally, and according to aspects of the present disclosure, a set of “virtual microphones” is selected. The virtual microphones selected are generally those locations along the fiber optic cable route exhibiting the greatest sensitivity to strain and hence, to acoustic events. Such locations include—for example—a down-lead fiber optic cable along a pole, a spool of fiber optic cable, fiber optic connection points to a pole, or a central part (substantially the midpoint) of a fiber optic cable length.
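One simple, hedged way such sensitive locations might be identified automatically from DAS data is to rank channels by their vibration energy and retain the strongest responders as virtual microphones. The sketch below assumes the DAS output is a 2-D array of samples by channels; the array shape, channel spacing, and function name are illustrative rather than prescriptive.

```python
import numpy as np


def select_virtual_microphones(waterfall: np.ndarray,
                               channel_spacing_m: float,
                               num_mics: int = 4) -> np.ndarray:
    """Pick the most strain-sensitive channels as 'virtual microphones'.

    waterfall         : 2-D DAS array of shape (num_samples, num_channels)
    channel_spacing_m : distance between adjacent channels along the fiber
    Returns the fiber distances (m) of the selected channels.
    """
    # RMS energy of each channel over the observation window.
    energy = np.sqrt(np.mean(waterfall.astype(float) ** 2, axis=0))
    # Channels with the largest energy respond most strongly to acoustic events.
    strongest = np.sort(np.argsort(energy)[-num_mics:])
    return strongest * channel_spacing_m


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = rng.normal(size=(2000, 500))        # synthetic background noise
    demo[:, [120, 180, 260, 300]] *= 8.0       # four strongly responding locations
    print(select_virtual_microphones(demo, channel_spacing_m=1.22))
```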
Once the virtual microphones are selected, the signal(s) recorded by each of these microphones is/are analyzed using a change-point detection algorithm such as a Z-test, and the time of arrival is calculated for each microphone.
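A minimal sketch of such a Z-score change-point detector follows, assuming a running mean and variance of the pre-event background and a two-sided tail-probability threshold (the 0.001 threshold used in our experiments appears later in this disclosure). The warm-up length and function name are illustrative.

```python
import math

import numpy as np


def arrival_time(signal, sample_rate_hz, warmup=200, p_threshold=1e-3):
    """Return the arrival time (s) of the first detected change point, or None.

    The first `warmup` samples are treated as pre-event background; their
    running mean and variance define the null distribution for the Z-test.
    """
    mean = float(np.mean(signal[:warmup]))
    var = float(np.var(signal[:warmup])) + 1e-12
    for n in range(warmup, len(signal)):
        x = float(signal[n])
        z = abs(x - mean) / math.sqrt(var)
        # Two-sided tail probability of observing a value at least this extreme.
        if math.erfc(z / math.sqrt(2.0)) < p_threshold:
            return n / sample_rate_hz
        # Otherwise fold the sample into the running background statistics.
        count = n + 1
        delta = x - mean
        mean += delta / count
        var += (delta * (x - mean) - var) / count
    return None


if __name__ == "__main__":
    fs = 20_000.0                                # DAS pulse repetition rate, Hz
    t = np.arange(4000) / fs
    trace = 0.5 * np.sin(2 * np.pi * 120.0 * t)  # quiet, periodic background
    trace[2500:] += 15.0                         # impulsive event arriving at 0.125 s
    print(arrival_time(trace, fs))               # -> 0.125
```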
Finally,
Next, a time difference matrix, involving a relative time difference between all virtual microphone combinations, is generated, an example of which is shown in the table below.
We note that the time difference matrix, together with the geometric physical positions of the virtual microphones, is then used in a 3-dimensional acoustic-location-error function, whose minimum provides a most probable location of the acoustic event(s).
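As a hedged illustration of the preceding step, the relative time-difference matrix may be assembled directly from the per-microphone arrival times; the numerical values below are hypothetical.

```python
import numpy as np


def time_difference_matrix(arrival_times_s) -> np.ndarray:
    """Antisymmetric matrix of relative arrival-time differences.

    Element (i, j) holds t_i - t_j, the arrival time at virtual microphone i
    relative to virtual microphone j, in seconds.
    """
    t = np.asarray(arrival_times_s, dtype=float)
    return t[:, None] - t[None, :]


if __name__ == "__main__":
    # Hypothetical arrival times at four virtual microphones (seconds).
    arrivals = [0.1250, 0.1262, 0.1281, 0.1259]
    np.set_printoptions(precision=4, suppress=True)
    print(time_difference_matrix(arrivals))
```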
Advantageously, this determination may be output in at least two convenient and informative formats. First, a single location for the acoustic event source can be displayed on a 2-dimensional map. Second, and perhaps more informative, system noise and imperfections may be considered to further improve the results, and a heat-map-like distribution can be generated for the source location. When so displayed, the location with the greatest probability may be readily determined from the map.
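The following sketch indicates one way such a heat-map-like distribution might be produced: the acoustic-location-error function (here, a sum of squared TDOA mismatches) is evaluated over a horizontal grid at a fixed height and converted to a normalized, probability-like surface. The microphone geometry, grid extents, and exponential weighting are illustrative assumptions, not a prescribed implementation.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s


def tdoa_error(candidate_xyz, mic_xyz, dt_matrix):
    """Sum of squared mismatches between predicted and measured TDOAs (s^2)."""
    d = np.linalg.norm(mic_xyz - candidate_xyz, axis=1)       # candidate-to-mic ranges
    predicted = (d[:, None] - d[None, :]) / SPEED_OF_SOUND    # predicted TDOAs
    iu = np.triu_indices(len(mic_xyz), k=1)                   # count each pair once
    return float(np.sum((predicted - dt_matrix)[iu] ** 2))


def heat_map(mic_xyz, dt_matrix, x_range, y_range, z_slice, n=120):
    """Probability-like surface for the source location at height z_slice."""
    xs = np.linspace(x_range[0], x_range[1], n)
    ys = np.linspace(y_range[0], y_range[1], n)
    err = np.empty((n, n))
    for iy, y in enumerate(ys):
        for ix, x in enumerate(xs):
            err[iy, ix] = tdoa_error(np.array([x, y, z_slice]), mic_xyz, dt_matrix)
    # Smaller error -> larger weight; the exponential scaling is illustrative only.
    weights = np.exp(-err / (np.median(err) + 1e-12))
    return xs, ys, weights / weights.sum()


if __name__ == "__main__":
    # Illustrative geometry: three poles in a line plus a fiber spool on the ground.
    mics = np.array([[0.0, 0.0, 4.0], [27.4, 0.0, 4.0],
                     [54.8, 0.0, 4.0], [5.0, 6.0, 0.5]])
    source = np.array([20.0, 10.0, 2.0])                       # synthetic event
    d = np.linalg.norm(mics - source, axis=1)
    dt = (d[:, None] - d[None, :]) / SPEED_OF_SOUND            # noise-free TDOAs
    xs, ys, prob = heat_map(mics, dt, (-10, 70), (-10, 30), z_slice=2.0)
    iy, ix = np.unravel_index(int(np.argmax(prob)), prob.shape)
    print(f"most probable location near ({xs[ix]:.1f}, {ys[iy]:.1f}) m at z = 2 m")
```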
Those skilled in the art will readily understand and appreciate that additional analysis capabilities can be added to our inventive system and method as well, such as classification of the acoustic event (whether it is a gunshot, an explosion, a car accident, etc.) by performing spectral analysis and applying machine learning models that may include neural network structures and methods as part of the interrogator/analysis systems and methods. Such detected/analyzed events may then be reported to appropriate responders and/or authorities to take an appropriate action or actions.
With this disclosure in place, we may now provide experimental results of our systems and methods as applied to real-world environment(s). The experiments were conducted in our research testbed consisting of three real-scale Class II utility poles with installed power cables and a single-mode telecom fiber cable. The poles are 35 feet long and placed 90 feet apart from each other in a linear arrangement. The aerial fiber cable used in the experiments is an outdoor figure-8 cable with 36 fiber cores supported by a 0.25-inch messenger wire. The fiber cable is installed on the poles at a height of ~4 meters.
To localize the acoustic sound source by triangulation, a linear arrangement of the sensors is not preferred; therefore, in addition to the 3 poles, we placed a fiber spool on the ground near one end of the pole line to break the symmetry. These 4 locations (3 poles and 1 fiber spool) were chosen as our “virtual microphones” to be used as reference points for acoustic source localization. The DAS system was located inside a control office approximately 350 meters away from the first pole (located at the origin of our testbed) in terms of fiber distance. A bird's-eye view plan of the testbed is shown illustratively in
The DAS system was operated with an optical pulse width of 40 ns and a pulse repetition rate of 20 kHz. The spatial resolution of the system was ~1.22 meters. The locations of the poles and the fiber spool along the fiber optic cable were obtained by analyzing the DAS data from manual hammer hits at each location.
The geographical locations of those points were measured using an industrial tape measure with an expected error of ±15 cm, relative to Pole 1, which was chosen as the origin of the testbed coordinate system. The locations of these reference points along the fiber cable and in the testbed coordinate system are given in the following table.
A .32 caliber starter gun shooting short black-powder blanks was utilized as the impulsive acoustic source and was fired once at each of 4 different locations at the testbed, above head level (approximately 2 meters above the ground). The DAS signatures of each shot were recorded separately and analyzed to calculate the location of the impulsive acoustic event.
The starter gunshot events are illustrated in a “waterfall” trace plot in the figure, which is a 2D representation of the detected DAS signal along the interrogated fiber length (x-axis) and how it changes in time (y-axis), where the signal strength may be color-coded. This figure shows a total time duration of 150 milliseconds over the fiber range between 300 m and 550 m.
As one can observe in the waterfall plot, the same acoustic event is detected by different parts of the same aerial fiber optic cable (aerial is another term for cables suspended from utility poles) at slightly different times shown with red ellipses. By knowing the actual locations of these reference points and the time difference of arrival (TDOA) of the acoustic signal at multiple reference points, it is possible to determine/calculate the source location.
To determine the time of arrival, we employ an online change-point detection algorithm based on Z-score. In this approach, we characterize the distribution of sensing measurements prior to the arrival of acoustic events by its running mean and variance, and for the next data point, we compute the probability of observing a value that is at least as extreme as the value observed, under the assumption that it is drawn from the same distribution.
The threshold (p-value) in our algorithm was chosen as 0.001, so the earliest data value with a probability below this threshold is registered as a change point, and its time coordinate is taken as the signal arrival time. Once the relative time differences are calculated, we use the 3-D triangulation formula to obtain the source location as follows:
$$\sqrt{(x_s - x_i)^2 + (y_s - y_i)^2 + (z_s - z_i)^2} \;-\; \sqrt{(x_s - x_j)^2 + (y_s - y_j)^2 + (z_s - z_j)^2} \;=\; c\,\Delta\tau_{ij}$$
In this equation, x, y, and z are the standard coordinates. The subscripts s, i, and j denote the source, the i-th sensor, and the j-th sensor, respectively; c is the speed of sound, taken as 343 m/s; and $\Delta\tau_{ij}$ is the relative time difference of arrival between the i-th and j-th sensors.
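A minimal sketch of solving this relationship for the source coordinates follows, formulated, consistent with the present disclosure, as a least-squares optimization over all sensor pairs rather than direct equation solving. The coarse grid-search initialization, search box, and testbed-like geometry used in the demonstration are illustrative assumptions; SciPy's least_squares solver performs the refinement.

```python
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s, as in the relationship above


def locate_source(mic_xyz, dt_matrix, search_box):
    """Estimate the source coordinates (x_s, y_s, z_s) from pairwise TDOAs.

    mic_xyz    : (N, 3) virtual-microphone coordinates, metres
    dt_matrix  : (N, N) relative time differences of arrival, seconds
    search_box : ((xmin, xmax), (ymin, ymax), (zmin, zmax)) plausible region
    """
    iu = np.triu_indices(len(mic_xyz), k=1)            # use each sensor pair once
    measured = SPEED_OF_SOUND * dt_matrix[iu]          # measured range differences, m

    def residuals(src):
        d = np.linalg.norm(mic_xyz - src, axis=1)      # candidate-to-sensor ranges
        return (d[:, None] - d[None, :])[iu] - measured

    # Coarse grid search for a starting point, then local least-squares refinement.
    axes = [np.linspace(lo, hi, 15) for lo, hi in search_box]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    costs = [float(np.sum(residuals(p) ** 2)) for p in grid]
    x0 = grid[int(np.argmin(costs))]
    lower = [lo for lo, _ in search_box]
    upper = [hi for _, hi in search_box]
    return least_squares(residuals, x0, bounds=(lower, upper)).x


if __name__ == "__main__":
    # Illustrative geometry loosely mirroring the testbed: three poles in a
    # line plus a fiber spool on the ground to one side.
    mics = np.array([[0.0, 0.0, 4.0], [27.4, 0.0, 4.0],
                     [54.8, 0.0, 4.0], [5.0, 6.0, 0.5]])
    source = np.array([20.0, 10.0, 2.0])               # synthetic impulsive event
    d = np.linalg.norm(mics - source, axis=1)
    dt = (d[:, None] - d[None, :]) / SPEED_OF_SOUND    # noise-free synthetic TDOAs
    box = ((-10.0, 70.0), (-10.0, 30.0), (0.0, 5.0))   # source assumed above ground
    print(locate_source(mics, dt, box))                # approx. [20. 10.  2.]
```

Restricting the search box to locations above ground also removes the mirror-image ambiguity that can arise when all reference microphones lie in, or near, a single plane.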
By using the above equation/relationship after the change-point detection algorithm, the coordinates of the source location are determined/calculated. The actual gunshot locations and their calculated locations at cross-section z=2 m are illustrated in
At this point we note that, since DAS systems measure strains by measuring differential phase changes over a fiber segment of one gauge length, the reference microphones based on our DAS technology collect acoustic energies spatially accumulated along fiber segments about 1.22 m long, rather than at truly discrete points. Despite this linear spatial-reception footprint of the reference microphones, the deviations from the true source locations obtained by our method were still less than 1.12 meters. It is to be noted that part of this inaccuracy is due to the manual localization errors of the reference points and the actual event locations. In summary, we describe herein acoustic source localization using standard aerial telecommunications fiber optic cables—including those deployed and operating to actively carry telecommunications traffic. Our experimental results verify our approach of integrating DAS technology into existing aerial telecommunications fiber optic networks for smart-city and safer-city applications, advantageously reducing the installation costs associated with such systems.
In addition, systems, methods, and structures according to aspects of the present disclosure may advantageously provide for the use of DAS for continuous monitoring of a large area for acoustic impulse events by employing fiber optic cables already deployed in an urban setting as a “microphone array”.
Advantageously, our inventive techniques employ DAS for the detection and localization of acoustic impulse events by using time-frequency-spatial domain methods for data analysis, including: using the spatial distribution of the fiber optic cable as part of the sensing configuration; using frequency-filtering optimization to preprocess the data; using a time-domain change-point detection method for relative time-of-arrival estimation; formulating the localization as an optimization problem (rather than equation solving) to estimate the event location using multiple measurements, with a notion of uncertainty quantification; and then informing relevant authorities of the detected event time and location(s).
At this point, while we have presented this disclosure using some specific examples, those skilled in the art will recognize that our teachings are not so limited. Accordingly, this disclosure should only be limited by the scope of the claims attached hereto.
This disclosure claims the benefit of U.S. Provisional Patent Application Ser. No. 63/069,791 filed 25 Aug. 2020 and U.S. Provisional Patent Application Ser. No. 63/140,977 filed 25 Jan. 2021, the entire contents of each of which are incorporated by reference as if set forth at length herein.
Number | Date | Country
---|---|---
63069791 | Aug 2020 | US
63140977 | Jan 2021 | US