The present disclosure is related to an optical fiber sensing system, an optical fiber sensing device, and a sound output method.
In recent years, a technology called optical fiber sensing, which uses an optical fiber as a sensor to detect sound, has been known. Since an optical fiber enables sound to be superimposed on an optical signal transmitted therethrough, sound can be detected through use of an optical fiber.
For example, Patent Literature 1 discloses a technology of detecting sound by analyzing a phase change of an optical wave transmitted through an optical fiber.
Published Japanese Translation of PCT International Publication for Patent Application, No. 2010-506496
Incidentally, as an acoustic system configured to output sound such as a person's voice, a system that employs a microphone and outputs the sound collected by the microphone has been generally known.
However, a typical microphone requires setup, such as arrangement and wiring, according to the acoustic system that employs the microphone and the circumstances of its use. For example, in a case in which the acoustic system is a conference system, it is required to provide a microphone (or a plurality of microphones as the case may be) and, according to the number of participants and the seating plan of the conference, to change the position(s) of the microphone(s) and organize the electrical cables connected thereto. Therefore, an acoustic system that employs a microphone is cumbersome to configure and difficult to construct flexibly.
On the other hand, an optical fiber, which enables detection of sound as described above, inherently provides a function corresponding to the sound collection function of a microphone.
However, the technology disclosed in Patent Literature 1 does little more than detect the sound from the optical wave transmitted through the optical fiber, and does not encompass the concept of outputting the detected sound itself.
Given this, an objective of the present disclosure is to provide an optical fiber sensing system, an optical fiber sensing device, and a sound output method that solve the aforementioned problems and enable flexible construction of an acoustic system.
An optical fiber sensing system according to an aspect comprises: an optical fiber configured to transmit an optical signal with sound superimposed thereon; a conversion unit configured to convert the optical signal into acoustic data; and an output unit configured to output the sound on the basis of the acoustic data.
A sound output method according to an aspect comprises: a transmitting step in which an optical fiber transmits an optical signal with sound superimposed thereon; a conversion step of converting the optical signal into acoustic data; and an output step of outputting the sound on the basis of the acoustic data.
The above-described aspects provide an effect of providing an optical fiber sensing system, an optical fiber sensing device, and a sound output method that enable flexible construction of an acoustic system.
Hereinafter, example embodiments of the present disclosure are described with reference to the drawings. Note that the following descriptions and the drawings involve omissions and simplifications as appropriate for the sake of clarification of explanation. In addition, in each of the following drawings, the same element is denoted by the same symbol and repeated explanation is omitted as needed.
First, with reference to
As shown in
The optical fiber 10 is laid in a predetermined area. For example, in a case of applying the optical fiber sensing system to a conference system, the optical fiber 10 is laid in a predetermined area in a conference room. The predetermined area in the conference room is, for example, a table, a floor, walls, a ceiling, or the like in the conference room. Alternatively, in a case of applying the optical fiber sensing system to a monitoring system, the optical fiber 10 is laid in a predetermined monitoring area to be monitored. The predetermined monitoring area is, for example, a border, a prison, a commercial facility, an airport, a hospital, a street, a port, a plant, a nursing home, a company premise, a nursery school, a private home, or the like. Note that the optical fiber 10 may be laid in the predetermined area in a form of an optical fiber cable obtained by covering the optical fiber 10.
The conversion unit 21 emits pulsed light into the optical fiber 10. The conversion unit 21 also receives, via the optical fiber 10, as return light, the reflected light and scattered light generated while the pulsed light is transmitted through the optical fiber 10.
When sound is produced around the optical fiber 10, the sound is superimposed on the return light transmitted by the optical fiber 10. The optical fiber 10 can thus detect the sound produced around the optical fiber 10.
The conversion unit 21 converts the return light with the sound superimposed thereon received from the optical fiber 10 into acoustic data. The conversion unit 21 can be embodied by using, for example, a distributed acoustic sensor (DAS).
The output unit 22 outputs the sound on the basis of the acoustic data converted by the conversion unit 21. For example, the output unit 22 carries out acoustic output of the sound from a speaker (not illustrated) or the like, or display output of the sound on a monitor (not illustrated) or the like. In the case of display output of the sound, the output unit 22 may, for example, carry out voice recognition of the sound and output a result of the voice recognition as characters.
Subsequently, with reference to
As shown in
The conversion unit 21 receives the return light with the sound superimposed thereon from the optical fiber 10, and converts the return light into acoustic data (Step S12).
Thereafter, the output unit 22 outputs the sound on the basis of the acoustic data converted by the conversion unit 21 (Step S13).
As described above, according to the first example embodiment: the optical fiber 10 superimposes the sound produced around the optical fiber 10 on the return light (optical signal) transmitted through the optical fiber 10 to transmit the sound; the conversion unit 21 converts the return light with the sound superimposed thereon into the acoustic data; and the output unit 22 outputs the sound on the basis of the acoustic data.
As a result, the sound detected by the optical fiber 10 can be reproduced by the output unit 22 in a separate location. In this regard, the optical fiber 10 is capable of detecting sound at any location where the optical fiber 10 is laid, and can thus be used as a microphone. At this time, the optical fiber 10 detects sound in a linear manner, unlike a typical microphone, which detects sound in a pinpoint manner. Consequently, there is no need to arrange a typical microphone according to the circumstances of use or to connect the microphone to an electrical cable, whereby configuration is facilitated. In addition, the optical fiber 10 can be laid over a broad area inexpensively and easily. Therefore, employing the optical fiber sensing system of the first example embodiment enables flexible construction of the acoustic system.
Note that, in the first example embodiment, the acoustic data may be stored and then the sound may be output on the basis of the acoustic data thus stored. In such a case, as shown in
Subsequently, with reference to
As shown in
On the basis of the return light with the sound superimposed thereon received by the conversion unit 21, the identification unit 23 identifies the position of production of the sound (a distance of the optical fiber 10 from the position to the conversion unit 21).
For example, on the basis of a time difference between the clock time at which the conversion unit 21 emits the pulsed light into the optical fiber 10 and the clock time at which the conversion unit 21 receives the return light with the sound superimposed thereon, the identification unit 23 identifies the distance of the optical fiber 10 from the position of production of the sound to the conversion unit 21. In this regard, if the identification unit 23 holds in advance a correspondence table in which the distance of the optical fiber 10 is associated with the position (spot) corresponding to the distance, the position of production of the sound (in this case, spot A) may also be identified through use of the correspondence table.
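The time-of-flight identification described above can be sketched as follows. The refractive index, function names, and correspondence-table entries are illustrative assumptions, not values from the present disclosure.

```python
# Sketch: locate the sound along the fiber from the round-trip delay between
# pulse emission and receipt of the return light with the sound superimposed.

SPEED_OF_LIGHT = 2.998e8          # m/s in vacuum
FIBER_REFRACTIVE_INDEX = 1.468    # assumed typical value for silica fiber

def distance_along_fiber(delay_s: float) -> float:
    """Fiber distance from the conversion unit to the position of production.

    The pulsed light travels out and the return light travels back, so the
    one-way distance is half the round-trip optical path.
    """
    return (SPEED_OF_LIGHT / FIBER_REFRACTIVE_INDEX) * delay_s / 2.0

# Assumed correspondence table: fiber distance (m) -> named spot.
CORRESPONDENCE_TABLE = {10.0: "spot A", 25.0: "spot B", 40.0: "spot C"}

def identify_spot(delay_s: float) -> str:
    """Map the computed distance to the nearest registered spot."""
    d = distance_along_fiber(delay_s)
    return CORRESPONDENCE_TABLE[min(CORRESPONDENCE_TABLE, key=lambda k: abs(k - d))]

print(identify_spot(1e-7))  # a 100 ns delay corresponds to roughly 10.2 m -> "spot A"
```

In practice a distributed acoustic sensor resolves many positions along the fiber at once; this sketch only illustrates the distance arithmetic for a single delay measurement.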
Alternatively, the identification unit 23 may also compare intensity of the sound detected at positions corresponding to predetermined distances of the optical fiber 10 from the conversion unit 21, to identify the position of production of the sound (a distance of the optical fiber 10 from the position to the conversion unit 21) on the basis of the result of the comparison. For example, suppose that the intensity of sound is detected at predetermined distances of the optical fiber 10, which is laid evenly over a table 42 as shown in
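The intensity-comparison approach can be sketched as follows; the helper name and sample values are assumptions for illustration.

```python
# Sketch: take the position of production to be the fiber distance at which
# the detected sound intensity is highest.

def position_of_production(intensity_by_distance: dict) -> float:
    """Return the fiber distance (m) with the highest detected intensity."""
    return max(intensity_by_distance, key=intensity_by_distance.get)

# Intensities sampled at predetermined distances along a fiber laid over a table.
samples = {5.0: 0.12, 10.0: 0.85, 15.0: 0.40, 20.0: 0.08}
print(position_of_production(samples))  # 10.0
```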
Note that when an event producing sound occurs around the optical fiber 10, vibration is likely to accompany the occurrence of the event. The vibration is also superimposed on the return light transmitted by the optical fiber 10. The optical fiber 10 can thus also detect the vibration generated with the sound around the optical fiber 10.
Consequently, the identification unit 23 may also identify the distance of the optical fiber 10 from the position of production of the sound to the conversion unit 21 on the basis of a time difference between the clock time at which the conversion unit 21 emits the pulsed light into the optical fiber 10 and the clock time at which the conversion unit 21 receives the return light with the vibration generated with the sound superimposed thereon. In this case, the conversion unit 21 can be embodied by using a distributed vibration sensor (DVS). The use of the distributed vibration sensor also enables the conversion unit 21 to convert the return light with the vibration superimposed thereon into vibration data.
The notification unit 24 notifies, when the output unit 22 outputs the sound, a position of production as the position of production of the sound, in association with the sound that the output unit 22 outputs. For example, as shown in
Note that when the optical fiber sensing device 20 is configured to store the acoustic data in the storage unit 25 (see
Subsequently, with reference to
As shown in
On the other hand, the identification unit 23 identifies the position of production of the sound superimposed on the return light received from the optical fiber 10 (Step S24).
The notification unit 24 notifies, when the output unit 22 outputs the sound in Step S23, the position of production identified by the identification unit 23 as the position of production of the sound, in association with the sound that the output unit 22 outputs (Step S25).
As described above, according to the second example embodiment, the identification unit 23 identifies the position of production of the sound superimposed on the return light received from the optical fiber 10, and the notification unit 24 notifies, when the output unit 22 outputs the sound, the position of production identified by the identification unit 23 as the position of production of the sound, in association with the sound that the output unit 22 outputs.
As a result, upon output of the sound detected by the optical fiber 10, the position of production of the sound can be notified. Other effects are identical to those of the first example embodiment described above.
The optical fiber sensing system according to the third example embodiment is the same in configuration as
The acoustic data converted by the conversion unit 21 has an intrinsic pattern according to the type (for example, a person, an animal, a robot, heavy machinery, etc.) of the sound source of the sound on which the acoustic data is based.
Consequently, the identification unit 23 is capable of identifying the type of the sound source of the sound on which the acoustic data is based, by analyzing a dynamic change in the pattern of the acoustic data.
Furthermore, when the type of the sound source is a person, the pattern of the acoustic data of the person's voice differs from person to person.
Consequently, the identification unit 23 is capable of identifying not only that the type of the sound source is a person, but also which person is the sound source, by analyzing a dynamic change in the pattern of the acoustic data.
At this time, the identification unit 23 may identify the person through use of pattern matching, for example. Specifically, the identification unit 23 holds in advance acoustic data of the voice of each of a plurality of persons as teacher data. Note that the teacher data may also have been learned by the identification unit 23 through machine learning or the like. The identification unit 23 compares the pattern of the acoustic data converted by the conversion unit 21 with each of the patterns of the plurality of pieces of the teacher data held in advance. When the pattern of any piece of the teacher data is matched, the identification unit 23 identifies the acoustic data converted by the conversion unit 21 as the acoustic data of the voice of the person corresponding to the matched teacher data.
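The pattern-matching idea can be sketched as follows. The similarity measure (normalized cross-correlation), the threshold, and all names are assumptions, since the disclosure does not specify a particular matching algorithm.

```python
# Sketch: compare the incoming acoustic pattern with held teacher patterns
# and report the best match above an assumed similarity threshold.
import math

def similarity(a, b):
    """Normalized cross-correlation of two equal-length patterns."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify_person(pattern, teacher_data, threshold=0.9):
    """Return the name whose teacher pattern best matches, or None if no
    teacher pattern exceeds the threshold."""
    best_name, best_score = None, threshold
    for name, teacher_pattern in teacher_data.items():
        score = similarity(pattern, teacher_pattern)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

teacher = {"Mr. A": [0.9, 0.1, 0.4], "Ms. B": [0.1, 0.8, 0.3]}
print(identify_person([0.88, 0.12, 0.41], teacher))  # Mr. A
```

Real speaker identification would operate on spectral features rather than raw samples; the sketch only illustrates the compare-against-teacher-data flow.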
The notification unit 24 notifies, when the output unit 22 outputs the sound, the position of production and the type of the sound source identified by the identification unit 23 as the position of production of the sound and the type of the sound source of the sound in association with the sound that the output unit 22 outputs. For example, as shown in
Note that when the optical fiber sensing device 20 is configured to store the acoustic data in the storage unit 25 (see
Subsequently, with reference to
As shown in
On the other hand, the identification unit 23 identifies the position of production of the sound superimposed on the return light received from the optical fiber 10, and identifies the type of the sound source of the sound (Step S34).
The notification unit 24 notifies, when the output unit 22 outputs the sound in Step S33, the position of production and the type of the sound source identified by the identification unit 23 as the position of production of the sound and the type of the sound source of the sound, in association with the sound that the output unit 22 outputs (Step S35).
As described above, according to the third example embodiment, the identification unit 23 identifies the position of production of the sound superimposed on the return light received from the optical fiber 10, and identifies the type of the sound source of the sound, and the notification unit 24 notifies, when the output unit 22 outputs the sound, the position of production and the type of the sound source identified by the identification unit 23 as the position of production of the sound and the type of the sound source of the sound, in association with the sound that the output unit 22 outputs.
As a result, upon output of the sound detected by the optical fiber 10, the position of production of the sound and the type of the sound source of the sound can be notified. Other effects are identical to those of the first example embodiment described above.
Subsequently, with reference to
As shown in
In other words, the identification unit 23 identifies, regarding each of a plurality of pieces of sound from different positions of production, the position of production of the sound and the type of the sound source of the sound.
The example of
The notification unit 24 notifies, when the output unit 22 outputs the sound, the position of production and the type of the sound source identified by the identification unit 23 as the position of production of the sound and the type of the sound source of the sound, in association with the sound that the output unit 22 outputs. For example, as shown in
Note that in the fourth example embodiment, processes of Steps S32 and later in
As described above, according to the fourth example embodiment, the identification unit 23 identifies, regarding each of a plurality of pieces of sound from different positions of production, the position of production of the sound and the type of the sound source of the sound, and the notification unit 24 notifies, regarding each of the plurality of pieces of sound, when the output unit 22 outputs the sound, the position of production and the type of the sound source identified by the identification unit 23 as the position of production of the sound and the type of the sound source of the sound, in association with the sound that the output unit 22 outputs.
As a result, regarding each of the plurality of pieces of sound from different positions of production detected by the optical fiber 10, the position of production of the sound and the type of the sound source of the sound can be notified upon output of the sound. Other effects are identical to those of the first example embodiment described above.
Subsequently, with reference to
As shown in
The collection unit 26 emits the pulsed light into the optical fiber 10 and receives, as return light (including return light with sound and vibration superimposed thereon), reflected light and scattered light generated while the pulsed light is transmitted through the optical fiber 10, via the optical fiber 10. The optical fiber sensing device 20 transmits the return light received at the collection unit 26 to the analysis device 31.
In the analysis device 31, the conversion unit 21 converts the return light into the acoustic data, and the identification unit 23 identifies the position of production and the type of the sound source of the sound. The analysis device 31 transmits the acoustic data converted by the conversion unit 21 and the position of production and the type of the sound source of the sound identified by the identification unit 23 to the optical fiber sensing device 20.
In the optical fiber sensing device 20, the output unit 22 outputs the sound on the basis of the acoustic data converted by the conversion unit 21, and the notification unit 24 notifies the position of production and the type of the sound source of the sound identified by the identification unit 23 in association with the sound that the output unit 22 outputs.
As a result, according to the fifth example embodiment, in the optical fiber sensing device 20, the load required for the process of converting the return light into the acoustic data and the process of identifying the position of production and the type of the sound source of the sound can be distributed to another device (the analysis device 31).
Note that, in the fifth example embodiment, among the constitutive elements provided in the optical fiber sensing device 20 of
Hereinafter, a specific application example of applying the optical fiber sensing system according to the above-described example embodiments to an acoustic system is described. In the following description, the acoustic system is exemplified by a conference system, a monitoring system, and a sound collection system; however, the acoustic system to which the optical fiber sensing system is applied is not limited thereto.
The first application example is an example of applying the optical fiber sensing system according to the above-described example embodiments to a conference system. In particular, the first application example is an example of applying the optical fiber sensing system of the configuration of
With reference to
As shown in
Examples of the microphone 41 include:
a cylindrical object with the optical fiber 10 wound therearound;
an object obtained by densely laying the optical fiber 10 in a predetermined shape (the shape of laying of the optical fiber 10 is not limited and may be, for example, a baton-like shape, a spiral shape, a star-like shape, etc.);
a box with the optical fiber 10 wound therearound;
an object with the optical fiber 10 wound therearound and covered; and
a box accommodating the optical fiber 10 (the optical fiber 10 is not necessarily required to be wound around the object, and may be, for example, accommodated in a box, embedded on a floor or a table, laid on a ceiling, etc.).
In the conference system according to the first application example, the sound detected by the microphone (#A) 41A and the microphone (#B) 41B is subjected to acoustic output from the speaker 32 or display output on the monitor 33.
The microphone (#A) 41A and the microphone (#B) 41B allow switching of an ON/OFF status. For example, when the microphone (#A) 41A is turned off, the output unit 22 is prevented from outputting the sound detected by the microphone (#A) 41A, or the conversion unit 21 is prevented from converting the return light with the sound detected by the microphone (#A) 41A superimposed thereon into the acoustic data. In this case, the conversion unit 21 and the output unit 22 may determine whether or not the sound is the sound detected by the microphone (#A) 41A, on the basis of the position of production of the sound identified by the identification unit 23. Alternatively, the notification unit 24 may notify the ON/OFF statuses of the microphone (#A) 41A and the microphone (#B) 41B. At this time, the notification unit 24 may also carry out, for example, display output of the statuses of the microphone (#A) 41A and the microphone (#B) 41B on the monitor 33 as shown in
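The ON/OFF gating based on the identified position of production can be sketched as follows; the fiber-distance ranges and helper names are assumptions.

```python
# Sketch: suppress output of sound whose identified position of production
# falls on a microphone that is switched off.

# Assumed fiber-distance ranges (m) covered by each microphone.
MIC_RANGES = {"#A": (0.0, 15.0), "#B": (15.0, 30.0)}
mic_status = {"#A": False, "#B": True}   # microphone #A is turned off

def should_output(position_m: float) -> bool:
    """Output the sound only if the microphone covering its identified
    position of production is switched ON."""
    for mic, (start, end) in MIC_RANGES.items():
        if start <= position_m < end:
            return mic_status[mic]
    return True  # sound from outside any microphone's range passes through

print(should_output(5.0))   # False: detected by microphone #A, which is off
print(should_output(20.0))  # True: microphone #B is on
```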
In addition, in the conference system according to the first application example, the optical fiber 10 is connected through use of optical fiber connectors CN. In a configuration in which the optical fiber 10 is connected without using the optical fiber connectors CN, a dedicated tool or a person with expertise has been required for handling when the optical fiber 10 is disconnected or the like. Given this, connecting the optical fiber 10 by using the optical fiber connectors CN as in the first application example facilitates maintenance and equipment replacement in case of a failure.
The second application example is an example of applying the optical fiber sensing system according to the above-described example embodiments to a conference system carrying out a video conference involving a plurality of sites. In particular, the second application example is an example of applying the optical fiber sensing system of the configuration of
With reference to
In the example of
In the example of
In the example of
In addition, in the example of
In addition, in the example of
In this regard, all the examples of
For example, in a case of laying a plurality of optical fibers 10, a plurality of optical fiber sensing devices 20 must be provided to correspond to the plurality of optical fibers 10 respectively.
On the other hand, all the examples of
In addition, in all the examples of
Subsequently, a display example of a case of notifying, through display output, the position of production of sound and the type of the sound source in the conference system according to the second application example is described.
Hereinafter, suppose that a video conference is held between two sites X and Y, in which the position of production of sound and the type of the sound source detected by the optical fiber 10 in a conference room at the site X are subjected to display output on a monitor (hereinafter referred to as “monitor 44Y”) in a conference room at the site Y, while the position of production of sound and the type of the sound source detected by the optical fiber 10 in the conference room at the site Y are subjected to display output on a monitor 44X (see
In addition, the following description exemplifies a case of carrying out display output of the position of production of sound and the type of a sound source detected by the optical fiber 10 in the conference room at the site X on the monitor 44Y in the conference room at the site Y. Suppose also that in the conference room at the site X, a table 42X, four chairs 43XA to 43XD, and the monitor 44X are arranged, and the participants are seated in the chairs 43XA to 43XD to participate in a conference as shown in
First, with reference to
In the first display example in
A procedure for embodying the first display example in
For example, the identification unit 23 holds in advance a correspondence table in which positions of the chairs 43XA to 43XD are associated with respective distances of the optical fiber 10 from the positions of the chairs 43XA to 43XD to the conversion unit 21 (see
In addition, in the first display example in
A procedure for obtaining the names of the participants seated in the chairs 43XA to 43XD is, for example, as follows.
For example, the identification unit 23 holds in advance acoustic data of voice of each of a plurality of persons as teacher data in association with respective names and the like of the plurality of persons. Note that the teacher data may also have been learned by the identification unit 23 through machine learning or the like. When a participant speaks, the identification unit 23 identifies, as described above, the position of the chair 43X in which the participant is seated. In addition, the identification unit 23 compares the pattern of the acoustic data of the voice of the participant with each of the patterns of the plurality of pieces of the teacher data. When a pattern of any of the teacher data is matched, the identification unit 23 obtains a name of a person associated with the matched teacher data as the name of the participant seated in the chair 43X thus identified.
Alternatively, the identification unit 23 may prompt the participants to register their names as shown in
Yet alternatively, the identification unit 23 may also carry out voice recognition of voice of the participants during the conference, analyze content of speech on the basis of results of the voice recognition, and obtain names of the participants on the basis of the content of speech enabling identification of the participants (for example, “What do you think, Mr. XX?”).
Note that the identification unit 23 may also analyze acoustic data of the participants during the conference, and hold the acoustic data as teacher data in association with the names and the like of the participants. This enables the acoustic data of the participants not having been held as the teacher data to be newly held as the teacher data, and teacher data to be further accumulated for the acoustic data of the participants having been held as the teacher data, whereby the teacher data can be improved in accuracy. As a result, when the participants participate in subsequent conferences, this enables the participants to be smoothly identified and their names to be smoothly obtained.
Subsequently, with reference to
In the second display example in
A procedure for embodying the second display example in
For example, the identification unit 23 holds in advance a correspondence table in which positions of the chairs 43XA to 43XD are associated with respective distances of the optical fiber 10 from the positions of the chairs 43XA to 43XD to the conversion unit 21 (see
In addition, in the second display example in
Note that a procedure for obtaining the name of the participant seated in the chair 43X may be similar to that of the above-described first display example, and the description thereof is therefore omitted.
Subsequently, with reference to
In the third display example in
A procedure for embodying the third display example in
For example, when a participant speaks, the output unit 22 carries out display output of the voice of the participant. The identification unit 23 holds in advance a correspondence table in which positions of the chairs 43XA to 43XD are associated with respective distances of the optical fiber 10 from the positions of the chairs 43XA to 43XD to the conversion unit 21 (see
A procedure for obtaining the facial image of the participant is, for example, as follows.
The identification unit 23 holds arrangement data of the inside of the conference room at the site X in order to enable determination of which parts of the captured image of the inside of the conference room at the site X correspond to the chairs 43XA to 43XD respectively. When a participant speaks, the identification unit 23 identifies the position of the chair 43X in which the participant is seated as described above. And then the identification unit 23 detects facial images in the captured image of the inside of the conference room at the site X through use of face recognition technology, and, among the facial images thus detected, obtains a facial image in the position closest to the position of the chair 43X as the facial image of the participant seated in the chair 43X identified as described above.
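The nearest-face selection can be sketched as follows; the coordinate representation and helper names are assumptions.

```python
# Sketch: among facial images detected in the captured image, obtain the one
# whose position is closest to the identified chair position.

def closest_face(chair_xy, detected_faces):
    """Return the detected face nearest the chair position (squared distance)."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(detected_faces, key=lambda f: dist2(f["position"], chair_xy))

faces = [
    {"name": "face_1", "position": (120, 80)},
    {"name": "face_2", "position": (400, 90)},
]
print(closest_face((110, 75), faces)["name"])  # face_1
```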
Alternatively, the identification unit 23 holds in advance acoustic data of voice of each of a plurality of persons as teacher data in association with respective names, facial images and the like of the plurality of persons. Note that the teacher data may also have been learned by the identification unit 23 through machine learning or the like. When a participant speaks, the identification unit 23 identifies the position of the chair 43X in which the participant is seated as described above. In addition, the identification unit 23 compares the pattern of the acoustic data of the voice of the participant with each of the patterns of the plurality of pieces of the teacher data. When a pattern of any of the teacher data is matched to a pattern of the acoustic data of the voice of the participant, the identification unit 23 obtains a facial image of a person associated with the matched teacher data as the facial image of the participant seated in the chair 43X identified as described above.
In addition, in the third display example in
Note that a procedure for obtaining the name of the participant seated in the chair 43X identified as described above may be similar to that of the above-described first display example, and the description thereof is therefore omitted.
Note that, in the description of the second application example, the participants positioned around the table 42X have been supposed to be seated in chairs 43X around the table 42X; however, the chairs 43X are not necessarily fixed. Therefore, as shown in
The third application example is an example of applying the optical fiber sensing system according to the above-described example embodiments to a monitoring system. In particular, the third application example is an example of applying the optical fiber sensing system of the configuration of
A monitoring area to be monitored by the monitoring system is, for example, a border, a prison, a commercial facility, an airport, a hospital, streets, a port, a plant, a nursing home, company premises, a nursery school, a private home, or the like.
Hereinafter, in the case of the monitoring area being a nursery school, an example of the monitoring system allowing a parental guardian to connect to the monitoring system through an application on a mobile terminal such as a smartphone, to check the condition of a child by means of the child's voice is described.
In the nursery school, the optical fiber 10 is laid on a floor, walls, a ceiling, and the like inside a building.
In addition, the identification unit 23 holds in advance acoustic data of voice of each of a plurality of children who are pupils of the nursery school as teacher data in association with respective identification information (names, identification numbers, etc.) of parental guardians of the children. Note that the teacher data may also have been learned by the identification unit 23 through machine learning or the like.
When a parental guardian uses the monitoring system, the operations are as follows.
First, the parental guardian connects to the monitoring system through an application on a mobile terminal and submits identification information of the parental guardian.
Upon reception of the identification information from the parental guardian, the identification unit 23 identifies, among the teacher data held in advance, the acoustic data of the voice of the child of the parental guardian associated with the identification information.
Then, when the optical fiber 10 detects sound in the nursery school, the identification unit 23 compares the pattern of the acoustic data of the sound with the pattern of the acoustic data identified as described above. When the two patterns match, the identification unit 23 extracts the acoustic data of the sound detected by the optical fiber 10 as the acoustic data of the voice of the child of the parental guardian.
The output unit 22 carries out acoustic output of the voice of the child from a speaker or the like of the mobile terminal, on the basis of the acoustic data extracted by the identification unit 23.
At this time, it is preferred that the output unit 22 not output voices other than that of the child of the parental guardian. This prevents the voices of other children and of nursery staff members from being output, whereby the privacy of others is protected.
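The pattern-matching extraction described above can be sketched in outline as follows. This is a minimal illustration, not the disclosed implementation: the registry, the similarity measure (peak normalized cross-correlation), and the threshold are all assumptions introduced here for the sketch.

```python
import numpy as np

# Hypothetical teacher-data registry: guardian ID -> reference acoustic pattern.
# The IDs, the stand-in waveform, and the threshold are illustrative assumptions.
REFERENCE_PATTERNS = {
    "guardian-001": np.sin(np.linspace(0, 20, 8000)),  # stand-in for a child's voice pattern
}

MATCH_THRESHOLD = 0.8  # assumed similarity threshold


def normalized_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Peak of the normalized cross-correlation between two equal-length signals."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    corr = np.correlate(a, b, mode="full") / len(a)
    return float(corr.max())


def extract_child_voice(guardian_id: str, detected: np.ndarray):
    """Return the detected acoustic data only when its pattern matches the
    registered pattern for the guardian's child; otherwise return None, so
    that voices of other children and staff are never output."""
    reference = REFERENCE_PATTERNS[guardian_id]
    if normalized_correlation(detected, reference) >= MATCH_THRESHOLD:
        return detected
    return None
```

A real deployment would match against many short frames of live fiber data rather than whole buffered signals; the structure of the decision (compare, threshold, suppress non-matches) is the point here.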
Note that, in the above description, the identification unit 23 extracts the acoustic data of the child of the parental guardian through use of pattern matching; however, the present disclosure is not limited thereto. For example, the identification unit 23 may also use voice recognition technology to extract the acoustic data of the child of the parental guardian. When voice recognition technology is used, the following operations are performed.
The identification unit 23 holds in advance a characteristic feature of the acoustic data of the voice of each of a plurality of children who are pupils of the nursery school, in association with identification information (names, identification numbers, etc.) of the respective parental guardians of the children. Note that the characteristic feature of the acoustic data may also be learned by the identification unit 23 through machine learning or the like.
Upon reception of the identification information from the parental guardian, the identification unit 23 identifies, among the characteristic features of the acoustic data held in advance, the characteristic feature of the acoustic data of the voice of the child of the parental guardian associated with the identification information.
Then, when the optical fiber 10 detects sound in the nursery school, the identification unit 23 compares a characteristic feature of the acoustic data of the sound with the characteristic feature of the acoustic data identified as described above. When the two characteristic features match, the identification unit 23 extracts the acoustic data of the sound detected by the optical fiber 10 as the acoustic data of the voice of the child of the parental guardian.
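The characteristic-feature variant can likewise be sketched. The particular features used below (RMS energy, zero-crossing rate, spectral centroid) and the matching tolerance are assumptions chosen purely for illustration; a practical system would more likely use MFCCs or a learned speaker embedding.

```python
import numpy as np


def voice_features(signal: np.ndarray, rate: int = 8000) -> np.ndarray:
    """Crude characteristic-feature vector for an acoustic signal:
    RMS energy, zero-crossing rate, and spectral centroid (Hz)."""
    rms = np.sqrt(np.mean(signal ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(signal)))) / 2
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
    return np.array([rms, zcr, centroid])


def features_match(f1: np.ndarray, f2: np.ndarray, tolerance: float = 0.1) -> bool:
    """Declare a match when every feature agrees within a relative tolerance."""
    return bool(np.all(np.abs(f1 - f2) <= tolerance * (np.abs(f2) + 1e-12)))
```

Holding only such feature vectors, rather than raw voice recordings, is one way the identification unit could keep its reference data compact.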
The fourth application example is an example of applying the optical fiber sensing system according to the above-described example embodiments to a sound collection system. In particular, the fourth application example is an example of applying the optical fiber sensing system of the configuration of
A sound collection area in which the sound collection system collects sound is, for example, an area where a marked person is likely to appear, such as a border, a prison, a station, an airport, religious facilities, monitoring facilities, or the like.
Hereinafter, an example of the sound collection system configured to collect voice of the marked person in the sound collection area is described.
In the sound collection area, the optical fiber 10 is laid on a floor, walls, a ceiling, and the like inside a building, as well as in the ground, on a fence, and the like outside the building.
When the sound collection system collects voice of the marked person, the following operations are performed.
The identification unit 23 identifies the marked person. For example, when a suspicious person detection system (not illustrated) or the like analyzes the behavior and the like of a person present in the sound collection area and identifies a suspicious person, the identification unit 23 identifies the suspicious person as the marked person.
Subsequently, the identification unit 23 identifies the position of the marked person (the distance along the optical fiber 10 from the conversion unit 21 to that position) in cooperation with the suspicious person detection system or the like.
Then, the conversion unit 21 converts, into acoustic data, the return light on which the sound detected by the optical fiber 10 at the position identified by the identification unit 23 is superimposed. The identification unit 23 analyzes a dynamic change in the pattern of the acoustic data to extract the acoustic data of the voice of the marked person (for example, voice during conversation with another marked person).
The output unit 22 carries out acoustic output or display output of the voice of the marked person to a security system or a security guards' room, on the basis of the acoustic data extracted by the identification unit 23. Alternatively, the notification unit 24 may also notify the security system or the security guards' room of the detection of the marked person.
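The position-based selection in the steps above can be illustrated in outline as follows. The channel layout and spacing are assumptions, loosely modeled on distributed acoustic sensing practice, in which the return light is resolved into sensing channels indexed by distance along the fiber.

```python
import numpy as np

# Assumed spacing between sensing points along the fiber (illustrative value).
CHANNEL_SPACING_M = 1.0


def channel_for_position(distance_m: float) -> int:
    """Map a distance along the fiber (from the conversion unit) to the
    index of the nearest sensing channel."""
    return int(round(distance_m / CHANNEL_SPACING_M))


def acoustic_data_at(traces: np.ndarray, distance_m: float) -> np.ndarray:
    """Given a buffer of per-channel time series (rows = channels along the
    fiber, columns = time samples), return the time series detected at the
    marked person's position, i.e. the data to be converted and analyzed."""
    return traces[channel_for_position(distance_m)]
```

The sketch shows only the selection step; converting return light into such a per-channel buffer is the role of the conversion unit 21 and the interrogation hardware.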
Next, a hardware configuration of a computer 50 embodying the optical fiber sensing device 20 is described with reference to
As shown in
The processor 501 is, for example, an arithmetic processing unit such as a CPU (Central Processing Unit) or a GPU (Graphics Processing Unit). The memory 502 is, for example, RAM (Random Access Memory) or ROM (Read Only Memory). The storage 503 is, for example, a storage device such as an HDD (Hard Disk Drive), an SSD (Solid State Drive), or a memory card. Alternatively, the storage 503 may be memory such as RAM or ROM.
The storage 503 stores programs that embody the functions of the constitutive elements (the conversion unit 21, the output unit 22, the identification unit 23, and the notification unit 24) provided in the optical fiber sensing device 20. The processor 501 executes these programs to realize the respective functions of these constitutive elements. Upon execution, the processor 501 may either read these programs into the memory 502 for execution or execute them without reading them into the memory 502. In addition, the memory 502 and the storage 503 also serve to store the information and the data held by the constitutive elements provided in the optical fiber sensing device 20. Furthermore, the memory 502 and the storage 503 also serve as the storage unit 25 in
The aforementioned programs may be provided to a computer (including the computer 50) stored in various types of non-transitory computer-readable media. The non-transitory computer-readable medium includes various types of tangible storage media. Examples of the non-transitory computer-readable medium include: a magnetic recording medium (for example, a flexible disk, a magnetic tape, or a hard disk drive); a magneto-optical recording medium (for example, a magneto-optical disk); a CD-ROM (Compact Disc-ROM); a CD-R (CD-Recordable); a CD-R/W (CD-ReWritable); and semiconductor memory (for example, mask ROM, PROM (Programmable ROM), EPROM (Erasable PROM), flash ROM, or RAM). Alternatively, the programs may be provided to the computer by means of various types of transitory computer-readable media. Examples of the transitory computer-readable medium include an electrical signal, an optical signal, and an electromagnetic wave. The transitory computer-readable media can provide the programs to a computer via a wired communication channel such as an electric wire or a fiber-optic cable, or via a wireless communication channel.
The input/output interface 504 is connected to a display device 5041, an input device 5042, a sound output device 5043, and the like. The display device 5041 is a device, such as an LCD (Liquid Crystal Display), a CRT (Cathode Ray Tube) display, or a monitor, configured to display a screen corresponding to drawing data processed by the processor 501. The input device 5042 is a device, such as a keyboard, a mouse, or a touch sensor, configured to accept operation input from an operator. The display device 5041 and the input device 5042 may be integrally embodied as a touch panel. The sound output device 5043 is a device, such as a speaker, configured to carry out acoustic output of sound corresponding to acoustic data processed by the processor 501.
The communication interface 505 sends data to and receives data from external devices. For example, the communication interface 505 communicates with the external devices via a wired communication channel or a wireless communication channel.
The present disclosure has been described with reference to the example embodiments; however, the present disclosure is not limited to the above-described example embodiments. Various modifications comprehensible by one of ordinary skill in the art within the scope of the present disclosure can be made to the configurations and details of the present disclosure.
A part or all of the above-described example embodiments may be stated as in the supplementary notes presented below, but are not limited thereto.
(Supplementary Note 1)
An optical fiber sensing system comprising:
an optical fiber configured to transmit an optical signal with sound superimposed thereon;
a conversion unit configured to convert the optical signal into acoustic data; and
an output unit configured to output the sound on the basis of the acoustic data.
(Supplementary Note 2)
The optical fiber sensing system according to Supplementary Note 1, further comprising:
an identification unit configured to identify a position of production of the sound on the basis of the optical signal; and
a notification unit configured to notify, when the output unit outputs the sound, the position of production of the sound in association with the sound that the output unit outputs.
(Supplementary Note 3)
The optical fiber sensing system according to Supplementary Note 2, wherein
the identification unit identifies a type of a sound source of the sound on the basis of a pattern of the acoustic data; and
the notification unit notifies, when the output unit outputs the sound, the position of production of the sound and the type of the sound source of the sound in association with the sound that the output unit outputs.
(Supplementary Note 4)
The optical fiber sensing system according to Supplementary Note 3, wherein
the identification unit identifies, regarding each of a plurality of pieces of sound from different positions of production, the type of the sound source of the sound; and
the notification unit notifies, regarding each of the plurality of pieces of sound from different positions of production, when the output unit outputs the sound, the position of production of the sound and the type of the sound source of the sound in association with the sound that the output unit outputs.
(Supplementary Note 5)
The optical fiber sensing system according to Supplementary Note 3 or 4, further comprising a storage unit configured to store the acoustic data,
wherein the output unit reads the acoustic data from the storage unit and outputs the sound on the basis of the acoustic data thus read.
(Supplementary Note 6)
The optical fiber sensing system according to Supplementary Note 5, wherein the storage unit stores the position of production of the sound and the type of the sound source of the sound in association with the acoustic data, and
when the output unit outputs the sound, the notification unit reads the position of production of the sound and the type of the sound source of the sound from the storage unit and notifies the position of production of the sound and the type of the sound source of the sound thus read in association with the sound that the output unit outputs.
(Supplementary Note 7)
The optical fiber sensing system according to any one of Supplementary Notes 1 to 6, further comprising an object configured to accommodate the optical fiber,
wherein the optical fiber transmits the optical signal with the sound, which is produced around the object, superimposed thereon.
(Supplementary Note 8)
An optical fiber sensing device comprising:
a conversion unit configured to convert an optical signal with sound superimposed thereon transmitted through an optical fiber into acoustic data; and
an output unit configured to output the sound on the basis of the acoustic data.
(Supplementary Note 9)
The optical fiber sensing device according to Supplementary Note 8, further comprising:
an identification unit configured to identify a position of production of the sound on the basis of the optical signal; and
a notification unit configured to notify, when the output unit outputs the sound, the position of production of the sound in association with the sound that the output unit outputs.
(Supplementary Note 10)
The optical fiber sensing device according to Supplementary Note 9, wherein
the identification unit identifies a type of a sound source of the sound on the basis of a pattern of the acoustic data; and
the notification unit notifies, when the output unit outputs the sound, the position of production of the sound and the type of the sound source of the sound in association with the sound that the output unit outputs.
(Supplementary Note 11)
The optical fiber sensing device according to Supplementary Note 10, wherein
the identification unit identifies, regarding each of a plurality of pieces of sound from different positions of production, the position of production of the sound and the type of the sound source of the sound; and
the notification unit notifies, regarding each of the plurality of pieces of sound from different positions of production, when the output unit outputs the sound, the position of production of the sound and the type of the sound source of the sound in association with the sound that the output unit outputs.
(Supplementary Note 12)
The optical fiber sensing device according to any one of Supplementary Notes 7 to 11, further comprising a storage unit configured to store the acoustic data,
wherein the output unit reads the acoustic data from the storage unit and outputs the sound on the basis of the acoustic data thus read.
(Supplementary Note 13)
The optical fiber sensing device according to Supplementary Note 12, wherein the storage unit stores the position of production of the sound and the type of the sound source of the sound in association with the acoustic data, and
when the output unit outputs the sound, the notification unit reads the position of production of the sound and the type of the sound source of the sound from the storage unit and notifies the position of production of the sound and the type of the sound source of the sound thus read in association with the sound that the output unit outputs.
(Supplementary Note 14)
The optical fiber sensing device according to any one of Supplementary Notes 8 to 13, wherein the optical fiber transmits the optical signal with the sound, which is produced around an object accommodating the optical fiber, superimposed thereon.
(Supplementary Note 15)
A sound output method by an optical fiber sensing system comprising:
a transmitting step in which an optical fiber transmits an optical signal with sound superimposed thereon;
a conversion step of converting the optical signal into acoustic data; and
an output step of outputting the sound on the basis of the acoustic data.
(Supplementary Note 16)
The sound output method according to Supplementary Note 15, further comprising:
an identification step of identifying a position of production of the sound on the basis of the optical signal; and
a notification step of notifying, when the sound is output in the output step, the position of production of the sound in association with the sound that is output in the output step.
(Supplementary Note 17)
The sound output method according to Supplementary Note 16, wherein
in the identification step, a type of a sound source of the sound is identified on the basis of a pattern of the acoustic data, and
in the notification step, when the sound is output in the output step, the position of production of the sound and the type of the sound source of the sound are notified in association with the sound that is output in the output step.
(Supplementary Note 18)
The sound output method according to Supplementary Note 17, wherein
in the identification step, regarding each of a plurality of pieces of sound from different positions of production, the type of the sound source of the sound is identified; and
in the notification step, regarding each of the plurality of pieces of sound from different positions of production, when the sound is output in the output step, the position of production of the sound and the type of the sound source of the sound are notified in association with the sound that is output in the output step.
(Supplementary Note 19)
The sound output method according to Supplementary Note 17 or 18, further comprising a storage step of storing the acoustic data,
wherein in the output step, the acoustic data thus stored is read and the sound is output on the basis of the acoustic data thus read.
(Supplementary Note 20)
The sound output method according to Supplementary Note 19, wherein in the storage step, the position of production of the sound and the type of the sound source of the sound are stored in association with the acoustic data, and
in the notification step, when the sound is output in the output step, the position of production of the sound and the type of the sound source of the sound thus stored are read, and the position of production of the sound and the type of the sound source of the sound thus read are notified in association with the sound that is output in the output step.
(Supplementary Note 21)
The sound output method according to any one of Supplementary Notes 15 to 21, wherein in the transmitting step, the optical fiber transmits the optical signal with the sound, which is produced around an object accommodating the optical fiber, superimposed thereon.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2019/021210 | 5/29/2019 | WO | 00 |