The present disclosure relates to an unmanned aerial vehicle detection system and an unmanned aerial vehicle detection method for detecting an unmanned aerial vehicle (UAV).
A flying object monitoring device is known that is capable of detecting the existence of an object and its flying direction using a plurality of sound detectors, one per direction, that detect sounds generated within a monitoring area (see PTL 1, for example). When the flight and flying direction of a flying object are detected by sound detection using a microphone, a processing device of the flying object monitoring device points a monitoring camera in the direction in which the flying object flew. The processing device then displays an image captured by the monitoring camera on a display device.
However, various flying objects other than the unmanned aerial vehicle of interest to a user may appear, for example, in an image captured by the monitoring camera. In this case, it is difficult to grasp the existence of the unmanned aerial vehicle desired by the user, and even if such a vehicle exists, it is also difficult to easily distinguish its position from the surrounding circumstances.
The present disclosure aims to make it easy to determine the existence and position of an unmanned aerial vehicle desired by a user by using an image captured by a camera.
PTL 1: Japanese Patent Unexamined Publication No. 2006-168421
PTL 2: Japanese Patent Unexamined Publication No. 2014-143678
PTL 3: Japanese Patent Unexamined Publication No. 2015-029241
According to an aspect of the present disclosure, there is provided an unmanned aerial vehicle detection system including a camera that images an imaging area, a microphone array that picks up sound in the imaging area, a display that displays a captured image of the imaging area captured by the camera, and a signal processor that detects an unmanned aerial vehicle appearing in the imaging area using the sound picked up by the microphone array. The signal processor superimposes first identification information, obtained by converting the unmanned aerial vehicle into visual information in the captured image of the imaging area, on the captured image of the imaging area and displays the result on the display.
According to another aspect of the present disclosure, there is provided an unmanned aerial vehicle detection method for the unmanned aerial vehicle detection system, the method including imaging an imaging area with a camera, picking up sound in the imaging area with a microphone array, detecting an unmanned aerial vehicle appearing in the imaging area using the sound picked up by the microphone array, generating first identification information by converting the unmanned aerial vehicle into visual information in a captured image of the imaging area, and displaying the first identification information superimposed on the captured image of the imaging area on the display.
According to the present disclosure, it is possible to easily determine the existence and position of an unmanned aerial vehicle desired by a user by using an image captured by a camera.
Hereinafter, exemplary embodiments that specifically disclose an unmanned aerial vehicle detection system and an unmanned aerial vehicle detection method according to the present disclosure will be described in detail with reference to the drawings as appropriate. However, more detailed explanation than necessary may be omitted. For example, detailed explanations of already well-known matters and redundant explanations of substantially the same configurations may be omitted. This is to avoid making the following explanation unnecessarily redundant and to facilitate understanding by those skilled in the art. Note that the accompanying drawings and the following descriptions are provided to enable those skilled in the art to sufficiently understand the present disclosure, and are not intended to limit the claimed subject matter.
In the present exemplary embodiment, a multicopter type drone having a plurality of rotors (in other words, rotating blades) is taken as an example of unmanned aerial vehicle dn. In a multicopter type drone, in general, when a rotor has two blades, harmonics at twice a specific fundamental frequency, and at integer multiples of that frequency, are generated. Similarly, when a rotor has three blades, harmonics at three times the specific fundamental frequency, and at integer multiples of that frequency, are generated. The same applies when a rotor has four or more blades.
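As a minimal sketch of the harmonic structure just described, the expected peak frequencies for a rotor with k blades can be listed as integer multiples of the blade-passing frequency. The fundamental rotation frequency f0 below is a hypothetical example, not a value from the disclosure.

```python
# Illustrative sketch: expected harmonic signature of a multicopter rotor.
# f0 (rotor rotation frequency) and the peak count are assumed values.

def rotor_harmonics(f0_hz: float, num_blades: int, count: int = 4) -> list[float]:
    """Return the first `count` expected harmonics for a rotor.

    With k blades, the blade-passing frequency is k * f0, and further
    harmonics appear at integer multiples of it.
    """
    blade_passing = num_blades * f0_hz
    return [blade_passing * n for n in range(1, count + 1)]

# A two-blade rotor spinning at 100 Hz produces peaks near 200, 400, 600, 800 Hz.
print(rotor_harmonics(100.0, 2))  # [200.0, 400.0, 600.0, 800.0]
```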
Unmanned aerial vehicle detection system 5 includes a plurality of sound source detection units UD, monitoring device 10, and monitor 50. The plurality of sound source detection units UD are each connected to monitoring device 10 via network NW. Each sound source detection unit UD has microphone array MA, omnidirectional camera CA, and PTZ camera CZ. Note that each sound source detection unit is referred to as sound source detection unit UD unless individual units need to be distinguished. Similarly, each microphone array, omnidirectional camera, and PTZ camera is referred to as microphone array MA, omnidirectional camera CA, and PTZ camera CZ unless individual devices need to be distinguished.
In sound source detection unit UD, microphone array MA nondirectionally picks up omnidirectional sound in the sound pickup area where the unit is installed. Microphone array MA has casing 15 (see
Microphone array MA includes a plurality of nondirectional microphones M1 to Mn (see
Microphone array MA has a plurality of amplifiers PA1 to PAn (see
Omnidirectional camera CA, whose volume substantially matches that of the opening, is accommodated inside the opening formed in the center of casing 15 (see
In each sound source detection unit UD, omnidirectional camera CA is fitted inside the opening of casing 15, so that omnidirectional camera CA and microphone array MA are disposed coaxially. When the optical axis of omnidirectional camera CA and the central axis of the casing of microphone array MA coincide in this way, the imaging area and the sound pickup area around the axis (that is, in the horizontal direction) are substantially the same, so the position of a subject in an image and the position of a sound source to be picked up can be expressed in the same coordinate system (for example, coordinates expressed as (horizontal angle, vertical angle)). Note that, in order to detect unmanned aerial vehicle dn flying in from the sky, each sound source detection unit UD is installed, for example, so that the sound pickup surface and the image pickup surface face upward (see
Monitoring device 10 forms directivity with any direction as the main beam direction (that is, beam forming) on the omnidirectional sound picked up by microphone array MA, based on a user operation, so that sound in the directional direction can be emphasized. The technology for the directivity control processing of sound data that performs beam forming on sound picked up by microphone array MA is known, as disclosed in, for example, PTL 2 and PTL 3.
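The specific directivity control of PTL 2 and PTL 3 is not reproduced here; the following is only a textbook delay-and-sum sketch under simplifying far-field assumptions, with per-microphone delays assumed to be precomputed from the array geometry.

```python
import numpy as np

def delay_and_sum(signals: np.ndarray, delays_samples: np.ndarray) -> np.ndarray:
    """Emphasize sound arriving from one direction (beam forming sketch).

    signals: shape (n_mics, n_samples), one row per microphone.
    delays_samples: per-microphone integer delay that aligns a wavefront
    arriving from the target direction (assumed precomputed).
    """
    n_mics, _ = signals.shape
    out = np.zeros(signals.shape[1])
    for channel, d in zip(signals, delays_samples):
        out += np.roll(channel, -int(d))  # advance each channel by its delay
    return out / n_mics  # sound from the target direction adds coherently
```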
Monitoring device 10 uses an image captured by omnidirectional camera CA (hereinafter sometimes abbreviated as “captured image”), performs processing on the captured image, and generates an omnidirectional image. Note that the omnidirectional image may be generated by omnidirectional camera CA instead of monitoring device 10.
Using an image based on calculated sound pressure values of the sound picked up by microphone array MA (see
Monitor 50 displays omnidirectional image GZ1 captured by omnidirectional camera CA. Monitor 50 also generates and displays a composite image in which identification mark mk is superimposed on omnidirectional image GZ1. Note that, monitor 50 may be configured as a device integrated with monitoring device 10.
First attachment plate 73 and second attachment plate 74 are attached so as to straddle the two rails 72 and lie substantially in the same plane. In addition, first attachment plate 73 and second attachment plate 74 can move freely on the two rails 72, and their positions can be adjusted and fixed farther apart or closer together.
First attachment plate 73 is a disc-shaped plate. Opening portion 73a is formed in the center of first attachment plate 73, and casing 15 of microphone array MA is accommodated and fixed in opening portion 73a. Second attachment plate 74, on the other hand, is a substantially rectangular plate. Opening portion 74a is formed near the outer edge of second attachment plate 74, and PTZ camera CZ is accommodated and fixed in opening portion 74a.
Tripod 71 is supported on the ground contact surface by three legs 71b; the position of top plate 71a can be moved freely by manual operation in the direction perpendicular to the ground contact surface, and the orientation of top plate 71a can be adjusted in the pan direction and the tilt direction. This makes it possible to point the sound pickup area of microphone array MA (in other words, the imaging area of omnidirectional camera CA) in any direction.
Compression processor 25 generates a packet of sound data based on digital sound signals output from A/D converters A1 to An. Transmitter 26 transmits the packet of the sound data generated by compression processor 25 to monitoring device 10 via network NW.
In this manner, microphone array MA amplifies the output signals of microphones M1 to Mn with amplifiers PA1 to PAn and converts them into digital sound signals with A/D converters A1 to An. Microphone array MA then generates packets of sound data in compression processor 25 and transmits the packets to monitoring device 10 via network NW.
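A hypothetical sketch of the packetization step in this capture path follows. The framing, field layout, and sequence number are assumptions for illustration only; the disclosure does not specify the packet format.

```python
import struct

def make_sound_packet(channel_samples: list[list[int]], seq_no: int) -> bytes:
    """Pack 16-bit PCM samples from all channels into one packet.

    channel_samples: one list of signed 16-bit samples per microphone.
    seq_no: illustrative sequence counter for ordering packets.
    """
    header = struct.pack("!HH", seq_no, len(channel_samples))  # seq, n_channels
    body = b"".join(
        struct.pack(f"!{len(ch)}h", *ch) for ch in channel_samples
    )
    return header + body
```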
CPU 41 performs signal processing for overall control of the operation of each part of omnidirectional camera CA, input and output processing of data with each of the other parts, data operation (calculation) processing, and data storage processing. Instead of CPU 41, a processor such as a micro processing unit (MPU) or a digital signal processor (DSP) may be provided.
For example, CPU 41 generates cutout image data by cutting out an image of a specific range (direction) from the omnidirectional image data in response to a designation by a user operating monitoring device 10, and stores the cutout image data in memory 46.
Image sensor 45 is configured using, for example, a complementary metal oxide semiconductor (CMOS) sensor or a charge coupled device (CCD) sensor, and acquires omnidirectional image data by imaging an optical image of light reflected from the imaging area and collected by a fish-eye lens (not shown) onto its light receiving surface.
Memory 46 includes ROM 46z, which stores programs defining the operation of omnidirectional camera CA and setting-value data; RAM 46y, which stores omnidirectional image data, cutout image data in which part of the range is cut out, and work data; and memory card 46x, which is detachably connected to omnidirectional camera CA and stores various data.
Communicator 42 is a network interface (I/F) for controlling data communication with network NW connected via network connector 47.
Power supply manager 44 supplies direct current power to each part of omnidirectional camera CA. Power supply manager 44 may also supply direct current power to a device connected to network NW via network connector 47.
Network connector 47 is a connector that transmits omnidirectional image data or two-dimensional panoramic image data to monitoring device 10 via network NW and can supply power via a network cable.
Like omnidirectional camera CA, PTZ camera CZ includes CPU 51, communicator 52, power supply manager 54, image sensor 55, memory 56, and network connector 57, as well as imaging direction controller 58 and lens driving motor 59. When receiving an instruction to change the angle of view from monitoring device 10, CPU 51 informs imaging direction controller 58 of the angle of view change instruction.
Imaging direction controller 58 controls the imaging direction of PTZ camera CZ in at least one of the pan direction and the tilt direction according to the angle of view change instruction from CPU 51, and outputs a control signal for changing the zoom magnification to lens driving motor 59 if necessary. Lens driving motor 59 drives the imaging lens according to the control signal, changing the imaging direction (the direction of optical axis L2) and adjusting the focal length of the imaging lens to change the zoom magnification.
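A hypothetical sketch of how such an angle-of-view change instruction could be dispatched per axis is shown below. The `ViewChange` fields and `drive_motor` callback are illustrative names, not the device's actual interface.

```python
from dataclasses import dataclass

@dataclass
class ViewChange:
    """Assumed shape of an angle-of-view change instruction."""
    pan_deg: float | None = None    # None = leave this axis unchanged
    tilt_deg: float | None = None
    zoom: float | None = None       # zoom magnification factor

def apply_view_change(cmd: ViewChange, drive_motor) -> None:
    """Forward each requested axis of the instruction to the lens motor."""
    if cmd.pan_deg is not None:
        drive_motor("pan", cmd.pan_deg)
    if cmd.tilt_deg is not None:
        drive_motor("tilt", cmd.tilt_deg)
    if cmd.zoom is not None:
        drive_motor("zoom", cmd.zoom)  # adjusts the lens focal length
```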
Signal processor 33 is configured using, for example, a central processing unit (CPU), a micro processing unit (MPU), or a digital signal processor (DSP), and performs control processing for overall control of the operation of each part of monitoring device 10, input and output processing of data with each of the other parts, data operation (calculation) processing, and data storage processing. Signal processor 33 includes directivity processor 63, frequency analyzer 64, target detector 65, detection result determiner 66, scan controller 67, detection direction controller 68, sound source direction detector 34, and output controller 35. Further, monitoring device 10 is connected to monitor 50.
Sound source direction detector 34 estimates a sound source position using sound data of monitoring area 8 picked up by microphone array MA, according to, for example, the known cross-power spectrum phase analysis (CSP) method. In the CSP method, sound source direction detector 34 divides monitoring area 8 shown in
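The CSP method itself is cited as known; as a hedged sketch of the underlying idea, the phase of the cross-spectrum of two microphone channels yields their time difference of arrival, which maps to a direction given the array geometry. The detector's actual block-wise search is not reproduced.

```python
import numpy as np

def csp_tdoa(x: np.ndarray, y: np.ndarray, fs: float) -> float:
    """Estimate the time difference of arrival (seconds) between two channels
    via phase-only (CSP / GCC-PHAT style) cross-correlation."""
    n = len(x) + len(y)
    X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
    cross = X * np.conj(Y)
    # Keep only the phase of the cross-spectrum, then go back to time domain.
    csp = np.fft.irfft(cross / (np.abs(cross) + 1e-12), n)
    # Re-center zero lag, then pick the lag with the strongest correlation.
    lag = np.argmax(np.concatenate((csp[-n // 2:], csp[:n // 2]))) - n // 2
    return lag / fs
```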
Setting manager 39 holds, in advance, a coordinate conversion equation relating to the coordinates of a position designated by a user on the screen of monitor 50, on which omnidirectional image data captured by omnidirectional camera CA is displayed. The coordinate conversion equation converts the coordinates of the user-designated position on the omnidirectional image data (that is, (horizontal angle, vertical angle)) into coordinates of the direction viewed from PTZ camera CZ, based on the physical positional difference between, for example, the installation position of omnidirectional camera CA (see
Using the above described coordinate conversion equation held by setting manager 39, signal processor 33 uses the installation position of PTZ camera CZ (see
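The conversion equation itself is not given in this text; the following hedged sketch solves the same problem under an extra assumption that the target lies at a known (or assumed) range: reconstruct a 3D point from the omnidirectional camera's angles, shift it by the physical offset between the two cameras, and re-derive angles as seen from PTZ camera CZ.

```python
import numpy as np

def angles_to_vec(h_deg: float, v_deg: float) -> np.ndarray:
    """Unit direction vector from (horizontal angle, vertical angle)."""
    h, v = np.radians(h_deg), np.radians(v_deg)
    return np.array([np.cos(v) * np.cos(h), np.cos(v) * np.sin(h), np.sin(v)])

def omni_to_ptz(h_deg: float, v_deg: float, range_m: float, offset_m):
    """offset_m: installation position of PTZ camera CZ minus that of CA."""
    point = angles_to_vec(h_deg, v_deg) * range_m   # 3D point as seen by CA
    d = point - np.asarray(offset_m, float)         # same point as seen by CZ
    h = np.degrees(np.arctan2(d[1], d[0]))
    v = np.degrees(np.arcsin(d[2] / np.linalg.norm(d)))
    return h, v
```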
However, when omnidirectional camera CA and microphone array MA are not disposed coaxially, setting manager 39 needs to convert the coordinates derived by omnidirectional camera CA into coordinates of the direction viewed from microphone array MA, according to the method disclosed in, for example, PTL 3.
Setting manager 39 holds first threshold value th1 and second threshold value th2, which are compared with sound pressure p calculated for each pixel by signal processor 33. Sound pressure p is used as an example of a sound parameter relating to a sound source; it represents the magnitude of the sound picked up by microphone array MA and is distinguished from the sound volume, which represents the magnitude of the sound output from speaker device 37. First threshold value th1 and second threshold value th2 are compared with the sound pressure of sound generated in monitoring area 8 and are set to predetermined values for identifying, for example, a sound emitted by unmanned aerial vehicle dn. A plurality of threshold values can be set; in the present exemplary embodiment, two values are set, namely first threshold value th1 and second threshold value th2, which is larger than first threshold value th1 (first threshold value th1 < second threshold value th2). Three or more threshold values may also be set.
As will be described later, region R1 (see
Communicator 31 receives omnidirectional image data or cutout image data transmitted by omnidirectional camera CA and sound data transmitted by microphone array MA, and outputs omnidirectional image data, cutout image data, and sound data to signal processor 33.
Operator 32 is a user interface (UI) for informing signal processor 33 of the content of a user's input operation, and is configured using, for example, a mouse, a keyboard, or another input device. Operator 32 may also be configured using, for example, a touch panel or a touch pad disposed to correspond to the screen of monitor 50 and capable of direct input operation with a user's finger or a stylus pen.
When a red region R1 of sound pressure heat map MP (see
Memory 38 is configured of a ROM and a RAM. Memory 38 holds, for example, various data including sound data of a certain section, setting information, and programs. Memory 38 has a pattern memory in which a unique sound pattern is registered for each unmanned aerial vehicle dn. Memory 38 also stores data of sound pressure heat map MP. Further, memory 38 stores identification mark mk (see
Directivity processor 63 performs the aforementioned directivity forming processing (beam forming) using the sound signals (also referred to as sound data) picked up by nondirectional microphones M1 to Mn, and extracts sound data in which any direction is set as the directional direction. Directivity processor 63 can also extract sound data in which a range of directions is set as a directional range. Here, the directional range is a range including a plurality of adjacent directional directions, and is intended to have some spread compared with a single directional direction.
Frequency analyzer 64 performs frequency analysis processing on the sound data extracted in the directional direction by directivity processor 63. In the frequency analysis processing, the frequencies and sound pressures contained in the sound data of the directional direction are detected.
Target detector 65 performs detection processing of unmanned aerial vehicle dn. In the detection processing, target detector 65 compares the patterns of detection sounds obtained as a result of the frequency analysis processing (see
Whether the two patterns are approximate is determined, for example, as follows. When the sound pressures of at least two of the four frequencies f1, f2, f3, and f4 included in the detected sound data each exceed a threshold value, the sound patterns are determined to be approximate, and target detector 65 detects unmanned aerial vehicle dn. Unmanned aerial vehicle dn may also be detected when other conditions are satisfied.
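A minimal sketch of this approximation test follows, assuming hypothetical registered frequencies f1 to f4 and folding in the allowable frequency error mentioned later in this text; the tolerance value is an assumption.

```python
FREQ_TOL_HZ = 10.0  # assumed allowable frequency error per registered frequency

def matches_registered_pattern(peaks, registered_freqs, th1, min_hits=2):
    """Return True when the detected spectrum approximates a registered pattern.

    peaks: list of (frequency_hz, sound_pressure) from frequency analysis.
    registered_freqs: e.g. the hypothetical [f1, f2, f3, f4] from pattern memory.
    th1: first threshold value th1 for the sound pressure.
    min_hits: at least this many registered frequencies must exceed th1.
    """
    hits = 0
    for f_reg in registered_freqs:
        for f, p in peaks:
            if abs(f - f_reg) <= FREQ_TOL_HZ and p > th1:
                hits += 1
                break  # count each registered frequency at most once
    return hits >= min_hits
```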
When it is determined that there is no unmanned aerial vehicle dn, detection result determiner 66 instructs detection direction controller 68 to transition to detection of unmanned aerial vehicle dn in the next directional direction. When it is determined, as a result of scanning in the directional directions, that unmanned aerial vehicle dn exists, detection result determiner 66 informs output controller 35 of the detection result. Note that the detection result includes information on the detected unmanned aerial vehicle dn, for example, identification information of unmanned aerial vehicle dn and position information (for example, direction information) of unmanned aerial vehicle dn in the sound pickup space.
Detection direction controller 68 controls a direction for detecting unmanned aerial vehicle dn in the sound pickup space based on an instruction from detection result determiner 66. For example, detection direction controller 68 sets any direction of directional range BF1 including the sound source position estimated by sound source direction detector 34 as a detection direction in an entire sound pickup space.
Scan controller 67 instructs directivity processor 63 to perform the beam forming of the detection direction set by detection direction controller 68 as the directional direction.
Directivity processor 63 performs the beam forming for the directional direction instructed by scan controller 67. In the initial setting, directivity processor 63 sets an initial position in directional range BF1 (see
Based on the omnidirectional image data captured by omnidirectional camera CA and the sound data picked up by microphone array MA, output controller 35 calculates the sound pressure for every pixel of the omnidirectional image data. The sound pressure calculation processing is a well-known technology, and its detailed description is omitted. Output controller 35 then generates sound pressure heat map MP by assigning each calculated sound pressure value to the position of the corresponding pixel. Further, output controller 35 performs color conversion processing on the sound pressure value of each pixel of the generated map, producing sound pressure heat map MP as shown in
Note that, while output controller 35 generates sound pressure heat map MP by assigning the sound pressure value calculated pixel by pixel to the position of the corresponding pixel, output controller 35 need not calculate the sound pressure for every pixel; it may instead calculate an average sound pressure value for each pixel block composed of a predetermined number of pixels (for example, four), and generate sound pressure heat map MP by assigning that average value to the corresponding pixels.
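The following sketch combines the block averaging above with the color conversion described later (red above th2, blue in (th1, th2], transparent at or below th1). The block size, colors, and alpha value are illustrative assumptions; the sound pressure calculation itself is described as well known and is not reproduced.

```python
import numpy as np

def heat_map_rgba(pressure: np.ndarray, th1: float, th2: float,
                  block: int = 4) -> np.ndarray:
    """Color-convert a per-pixel sound pressure map into an RGBA overlay."""
    h, w = pressure.shape
    # Average over block x block pixel groups instead of every single pixel.
    p = pressure[:h - h % block, :w - w % block]
    p = p.reshape(p.shape[0] // block, block, p.shape[1] // block, block)
    p = p.mean(axis=(1, 3))
    rgba = np.zeros(p.shape + (4,), dtype=np.uint8)  # transparent by default
    rgba[p > th2] = (255, 0, 0, 160)                 # red: above th2
    rgba[(p > th1) & (p <= th2)] = (0, 0, 255, 160)  # blue: (th1, th2]
    # Expand block values back to full pixel resolution for superimposing.
    return np.repeat(np.repeat(rgba, block, axis=0), block, axis=1)
```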
Further, output controller 35 controls the operation of monitor 50 and speaker device 37, outputs the omnidirectional image data or cutout image data transmitted from omnidirectional camera CA to monitor 50 for display, and outputs the sound data transmitted from microphone array MA to speaker device 37. When unmanned aerial vehicle dn is detected, output controller 35 superimposes identification mark mk representing unmanned aerial vehicle dn on the omnidirectional image and displays it on monitor 50.
Output controller 35 also uses the sound data picked up by microphone array MA and the coordinates indicating the direction of the sound source position derived from omnidirectional camera CA, and emphasizes the sound data in the directional direction by performing directivity forming processing on the sound data picked up by microphone array MA. The directivity forming processing of sound data is a known technology disclosed in, for example, PTL 3.
Speaker device 37 outputs, as sound, the sound data picked up by microphone array MA, or the sound data picked up by microphone array MA on which directivity has been formed by signal processor 33. Note that speaker device 37 may be configured as a device separate from monitoring device 10.
The operation of unmanned aerial vehicle detection system 5 having the above-described configuration will now be described.
As an initial operation, monitoring device 10 sends an image transmission request to PTZ camera CZ (T1). In response, PTZ camera CZ starts imaging processing upon application of power (T2). Similarly, monitoring device 10 sends an image transmission request to omnidirectional camera CA (T3). In response, omnidirectional camera CA starts imaging processing upon application of power (T4). Monitoring device 10 also sends a sound transmission request to microphone array MA (T5). In response, microphone array MA starts sound pickup processing upon application of power (T6).
Upon completion of the initial operation, PTZ camera CZ transmits captured image data (for example, still images or moving images) obtained by imaging to monitoring device 10 via network NW (T7). Monitoring device 10 converts the captured image data transmitted from PTZ camera CZ into display data such as NTSC (T8) and outputs the display data to monitor 50 (T9). Upon receiving the display data, monitor 50 displays PTZ image GZ2 (see
Similarly, omnidirectional camera CA transmits omnidirectional image data (for example, still images or moving images) obtained by imaging to monitoring device 10 via network NW (T10). Monitoring device 10 converts the omnidirectional image data transmitted from omnidirectional camera CA into display data such as NTSC (T11) and outputs the display data to monitor 50 (T12). Upon receiving the display data, monitor 50 displays omnidirectional image GZ1 (see
Microphone array MA encodes the sound data obtained by sound pickup and transmits the encoded sound data to monitoring device 10 via network NW (T13). In monitoring device 10, sound source direction detector 34 estimates the sound source position in monitoring area 8 (T14). The estimated sound source position is used as the reference position of directional range BF1 (see
Monitoring device 10 performs detection determination of unmanned aerial vehicle dn (T15). Details of a detection determination processing of unmanned aerial vehicle dn will be described later.
When unmanned aerial vehicle dn is detected as a result of the detection determination processing, output controller 35 in monitoring device 10 superimposes identification mark mk, representing unmanned aerial vehicle dn existing in the directional direction determined in procedure T15, on omnidirectional image GZ1 displayed on the screen of monitor 50, and displays it (T16).
Output controller 35 transmits information on the directional direction obtained in procedure T15 to PTZ camera CZ and requests that the imaging direction of PTZ camera CZ be changed to the directional direction (in other words, an angle of view change instruction) (T17). When PTZ camera CZ receives the information on the directional direction (that is, the angle of view change instruction), imaging direction controller 58 drives lens driving motor 59 based on that information, changes optical axis L2 of the imaging lens of PTZ camera CZ, and thereby changes the imaging direction to the directional direction (T18). At the same time, imaging direction controller 58 changes the zoom magnification of the imaging lens of PTZ camera CZ to a preset value, a value corresponding to the proportion of the captured image occupied by unmanned aerial vehicle dn, or the like.
On the other hand, if unmanned aerial vehicle dn is not detected as a result of the detection determination processing in procedure T15, the processing of T16, T17, and T18 is not performed.
Thereafter, the processing of unmanned aerial vehicle detection system 5 returns to procedure T7, and the same processing is repeated until a predetermined event such as turning off the power is detected.
Directivity processor 63 determines whether or not the sound data, which is picked up by microphone array MA and converted into digital values by A/D converters A1 to An, has been temporarily stored in memory 38 (S22). When it has not been stored, the processing of directivity processor 63 returns to step S21.
When the sound data picked up by microphone array MA is temporarily stored in memory 38 (YES in S22), directivity processor 63 performs beam forming for any directional direction BF2 within directional range BF1 of monitoring area 8 and extracts the sound data of directional direction BF2 (S23).
Frequency analyzer 64 detects the frequencies and sound pressures of the extracted sound data (S24).
Target detector 65 compares the pattern of the detection sound registered in the pattern memory of memory 38 with a pattern of a detection sound obtained as a result of the frequency analysis processing to perform a detection of an unmanned aerial vehicle (S25).
Detection result determiner 66 informs output controller 35 of the result of the comparison, and informs detection direction controller 68 about the detection direction transition (S26).
For example, target detector 65 compares the patterns of the detection sounds obtained as a result of the frequency analysis processing with the four frequencies f1, f2, f3, and f4 registered in the pattern memory of memory 38. As a result of the comparison, when at least two frequencies are identical in the two detection sound patterns and the sound pressures of those frequencies are larger than first threshold value th1, target detector 65 determines that the two patterns are approximate and that unmanned aerial vehicle dn exists.
Although it is assumed here that at least two frequencies coincide, target detector 65 may also determine that the detection sound is approximate when one frequency coincides and the sound pressure of that frequency is larger than first threshold value th1.
Target detector 65 may set an allowable frequency error for each frequency, and determine the presence or absence of the approximation on the assumption that frequencies within the error range are the same frequency.
Further, in addition to comparing the frequencies and sound pressures, target detector 65 may add to the determination condition that the sound pressure ratios of the sounds of the respective frequencies substantially coincide. In this case, since the determination condition becomes stricter, sound source detection unit UD can more reliably identify the detected sound source as a preregistered target (for example, unmanned aerial vehicle dn, which is a moving object), thereby improving the detection accuracy of unmanned aerial vehicle dn.
As a result of step S26, detection result determiner 66 determines whether or not unmanned aerial vehicle dn exists (S27).
When there is unmanned aerial vehicle dn, detection result determiner 66 informs output controller 35 that unmanned aerial vehicle dn exists (detection result of unmanned aerial vehicle dn) (S28).
On the other hand, if there is no unmanned aerial vehicle dn in step S27, scan controller 67 moves directional direction BF2 of the scan target in monitoring area 8 to the next, different direction (S29). Note that notification of the detection result of unmanned aerial vehicle dn may be performed collectively after completion of the omnidirectional scanning, rather than at the timing when the detection processing of one directional direction ends.
The order in which directional direction BF2 is sequentially moved in monitoring area 8, for example within directional range BF1 or within the entire range of monitoring area 8, may be a spiral (coiled) order from the outer circumference toward the inner circumference, or from the inner circumference toward the outer circumference, as in the sketch below.
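A minimal sketch of such a spiral visiting order follows, assuming the monitoring area is discretized into a grid of candidate directions; the grid granularity is an assumption.

```python
def spiral_order(rows: int, cols: int):
    """Yield (row, col) grid cells from the outer ring inward.

    Reversing the resulting sequence gives the inner-to-outer variant.
    """
    top, bottom, left, right = 0, rows - 1, 0, cols - 1
    while top <= bottom and left <= right:
        for c in range(left, right + 1):
            yield top, c
        for r in range(top + 1, bottom + 1):
            yield r, right
        if top < bottom:
            for c in range(right - 1, left - 1, -1):
                yield bottom, c
        if left < right:
            for r in range(bottom - 1, top, -1):
                yield r, left
        top, bottom, left, right = top + 1, bottom - 1, left + 1, right - 1

# list(spiral_order(3, 3)) visits the outer ring first, then the center.
```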
Detection direction controller 68 need not scan the directional direction continuously like a single stroke; it may set positions in monitoring area 8 in advance and move directional direction BF2 to each position in any order. As a result, monitoring device 10 can start the detection processing from a position where, for example, unmanned aerial vehicle dn is likely to intrude, making the detection processing more efficient.
Scan controller 67 determines whether or not scanning in all directions in monitoring area 8 is completed (S30). When the omnidirectional scanning is not completed (NO in S30), the processing of directivity processor 63 returns to step S23 and the same operation is performed. That is, directivity processor 63 performs beam forming in directional direction BF2 at the position moved to in step S29 and extracts the sound data of directional direction BF2. As a result, even after one unmanned aerial vehicle dn is detected, sound source detection unit UD continues to search for other unmanned aerial vehicles dn that may be present, so that a plurality of unmanned aerial vehicles dn can be detected.
On the other hand, when the omnidirectional scanning is completed in step S30 (YES in S30), directivity processor 63 removes the sound data picked up by microphone array MA and temporarily stored in memory 38 (S31).
After removing the sound data, signal processor 33 determines whether or not to terminate the detection processing of unmanned aerial vehicle dn (S32). The detection processing of unmanned aerial vehicle dn is terminated in accordance with a predetermined event. For example, the number of times unmanned aerial vehicle dn was not detected in step S6 may be stored in memory 38, and when that number is equal to or greater than a predetermined number, the detection processing may be terminated. Signal processor 33 may also terminate the detection processing based on a timer expiring or on a user operation on a user interface (UI) included in operator 32 (not shown). In addition, the detection processing may be terminated when the power supply of monitoring device 10 is turned off.
Note that, in the processing of step S24, frequency analyzer 64 analyzes the frequencies and also measures the sound pressure at each frequency. When the sound pressure level measured by frequency analyzer 64 gradually increases with the lapse of time, detection result determiner 66 may determine that unmanned aerial vehicle dn is approaching sound source detection unit UD.
For example, when the sound pressure level of a predetermined frequency measured at time t11 is smaller than the sound pressure level of the same frequency measured at a later time t12, the sound pressure is increasing with the lapse of time, and it may accordingly be determined that unmanned aerial vehicle dn is approaching. The sound pressure level may also be measured three or more times, and the approach of unmanned aerial vehicle dn may be determined based on the transition of a statistical value (for example, a variance, average, maximum, or minimum value).
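A hedged sketch of such an approach determination follows; comparing the averages of the earlier and later halves of a measurement window is one possible statistic, chosen here as an assumption.

```python
def is_approaching(levels: list[float], min_samples: int = 3) -> bool:
    """Judge approach from sound pressure levels ordered by time (t11, t12, ...).

    Returns True when the later measurements are, on average, louder than
    the earlier ones, i.e. the pressure is rising with the lapse of time.
    """
    if len(levels) < min_samples:
        return False
    half = len(levels) // 2
    early = sum(levels[:half]) / half
    late = sum(levels[half:]) / (len(levels) - half)
    return late > early
```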
When the measured sound pressure level is larger than third threshold value th3, which is the warning level, detection result determiner 66 may determine that unmanned aerial vehicle dn has intruded into a warning area.
Note that third threshold value th3 is, for example, a value larger than second threshold value th2. The warning area is, for example, the same area as monitoring area 8, or an area included in monitoring area 8 and narrower than it; it is, for example, an area into which intrusion of unmanned aerial vehicle dn is restricted. Both the approach determination and the intrusion determination of unmanned aerial vehicle dn may be executed by detection result determiner 66.
Here, identification mark mk is superimposed on omnidirectional image GZ1 captured by omnidirectional camera CA, while unmanned aerial vehicle dn is displayed directly in PTZ image GZ2 captured by PTZ camera CZ. This is because it would be difficult to distinguish unmanned aerial vehicle dn even if its image appeared in omnidirectional image GZ1 as it is. On the other hand, since PTZ image GZ2 captured by PTZ camera CZ is a zoomed-in image, when the image of unmanned aerial vehicle dn appears on the display screen, unmanned aerial vehicle dn is displayed clearly. It is therefore also possible to identify the model of unmanned aerial vehicle dn from its clearly displayed outline. In this way, sound source detection unit UD can display unmanned aerial vehicle dn appropriately in consideration of the visibility of the image displayed on the display screen of monitor 50.
Note that unmanned aerial vehicle dn itself may be displayed as it is on omnidirectional image GZ1 instead of identification mark mk, so that omnidirectional image GZ1 and PTZ image GZ2 show the same or different displays, or identification mark mk may be superimposed and displayed on PTZ image GZ2.
In omnidirectional image GZ1, in addition to identification mark mk representing unmanned aerial vehicle dn, another identification mark mc representing another sound source is superimposed. The other identification mark mc is preferably rendered in a display form different from that of identification mark mk, and in
Furthermore, in omnidirectional image GZ1, a sound pressure map representing the sound pressure of each pixel is generated by output controller 35, and sound pressure heat map MP obtained by color conversion processing is superimposed on the areas in which the calculated sound pressure value exceeds the threshold values. In sound pressure heat map MP, region R1, in which the sound pressure exceeds second threshold value th2, is rendered in red (the large dot group in the figure), and region B1, in which the sound pressure is larger than first threshold value th1 and less than or equal to second threshold value th2, is rendered in blue (the small dot group in the figure). Region N1, in which the sound pressure is less than or equal to first threshold value th1, is rendered transparent (nothing is displayed in the figure).
Further, since the other identification mark mc representing the position of the other sound source is rendered on the same omnidirectional image GZ1 as identification mark mk representing unmanned aerial vehicle dn, and sound pressure heat map MP is also rendered, the circumstances surrounding unmanned aerial vehicle dn become easy to understand. For example, when a sound source that has not yet been registered is flying as unmanned aerial vehicle dn, a user points at the position of the sound source represented by the other identification mark mc, or at red region R1 of sound pressure heat map MP, on the display screen of monitor 50. Output controller 35 of monitoring device 10 causes PTZ camera CZ to zoom in on the position of the sound source or on red region R1, obtains PTZ image GZ2 after zooming, and displays PTZ image GZ2 on monitor 50, so that unidentified sound sources can be ascertained quickly and accurately. As a result, a user can detect even an unregistered unmanned aerial vehicle dn.
Note that a display form in which only the other identification mark mc is rendered, or one in which only sound pressure heat map MP is rendered, may be set on the same omnidirectional image GZ1 as identification mark mk. A user can select any of these display forms.
As described above, in unmanned aerial vehicle detection system 5 of the present exemplary embodiment, omnidirectional camera CA images monitoring area 8 (the imaging area). Microphone array MA picks up sound in monitoring area 8. Monitoring device 10 detects unmanned aerial vehicle dn appearing in monitoring area 8 using the sound data picked up by microphone array MA. Signal processor 33 in monitoring device 10 superimposes identification mark mk (first identification information), obtained by converting unmanned aerial vehicle dn into visual information in the image captured by omnidirectional camera CA (that is, omnidirectional image GZ1), on omnidirectional image GZ1 of monitoring area 8 and displays it on monitor 50. As a result, unmanned aerial vehicle detection system 5 can rapidly and accurately determine the existence and position of a desired unmanned aerial vehicle dn using omnidirectional image GZ1 captured by omnidirectional camera CA.
In unmanned aerial vehicle detection system 5, PTZ camera CZ, whose optical axis direction is adjustable, images monitoring area 8. Signal processor 33 outputs to PTZ camera CZ an instruction for adjusting the optical axis direction to a direction corresponding to the detection result of unmanned aerial vehicle dn. Based on the instruction, monitor 50 displays the image (that is, PTZ image GZ2) captured by PTZ camera CZ with the adjusted optical axis direction. Thereby, in unmanned aerial vehicle detection system 5, an observer (the user) can clearly view the undistorted image of unmanned aerial vehicle dn captured by PTZ camera CZ and identify its exact model.
Monitor 50 displays omnidirectional image GZ1 of omnidirectional camera CA, including identification mark mk of unmanned aerial vehicle dn, and the image captured by PTZ camera CZ (that is, PTZ image GZ2) side by side for comparison. As a result, the observer can, for example, compare omnidirectional image GZ1 and PTZ image GZ2 alternately, and thereby accurately grasp both the model of unmanned aerial vehicle dn and the surrounding circumstances in which it exists.
Signal processor 33 detects at least one other sound source in monitoring area 8 and displays on monitor 50 the other identification mark mc (second identification information), which is obtained by converting the other sound source into visual information in the captured image of the omnidirectional camera and which differs from identification mark mk. As a result, the observer can notice an unidentified sound source that is not the desired unmanned aerial vehicle dn, and can accurately check whether or not the unidentified sound source is an unregistered unmanned aerial vehicle.
Further, signal processor 33 calculates the sound pressure value of each pixel in the captured image of monitoring area 8, superimposes those values on the omnidirectional image data of monitoring area 8 as sound pressure heat map MP, rendered in a plurality of different color gradations according to the sound pressure value of each pixel, and displays the result on monitor 50. Thus, a user can compare the sound pressure of the sound emitted by unmanned aerial vehicle dn with the surrounding sound pressure, making the sound pressure of the unmanned aerial vehicle relative and visually recognizable.
Fixed camera CF is a camera having a predetermined angle of view, with its optical axis fixed in a specific direction, and is installed in advance so as to be able to image, for example, a space in which unmanned aerial vehicle dn is expected to fly. Here, angle of view ag1 of fixed camera CF is set above building group bLg.
When unmanned aerial vehicle dn is detected, the camera that changes its imaging direction to the directional direction of the sound picked up by microphone array MA (that is, the direction from microphone array MA to unmanned aerial vehicle dn) and images unmanned aerial vehicle dn as a subject is PTZ camera CZ, as in the first exemplary embodiment.
In unmanned aerial vehicle detection system 5A of the first modification example, omnidirectional camera CA is changed to fixed camera CF in the sequence diagram shown in
As described above, when unmanned aerial vehicle dn is detected and enters angle of view ag1 monitored by fixed camera CF, unmanned aerial vehicle detection system 5A displays, on monitor 50, identification mark mk representing unmanned aerial vehicle dn on the image captured by fixed camera CF, and further displays a zoomed-in image captured by PTZ camera CZ on monitor 50. When unmanned aerial vehicle dn does not exist within angle of view ag1 monitored by fixed camera CF, imaging by PTZ camera CZ in the imaging direction requested by monitoring device 10 is not performed.
Here, in the image of angle of view ag1 displayed on monitor 50, as in the first exemplary embodiment, region R1 of pixels whose sound pressure is larger than second threshold value th2 is rendered in red, for example. Region B1 of pixels whose sound pressure is larger than first threshold value th1 and less than or equal to second threshold value th2 is rendered in blue, for example. Further, region R0 of pixels whose sound pressure is larger than third threshold value th3 (> th2), on which identification mark mk is superimposed, is rendered in purple, for example.
Therefore, in unmanned aerial vehicle detection system 5A of the first modification example, the detection processing of unmanned aerial vehicle dn is performed only in the limited area corresponding to the image captured by fixed camera CF, for example, an area in which unmanned aerial vehicle dn is expected to fly, so that the load of the detection processing can be reduced and its speed increased.
Monitor 50 displays the captured image of fixed camera CF, including identification mark mk of unmanned aerial vehicle dn, and the captured image of PTZ camera CZ side by side for comparison. As a result, the observer can, for example, compare the image of fixed camera CF and the image of PTZ camera CZ alternately, and thereby accurately grasp both the model of unmanned aerial vehicle dn and the surrounding circumstances in which it exists.
PTZ camera CZ1 is a camera capable of imaging by changing a direction of an optical axis, and capable of imaging by changing an angle of view in a stepwise manner in a predetermined direction (preset direction) with respect to monitoring area 8 (see
When unmanned aerial vehicle dn is detected at a certain angle of view ag2, the camera that changes its imaging direction to the directional direction of the sound picked up by microphone array MA (that is, the direction from microphone array MA to unmanned aerial vehicle dn) and images unmanned aerial vehicle dn as a subject is PTZ camera CZ, as in the first exemplary embodiment.
In unmanned aerial vehicle detection system 5B of the second modification example, similarly to the first modification example, omnidirectional camera CA is changed to PTZ camera CZ1 in the sequence diagram shown in
As described above, when unmanned aerial vehicle dn is detected at a certain angle of view ag2 by PTZ camera CZ1, unmanned aerial vehicle detection system 5B displays, on monitor 50, identification mark mk representing unmanned aerial vehicle dn on the image of angle of view ag2 captured by PTZ camera CZ1, and further displays a zoomed-in image captured by PTZ camera CZ on monitor 50. When unmanned aerial vehicle dn is not detected, or does not exist within angles of view ag2-1 to ag2-4 monitored by PTZ camera CZ1, imaging by PTZ camera CZ in the imaging direction requested by monitoring device 10 is not performed.
In unmanned aerial vehicle detection system 5B of the second modification example, PTZ camera CZ1 performs the imaging even while unmanned aerial vehicle dn is being searched for, which improves the visibility of unmanned aerial vehicle dn appearing in the captured image. In other words, when imaging is performed by omnidirectional camera CA, it can be difficult to accurately identify unmanned aerial vehicle dn because the periphery of the image is distorted, but in the image captured by PTZ camera CZ1, the outline of unmanned aerial vehicle dn can be grasped correctly. As a result, the detection accuracy of unmanned aerial vehicle dn is improved.
Monitor 50 displays the captured image of PTZ camera CZ1, including identification mark mk of unmanned aerial vehicle dn, and the captured image of PTZ camera CZ side by side for comparison. As a result, the observer can, for example, alternately compare the image of PTZ camera CZ1 (that is, a wide image of the monitoring area) and the image of PTZ camera CZ (a zoomed-in image focused on the detected unmanned aerial vehicle dn), and thereby accurately grasp both the model of unmanned aerial vehicle dn and the surrounding circumstances in which it exists.
In procedure T16, monitoring device 10 superimposes and displays identification mark mk representing unmanned aerial vehicle dn on omnidirectional image GZ1 displayed on the screen of monitor 50, and then selects, for example, fixed camera CF2-1. Here, it is assumed that identification mark mk is included in the angle of view captured by, for example, fixed camera CF2-1 among the plurality of fixed cameras CF2-1, CF2-2, . . . , CF2-N.
Monitoring device 10 selects fixed camera CF2-1 (T19) and sends an image distribution request to the selected fixed camera CF2-1 (T20). In response to the image distribution request, fixed camera CF2-1 transmits image data captured in its fixed optical axis direction to monitoring device 10 (T21). Similar to
Upon receiving the image data captured by fixed camera CF2-1, monitoring device 10 displays the image data on monitor 50 (T22). On the screen of monitor 50, an image showing unmanned aerial vehicle dn is displayed (see the lower right of the page of
In this way, unmanned aerial vehicle detection system 5C of the third modification example further includes two or more fixed cameras CF2-1, . . . , CF2-N, which have different optical axis directions and each image the imaging area. Signal processor 33 selects, from the two or more fixed cameras, a fixed camera whose optical axis direction matches the detection direction of unmanned aerial vehicle dn, and requests distribution of a captured image from the selected fixed camera. Monitor 50 (the display) displays the captured image distributed from the selected fixed camera based on the request.
In unmanned aerial vehicle detection system 5C of the third modification example, simply by selecting a fixed camera whose imaging direction (optical axis direction) is fixed in advance, an image captured by that fixed camera and reliably capturing unmanned aerial vehicle dn is displayed on monitor 50. Compared with imaging by PTZ camera CZ, the driving time for moving (rotating) PTZ camera CZ toward the direction of unmanned aerial vehicle dn is eliminated, so an image that reliably captures unmanned aerial vehicle dn can be displayed on monitor 50 promptly. As a result, even if unmanned aerial vehicle dn moves at high speed, it can be monitored without being lost from sight by switching among the plurality of fixed cameras.
In the third modification example, the camera disposed on the same axis as microphone array MA is omnidirectional camera CA; in the fourth modification example, by contrast, fixed camera CF is disposed so that its own optical axis coincides with the central axis of microphone array MA. That is, unmanned aerial vehicle detection system 5D of the fourth modification example has a configuration corresponding to a combination of the first modification example and the third modification example.
In unmanned aerial vehicle detection system 5D of the fourth modification example, omnidirectional camera CA is changed to fixed camera CF in the sequence diagram shown in
Therefore, in unmanned aerial vehicle detection system 5D of the fourth modification example, the detection processing of unmanned aerial vehicle dn is performed only in a limited area, for example, an area in which unmanned aerial vehicle dn is expected to fly, so that the load of the detection processing can be reduced and its speed increased. Furthermore, within the limited area, the number of fixed cameras imaging at their respective angles of view in different imaging directions can be reduced. Therefore, an unmanned aerial vehicle detection system capable of high-speed processing can be constructed at low cost.
In unmanned aerial vehicle detection system 5E of the fifth modification example, omnidirectional camera CA is changed to PTZ camera CZ1 in the sequence diagram shown in
In unmanned aerial vehicle detection system 5E of the fifth modification example, the visibility of unmanned aerial vehicle dn appearing in the image is improved by imaging with PTZ camera CZ1. In other words, when imaging is performed by omnidirectional camera CA, it can be difficult to accurately identify unmanned aerial vehicle dn because the periphery of the image is distorted, but in the image captured by PTZ camera CZ1, the outline of unmanned aerial vehicle dn can be grasped correctly. As a result, the detection accuracy of unmanned aerial vehicle dn is improved. Furthermore, within the limited area, the number of fixed cameras imaging at their respective angles of view in different imaging directions can be reduced. Thus, an unmanned aerial vehicle detection system capable of accurately detecting an unmanned aerial vehicle can be constructed at low cost.
In a second exemplary embodiment, an example will be described in which a plurality of sound source detection units UD shown in
Unmanned aerial vehicle detection system 5F of the second exemplary embodiment includes two sound source detection units UD1 and UD2 having the same configuration as sound source detection unit UD shown in
Detection device DT1 includes sound source detection unit UD1, monitoring device 10A, and monitor 50A, and performs the same operation as unmanned aerial vehicle detection systems 5 to 5E of the first exemplary embodiment. Since the internal configuration of monitoring device 10A is the same as that of monitoring device 10 of the first exemplary embodiment, its description is omitted. In this case, monitoring device 10A and monitor 50A are integrated in a general-purpose computer device, but they may be devices having separate casings. Similarly, detection device DT2 includes sound source detection unit UD2, monitoring device 10B, and monitor 50B, and performs the same operation as unmanned aerial vehicle detection systems 5 to 5E of the first exemplary embodiment. Since the internal configuration of monitoring device 10B is the same as that of monitoring device 10 of the first exemplary embodiment, its description is likewise omitted.
Distance calculation device 90 is a general-purpose computer device that, based on detection information including the detection direction of unmanned aerial vehicle dn from detection device DT1 and detection information including the detection direction of unmanned aerial vehicle dn from detection device DT2, calculates the distance from either or both of detection devices DT1 and DT2 to unmanned aerial vehicle dn.
Distance calculation processor 91 is configured using, for example, a processor such as a central processing unit (CPU), micro processing unit (MPU), or digital signal processor (DSP). Using the two detection directions of unmanned aerial vehicle dn included in the detection information received from detection devices DT1 and DT2 and the fixed (known) distance between sound source detection units UD1 and UD2, distance calculation processor 91 calculates the distance from either or both of sound source detection units UD1 and UD2 to unmanned aerial vehicle dn by triangulation.
Memory 92 stores a program and the like for calculating, by triangulation, the distance from either or both of sound source detection units UD1 and UD2 to unmanned aerial vehicle dn. Operator 93 is an input device such as a mouse or keyboard. Note that operator 93 and display 96 may be integrally formed as a touch panel.
Communicator 95 receives the detection information from the two monitoring devices 10A and 10B, and transmits the distance information calculated by distance calculation processor 91 based on that detection information to monitoring devices 10A and 10B.
Display 96 displays UI screen GM (see
Setting manager 94 holds the information necessary for calculating, by triangulation, the distance from either or both of sound source detection units UD1 and UD2 to unmanned aerial vehicle dn (for example, the known fixed distance between sound source detection units UD1 and UD2). Here, the distance between sound source detection units UD1 and UD2 is stored, for example, by a user measuring it in advance and inputting the result via operator 93. The distance between sound source detection units UD1 and UD2 may also be measured automatically by detection devices DT1 and DT2. For example, when a sound source such as a person emits a sound at the place where sound source detection unit UD1 is installed, the time difference between the time when the sound is picked up by sound source detection unit UD1 and the time when it is picked up by sound source detection unit UD2 corresponds to the distance between the two units. Accordingly, monitoring devices 10A and 10B include the respective pickup times in the detection information and transmit it to distance calculation device 90, so that distance calculation device 90 can calculate and store the distance between sound source detection units UD1 and UD2, as in the sketch below.
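A minimal sketch of this automatic baseline measurement follows, assuming a nominal speed of sound; the delay between the two pickup times is proportional to the separation of the units when the sound is emitted at UD1's location.

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at ~20 °C

def baseline_from_pickup_times(t_ud1_s: float, t_ud2_s: float) -> float:
    """Distance (m) between UD1 and UD2 from the two pickup timestamps.

    Assumes the sound was emitted at UD1's installation place, so UD2
    hears it after the sound traverses the full baseline.
    """
    return abs(t_ud2_s - t_ud1_s) * SPEED_OF_SOUND_M_S
```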
In unmanned aerial vehicle detection system 5F according to the second exemplary embodiment, distance calculation device 90, which measures the distance from one or both of sound source detection units UD1 and UD2 to unmanned aerial vehicle dn, is realized as a device separate from detection devices DT1 and DT2; however, it may be realized by one of monitoring devices 10A and 10B.
Each signal processor 33 of monitoring devices 10A and 10B determines whether or not unmanned aerial vehicle dn is detected as a result of the detection processing (S53). When unmanned aerial vehicle dn is not detected (NO in S53), the processing of each signal processor 33 returns to step S51.
On the other hand, when unmanned aerial vehicle dn is detected (YES in S53), each signal processor 33 calculates, as the detection direction to unmanned aerial vehicle dn, the pointing direction in which the sound directivity of microphone array MA is formed toward the detected unmanned aerial vehicle dn (S54). Each communicator 31 of monitoring devices 10A and 10B transmits detection information including the detection direction calculated in step S54 to distance calculation device 90 (S55). Thereafter, the processing of detection devices DT1 and DT2 returns to step S51.
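As a minimal sketch of step S54, assuming the signal processor has already obtained a direction vector toward the strongest sound source by scanning the directivity of microphone array MA, that vector can be converted into the (vertical angle, horizontal angle) pair carried in the detection information. The angle conventions below (elevation above the horizontal plane, azimuth of 0 to 360 degrees) are assumptions, not the disclosure's definitions.

    import math

    # Hypothetical conversion of a beamforming direction vector (x, y, z) into
    # the (vertical angle, horizontal angle) pair of the detection direction.
    def direction_to_angles(x: float, y: float, z: float) -> tuple[float, float]:
        norm = math.sqrt(x * x + y * y + z * z)
        x, y, z = x / norm, y / norm, z / norm               # normalize defensively
        horizontal = math.degrees(math.atan2(y, x)) % 360.0  # assumed azimuth
        vertical = math.degrees(math.asin(z))                # assumed elevation
        return vertical, horizontal

    print(direction_to_angles(0.0, -0.28, 0.95))  # roughly (73.6, 270.0)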
On the other hand, when detection information is received (YES in S61), communicator 95 determines whether or not detection information from detection device DT2 is received (S62). When that detection information is not received (NO in S62), the processing of communicator 95 returns to step S61.
Based on the respective pieces of detection information received in steps S61 and S62, distance calculation processor 91 calculates a distance from one or both of sound source detection units UD1 and UD2 to the detection target (unmanned aerial vehicle dn) by triangulation (S63). Details of the calculation of the distance from one or both of sound source detection units UD1 and UD2 to unmanned aerial vehicle dn will be described later. Distance calculation processor 91 displays UI screen GM on display 96.
A detection direction in the detection information of unmanned aerial vehicle dn detected by sound source detection unit UD1 is indicated by vertical angle v1 and horizontal angle h1. Similarly, a detection direction in the detection information of unmanned aerial vehicle dn detected by sound source detection unit UD2 is indicated by vertical angle v2 and horizontal angle h2. In the xyz coordinate system, sound source detection units UD1 and UD2 are separated by the certain (known) distance m.
When radius r is length dist of the perpendicular from unmanned aerial vehicle dn to the line segment connecting sound source detection units UD1 and UD2, and the foot of that perpendicular is represented by coordinates (x, m/2, 0), position Dp of unmanned aerial vehicle dn lies at horizontal angle h2 on circle CRC in the horizontal plane centered on the foot of the perpendicular.
From a geometrical consideration using trigonometric functions, the coordinates (x, y, z) of position Dp of unmanned aerial vehicle dn are represented by Equations (1), (2), and (3), respectively.
When position Dp of unmanned aerial vehicle dn is determined, that is, when the coordinates of the three vertices of the triangle are determined, the distances distR and distL from sound source detection units UD1 and UD2 to unmanned aerial vehicle dn, and length dist of the perpendicular from unmanned aerial vehicle dn to the line segment connecting sound source detection units UD1 and UD2, can be calculated directly. This calculation is performed by distance calculation processor 91.
Therefore, to obtain the distance from one or both of sound source detection units UD1 and UD2 to unmanned aerial vehicle dn by triangulation, it is sufficient to know the detection directions (vertical angle v1, horizontal angle h1) and (vertical angle v2, horizontal angle h2) at which sound source detection units UD1 and UD2 respectively detect unmanned aerial vehicle dn, and the certain distance m between sound source detection units UD1 and UD2.
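Because Equations (1) to (3) are not reproduced above, the following Python sketch is only a generic reconstruction of such a triangulation, not the disclosure's exact formulas: it assumes UD1 at the origin and UD2 at (0, m, 0), converts each angle pair into a detection ray, and takes the least-squares midpoint of the two rays as position Dp.

    import numpy as np

    def angles_to_unit_vector(vertical_deg: float, horizontal_deg: float) -> np.ndarray:
        v, h = np.radians(vertical_deg), np.radians(horizontal_deg)
        return np.array([np.cos(v) * np.cos(h), np.cos(v) * np.sin(h), np.sin(v)])

    def triangulate(v1, h1, v2, h2, m):
        """Return (Dp, distL, distR, dist) under the assumed coordinate layout."""
        p1, p2 = np.zeros(3), np.array([0.0, m, 0.0])  # assumed unit positions
        d1, d2 = angles_to_unit_vector(v1, h1), angles_to_unit_vector(v2, h2)
        b = p2 - p1
        c = float(d1 @ d2)
        denom = 1.0 - c * c
        if denom < 1e-9:
            raise ValueError("detection rays are (nearly) parallel")
        # Closest points on the two detection rays (least-squares solution).
        t1 = (b @ d1 - (b @ d2) * c) / denom
        t2 = t1 * c - b @ d2
        dp = 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))  # estimated position Dp
        dist_l = float(np.linalg.norm(dp - p1))        # UD1 to dn
        dist_r = float(np.linalg.norm(dp - p2))        # UD2 to dn
        u = b / m                                      # unit vector along the baseline
        foot = p1 + float(np.clip((dp - p1) @ u, 0.0, m)) * u
        return dp, dist_l, dist_r, float(np.linalg.norm(dp - foot))

    # Synthetic check: a drone at (3, 1, 4) seen from two units two meters apart.
    dp, dist_l, dist_r, dist = triangulate(51.7, 18.4, 51.7, -18.4, m=2.0)
    print(dp)  # approximately [3. 1. 4.]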
Specifically, the following information is displayed on UI screen GM. A vertical distance from unmanned aerial vehicle dn to the line segment connecting the two sound source detection units UD1 and UD2 (that is, length dist of the perpendicular) is displayed; here, the vertical distance is 006.81 meters (m). On UI screen GM, distance L1 from sound source detection unit UD1 to unmanned aerial vehicle dn or elevation angle α1, and distance L2 from sound source detection unit UD2 to unmanned aerial vehicle dn or elevation angle α2, are also displayed.
Further, button bn1 for setting the distance between the two sound source detection units UD1 and UD2 is displayed. Here, the distance between the two sound source detection units UD1 and UD2 is set to 2 meters (m) by pressing button bn1.
On UI screen GM, screen wd1 including detection information of sound source detection unit UD1 and screen wd2 including detection information of sound source detection unit UD2 are displayed.
On screen wd1, an IP address of sound source detection unit UD1, connection/disconnection switch SW1, and data table Td1 are displayed. In data table Td1, an alerting ID, a vertical angle, and a horizontal angle are registered. Here, “vertical angle: 72.1” and “horizontal angle: 246.8” indicate detection direction det1 of sound source detection unit UD1.
Similarly, on screen wd2, an IP address of sound source detection unit UD2, connection/disconnection switch SW2, and data table Td2 are displayed. In data table Td2, an alerting ID, a vertical angle, and a horizontal angle are registered. Here, “vertical angle: 71.05” and “horizontal angle: 240.26” indicate detection direction det2 of sound source detection unit UD2.
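For illustration, one row of this detection information can be represented as a small record; the field names and documentation-range IP addresses below are invented for the sketch and are not part of the disclosure.

    from dataclasses import dataclass

    @dataclass
    class DetectionInfo:
        unit_ip: str           # IP address of the sound source detection unit
        alerting_id: int       # alerting ID registered in the data table
        vertical_deg: float    # vertical angle of the detection direction
        horizontal_deg: float  # horizontal angle of the detection direction

    det1 = DetectionInfo("192.0.2.1", 1, 72.1, 246.8)    # values shown on wd1
    det2 = DetectionInfo("192.0.2.2", 1, 71.05, 240.26)  # values shown on wd2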
On UI screen GM, pull-down menu bn2 for setting the distance between the two sound source detection units UD1 and UD2, load button bn3 for environment settings, set button bn4, and default save button bn5 are also placed.
As described above, in unmanned aerial vehicle detection system 5F, sound source detection unit UD1 (first detection unit), in which the omnidirectional camera and the microphone array are disposed coaxially, and sound source detection unit UD2 (second detection unit), in which the omnidirectional camera and the microphone array are disposed coaxially, are disposed at a certain distance from each other. Distance calculation device 90 derives a distance to unmanned aerial vehicle dn and displays the distance on display 96 (second displayer) based on the certain distance, first direction information including detection direction det1 (vertical angle v1, horizontal angle h1) of unmanned aerial vehicle dn derived by sound source detection unit UD1, and second direction information including detection direction det2 (vertical angle v2, horizontal angle h2) of unmanned aerial vehicle dn derived by sound source detection unit UD2.
In unmanned aerial vehicle detection system 5F according to the second exemplary embodiment, it is possible not only to display flying unmanned aerial vehicle dn on display 96 but also to obtain the actual distance to unmanned aerial vehicle dn. This makes it possible to grasp the area in which unmanned aerial vehicle dn is located and, in a case where unmanned aerial vehicle dn moves toward unmanned aerial vehicle detection system 5F, roughly how much time it would take to arrive. Therefore, it can help users prepare for unmanned aerial vehicle dn.
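As a minimal illustration of such an arrival-time estimate, assuming a straight-line approach at a user-supplied speed (none of these values come from the disclosure):

    # Rough time-to-arrival estimate from the measured distance; illustrative only.
    def seconds_until_arrival(distance_m: float, approach_speed_m_per_s: float) -> float:
        if approach_speed_m_per_s <= 0:
            raise ValueError("approach speed must be positive")
        return distance_m / approach_speed_m_per_s

    print(seconds_until_arrival(500.0, 10.0))  # 50.0 s for a drone 500 m away at 10 m/s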
In a third exemplary embodiment, an example will be described in which a plurality of the sound source detection units UD described above are disposed so as to surround a monitoring area.
Four sound source detection units UD3, UD4, UD5, and UD6 are installed at the four corners of water purification plant Upw.
On the other hand, when detection information is received (YES in S71), communicator 95 similarly determines whether or not detection information from detection device DT4 is received (S72). When that detection information is not received (NO in S72), the processing of communicator 95 returns to step S71.
On the other hand, when detection information is received (YES in S72), communicator 95 similarly determines whether or not detection information from detection device DT5 is received (S73). When that detection information is not received (NO in S73), the processing of communicator 95 returns to step S71.
On the other hand, when detection information is received (YES in S73), communicator 95 similarly determines whether or not detection information from detection device DT6 is received (S74). When that detection information is not received (NO in S74), the processing of communicator 95 returns to step S71.
On the other hand, when the detection information is received (YES in S74), distance calculation processor 91 of distance calculation device 90 calculates the distance from each of sound source detection units UD3, UD4, UD5, and UD6 to unmanned aerial vehicle dn (S75). Based on these distances, distance calculation processor 91 estimates block Blk in water purification plant Upw above which unmanned aerial vehicle dn exists (S76).
Distance calculation processor 91 displays estimated block Blk (in other words, the block Blk in water purification plant Upw above which unmanned aerial vehicle dn exists) on display 96 (S77).
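A minimal sketch of this block estimation might look as follows, assuming the plant is modeled as axis-aligned rectangular blocks in a local coordinate system and that the drone position has already been estimated from the four distances; the block names and extents are invented for illustration.

    from typing import Optional

    # Hypothetical block layout of water purification plant Upw:
    # name -> (x_min, y_min, x_max, y_max) in a local coordinate system (meters).
    BLOCKS = {
        "filtration": (0.0, 0.0, 50.0, 30.0),
        "sterilization": (50.0, 0.0, 100.0, 30.0),
        "reservoir": (0.0, 30.0, 100.0, 60.0),
    }

    def estimate_block(x: float, y: float) -> Optional[str]:
        """Return the block above which the estimated drone position (x, y) lies."""
        for name, (x0, y0, x1, y1) in BLOCKS.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                return name
        return None  # the drone is outside every modeled block

    print(estimate_block(62.0, 12.0))  # "sterilization" in this invented layout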
In this way, in the unmanned aerial vehicle detection system of the third exemplary embodiment, four sound source detection units UD3, UD4, UD5, and UD6 are disposed so as to surround water purification plant Upw (monitoring area). Distance calculation device 90 estimates block Blk (a section within the monitoring area) of water purification plant Upw in which unmanned aerial vehicle dn exists, based on the distances to unmanned aerial vehicle dn respectively derived by the plurality of sound source detection units.
As described above, in the unmanned aerial vehicle detection system according to the third exemplary embodiment, it is possible to easily specify the block (section within the monitoring area) over which unmanned aerial vehicle dn flew in a relatively wide facility such as a water purification plant. Also, since blocks in the facility correspond to treatment processes in the facility, appropriate actions can be taken for each block over which unmanned aerial vehicle dn is estimated to be. For example, in a case where there is nothing in the estimated block, it is possible to take an action to take down unmanned aerial vehicle dn. On the other hand, in a case where a large amount of water undergoing treatment such as filtration or sterilizing disinfection is stored in the estimated block, it is also possible to take an action such as capturing unmanned aerial vehicle dn so as to wrap it.
In this case, distance calculation processor 91 of distance calculation device 90 calculates the distances from sound source detection units UD3, UD4, UD5, and UD6 to unmanned aerial vehicle dn only when the detection information is received from all four units. As described in detail in the second exemplary embodiment, when there are at least two sound source detection units, distance calculation device 90 can acquire and specify the position of unmanned aerial vehicle dn, so it is also possible to estimate the block in the facility where unmanned aerial vehicle dn exists with only two units. However, when three or more sound source detection units are disposed along the shape of the monitoring area, the estimation accuracy of the section (block) in which unmanned aerial vehicle dn exists within the monitoring area can be enhanced.
In the above description, the water purification plant serving as the monitoring area is assumed to be rectangular, and four sound source detection units are disposed at its four corners. However, when the monitoring area is circular, such as a sports facility, a plurality of sound source detection units may be disposed at arbitrary places surrounding the circle so as to estimate the section where unmanned aerial vehicle dn exists.
GPS measurement devices Gp3, Gp4, Gp5, and Gp6 may be provided for sound source detection units UD3, UD4, UD5, and UD6, respectively.
When the corresponding sound source detection units UD3, UD4, UD5, and UD6 are installed, GPS measurement devices Gp3, Gp4, Gp5, and Gp6 measure their respective position information and transmit it to the respective monitoring devices constituting detection devices DT3, DT4, DT5, and DT6. Each monitoring device transmits the position information measured by GPS measurement devices Gp3, Gp4, Gp5, and Gp6 to distance calculation device 90.
By this, it is possible to save the labor of manually measuring the distances between sound source detection units UD3, UD4, UD5, and UD6 when their installation is completed. In addition, even when the installation location of at least one sound source detection unit is changed, the position information (that is, the absolute position) of the changed installation location can be acquired, so that the distances between sound source detection units UD3, UD4, UD5, and UD6 can still be calculated, and the absolute position (latitude and longitude) of unmanned aerial vehicle dn can also be specified.
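As one way to realize this, the inter-unit distances can be derived from the reported GPS coordinates with the haversine great-circle formula (adequate at the scale of such baselines; altitude differences are ignored here). The coordinates in the example are illustrative only.

    import math

    EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius

    def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
        """Great-circle distance in meters between two (latitude, longitude) points."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
        return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

    # Baseline between two example corner units (coordinates are invented).
    print(haversine_m(35.6895, 139.6917, 35.6900, 139.6920))  # about 62 m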
In a fourth exemplary embodiment, an example of determining the correctness of a detection of unmanned aerial vehicle dn by the two sound source detection units UD1 and UD2 will be described.
On the other hand, when detection information is received (YES in S81), communicator 95 determines whether or not detection information from detection device DT2 is received (S82). When that detection information is not received (NO in S82), the processing of communicator 95 returns to step S81. Note that detection information is checked here in the order of detection devices DT1 and DT2, but the check may be performed in the reverse order, that is, in the order of step S82 and then step S81.
Based on the received detection information, distance calculation processor 91 superimposes and displays identification marks mk representing unmanned aerial vehicle dn on each of omnidirectional images GZ11 and GZ12 (S83). Distance calculation processor 91 then determines the positions of the two identification marks mk1 and mk2 superimposed and displayed on the two omnidirectional images GZ11 and GZ12 (S84).
Distance calculation processor 91 determines whether or not the azimuths of the two identification marks mk1 and mk2 are aligned (S85). When the azimuths of the two identification marks mk1 and mk2 are aligned (YES in S85), distance calculation processor 91 determines that unmanned aerial vehicle dn is correctly detected and displays an alarm on display 96 to alert the user (S86). Distance calculation device 90 may also issue an audible alarm from a speaker (not shown). Thereafter, distance calculation processor 91 returns to step S81.
On the other hand, when the azimuths of the two identification marks mk1 and mk2 are not aligned (NO in S85), distance calculation processor 91 determines that unmanned aerial vehicle dn is falsely detected, does not notify the user, and returns to the processing of step S81.
Note that, in this case, an alert is issued when the detection is correct and nothing is issued when the detection is incorrect; however, a notification indicating a correct detection may be issued when the detection is correct, and a notification indicating an incorrect detection may be issued when the detection is incorrect.
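A minimal sketch of the alignment test in step S85 follows; treating "aligned" as agreement within an angular tolerance is an assumption, and the 10-degree tolerance is invented for illustration.

    # Hypothetical azimuth-alignment check for the correctness determination.
    def angular_difference_deg(a: float, b: float) -> float:
        """Smallest absolute difference between two angles, in degrees."""
        return abs((a - b + 180.0) % 360.0 - 180.0)

    def azimuths_aligned(azimuth_ud1: float, azimuth_ud2: float,
                         tolerance_deg: float = 10.0) -> bool:
        return angular_difference_deg(azimuth_ud1, azimuth_ud2) <= tolerance_deg

    if azimuths_aligned(246.8, 240.26):
        print("correct detection: alert the user")   # corresponds to S86
    else:
        print("possible false detection: no alert")  # back to S81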
As described above, the unmanned aerial vehicle detection system according to the fourth exemplary embodiment determines the correctness of a detection of unmanned aerial vehicle dn, and notifies the user of the determination result, depending on whether or not the azimuths based on the pointing directions at the time when sound source detection unit UD1 (first detection unit) and sound source detection unit UD2 (second detection unit) detect unmanned aerial vehicle dn are aligned.
In the unmanned aerial vehicle detection system according to the fourth exemplary embodiment, it is possible to easily determine whether a detection of unmanned aerial vehicle dn by the two sound source detection units UD1 and UD2 is correct. Therefore, even if there are many sound sources in the sky, it is possible to easily determine the presence or absence of unmanned aerial vehicle dn.
Although various exemplary embodiments have been described with reference to the drawings, it goes without saying that the present invention is not limited to these examples. It will be apparent to those skilled in the art that various modifications and amendments can be conceived within the scope described in the claims, and it is understood that they naturally belong to the technical scope of the present invention.
The present invention is useful as an unmanned aerial vehicle detection system and an unmanned aerial vehicle detection method that can easily determine the existence and position of an unmanned aerial vehicle from a captured image when detecting the unmanned aerial vehicle.
Priority claims:
  Number: 2016-148471 | Date: Jul 2016 | Country: JP | Kind: national
  Number: 2017-107170 | Date: May 2017 | Country: JP | Kind: national

Filing document: PCT/JP2017/024476 | Filing date: 7/4/2017 | Country: WO | Kind: 00