This application claims the benefit of Korean Patent Application No. 10-2023-0156278, filed on Nov. 13, 2023, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein in its entirety by reference.
The present disclosure relates to a system for detecting a fire.
The convergence of video security with information and communication technologies such as artificial intelligence (AI), deep learning, and the Internet of Things, which are driving the Fourth Industrial Revolution, is rapidly accelerating. In the past, closed circuit television (CCTV) was used only for crime prevention, but in recent years it has been integrated with artificial intelligence technology and has expanded into a device that can also prevent accidents.
Among these, one technology gaining attention is a fire detection system. When the system detects flames or sparks that it has been trained in advance to recognize, it presents the detection to a monitoring agent, thereby making efficient monitoring possible.
However, many of the fire detection systems developed to date have difficulty both in accurately detecting a fire and in excluding false alarms.
Accordingly, when a fire breaks out in a mountainous area, the occurrence and spread rate of the fire cannot be accurately predicted, so the forest fire cannot be suppressed at an early stage, resulting in damage.
[Related Art Document] Korean Patent Application Publication No. 10-2023-0147831 (Published on Oct. 24, 2023)
The present disclosure seeks to solve, by using a visible image camera module and a thermal image camera module, the problem that a forest fire is not suppressed at an early stage because the fire is neither accurately detected nor distinguished from false alarms.
The problems that the present disclosure seeks to solve may not be limited to those mentioned above, and other technical problems not mentioned will be clearly understood by those skilled in the art from the following description.
To solve the above problem, the wildfire detection system of the present disclosure includes a camera unit composed of a visible image camera module that photographs a certain area and generates a first video image and a thermal image camera module that photographs the same area as the area photographed by the visible image camera module and generates a second video image; and a main server unit provided with an object classification extraction module in which a pre-set reference flame object is included and which receives the first video image and the second video image from the camera unit, detects in the first video image a first video extraction flame object matching the pre-set reference flame object, detects an area that does not match the reference flame object as a first video extraction background object, detects in the second video image a second video extraction flame object matching the pre-set reference flame object, and detects an area that does not match the reference flame object as a second video extraction background object, a temperature extraction module that extracts a temperature of the second video extraction flame object at a location that overlaps with the first video extraction flame object detected in the object classification extraction module, and a fire sensing module that generates a fire signal when at least one of the temperatures extracted by the temperature extraction module is greater than a pre-set fire reference temperature and generates a non-fire signal when both the temperature extracted from the first video extraction flame object and the temperature extracted from the second video extraction flame object are less than the fire reference temperature.
The main server unit includes a fire location extraction module that, when the fire signal is generated in the fire sensing module, extracts from the first video image and the second video image location information of the area where the first video image and the second video image are photographed.
The main server unit includes a notification alarm module that outputs a pre-set message when the fire signal is generated, and a shelter location check module in which location information of at least one shelter is pre-stored.
A wireless terminal unit is further included, on which a fire evacuation app is installed and which communicates data with the main server unit and receives a message from the notification alarm module of the main server unit.
The fire evacuation app includes a map image and, when the app is turned on after the message from the notification alarm module is received, displays on the map image the location of the wireless terminal unit and the location of the shelter received from the shelter location check module.
The main server unit further includes a computation module that communicates data with a satellite, calculates vegetation data including a vegetation index and a canopy temperature from data observed by the satellite, and calculates soil moisture data from a brightness temperature or a backscattering coefficient observed through a microwave sensor installed in the satellite; a variable data receiving module that, while communicating data with the satellite, receives wind data at a location corresponding to the location information extracted by the fire location extraction module and receives the vegetation data and the soil moisture data calculated in the computation module; a movement route generation module that generates a plurality of movement routes from the location of the wireless terminal unit to the location of the shelter and transmits them to the wireless terminal unit; and a fire spread route extraction module that calculates a soil spread potential index, which indicates the degree of likelihood that a fire in a set section will spread, by inputting the vegetation data and the soil moisture data received from the variable data receiving module into the vegetation data/soil moisture data formula, calculates a wind spread potential index by performing a vector product operation on the wind data with the location information extracted by the fire location extraction module, and extracts, through an inner product calculation of the soil spread potential index and the wind spread potential index, an estimated route through which the fire spreads from the location extracted by the fire location extraction module.
Among the plurality of movement routes generated in the movement route generation module, the main server unit detects, as a fire avoidance movement route, a movement route whose wind spread potential index is small, that does not overlap with the estimated route generated in the fire spread route extraction module, and that has the shortest distance from the wireless terminal unit to the shelter, and transmits the detected route to the wireless terminal unit.
The present disclosure accurately detects and identifies fire and fire risk factors at long distances through an object algorithm, so that whether a fire has occurred can be checked 24 hours a day. Furthermore, after segmenting a video image into a plurality of segmented areas, the present disclosure zooms in on the segmented area in which an object is detected and performs a secondary segmentation, so that the characteristics of the object can be identified. In addition, after receiving the fire occurrence location data transmitted from the camera unit, the main server unit calculates the location of the pre-set shelter, the movement route data between the fire occurrence location and the shelter, and the movement time for each movement route, together with the soil moisture data, the vegetation data, and the wind direction data over time for each movement route, so that people adjacent to the location of the fire can be provided with a safe route to a shelter that avoids the fire's spread route.
Advantages and features of the present disclosure and methods for achieving them will become clear by referring to the exemplary embodiments described in detail below along with the accompanying drawings. However, the present disclosure is not limited to the exemplary embodiments disclosed below and may be implemented in various different forms; these exemplary embodiments are provided to make the present disclosure complete and to fully convey the scope of the present disclosure to one of ordinary skill in the art to which the present disclosure belongs.
The scope of the present disclosure may be defined by the claims and the description in support of the claims. In addition, the same reference numerals refer to the same elements throughout the specification.
Hereinafter, a wildfire detection system according to an exemplary embodiment of the present disclosure will be described in detail with reference to
Hereinafter, referring to
The wildfire detection system 1 may compute various data together with the characteristics of the object identified in this way, and may provide an optimal route to a shelter to the wireless terminal adjacent to the location where the fire occurs, that is, the wireless terminal unit 30 pre-set in the main server unit 20, thereby helping keep people safe and prepared.
Hereinafter, the components that constitute the wildfire detection system will be described in detail with reference to
Such a wildfire detection system 1 may include a camera unit 10, a main server unit 20, and a wireless terminal unit 30 as components.
The camera unit 10 may photograph a certain area from a place where it is installed by the user, such as a building or a forest. Then, while photographing the certain area, a visible image, that is, a first video image A10, and a thermal image, that is, a second video image A20 may be generated for the corresponding area. As shown in
Likewise, both the first video image A10 and the second video image A20 may be images where the same area is photographed. The camera unit 10 may transmit the first video image A10 and the second video image A20 in real time to the main server unit 20 connected to the wireless network.
The location of such a camera unit 10 may be controlled by data transmitted from the main server unit 20.
Accordingly, the camera unit 10 may photograph areas with a high fire risk as targets of intensive management depending on local conditions. In addition, latitude and longitude for up to 10 locations may be received from the main server unit 20 and monitored in order from the farthest to the nearest. In this case, the camera unit 10 may set a plurality of arbitrary reference locations in the surveillance area, receive accurate longitude and latitude for each specified reference location, compare the measured distance to the fire occurrence with the actual distance to identify errors in the measured data, and photograph a certain area to generate the first video image A10 and the second video image A20 while correcting the occurrence location.
By applying YOLO object detection technology, which detects the locations of several objects in a video, the main server unit 20 may collect fire and non-fire image data, label the data as fire or non-fire and pre-process it (resizing, normalization) on the basis of the YOLO model, and build and train a deep learning model based on the YOLO algorithm. In addition, by processing the first video image A10 and the second video image A20 transmitted from the camera unit 10, the main server unit 20 may detect a flame object in the first video image A10 and the second video image A20 and transmit the detection result to the wireless terminal unit 30.
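The disclosure names the YOLO algorithm but does not tie the main server unit 20 to any particular implementation. The following is a minimal sketch of the described collect/label/pre-process/train/detect flow, assuming the `ultralytics` Python package, a hypothetical `fire.yaml` dataset configuration whose labels include a "fire" class, and a pretrained `yolov8n.pt` checkpoint as a starting point.

```python
# Minimal sketch of the YOLO-based flow described above (assumptions: the
# "ultralytics" package, a hypothetical "fire.yaml" dataset config whose labels
# include a "fire" class, and a pretrained "yolov8n.pt" starting checkpoint).
from ultralytics import YOLO

def train_fire_model():
    model = YOLO("yolov8n.pt")                           # pretrained base model
    # labeled fire / non-fire images, resized and normalized by the framework
    model.train(data="fire.yaml", epochs=50, imgsz=640)
    return model

def detect_flame_objects(model, frame):
    """Return bounding boxes [x1, y1, x2, y2] of detections labeled 'fire'."""
    boxes = []
    for result in model(frame):                          # inference on one frame
        for box in result.boxes:
            if model.names[int(box.cls)] == "fire":
                boxes.append(box.xyxy[0].tolist())
    return boxes
```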
The main server unit 20 may include an object classification extraction module 201, a temperature extraction module 202, a fire sensing module 203, a fire location extraction module 204, a notification alarm module 205, and a shelter location check module 206.
The object classification extraction module 201 may include a pre-set reference base object and a pre-set reference flame object and receive the first video image A10 and the second video image A20 from the camera unit 10. The object classification extraction module 201 may include an object detector 2011, an area segmenter 2012, and an object aligner 2013, and may thereby more accurately classify base objects (A12, A22) and flame objects (A11, A21) in the received first video image A10 and second video image A20.
Hereinafter, the manner in which the object classification extraction module 201 detects a video extraction base object and a video extraction flame object in the first video image A10 and the second video image A20 through the object detector 2011 and the area segmenter 2012 will be described in detail.
The object detector 2011 may perform frame segmentation, which divides a video into frame units for fire detection, background extraction, which accurately extracts the background by removing dynamic elements (such as moving objects other than fires) from the video, and color space conversion, which converts RGB into HSV, LAB, and other color spaces to utilize color information. In addition, the object detector 2011 may detect an object that matches a pre-set reference flame object in the first video image A10, calculate a matching value between the detected object and the reference flame object, and classify the detected object as a flame object when the matching value exceeds a reference value. An object that is not classified as a flame object, or an object that matches a pre-set reference base object, may also be detected, and a matching value between the detected object and the reference base object may be calculated so that the detected object is classified as a base object when the matching value exceeds a reference value. For example, the object detector 2011 may detect a first video extraction flame object A11 that matches a pre-set reference flame object in the first video image A10 as shown in
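The paragraph above describes frame extraction, removal of dynamic non-fire elements, color space conversion, and a matching-value threshold, without fixing concrete parameters. Below is a hedged OpenCV sketch of that flow; the HSV flame-color range, the 0.6 reference matching value, and the choice of background subtractor are illustrative assumptions, not values from the disclosure.

```python
# Illustrative sketch of the object detector 2011 steps: background estimation,
# RGB/BGR -> HSV conversion, and a matching-value threshold. The flame color
# range and the 0.6 threshold are assumptions for illustration only.
import cv2
import numpy as np

FLAME_HSV_LO = np.array([0, 120, 180])    # assumed lower HSV bound for flame colors
FLAME_HSV_HI = np.array([35, 255, 255])   # assumed upper HSV bound
MATCH_THRESHOLD = 0.6                     # assumed reference matching value

def classify_frame(frame_bgr, background_subtractor):
    # 1) suppress the static background; keep dynamic (candidate) regions
    fg_mask = background_subtractor.apply(frame_bgr)
    # 2) convert color space to use color information (BGR -> HSV)
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    flame_mask = cv2.inRange(hsv, FLAME_HSV_LO, FLAME_HSV_HI)
    # 3) matching value: fraction of dynamic pixels with flame-like color
    candidate = cv2.bitwise_and(flame_mask, fg_mask)
    fg_pixels = cv2.countNonZero(fg_mask)
    match_value = cv2.countNonZero(candidate) / fg_pixels if fg_pixels else 0.0
    label = "flame_object" if match_value > MATCH_THRESHOLD else "background_object"
    return label, match_value

# usage: subtractor = cv2.createBackgroundSubtractorMOG2()
#        label, score = classify_frame(frame, subtractor)
```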
The area segmenter 2012 may segment the first video image A10 and the second video image A20 into a plurality of segmentation areas. For example, the area segmenter 2012 may segment the first video image A10 into a 3×3 grid to generate a first split video image A13, and may segment the second video image A20 into a 3×3 grid to generate a second split video image A23. In this case, the first split video image A13 and the second split video image A23 may each include a first segmentation area ({circle around (1)}) to a ninth segmentation area ({circle around (9)}), as shown in
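As a simple illustration of the 3×3 segmentation just described, the sketch below splits one frame into nine segmentation areas; the row-major numbering (area 1 at the upper left through area 9 at the lower right) is an assumption, since the text does not specify the ordering.

```python
# Sketch of the area segmenter 2012: splitting a frame into a 3x3 grid of
# segmentation areas, numbered 1..9 in row-major order (an assumed convention).
import numpy as np

def split_3x3(image):
    """Return a dict mapping segmentation-area index (1..9) to the sub-image."""
    h, w = image.shape[:2]
    rows = np.array_split(np.arange(h), 3)
    cols = np.array_split(np.arange(w), 3)
    areas = {}
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            areas[i * 3 + j + 1] = image[r[0]:r[-1] + 1, c[0]:c[-1] + 1]
    return areas
```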
The object aligner 2013 may align the first video extraction flame object A11 and the first video extraction background object A12 classified in the visible image A10, and the second video extraction flame object A21 and the second video extraction background object A22 classified in the thermal image A20 so that the first video extraction flame object A11 and the second video extraction flame object A21 overlap each other.
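The disclosure does not specify how the object aligner 2013 brings the two camera modules' images into registration. One hedged way to sketch it is a homography warp of the thermal image onto the visible image using calibrated corresponding points; the image sizes and the four point pairs below are hypothetical placeholders that a real system would obtain from calibration of the two modules.

```python
# Hedged sketch of one possible alignment step for the object aligner 2013:
# warping the thermal image onto the visible image via a homography so that
# the flame objects overlap. The calibration points below are hypothetical.
import cv2
import numpy as np

# assumed corresponding points (thermal-image pixels <-> visible-image pixels)
PTS_THERMAL = np.float32([[0, 0], [640, 0], [640, 512], [0, 512]])
PTS_VISIBLE = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])

def align_thermal_to_visible(thermal_image, visible_shape=(1080, 1920)):
    H, _ = cv2.findHomography(PTS_THERMAL, PTS_VISIBLE)
    return cv2.warpPerspective(thermal_image, H,
                               (visible_shape[1], visible_shape[0]))
```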
The object classification extraction module 201 may enable the location of the first video extraction flame object to be accurately identified in the first video image A10 and enable the temperature extraction module 202 to accurately extract the temperature of the second video extraction flame object A21 that overlaps with the first video extraction flame object.
The temperature extraction module 202 may extract the temperature of the second video extraction flame object A21 at a location that overlaps with the first video extraction flame object A11 detected in the object classification extraction module 201. Moreover, the temperature extraction module 202 may extract the maximum and minimum temperature values in the second video extraction flame object A21 through temperature analysis and a set threshold value, and determine whether there is a fire by comparing them with a pre-set fire reference temperature that is arbitrarily set in consideration of the temperature rise caused by a fire. For example, the temperature extraction module 202 may extract the temperature of a first split part enlargement area {circle around (5)}-{circle around (1)} of the second split part enlargement image A24 as 25° C., the temperature of a second split part enlargement area {circle around (5)}-{circle around (2)} as 120° C., and the temperature of a third split part enlargement area {circle around (5)}-{circle around (3)} as 25° C. as shown in
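A minimal sketch of this extraction step is shown below, assuming the thermal (second) image is available as a 2-D array of temperatures in °C and the overlapping flame region as a pixel box; these representations are assumptions for illustration, not the disclosure's data format.

```python
# Sketch of the temperature extraction module 202: reading the maximum and
# minimum temperatures of the thermal (second) image inside the region that
# overlaps the flame object detected in the visible (first) image.
import numpy as np

def extract_region_temperatures(thermal_deg_c, flame_box):
    """flame_box is (x1, y1, x2, y2) in pixel coordinates of the aligned images."""
    x1, y1, x2, y2 = flame_box
    region = np.asarray(thermal_deg_c)[y1:y2, x1:x2]
    return float(region.max()), float(region.min())

# example: a 3x3 area whose center reaches 120 degC, as in the text
thermal = np.full((3, 3), 25.0)
thermal[1, 1] = 120.0
print(extract_region_temperatures(thermal, (0, 0, 3, 3)))   # (120.0, 25.0)
```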
The object classification extraction module 201 may include the object detector 2011, the area segmenter 2012, and the object aligner 2013 to accurately identify the occurrence of a fire, and may make it possible to accurately identify the location of the fire when it is determined that there is a fire. In addition, the temperature extraction module 202 may accurately determine the temperature of the portion of the flame object identified in the object classification extraction module 201.
The fire sensing module 203 may generate a fire signal when at least one of the temperatures extracted by the temperature extraction module 202 is greater than a pre-set fire reference temperature. A non-fire signal may be generated when both the temperature extracted from the first video extraction flame object A11 and the temperature extracted from the second video extraction flame object A21 are smaller than the fire reference temperature. For example, the fire sensing module 203 may generate a fire signal when the temperature of any one of the split part enlargement areas {circle around (5)}-{circle around (1)} to {circle around (5)}-{circle around (3)} of the second split part enlargement image A24 is greater than the fire reference temperature, while a non-fire signal is generated when the temperatures of all of the split part enlargement areas {circle around (5)}-{circle around (1)} to {circle around (5)}-{circle around (3)} of the second split part enlargement image A24 are smaller than the fire reference temperature as shown in
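The decision rule above reduces to a simple comparison, sketched below; the 80 °C fire reference temperature is an assumed example value (the disclosure only says it is pre-set), and the per-area temperatures follow the 25 °C / 120 °C / 25 °C example from the preceding paragraph.

```python
# Sketch of the fire sensing module 203 decision rule: a fire signal when any
# extracted temperature exceeds the pre-set fire reference temperature, and a
# non-fire signal when all extracted temperatures stay below it.
def sense_fire(area_temperatures_c, fire_reference_temp_c=80.0):  # 80 degC is assumed
    if any(t > fire_reference_temp_c for t in area_temperatures_c):
        return "fire_signal"
    return "non_fire_signal"

print(sense_fire([25.0, 120.0, 25.0]))   # -> fire_signal
print(sense_fire([25.0, 25.0, 25.0]))    # -> non_fire_signal
```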
When a fire signal is generated in the fire sensing module 203, the fire location extraction module 204 may extract from the first video image A10 and the second video image A20 location information of the area where the first video image A10 and the second video image A20 are photographed. For example, the location information of the first video extraction flame object A11 or the second video extraction flame object A21 may be extracted from the location information of the camera unit 10 installed in a certain area, the area photographed by the camera unit 10, and the location information identified by the object classification extraction module described above.
When a fire signal is generated in the fire sensing module 203, the notification alarm module 205 may output a pre-set message. Herein, the message may be a message with content such as "There is a fire at location xx, please turn on the fire evacuation app" sent to the wireless terminal unit 30.
The shelter location check module 206 may include location information of at least one shelter (B). The shelter location check module 206 may pre-store location information of a first shelter and a second shelter located near a first location, and location information of a third shelter and a fourth shelter located near a second location.
The wireless terminal unit 30 may transmit and receive data to and from the main server unit 20 through wireless communication, and may be a terminal on which various applications can be installed. As shown in
The map image 321 may show the current location (C) of the wireless terminal unit 30, a shelter (B) located near an area where the fire occurs, and various movement routes (D) from the location of the wireless terminal unit 30 to the shelter (B). More specifically, as shown in
The main server unit 20 may further include a computation module 207, a variable data receiving module 208, a movement route generation module 209, a fire spread route extraction module 210, and a database module 211 as components in order to provide an optimal route for avoiding fire, that is, a fire avoidance movement route (E) to the fire evacuation app 320.
The computation module 207 may communicate data with the satellite (F) to receive data observed by the satellite (F) and calculate vegetation data (H) including a vegetation index and a canopy temperature. In addition, a brightness temperature or backscattering coefficients observed through a microwave sensor installed in the satellite (F) may be received to calculate soil moisture data (I).
The variable data receiving module 208 may communicate data with the satellite (F) and receive wind data (G) of a location corresponding to the location information extracted from the fire location extraction module 204. Vegetation data (H) and soil moisture data (I) calculated in the computation module 207 may be received.
The movement route generation module 209 may generate a plurality of movement routes (D) from the location of the wireless terminal unit 30 to the location of the shelter and transmit the same to the wireless terminal unit 30. In this case, the plurality of movement routes (D) may be formed by a combination of various passageways on the basis of a pre-set passageway.
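One hedged way to sketch this route generation is to treat the pre-set passageways as a weighted graph and enumerate several loop-free paths from the terminal to the shelter; the graph, node names, and distances below are hypothetical, and the `networkx` package is an assumption rather than anything named in the disclosure.

```python
# Illustrative sketch of the movement route generation module 209: enumerating
# several candidate routes over a hypothetical pre-set passageway graph.
from itertools import islice
import networkx as nx

# hypothetical passageway graph; edge weights are walking distances in meters
G = nx.Graph()
G.add_weighted_edges_from([
    ("terminal", "trail_a", 300), ("terminal", "trail_b", 450),
    ("trail_a", "road_1", 500), ("trail_b", "road_1", 350),
    ("road_1", "shelter", 400), ("trail_b", "shelter", 900),
])

def candidate_routes(graph, source="terminal", target="shelter", k=4):
    """Return up to k loop-free routes from source to target, shortest first."""
    paths = nx.shortest_simple_paths(graph, source, target, weight="weight")
    return list(islice(paths, k))
```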
The fire spread route extraction module 210 may receive the wind data (G) from the variable data receiving module 208, together with the vegetation data (H) and the soil moisture data (I) calculated in the computation module 207.
An estimated route (J) through which a fire spreads may be extracted by applying a pre-set formula to the vegetation data (H) and the soil moisture data (I). For example, when a wind blowing toward the northwest is present, the fire spread route extraction module 210 may predict that the fire will follow the wind and spread to the northwest from the point where the fire started, and may extract an estimated route (J) in which the predicted spread route is stopped or diverted according to the vegetation data (H) and the soil moisture data (I). Herein, the wind data (G) may be wind data provided by the meteorological administration. As shown in
As can be seen from the formula, the directional surface temperature (TR) and the soil surface temperature (TS) according to the view angle are required in order to calculate the canopy temperature (Tc). The directional surface temperature (TR) according to the view angle may be calculated by the formula, where TB(θ) is a directional brightness temperature measured by the satellite at a particular view angle (θ). ε(θ) is the directional emissivity at the view angle θ, which indicates the efficiency with which the surface of an object emits energy when radiating heat. TSKY is the hemispherical brightness temperature of the sky, for which 2.7 K, the temperature of the black-body radiation that fills the universe, is applied. Herein, n may be calculated according to the formula through the Planck function and the wavelength (λ) of the sensor mounted on the satellite that observes the brightness temperature. TO is the brightness temperature input to the Planck function, and n=1 may be applied in the range of brightness temperatures TO=190˜315 K measured at the 21 cm wavelength condition, that is, λ=210,000 μm, in the case of the Soil Moisture Active Passive (SMAP) satellite.
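A hedged reconstruction of the relations described above, following standard thermal remote-sensing formulations, may be written as follows; the fractional vegetation cover f(θ) and the second radiation constant c₂ ≈ 1.4388 × 10⁴ μm·K are assumed terms not named in the text, so this is a sketch rather than the disclosure's exact formula.

```latex
% Hedged reconstruction of the relations described above (standard thermal
% remote-sensing forms); f(\theta) and c_2 are assumptions not named in the text.
\begin{align}
  % directional surface temperature from the measured brightness temperature
  T_R(\theta) &= \left[\frac{T_B(\theta)^{\,n} - \bigl(1-\varepsilon(\theta)\bigr)\,T_{SKY}^{\,n}}
                            {\varepsilon(\theta)}\right]^{1/n},\\
  % canopy/soil decomposition of the directional surface temperature, solved for T_C
  T_R(\theta)^{\,n} &= f(\theta)\,T_C^{\,n} + \bigl(1-f(\theta)\bigr)\,T_S^{\,n}
    \;\Longrightarrow\;
    T_C = \left[\frac{T_R(\theta)^{\,n} - \bigl(1-f(\theta)\bigr)\,T_S^{\,n}}{f(\theta)}\right]^{1/n},\\
  % linearization exponent n from the Planck function at wavelength \lambda
  n &= \frac{x\,e^{x}}{e^{x}-1},\qquad x=\frac{c_2}{\lambda\,T_O}.
\end{align}
% For the SMAP case (\lambda = 210{,}000~\mu\mathrm{m}, T_O = 190\text{--}315~\mathrm{K}),
% x \ll 1 and therefore n \approx 1, consistent with the text.
```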
The soil moisture data may be data representing the amount of water contained in the soil. Such soil moisture data may be expressed as the volume of water as a fraction of the total volume of soil, water, and air, in %. Such soil moisture data may be calculated on a global scale through satellites. For example, the soil moisture data may be calculated using the brightness temperature obtained through a passive microwave sensor installed on the satellite and the backscattering coefficients obtained through an active microwave sensor installed on the satellite.
Such soil moisture data may indirectly identify the moisture contained in the soil and may be utilized as information on the combustibility of vegetation near the land surface along with the vegetation data described above.
The movement route generation module 209 may generate a plurality of movement routes from the location of the wireless terminal unit 30 to the location of the shelter in the wireless terminal unit 30.
The fire spread route extraction module 210 may calculate a soil spread potential index, which indicates the degree of the possibility of fire spreading in a set section, by putting the vegetation data (H) and the soil moisture data (I) received from the variable data receiving module 208 into the vegetation data/soil moisture data formula. The soil spread potential index calculated in this way may indicate a high or low risk of wildfire occurrence according to the size of its value. Herein, when the value of the soil spread potential index is greater than or equal to 120 K/%, it may be determined that the risk of wildfire occurrence is high, and when the value of the soil spread potential index is less than or equal to 20 K/%, it may be determined that the risk of wildfire occurrence is low. In addition, the fire spread route extraction module 210 may calculate a wind spread potential index, which scores the possibility of fire movement according to the wind data, by performing a vector product operation on the wind data (G) with the location information extracted from the fire location extraction module 204. In this way, by performing vector product calculations, the fire spread route extraction module 210 may obtain the wind spread potential index as data in three-dimensional space. In addition, the fire spread route extraction module 210 may extract an estimated route (J) along which the fire spreads from the location extracted by the fire location extraction module 204 by performing an inner product operation on the calculated soil spread potential index and the wind spread potential index. In addition, the fire spread route extraction module 210 may detect, as a fire avoidance movement route (K), the movement route having the smallest total spread potential index among the total spread potential indices calculated for the plurality of movement routes, and transmit it to the wireless terminal unit 30. Herein, the soil moisture data, which is the denominator of the vegetation data/soil moisture data formula, may be a value calculated using the brightness temperature or backscattering coefficients described above, and the vegetation data, which is the numerator, may be the canopy temperature. In this case, the canopy temperature may be a factor indicating the moisture level of the vegetation and may be calculated in the fire spread route extraction module 210.
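The index arithmetic described above can be sketched as follows. The soil index uses the vegetation-data over soil-moisture ratio named in the text (canopy temperature in K over soil moisture in %, giving K/%); how the wind vector and the fire-to-cell direction are combined per compartment, and the example numbers, are illustrative assumptions beyond what the text specifies.

```python
# Hedged sketch of the spread-index computation in the fire spread route
# extraction module 210. Per-cell combination rules and example values are
# assumptions for illustration only.
import numpy as np

def soil_spread_potential_index(canopy_temp_k, soil_moisture_pct):
    # vegetation data / soil moisture data, in K/%; >= 120 suggests high risk, <= 20 low risk
    return canopy_temp_k / soil_moisture_pct

def wind_spread_potential_index(wind_vector, fire_to_cell_vector):
    # vector (cross) product of the wind data with the direction from the fire
    # location toward the cell; the magnitude is used as a per-cell score here
    return float(np.linalg.norm(np.cross(wind_vector, fire_to_cell_vector)))

def route_spread_value(soil_indices, wind_indices):
    # inner product of the per-cell soil and wind spread potential indices along a route
    return float(np.dot(soil_indices, wind_indices))

# example: two candidate routes, each crossing three grid cells
soil_r1 = np.array([130.0, 40.0, 25.0])    # K/% per cell
wind_r1 = np.array([0.8, 0.3, 0.1])        # per-cell wind spread scores
soil_r2 = np.array([30.0, 25.0, 20.0])
wind_r2 = np.array([0.2, 0.1, 0.1])
print(route_spread_value(soil_r1, wind_r1))   # higher value -> riskier route
print(route_spread_value(soil_r2, wind_r2))   # lower value  -> safer route
```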
The database module 211 may store data observed by the satellite and vegetation information including canopy temperature, soil moisture, and the like. In addition, data observed by the satellite equipped with a microwave sensor may be stored. Of course, the database module 211 may store various data such as geography, weather, vegetation information, and soil moisture, in addition to the data observed by the microwave sensor.
Hereinafter, the operation of the wildfire detection system according to an exemplary embodiment of the present disclosure will be described in detail with reference to
The wildfire detection system 1 of the present disclosure may start with obtaining the first video image A10 and a second video image A20 from the camera unit 10, as shown in
In this case, when the wireless terminal unit 30 receives a fire signal, the fire evacuation app 320 may be activated by a user as shown in
The inner product calculation value ‘D’ may be calculated for the first compartment area ({circle around (2)}-{circle around (1)}) of the second movement route (D2), the inner product calculation value ‘B’ may be calculated for the second compartment area ({circle around (2)}-{circle around (2)}), the inner product calculation value ‘A’ may be calculated for the third compartment area ({circle around (2)}-{circle around (3)}) of the second movement route (D2), and the inner product calculation value ‘B’ may be calculated for the fourth compartment area ({circle around (2)}-{circle around (4)}). The inner product calculation value ‘D’ may be calculated for the fifth compartment area ({circle around (2)}-{circle around (5)}). The inner product calculation value ‘D’ may be calculated for the first compartment area ({circle around (3)}-{circle around (1)}), the second compartment area ({circle around (3)}-{circle around (2)}), the third compartment area ({circle around (3)}-{circle around (3)}), the fourth compartment area ({circle around (3)}-{circle around (4)}), the fifth compartment area ({circle around (3)}-{circle around (5)}), the sixth compartment area ({circle around (3)}-{circle around (6)}), and the seventh compartment area ({circle around (3)}-{circle around (7)}) of the third movement route (D3). The inner product calculation value ‘D’ may be calculated for the first compartment area ({circle around (4)}-{circle around (1)}), the second compartment area ({circle around (4)}-{circle around (2)}), the third compartment area ({circle around (4)}-{circle around (3)}), the fourth compartment area ({circle around (4)}-{circle around (4)}), the fifth compartment area ({circle around (4)}-{circle around (5)}), the sixth compartment area ({circle around (4)}-{circle around (6)}), the seventh compartment area ({circle around (4)}-{circle around (7)}), the eighth compartment area ({circle around (4)}-{circle around (8)}), and the ninth compartment area ({circle around (4)}-{circle around (9)}) of the fourth movement route (D4). Herein, when the calculated inner product value is ‘A’, it may mean that the risk of wildfire spread is very high, ‘B’ may mean that the risk of wildfire spread is high, ‘C’ may mean that the risk of wildfire spread needs attention, and ‘D’ may mean that the probability of wildfire spread is low.
When the main server unit 20 has calculated the inner product values for the compartment areas of the first to fourth movement routes, on the basis of these values the movement route whose wind spread potential index is low, that does not overlap with the estimated route (J) generated in the fire spread route extraction module 210, and that has the shortest distance from the wireless terminal unit 30 to the shelter may be detected as the fire avoidance movement route (K) among the plurality of movement routes generated by the movement route generation module 209. The detected fire avoidance movement route (K) may then be transmitted to the wireless terminal unit 30 and displayed on the map image 321 as shown in
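A compact sketch of this selection rule is shown below; the Route structure and the way the overlap and distance checks are encoded are assumptions for illustration, not the disclosure's data model.

```python
# Sketch of the route-selection rule described above: drop routes overlapping
# the estimated fire spread route, then prefer a low wind spread potential
# index and, among those, the shortest distance to the shelter.
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    cells: set            # compartment areas the route passes through
    wind_index: float     # wind spread potential index along the route
    distance_m: float     # distance from the wireless terminal unit to the shelter

def fire_avoidance_route(routes, estimated_spread_cells):
    # 1) drop routes that overlap the estimated fire spread route (J)
    safe = [r for r in routes if not (r.cells & estimated_spread_cells)]
    if not safe:
        return None
    # 2) prefer a low wind spread potential index, then the shortest distance
    return min(safe, key=lambda r: (r.wind_index, r.distance_m))
```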
Accordingly, the wildfire detection system 1 may enable the user using the wireless terminal unit 30 to safely reach the shelter along the route displayed on the map image 321.
As such, the wildfire detection system 1 may secure high reliability in detecting flames. Moreover, it is possible to more accurately determine whether there is a fire by extracting information about the temperature, size, and direction of the detected flame. In addition, by calculating, from the wind data (G) obtained from the satellite and the vegetation data (H) and soil moisture data (I) calculated in the computation module, the direction in which the fire will spread from the point where it occurs, the evacuee (L) can avoid the route along which the fire spreads and move along a safe route to the shelter (B) without being caught up in the fire.
Although exemplary embodiments of the present disclosure have been described above with reference to the accompanying drawings, those of ordinary skill in the art to which the present disclosure pertains will understand that the present disclosure may be implemented in other specific forms without changing its technical idea or essential features. Therefore, it should be understood that the exemplary embodiments described above are illustrative in all respects and not restrictive.
Foreign Application Priority Data

| Number | Date | Country | Kind |
|---|---|---|---|
| 10-2023-0156278 | Nov. 2023 | KR | national |

U.S. Patent Documents

| Number | Name | Date | Kind |
|---|---|---|---|
| 10045187 | Soleimani | Aug. 2018 | B1 |
| 11295131 | Dhawan | Apr. 2022 | B1 |
| 11308595 | Wheeler | Apr. 2022 | B1 |
| 20120261144 | Vian | Oct. 2012 | A1 |

Foreign Patent Documents

| Number | Date | Country |
|---|---|---|
| 10-2014-0093840 | Jul. 2014 | KR |
| 102016245 | Aug. 2019 | KR |
| 10-2023-0080522 | Jun. 2023 | KR |
| 10-2023-0147831 | Oct. 2023 | KR |

Other Publications

"Request for the Submission of an Opinion," Office Action issued in KR 10-2023-0156278; mailed by the Korean Intellectual Property Office on Feb. 5, 2024.
"Written Decision on Registration," Office Action issued in KR 10-2023-0156278; mailed by the Korean Intellectual Property Office on Jun. 25, 2024.