This disclosure relates to the technical field of information processing, and more particularly relates to a method and apparatus for assessing an insured loss, a computer device, and a storage medium.
The damage caused by hail and high winds (including hurricanes, typhoons, tornadoes, etc.) to the roofs of houses has always been tricky and costly in the context of insurance claims. Hail damage on a roof is mostly semi-circular or crescent-shaped, and its color and shape differ considerably across roofing materials. It can easily be confused with the damage caused by a blister that forms under high temperatures and ruptures on asphalt roofing materials. In the past claims process, whether the photos were taken with a camera or with the camera built into a mobile phone, only a small portion of the hail damage on the roof could be clearly identified. Damage caused by high winds generally appears as missing or misaligned tiles. The human eye can quickly recognize such damage in a photo, even one that is not sufficiently clear; machine recognition of such damage, however, remains unsatisfactory, especially when the photo is unclear. With the promotion and application of drone photography in the insurance industry, high-definition drone photos make automatic identification of hail and wind damage possible, while machine learning and deep learning provide the necessary methods and tools.
At present, the industry typically relies on an experienced claims adjuster to mark the damaged tiles on the roof with chalk or other tools during an on-site survey, and to find a certain number of damage points within an area to determine whether the claim meets the requirements. In contrast, a recognition system automatically recognizes damaged tiles in photos captured by a drone, and then projects the detected damage onto the roof for a global assessment. This technology eliminates the need for the claims adjuster to climb onto the roof and walk back and forth for 30-75 minutes to find the damage points one by one. It also avoids misjudgments caused by human factors.
In terms of the application of drone photography in the insurance industry, Loveland Innovation LLC and Kespry in the United States are recognized as industry leaders. However, the damage-recognition solutions they provide still rely on the users themselves to confirm and modify the results given by the machine. The accuracy of those machine results is less than 50%, and it drops significantly on an old roof. Moreover, the modification tools they provide are not convenient enough: users need to switch between the picture modification interface and the picture preview interface multiple times to finish modifying a single picture. Therefore, this disclosure is mainly aimed at high-precision automatic identification. Of course, even at high precision, misidentification may still occur.
In view of the above problems, this disclosure aims to provide a method and apparatus for assessing an insured loss, a computer device, and a storage medium.
In order to solve the above technical problems, a technical solution adopted by this disclosure is to provide a method for assessing an insured loss, which includes the following operations:
Further, the operation S2 may include:
Further, the operation S3 may include:
As an improvement, the operation S4 may include:
As a further improvement, the operation S4 may further include:
As a further improvement, the operation S4 may further include:
There is further provided a system for assessing an insured loss, the system including:
There is further provided a computer device that includes a memory and a processor, the memory storing a computer program, which when executed by the processor causes the operations of any one of the above methods to be performed.
There is further provided a computer-readable storage medium having a computer program stored therein, which when executed by a processor causes the operations of any one of the above-mentioned methods to be performed.
According to the method and apparatus for assessing an insured loss, the computer device, and the storage medium provided by this disclosure, by constructing an artificial intelligence damage model and algorithm, high-precision recognition by the machine is realized. Furthermore, the automatic damage recognition process is standardized and simplified, which greatly reduces the processing cycle and the required manpower. For cases of errors or omissions, a complete and easy-to-use set of identification tools is designed, to which an automatic screenshot function is innovatively added. This allows the user to easily modify a damage point; after correcting it, the user only needs to click Save to complete the picture modification, and the screenshots are automatically placed into the report. As a result, the loss assessment cycle has been reduced from a few hours in the current common schemes to a few minutes, which greatly shortens the time for producing reports, improves the work efficiency of claims adjusters, and saves insurance companies' claims settlement costs.
For a better understanding of the objectives, technical solutions, and advantages of the present application, hereinafter the present application will be described in further detail in connection with the accompanying drawings and some illustrative embodiments. It is to be understood that the specific embodiments described here are intended for mere purposes of illustrating this application, rather than limiting.
Hereinafter, the method and apparatus for assessing an insured loss (also interchangeably referred to as loss assessment method and apparatus for sake of brevity), the computer device, and the storage medium provided by the present disclosure will be illustrated in detail in connection with
The user first needs to choose between a real-time loss assessment report and a complete loss assessment report, the main difference being whether the report is combined with the measurement results. If the user chooses the real-time report, then after the drone completes its automatic flight and the pictures are automatically uploaded, the system will automatically run the damage detection algorithm to identify the damage caused by hail or strong wind and mark it on the pictures. The real-time loss assessment report will be completed within 1-3 minutes after the pictures are uploaded, and submitted directly to the user.
If the user chooses the complete loss assessment, the drone will automatically capture a more comprehensive set of images and automatically upload them to the platform. Upon receiving the pictures, the system launches the three-dimensional measurement and calculation and the automatic damage identification algorithm at the same time. Once the three-dimensional measurement is completed, the identified damage is automatically projected onto the overview map of the roof, so that the degree of damage can be further evaluated to ascertain whether it meets the insurance company's claim settlement conditions.
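The disclosure does not specify the projection math. As one plausible sketch, each photo could be related to the roof overview map by a planar homography (assuming a locally planar roof facet), and detected damage coordinates mapped through it. The function name and the matrix values below are hypothetical, for illustration only.

```python
import numpy as np

def project_points(H, points):
    """Project 2-D photo pixel coordinates through a 3x3 homography H
    onto the roof overview map (a sketch, assuming a planar facet)."""
    # Lift to homogeneous coordinates, apply H, then dehomogenize.
    pts = np.hstack([points, np.ones((len(points), 1))])
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Hypothetical homography relating one drone photo to the overview map.
H = np.array([[0.9,  0.05, 12.0],
              [-0.04, 0.92,  8.0],
              [0.0,   0.0,   1.0]])
damage_centers = np.array([[640.0, 360.0], [1020.0, 150.0]])
print(project_points(H, damage_centers))
```

In practice the homography (or a full 3-D camera pose) would come from the three-dimensional measurement step described above.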
After determining the required report type, the loss assessment is started. As illustrated in
This disclosure provides an insured loss assessment method that allows users to choose different flight modes and loss assessment modes according to their own needs, which greatly improves efficiency and reduces costs. In the assessment of damage, human interference and the limitations of individual experience are eliminated to the greatest extent, and the consistency of damage assessment is a revolutionary breakthrough. For insurers, a high degree of consistency and minimal human interference are the most important criteria in the claims settlement process. This embodiment can meet both of these requirements at the same time.
In terms of establishing a usable damage recognition model, ready-made deep learning or machine learning models cannot achieve this goal. The main reason is that the characteristics of hail and wind damage differ greatly from most of the image data that can be collected in daily life. Moreover, before UAVs were commonly used, the quality of most photos could not meet the requirements of model training. Therefore, after a period of exploration and trials of various deep learning and machine learning models, this embodiment first collects enough data for model training before further improving the model's algorithm.
After 9 months of data collection, drone data on nearly 5,000 houses of different types was collected in this embodiment, and the acquired data was then processed, selected, and labeled.
As an exemplary embodiment, the operation S2 may include:
As another exemplary embodiment, the operation S3 may include:
First, the damaged area is identified in all available data pictures, so that the labeling box just covers the damaged area.
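A labeling box that "just covers" the damaged area is simply the tightest axis-aligned bounding box around the damaged pixels. As a minimal sketch (the function name and mask representation are assumptions, not part of the disclosure):

```python
import numpy as np

def tight_bbox(mask):
    """Return the tightest (x_min, y_min, x_max, y_max) box around a
    binary damage mask, so the labeling box just covers the damaged area."""
    ys, xs = np.nonzero(mask)          # row/column indices of damaged pixels
    if len(xs) == 0:
        return None                     # no damaged pixels in this picture
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

Labels in this form can feed standard object-detection training pipelines directly.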
After summarizing all the data points (from the first 5,000 houses), a preliminary statistical table was made, as shown in Table 1 below.
In the model training process, it was found that a single model could not achieve an accuracy higher than 90%. Therefore, in this embodiment, multiple classification models are trained and used for classification at the same time, and the classification results are then weighted and sorted so as to select the 5-10 damage points in which the models are most confident and mark them on the original photos obtained by the drone. This greatly improves the results. Model selection considers both detection accuracy and speed. The main training direction is to increase the fit between the data and the model while minimizing the generalization error. At the same time, the integration of detection models of different scales, and the cascading of detection models with classification models, are adopted to increase the recall rate, so that a detected hail damage is highly likely to be real hail damage.
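The disclosure does not give the weighting formula. One plausible scheme, sketched below, scores each candidate damage point by a weighted sum of the confidences from the individual models (weights might come from each model's validation accuracy) and keeps the top k points; the names `model_outputs`, `weights`, and `top_damage_points` are hypothetical.

```python
from collections import defaultdict

def top_damage_points(model_outputs, weights, k=5):
    """Combine detections from several classification models by a
    weighted confidence score and keep the k highest-scoring points.

    model_outputs: {model_name: [(point_id, confidence), ...]}
    weights:       {model_name: weight}
    """
    scores = defaultdict(float)
    for name, detections in model_outputs.items():
        for point_id, conf in detections:
            scores[point_id] += weights[name] * conf
    # Sort candidate points by combined score, highest first.
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:k]

outputs = {"m1": [("p1", 0.9), ("p2", 0.4)],
           "m2": [("p1", 0.8), ("p3", 0.7)]}
print(top_damage_points(outputs, {"m1": 0.6, "m2": 0.4}, k=2))
```

Points agreed on by several models accumulate score from each, so consensus detections naturally rank above single-model ones, matching the stated goal that a detected hail damage be highly likely to be real.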
As illustrated in
As illustrated in
As illustrated in
There is further provided a system for assessing an insured loss, the system including:
There is further provided a computer device that includes a memory and a processor, the memory storing a computer program, which when executed by the processor causes the operations of any one of the above methods to be performed.
There is further provided a computer-readable storage medium having a computer program stored therein, which when executed by a processor causes the operations of any one of the above-mentioned methods to be performed.
According to the method and apparatus for assessing an insured loss, the computer device, and the storage medium provided by this disclosure, by constructing an artificial intelligence damage model and algorithm, high-precision recognition by the machine is realized. At the same time, the automatic damage recognition process is standardized and simplified, which greatly reduces the processing cycle and the required manpower. For cases of errors or omissions, a complete and easy-to-use set of identification tools is designed, to which an automatic screenshot function is innovatively added. This allows the user to easily modify a damage point; after correcting it, the user only needs to click Save to complete the picture modification, and the screenshots are automatically placed into the report. As a result, the loss assessment cycle has been reduced from a few hours in the prevailing schemes to a few minutes, which greatly shortens the time for producing reports, improves the work efficiency of claims adjusters, and saves insurance companies' claims settlement costs.
Embodiments of this application conducted data collection, labeling, and proofreading in the United States for a continuous period of 9 months, carefully screened and selected from more than 100,000 photos, and built a proprietary deep learning architecture from the ground up, which essentially achieves real-time high-precision recognition and has gained recognition from American insurance companies.
The foregoing merely portrays some illustrative embodiments of the present disclosure. It should be noted that those of ordinary skill in the art will be able to make multiple improvements and modifications without departing from the principle of this disclosure, and these improvements and modifications should all be regarded as falling in the scope of protection of this disclosure.
Number | Date | Country | Kind |
---|---|---|---|
201910462715.1 | May 2019 | CN | national |
This application is a U.S. continuation of co-pending International Patent Application Number PCT/CN2020/093389, filed on May 29, 2020, which claims the priority of Chinese Patent Application Number 201910462715.1, filed on May 30, 2019 with China National Intellectual Property Administration, the disclosures of which are incorporated herein by reference in their entireties.
Number | Name | Date | Kind |
---|---|---|---|
9129355 | Harvey | Sep 2015 | B1 |
9152863 | Grant | Oct 2015 | B1 |
9609288 | Richman | Mar 2017 | B1 |
10635903 | Harvey | Apr 2020 | B1 |
10832476 | Nussbaum | Nov 2020 | B1 |
20140267627 | Freeman | Sep 2014 | A1 |
20140324405 | Plummer | Oct 2014 | A1 |
20150325064 | Downey | Nov 2015 | A1 |
20150343644 | Slawinski | Dec 2015 | A1 |
20150377405 | Down | Dec 2015 | A1 |
20160176542 | Wilkins | Jun 2016 | A1 |
20170270612 | Howe | Sep 2017 | A1 |
20180182039 | Wang et al. | Jun 2018 | A1 |
20190114717 | Labrie | Apr 2019 | A1 |
Number | Date | Country |
---|---|---|
108491821 | Sep 2018 | CN |
108921068 | Nov 2018 | CN |
109344819 | Feb 2019 | CN |
Entry |
---|
Information Extraction From Remote Sensing Images for Flood Monitoring and Damage Evaluation; Proceedings of the IEEE (vol. 100, Issue: 10, pp. 2946-2970); Sebastiano B. Serpico, Silvana Dellepiane, Giorgio Boni, Gabriele Moser, Elena Angiati, Roberto Rudari; Aug. 2, 2012. (Year: 2012). |
Unmanned Aircraft Systems (UAS) research and future analysis; 2014 IEEE Aerospace Conference (pp. 1-16); Chris A. Wargo, Gary C. Church, Jason Glaneueski, Mark Strout; Mar. 1, 2014. (Year: 2014). |
International Search Report issued in corresponding International application No. PCT/CN2020/093389, dated Aug. 26, 2020. |
Written Opinion of the International Searching Authority for No. PCT/CN2020/093389. |
Number | Date | Country | |
---|---|---|---|
20220084132 A1 | Mar 2022 | US |
Number | Date | Country | |
---|---|---|---|
Parent | PCT/CN2020/093389 | May 2020 | US |
Child | 17537349 | US |