The following relates to a method and a system for computer-implemented determination of blade-defects of a wind turbine, and to a corresponding computer program product. In particular, the following relates to the visual inspection of blades of a wind turbine.
Over a period of use, damage to the rotor blades (in short: blades) of a wind turbine, such as erosion, occurs. To find such blade-defects, a number of high-resolution images is taken, for example, by a drone. Up to now, blade defect classification and localization in these images has been done manually by annotators who visually analyze the images one by one. The annotators identify and mark the positions of defects in the images. The information gathered in this way is stored in a database.
A major drawback of manually inspecting a plurality of images is that the detection accuracy is sometimes poor. In addition, the time required for the visual inspection is very high: evaluating a single image can take up to an hour. As a result, this analysis is not cost-efficient.
Hence, there is a need for an easier method for the determination of blade-defects of a wind turbine.
It is therefore an aspect of the present invention to provide a method which allows a reliable and easy determination of blade-defects of a wind turbine. It is another aspect of the present invention to provide a system which allows a reliable and easy determination of blade-defects of a wind turbine.
According to an embodiment of the present invention, a method for computer-implemented determination of blade-defects of a wind turbine is suggested. The method comprises the following steps: S1) receiving, by an interface, an image of a wind turbine containing at least a part of one or more blades of the wind turbine, the image having a given original number of pixels in height and width; S2a) analyzing, by a processing unit, the image to determine an outline of the blades in the image; S2b) creating, by the processing unit, a modified image from the analyzed image containing image information of the blades only; and S3) analyzing, by the processing unit, the modified image to determine a blade defect and/or a blade defect type of the blades.
This embodiment of the present invention is based on the consideration that applying deep learning models enables a computer-implemented, and therefore automated, determination of blade-defects of a wind turbine. Therefore, blade inspection takes less time and is more cost-efficient. In addition, it does not require skilled image annotators.
The method uses a trained deep learning model that can run automatically on large amounts of image data. The cost of annotation can be substantially decreased and the quality of image annotation increased with further development of the deep learning models.
A major advantage of the method described is that blade-defect determination may be done at pixel level, which provides high accuracy.
The method basically consists of two steps: detecting the outline of the blades in an image and creating a modified image from which all information other than the blades has been removed. In other words, the result of the first step is a modified image with simplified/reduced information, as the background information of the image is removed. This simplified image, called the modified image, forms the basis for determining blade defects in the second step. The second step allows for gathering further information about the location of the defect as well as the type (also referred to as the class) of the identified defect.
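By way of illustration only, this two-step structure may be sketched in Python as follows; the function names are hypothetical, and the trained models themselves are merely indicated by placeholders:

    import numpy as np

    def determine_blade_outline(image: np.ndarray) -> np.ndarray:
        """Step S2a): return a per-pixel blade mask of shape (h, w).
        In practice this is the output of the trained outline model."""
        raise NotImplementedError("trained blade outline model goes here")

    def create_modified_image(image: np.ndarray, blade_mask: np.ndarray) -> np.ndarray:
        """Step S2b): keep blade pixels only and zero out the background."""
        return image * blade_mask[..., np.newaxis]

    def determine_blade_defects(modified_image: np.ndarray) -> np.ndarray:
        """Step S3): return per-pixel defect classes for the blade pixels.
        In practice this is the output of the trained defect model."""
        raise NotImplementedError("trained defect model goes here")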
Supervised machine learning models using fully convolutional neural networks (CNNs) may be applied both for the determination of the blade outline in steps S2a) and S2b) and for the blade defect localization and classification in step S3). As known to the skilled person, training and testing data are necessary for supervised machine learning models. For training and testing of the models, images with precise annotations are used, where the annotations are done manually. To enhance the accuracy of the supervised machine learning models, patches of smaller size are produced from the original blade images (i.e. the images that are received at the interface) and used for model training. Implementation, training, testing, and deployment of the models may be made with open source tools.
According to an exemplary embodiment, steps S2a) and S2b) are carried out using a convolutional neural network (CNN) trained with training data of manually annotated images of wind turbines. The annotation may be made with predefined object classes to structure the image information. For example, four object classes may be used for annotation: blade, background, the same turbine in the background, and a different turbine in the background. However, it is to be understood that the number of classes and the content of the classes may be chosen in another way as well.
The CNN may comprise a global model for global image segmentation and a local model for localized refinement of the segmentation from the global model. Both the global model and the local model may use the same neural network architecture. The global model enables a rough identification of those parts of the image which show a blade to be assessed. The local model enables finding all those pixels in the image which relate to the blade of the wind turbine to be assessed.
In the global model and the local model, a number of predefined object classes is assigned to pixels or blocks of pixels in the annotated image, wherein the object classes distinguish image information that is relevant from image information that is irrelevant for determining the outline of the blades to be assessed. For example, the four object classes mentioned above may be used for annotation: blade, background, the same turbine in the background, and a different turbine in the background. The predefined object classes may be used in an identical manner in both the global and the local model.
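By way of illustration, such an annotation may be encoded as an integer label map; the mapping of the four object classes to integers is an assumption made here for the sketch only:

    import numpy as np

    # Hypothetical integer encoding of the four predefined object classes.
    OBJECT_CLASSES = {
        0: "blade",
        1: "background",
        2: "same turbine in background",
        3: "different turbine in background",
    }

    def to_one_hot(label_map: np.ndarray, num_classes: int = 4) -> np.ndarray:
        """Turn an (h, w) integer label map from manual annotation into an
        (h, w, c) one-hot tensor, c being the number of object classes."""
        return np.eye(num_classes, dtype=np.float32)[label_map]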
In the global model, during execution of the already trained CNN, the received image is resized to a resized image having a smaller second number of pixels in height and width than the received image before proceeding to analyze, by the processing unit, the image to determine an outline of the blades in the image (step S2a)). Resizing the received image has the advantage that the amount of data to be processed can be reduced. This helps to speed up the determination of the blade outline.
According to a further exemplary embodiment, as an output of the global model to be processed in step S2b), the resized image is annotated with the predefined object classes and up-scaled to the original number of pixels. Up-scaling enables a combination with processing within the local model.
In the local model, the received image and the up-scaled and annotated resized image are combined by execution of the already trained CNN to provide the modified image, which has the image information of the blades in the quality of the received image together with the annotation with the predefined object classes. This high-resolution image enables localization and classification of the blade-defect by means of a further already trained neural network in step S3).
In step S3), another neural network, trained with training data of manually annotated patches of modified images, is executed to localize and classify blade-defects. For the development of blade defect classification and localization models, an erosion blade defect type (class) may be selected. A neural network architecture such as an “erosion model” implemented in Keras or, as a further example, an “alternative erosion model” implemented in TensorFlow may be used. Keras and TensorFlow are known neural network tools (see [3] and [4], for example).
In step S3), the modified image may be resized to a resized modified image having a smaller second number of pixels in height and width than the modified image before annotating with a predefined defect class.
As an output, the resized and annotated modified image is up-scaled to the original number of pixels. For model training, the images and annotations are augmented by random flips and random changes in brightness and color saturation. Patches are taken only of the blade area. Images with no erosion are not used for training.
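A minimal sketch of such augmentation, assuming TensorFlow is used (the jitter ranges are assumptions): the flips are applied identically to image and annotation, whereas the brightness and saturation changes affect the image only.

    import tensorflow as tf

    def augment(image: tf.Tensor, annotation: tf.Tensor):
        """Random flips applied identically to image and annotation,
        plus photometric jitter on the image only."""
        if tf.random.uniform(()) > 0.5:              # random horizontal flip
            image = tf.image.flip_left_right(image)
            annotation = tf.image.flip_left_right(annotation)
        if tf.random.uniform(()) > 0.5:              # random vertical flip
            image = tf.image.flip_up_down(image)
            annotation = tf.image.flip_up_down(annotation)
        image = tf.image.random_brightness(image, max_delta=0.2)
        image = tf.image.random_saturation(image, lower=0.8, upper=1.2)
        return image, annotation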
According to a further aspect, a computer program product is suggested, comprising a non-transitory computer-readable storage medium having instructions which, when executed by a processor, perform the actions of the method described herein.
According to a further aspect, a system for computer-implemented determination of blade-defects of a wind turbine is suggested. The system comprises an interface for receiving an image of a wind turbine containing at least a part of one or more blades of the wind turbine, the image having a given original number of pixels in height and width, and a processing unit. The processing unit is adapted to analyze the image to determine an outline of the blades in the image. The processing unit is adapted to create a modified image from the analyzed image containing image information of the blades only. Furthermore, the processing unit is adapted to analyze the modified image to determine a blade defect and/or a blade defect type of the blades.
The system has the same advantages as those described above for the method.
Some of the embodiments will be described in detail with reference to the figures, wherein like designations denote like members.
To avoid the time-consuming and cost-inefficient manual determination of blade-defects of a wind turbine, a method for automatic blade defect classification and localization in images is described below.
The method uses supervised machine learning models based on fully convolutional neural networks (CNNs). CNNs are used for both steps: first, blade detection and localization (corresponding to finding the blade outline) and removal of the background, such that the blade remains as the only image information in so-called modified images; and second, blade defect classification and localization in the images with outlined blades and removed background. The step of blade defect classification and localization may be done on pixel level, which results in a high accuracy of the determined blade-defects.
To be able to apply CNNs, training with suitable training data is necessary. For this purpose, a plurality of images is manually annotated with predefined object classes for training and testing the models. The purpose of annotation with predefined object classes is to structure the image information of images received by an interface IF of a computing system CS (see the figures).
To enhance and speed up training, patches of smaller size may be produced from the original blade images and used for model training. Implementation, training, testing, and deployment of the models may be made with open source tools.
It is known from [1] to use fully convolutional networks for semantic image segmentation. The method described in [1] allows the detection and localization of objects in images simultaneously at the most precise level, i.e. pixel level. A CNN as proposed in [1] may be used as a basis to implement the method as described below.
For training of the deep learning models, precise blade image annotations are needed. The annotations are prepared manually by annotators.
The right side of the corresponding figure shows an example of a manually annotated image MAI. A plurality of manually annotated images MAI, as shown in the figure, is used for training and testing of the models.
The determination of blade-defects, such as erosion, comprises the localization of blade-defects as well as the classification of the localized blade-defects, the latter referring to finding a (predefined) type of the localized blade-defect. The automated method for detecting and localizing blade-defects of a wind turbine described below basically consists of two stages: first, the localization of the blade in the received (original) image and the determination of its outline; and second, the localization and classification of blade-defects.
The blade outline model, comprising the global model GM and the local model LM, is illustrated in the corresponding figure.
Training of the global model GM works as follows. As an input, the global model GM receives the original image OI in a resized form as resized image RI. While the original image OI has a number h of pixels in height and a number w of pixels in width (in short notation: OI(h,w)), the resized image RI has a number rh of pixels in height and a number rw of pixels in width (in short notation: RI(rh,rw)), where rh<h and rw<w. The global model is trained to add annotations to the resized image, resulting in an annotated resized image ARI having the same size as the resized image, i.e. ARI(rh,rw,c), where c denotes the number of predefined object classes OCi used for annotation, with i=1 . . . c. In the example chosen above, c=4, as four object classes are used: blade, background, the same turbine in the background, and a different turbine in the background. The annotated resized image ARI together with its annotations is up-scaled, resulting in an up-scaled and annotated image UAI having a number ah of pixels in height and a number aw of pixels in width (in short notation: UAI(ah,aw,c)), where ah=h and aw=w. In other words, the size of the up-scaled and annotated image UAI corresponds to the size of the original image OI, i.e. UAI(ah,aw,c)=UAI(h,w,c). For training, the images and annotations are augmented by random flips and random changes in brightness and color saturation. For training purposes, only patches of the blade area are considered. Images with no erosion are not used for training.
After training of the global model GM is finished, the global model is executed as follows: As an input, the global model GM receives the original image OI(h,w) in a resized (downsized) form as resized image RI(rh,rw). As an output, an annotated resized image ARI(rh,rw,c) having the same size as the resized image RI is generated. The annotated resized image ARI(rh,rw,c) is up-scaled to the up-scaled and annotated image UAI(ah,aw,c), as shown in the corresponding figure.
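By way of illustration, this execution path OI -> RI -> ARI -> UAI may be sketched as follows; the image sizes are hypothetical, and the global model is assumed to output per-pixel probabilities for the c object classes:

    import tensorflow as tf

    H, W = 4000, 6000    # hypothetical original size OI(h, w)
    RH, RW = 512, 768    # hypothetical reduced size RI(rh, rw)

    def run_global_model(global_model: tf.keras.Model, oi: tf.Tensor) -> tf.Tensor:
        """Execute the trained global model:
        OI(h,w) -> RI(rh,rw) -> ARI(rh,rw,c) -> UAI(h,w,c)."""
        ri = tf.image.resize(oi[tf.newaxis], (RH, RW))  # downsize the original image
        ari = global_model(ri)                          # per-pixel class probabilities
        uai = tf.image.resize(ari, (H, W))              # up-scale the annotations
        return uai[0]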
Training of the local model LM works as follows. As an input, the local model LM receives patches of the original image OI(h,w) in full resolution and their annotations from the up-scaled and annotated image UAI(ah,aw,c)=UAI(h,w,c) provided by the global model GM. The object classes OCi of the annotation are defined in the same way as for the global model training. Further, the four class probabilities from the output of the global model (UAI(ah,aw,c)) are provided to the local model as an additional input.
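A minimal sketch of how such an input may be assembled, assuming an RGB patch of the original image is stacked with the four probability channels of UAI (the patch size and the stacking approach are assumptions):

    import tensorflow as tf

    PATCH = 256  # hypothetical patch size in pixels

    def local_model_input(oi: tf.Tensor, uai: tf.Tensor, y: int, x: int) -> tf.Tensor:
        """Cut a full-resolution patch of the original image OI and the
        corresponding c=4 probability channels of UAI at the same position,
        and stack them into a single 7-channel input for the local model LM."""
        img_patch = oi[y:y + PATCH, x:x + PATCH, :]    # (PATCH, PATCH, 3)
        prob_patch = uai[y:y + PATCH, x:x + PATCH, :]  # (PATCH, PATCH, 4)
        return tf.concat([img_patch, prob_patch], axis=-1)  # (PATCH, PATCH, 7)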
After training of the local model LM is finished, the local model is executed as follows: As an input, the local model LM receives the original image OI(h,w) in full resolution and the probabilities of its annotations from the up-scaled and annotated image UAI(ah,aw,c)=UAI(h,w,c) provided by the global model GM. As an output, the annotated image AI(ah,aw,c) is provided. Annotations are defined in the same way as for the local model training. The annotated image AI constitutes a modified image, which is the input for further assessment in the next, second step.
For the development of blade defect classification and localization models, an erosion blade defect type is selected. Two different neural network architectures may be utilized: a first model called “erosion model” and an alternative or second model called “alternative erosion model”. For example, the erosion model may be implemented in Keras (see [3]) and the alternative erosion model may be implemented in TensorFlow (see [4]).
The erosion model architecture consists of 4 blocks of convolution and max pooling layers, followed by 4 blocks of up-sampling and convolution layers. For training of the neural network, patches POI from original images OI with POI(eh,ew) pixels (eh<h and ew<w) and their annotations are resized to POI(reh,rew) pixels (reh>eh and rew>ew), with the background removed. In the annotation, predefined blade and defect type (erosion in this case) classes are used.
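A minimal Keras sketch of such an encoder-decoder, following the described block structure; the filter counts and kernel sizes are assumptions, and the input height and width are assumed to be divisible by 16:

    from tensorflow import keras
    from tensorflow.keras import layers

    def build_erosion_model(height: int, width: int, num_classes: int = 2) -> keras.Model:
        """4 blocks of convolution and max pooling, followed by
        4 blocks of up-sampling and convolution."""
        inputs = keras.Input(shape=(height, width, 3))
        x = inputs
        for filters in (32, 64, 128, 256):      # 4 convolution + max-pooling blocks
            x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
            x = layers.MaxPooling2D(2)(x)
        for filters in (256, 128, 64, 32):      # 4 up-sampling + convolution blocks
            x = layers.UpSampling2D(2)(x)
            x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        outputs = layers.Conv2D(num_classes, 1, activation="softmax")(x)
        return keras.Model(inputs, outputs)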
After training of the erosion model is finished, it is executed as follows: As an input, the erosion model receives the original image OI(h,w), which is resized to RMI(rh,rw) pixels. In the resized image RMI(rh,rw), the background is removed using the information of the modified image AI. The erosion model outputs an up-scaled and annotated image RAMI of size (h,w), i.e. RAMI(h,w), which results from up-scaling the model output of size (rh,rw).
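By way of illustration, this execution may be sketched as follows, assuming the blade mask derived from the modified image AI is available as a single-channel tensor:

    import tensorflow as tf

    def run_erosion_model(erosion_model, oi, blade_mask, rh, rw):
        """OI(h,w) is resized to RMI(rh,rw), the background is removed using
        the blade mask derived from AI, and the model output is up-scaled
        back to RAMI(h,w)."""
        h, w = oi.shape[0], oi.shape[1]
        rmi = tf.image.resize(oi[tf.newaxis], (rh, rw))
        mask = tf.image.resize(blade_mask[tf.newaxis, ..., tf.newaxis], (rh, rw))
        rmi = rmi * mask                       # remove the background
        pred = erosion_model(rmi)              # per-pixel erosion probabilities
        rami = tf.image.resize(pred, (h, w))   # up-scale to the original size
        return rami[0]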
The alternative erosion model uses a fully convolutional network architecture as described in [1] and may be implemented using TensorFlow as described in [4]. Two classes are considered: erosion, and no erosion, the latter including background, blades, and other defect types. The alternative erosion model is trained on patches of predetermined pixel size (smaller than the size of the original image) produced using random positions, random rotations, random horizontal flips, and random vertical flips.
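A minimal sketch of such patch generation; the patch size and the restriction of rotations to 90-degree steps are assumptions made here:

    import numpy as np

    def sample_training_patch(image, labels, patch=224, rng=np.random):
        """Produce one training patch with random position, random rotation
        and random horizontal/vertical flips, applied identically to image
        and label map."""
        h, w = labels.shape[:2]
        y = rng.randint(0, h - patch + 1)      # random position
        x = rng.randint(0, w - patch + 1)
        img = image[y:y + patch, x:x + patch]
        lab = labels[y:y + patch, x:x + patch]
        k = rng.randint(4)                     # random rotation by k * 90 degrees
        img, lab = np.rot90(img, k), np.rot90(lab, k)
        if rng.rand() > 0.5:                   # random horizontal flip
            img, lab = np.fliplr(img), np.fliplr(lab)
        if rng.rand() > 0.5:                   # random vertical flip
            img, lab = np.flipud(img), np.flipud(lab)
        return img, lab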
Examples of results of the blade outline model and the erosion model are shown in the figures.
The results of the models' performance are presented in Table 1, where TP, FP, and FN denote true positives, false positives, and false negatives, respectively. The results demonstrate good performance of the blade outline model as well as of both models for erosion blade defect detection and localization.
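For completeness, a minimal sketch of how such pixel-level counts may be computed for a binary defect mask; precision and recall are added here as the usual derived measures:

    import numpy as np

    def pixel_metrics(pred: np.ndarray, truth: np.ndarray):
        """Pixel-level true positives, false positives and false negatives
        for binary prediction and ground-truth masks."""
        tp = int(np.sum((pred == 1) & (truth == 1)))
        fp = int(np.sum((pred == 1) & (truth == 0)))
        fn = int(np.sum((pred == 0) & (truth == 1)))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return tp, fp, fn, precision, recall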
Summarizing, the method basically consists of two steps: detecting the outline of the blades in an image and creating a modified image from which all information other than the blades has been removed. In other words, the result of the first step is a modified image with simplified/reduced information, as the background information of the image is removed. This simplified image, called the modified image, forms the basis for determining blade defects in the second step. The second step allows for gathering further information about the location of the defect as well as the type (also referred to as the class) of the identified defect.
The proposed method enables a computer-implemented, and therefore automated, determination of blade-defects of a wind turbine. Therefore, blade inspection takes less time and is more cost-efficient. In addition, it does not require skilled image annotators once the neural networks have been trained.
The method uses a trained deep learning model that can run automatically on large amounts of image data. The cost of annotation can be substantially decreased and the quality of image annotation increased with further development of the deep learning models.
A major advantage of the method described is that blade-defect determination may be done at pixel level, which provides high accuracy.
Although the present invention has been disclosed in the form of preferred embodiments and variations thereon, it will be understood that numerous additional modifications and variations could be made thereto without departing from the scope of the invention.
For the sake of clarity, it is to be understood that the use of “a” or “an” throughout this application does not exclude a plurality, and “comprising” does not exclude other steps or elements.
This application is a national stage entry of PCT Application No. PCT/EP2019/069789 having a filing date of Jul. 23, 2019, which claims priority to European Patent Application No. 18187326.6, having a filing date of Aug. 3, 2018, the entire contents of which are hereby incorporated by reference.