This application claims the benefit of Japanese Patent Application No. 2023-199346 filed on Nov. 24, 2023, the content of which is incorporated herein by reference in its entirety.
The present disclosure relates to a generation method and a generation device of a discriminant model for discriminating a growth state of a plant, a discrimination method and a discrimination device for discriminating a growth state of a plant, and a computer program for executing these methods.
A growth state of a plant may be observed from a raw leaf of the plant. For example, JP2013-36889 A describes an apparatus that irradiates a leaf of a plant with excitation light to diagnose whether the plant is infected with pathogenic bacteria.
However, there is still room for improvement in terms of easily and accurately observing the growth state of a plant.
Therefore, an object of the present disclosure is to provide a generation method and a generation device of a discriminant model for discriminating a plant growth state, a discrimination method and a discrimination device for discriminating a plant growth state, and a computer program for executing these methods, which enable easy and/or accurate observation of a plant growth state.
A generation method according to the present disclosure is a generation method for a discriminant model for discriminating a growth state of a plant, executed by a computer including an arithmetic circuit configured to access a storage device. The storage device stores a chloroplast density image, which is an image reflecting a difference between the absorption spectra of chlorophyll and carotenoids contained in a plant leaf, obtained from a plurality of images of plant leaves for training. In the generation method, the arithmetic circuit reads the chloroplast density image from the storage device and generates a discriminant model of a plant growth state by using the chloroplast density image as training data for machine learning.
Such general and specific aspects may be implemented by a system, a method, a computer program, and a combination thereof.
A generation method and a generation device of a discriminant model for discriminating a growth state of a plant according to the present disclosure, a discrimination method and a discrimination device for discriminating a growth state of a plant, and a computer program for executing these methods provide easy and/or accurate observation of the growth state of the plant.
Hereinafter, exemplary embodiments of the present disclosure will be described with reference to the drawings. However, in the detailed description, unnecessarily detailed descriptions of conventional techniques and redundant descriptions of substantially identical configurations may be omitted in order to simplify the description. In addition, the following description and the accompanying drawings are provided so that those skilled in the art can fully understand the present disclosure, and are not intended to limit the subject matter recited in the claims.
The present disclosure relates to a generation method and a discrimination method for a discriminant model for discriminating a growth state of a plant. Hereinafter, as a growth state of a plant, generation and use of a discriminant model for discriminating a healthy growth state of a plant will be described as an example. More specifically, an example in which a disease of a plant appears as leaf yellowing will be described.
A growth state of a plant may appear on a leaf of the plant. For example, whether a plant is in a healthy state or a state of being infected with a disease may be determined by the external appearance of the leaf. On the other hand, even when the plant is healthy, at first glance, the plant may appear in a growth state similar to when the plant is infected with a disease.
For example, when a plant is infected with a disease, the leaves of the plant may turn yellow. Thus, a plant grower may determine that the plant is infected with a disease when the leaves of the plant turn yellow. On the other hand, even when the plant is not infected with a disease, the leaves of the plant may turn yellow. For example, when a plant is under conditions such as nutritional deficiency and insufficient sunlight, the leaves may turn yellow even when the plant is not infected with a disease. In such a case, it may be difficult for even an expert to determine whether the plant is infected with a disease or not.
How a plant should be dealt with differs depending on whether or not the plant is infected with a disease. Therefore, it is desirable to improve the accuracy of discriminating whether a plant is infected with a disease.
When the amount of chlorophyll decreases, the relative density of carotenoids in the leaf of the plant becomes high and the leaf turns yellow. There may be differences in shape, distribution, shade, and the like between yellowing due to nutritional deficiency and yellowing due to disease damage. In some cases, only a skilled agricultural worker can recognize the difference in the yellowing portions.
Since plants perform photosynthesis, the mesophyll of a leaf contains photosynthetic pigments such as chlorophyll and carotenoids possessed by chloroplasts. The chlorophyll possessed by chloroplasts is one of the main photosynthetic pigments, and the carotenoid is one of the auxiliary photosynthetic pigments.
When the plant is not affected by diseases due to viral infection or the like and the growth is good, that is, when the plant is healthy, the mesophyll of the leaf contains sufficient chlorophyll and carotenoid. Hereinafter, a portion sufficiently containing the chlorophyll and the carotenoid as described above is also referred to as a “healthy portion”.
When a plant is affected with disease damage or nutritional deficiency, the development of chloroplasts in the mesophyll deteriorates, the chlorophyll decreases, and the leaves turn yellow. Hereinafter, such a portion is referred to as a “yellowing portion”. That is, the yellowing portion may be “caused by being affected with disease damage” or “caused by something other than being affected with disease damage”.
It should be noted that the leaf vein of the leaf is normally low in or free of chlorophyll and carotenoids.
Therefore, the amount of chlorophyll and carotenoids in each portion is as in Table 1 below.
Here, as described above, a yellowing portion may be "caused by being affected with disease damage" or "caused by something other than being affected with disease damage". Depending on this cause, the distribution of chlorophyll on the surface of a leaf containing the yellowing portion varies. In other words, the shape, shade, and distribution of the yellowing portion caused by the decrease in the chlorophyll amount differ according to the cause. If these aspects of the chlorophyll distribution, which cannot be easily distinguished by appearance alone, can be made distinguishable, it becomes possible to determine whether the cause of yellowing is "disease damage" or "something else". For example, when the shape of the yellowing portion, its distribution in the leaf, and its shade can be easily distinguished, the accuracy of discrimination can be improved.
Next, light transmission of chlorophyll and carotenoids will be described. As an example, a case where a white LED is used for a light source will be considered.
However, the absorbance shown in
As described above, since the green-light absorption effect largely depends on internal scattering in leaves, it is known that a thin plant such as sea lettuce has a photosynthesis action spectrum close to the ideal absorption spectrum of chlorophyll. Whatever the substance that absorbs the green light may be, the green-light absorption effect is largely due to an increase in the probability that the incident light encounters the light-absorbing substance, brought about by the internal scattering caused by the presence of chloroplasts. Therefore, when the chloroplasts disappear due to yellowing of the leaf, the green-light absorption effect is also weakened. In that sense, here, the action spectrum in
According to Table 1, the absorption spectrum under each condition is caused to act on the light of the light source. That is, the absorption spectrum of chlorophyll shown in
Next, the spectral sensitivity curve of the digital camera is caused to act on these spectral groups. That is, these spectral groups are separated into the three colors of RGB; at the time of this spectral separation the wavelength information is lost, and only an integrated pixel value remains. It should be noted that the minimum element that is arranged in a grid shape and constitutes an image is referred to as a pixel. Each pixel represents the intensity and color of light by a numerical value, and the value representing the intensity and color of light of each pixel is referred to as a pixel value.
Table 2.1 shows an example of pixel values representing the light intensity of the healthy portion, the yellowing portion, and the leaf-vein portion for each of the R, G, and B bands. According to Table 2.1, the leaf-vein portion has the highest value in each of R, G, and B, so the yellowing portion cannot be emphasized. In addition, comparing the respective pixel values, the leaf-vein portion has a high value in all the bands of the R image, the G image, and the B image, whereas in the healthy portion and the yellowing portion the B image has a lower value than the R image and the G image.
From the above contents, the following two points can be said.
The difference between the yellowing portion and the healthy portion appears in the pixel value of the wavelength band extending across R and G.
The leaf-vein portion has high pixel values in wavelength bands of all RGB.
From this, it can be said that Expression (1.1) is the optimum calculation expression to maximally emphasize the “difference between the yellowing portion and the healthy portion”, that is, the “density of chloroplasts”.
(Expression for blending R image and G image with any weighting)−(Expression using B image) (1.1)
where, (Expression for blending R image and G image with any weighting) includes an arithmetic mean, a geometric mean, and other expressions capable of weighting with two elements. A coefficient or a correction term may be used in (Expression using B image) so that the effect of removing the leaf vein is enhanced, for example. To finally output the chloroplast density image as a monochrome image, (Expression for blending R image and G image with any weighting) and (Expression using B image) may be normalized to equalize the maximum and minimum ranges of both.
In the first term of the above Expression (1.1), the difference between the yellowing portion and the healthy portion becomes clear. In the second term of Expression (1.1), information on the leaf-vein portion is cut. Here, the maximum and minimum ranges of the first term and the second term in Expression (1.1) need to be the same. In general, the pixel value of each of RGB takes values from 0 to 255. Therefore, the first term of Expression (1.1) is Expression (1.2).
Here, the reason why the values of x and y are not uniquely determined is that the action spectrum of photosynthesis varies depending on plant leaves. Which of R and G should be weighted more heavily to better emphasize the difference between the yellowing portion and the healthy portion varies depending on the type of plant.
Table 2.2 shows an example of the integrated pixel values of the healthy portion, the yellowing portion, and the leaf-vein portion when the values (x, y) in the following Expression (1.3) are (0, 1), (1, 0), and (1, 1). In all cases, the yellowing portion has the maximum pixel value, and the yellowing portion is most emphasized in the monochrome image.
The image obtained by Expression (1.3) is an example of the “chloroplast density image” in the present disclosure.
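The exact typeset form of Expression (1.3) does not survive in this text, but it can be inferred from the surrounding description: Expression (2.3) in Experimental Example 1 below, stated to be the case of (x, y) = (1, 1), is (G image + R image)/2 − B image, and Experimental Example 3 treats x/(x+y) as the weight of R. The following is a minimal sketch of the per-pixel computation on that inferred form; the function name and the use of NumPy are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def chloroplast_density(rgb, x=1.0, y=1.0):
    """Sketch of the inferred Expression (1.3): blend R and G with
    weights x and y, then subtract B to suppress the leaf-vein portion.

    rgb: uint8 array of shape (H, W, 3) holding R, G, B values (0-255).
    Returns a float array in which the yellowing portion takes high values.
    """
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    # First term: weighted arithmetic mean of R and G (stays in 0-255).
    blend = (x * r + y * g) / (x + y)
    # Second term: the B image, already in the same 0-255 range,
    # cuts information on the leaf-vein portion.
    return blend - b
```

With (x, y) = (0, 1), (1, 0), and (1, 1), this sketch reproduces the G − B, R − B, and (G + R)/2 − B images discussed in the experimental examples.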
Here, the chloroplast density image used in the discriminant model of the present disclosure will be described with reference to
Also in the image in
A discriminant model M for discriminating the growth state of a plant will be briefly described with reference to
In the learning device, the relationship between each piece of image data included in the training data D13 and the growth state of the plant is learned by machine learning, and the discriminant model M is generated. When an image indicating the density of chloroplasts of a plant is newly input, the discriminant model M outputs the growth state of the plant as a discrimination result. If the labels indicating the growth state of the plant are "good" and "poor", the discriminant model M outputs, for the input image indicating the density of chloroplasts, for example, the probability that the plant is "good" or "poor". The labels indicating the growth state of the plant are not limited to the two types "good" and "poor". For example, the growth state of the plant may be represented by three or more types of labels. By using the discriminant model M, it is possible to easily discriminate a growth state of a plant that is difficult to discriminate by visual observation alone.
It should be noted that each piece of image data (chloroplast density image) is, for example, data generated from image data obtained by photographing the surface of a leaf of a plant. The image data included in one training data set is preferably photographed and generated under the same condition. For example, the image data (chloroplast density images) included in one training data set can be generated from image data photographed under the same or similar light irradiation conditions. More specifically, the image data can be image data captured using a transmission-type optical system incorporating a surface light source having luminance substantially uniform with respect to the surface. In addition, the image data (chloroplast density images) included in one training data set are generated from the photographed data by the same method (arithmetic expression). By capturing the training data and the discrimination data under the same or similar conditions in this manner, accurate discrimination is enabled.
If images are photographed under a plurality of photographing conditions different in light irradiation conditions, there is a possibility that the light source spectrum and the spectral sensitivity curve are different. From the above discussion of the model spectrum, when the light irradiation conditions are different, it is not possible to uniquely determine the optimum values of x and y, which is inefficient. Therefore, by setting the training data and the discrimination data to the same or similar photographing conditions, more accurate discrimination can be made.
In addition, the above discussion of the model spectrum is based on the premise that imaging is performed so that a healthy (mesophyll) portion, a yellowing portion, and a leaf-vein portion of a leaf can be clearly distinguished. The present invention is a technique for further improving the discrimination accuracy of the discriminant model M on the premise that high-quality plant leaf photographed data is used. For example, by adopting a high-accuracy photographing environment such as a transmission-type optical system, it is possible to photograph a high-quality image of a plant leaf.
The configuration of the generation device 1 that generates the discriminant model M will be described with reference to
The arithmetic circuit 11 is a controller that takes charge of controlling the entire generation device 1. For example, the arithmetic circuit 11 reads and executes the generation program P1 stored in the storage device 15, thereby implementing various types of processing related to generation of the discriminant model M. The arithmetic circuit 11 may be various processors such as a CPU, an MPU, a GPU, an FPGA, a DSP, and an ASIC, or a dedicatedly designed hardware circuit.
The input device 12 is used for an operation or data input by the user. The input device 12 may be, for example, an operation button, a keyboard, a mouse, a touch panel, a microphone, or the like. The output device 13 is used for outputting processing results and data. The output device 13 may be an output means such as a display or a speaker, for example.
The communication circuit 14 performs data communication with other devices. The data communication is performed in a wired and/or wireless manner according to a known communication standard, for example. For example, wired data communication may be performed by using, as the communication circuit 14, a communication controller of a semiconductor integrated circuit that operates in conformity with the Ethernet (registered trademark) standard, the USB (registered trademark) standard, and/or the like. In addition, wireless data communication may be performed by using, as the communication circuit 14, a communication controller of a semiconductor integrated circuit that operates in conformity with IEEE 802.11 standards related to a wireless local area network (LAN) and/or a fourth/fifth/sixth generation mobile communication system, what is called 4G/5G/6G, related to mobile communication.
The storage device 15 is a recording medium that records various types of information. The storage device 15 is implemented by, for example, a RAM, a ROM, a flash memory, a solid-state drive (SSD), a hard disk drive, another storage device, or an appropriate combination thereof. The storage device 15 stores a generation program P1, photographed data D11, light intensity data D12, training data D13, discriminant model M, and the like. The generation program P1 is a computer program executed by the arithmetic circuit 11. By executing the generation program P1, various types of processing for generating the discriminant model M are executed.
The photographed data D11 is image data obtained by photographing a leaf of a plant.
The light intensity data D12 is data on the R image, the G image, and the B image generated from the photographed data D11.
As described above with reference to
The discriminant model M is generated by the generation device 1 using the training data D13 as briefly described with reference to
By learning using the chloroplast density image to which the label indicating each state is attached, the generation device 1 can generate the discriminant model M for distinguishing and discriminating between “yellowing caused by being affected with disease damage” and “yellowing caused by something other than being affected with disease damage” described above in relation to Table 1. That is, “yellowing caused by being affected with disease damage” and “yellowing caused by something other than being affected with disease damage” are difficult to distinguish only by visual observation of the color image. However, by using the chloroplast density image indicating the density of chloroplasts obtained according to the amount of chlorophyll as training data, it is possible to distinguish “the degree of yellowing caused by being affected with disease damage” and “the degree of yellowing caused by something other than being affected with disease damage” which are difficult to distinguish by visual observation.
The configuration of the discrimination device 2 that discriminates the growth state of a plant using the discriminant model M will be described with reference to
The arithmetic circuit 21, the input device 22, the output device 23, the communication circuit 24, and the storage device 25 are implemented by specific means similar to the arithmetic circuit 11, the input device 12, the output device 13, the communication circuit 14, and the storage device 15 described above with reference to
The storage device 25 stores a discrimination program P2, a discriminant model M, photographed data D21, light intensity data D22, discrimination data D23, and the like.
The discrimination program P2 is a computer program executed by the arithmetic circuit 21. By executing the discrimination program P2, various pieces of processing for discrimination are executed.
The discriminant model M is a trained model generated by the generation device 1 described above.
The photographed data D21 is image data of a leaf of a plant for discriminating a growth state.
The light intensity data D22 is data on the R image, the G image, and the B image generated from the photographed data D21.
The discrimination data D23 is a chloroplast density image obtained from the photographed data D21.
It should be noted that the discrimination device 2 shown in
Processing of generating the discriminant model M executed by the generation device 1 will be described using a flowchart shown in
First, the arithmetic circuit 11 acquires photographed data D11 of leaves of a plurality of plants (S01). The arithmetic circuit 11 stores the acquired plurality of pieces of photographed data D11 in the storage device 15. Here, a label indicating the growth state of the plant may be associated in advance with each piece of the photographed data D11. For example, labels such as “healthy”, “early stage of disease”, “middle stage of disease”, and “late stage of disease” are associated with each piece of photographed data D11. In addition, each piece of photographed data D11 is stored in the storage device 15 in association with a label indicating the growth state of the plant. It should be noted that when the label indicating the growth state of the plant is not associated with the photographed data D11 acquired in step S01, the arithmetic circuit 11 executes the labeling processing for each piece of photographed data D11.
The arithmetic circuit 11 generates light intensity data D12 using photographed data D11 acquired in step S01 (S02). Specifically, the arithmetic circuit 11 decomposes each piece of photographed data D11 into an R component, a G component, and a B component, and generates an R image indicating the intensity of red light, a G image indicating the intensity of green light, and a B image indicating the intensity of blue light, thereby generating a plurality of pieces of light intensity data D12. In addition, the arithmetic circuit 11 stores the generated light intensity data D12 in the storage device 15.
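As a concrete illustration of step S02, the sketch below decomposes one piece of photographed data into an R image, a G image, and a B image. The file name and the use of Pillow and NumPy are assumptions for illustration; the disclosure does not name a library.

```python
import numpy as np
from PIL import Image

# Load one piece of photographed data D11 (hypothetical file name).
photo = np.asarray(Image.open("leaf_0001.png").convert("RGB"))

# Step S02: decompose into an R image, a G image, and a B image,
# each a single-channel array of light-intensity pixel values.
r_image = photo[..., 0]
g_image = photo[..., 1]
b_image = photo[..., 2]
```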
The arithmetic circuit 11 generates a chloroplast density image using the light intensity data D12 generated in step S02 (S03). For example, the arithmetic circuit 11 generates the chloroplast density image by applying Expression (1.3) described above to each corresponding pixel of the images. The processing from step S02 to step S03 is what is called preprocessing. When other preprocessing is required before or after step S02 or step S03, that preprocessing may also be executed. Examples of such preprocessing include image trimming and image rotation.
In one discriminant model, the chloroplast density images are acquired using the same arithmetic expression. Here, in Expression (1.3) or Expression (1.2) described above, all combinations of x and y that yield the same expression when the fraction is reduced are defined to be the same. In other words, combinations with the same ratio of x to y are defined to be the same. For example, the combinations of x:y of 1:2, 2:4, and 3:6 are defined to be the same.
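One way to realize this equivalence in code is to canonicalize each (x, y) pair by its greatest common divisor, as in the following sketch (assuming integer weights; the helper name is illustrative):

```python
from math import gcd

def canonical_ratio(x: int, y: int) -> tuple[int, int]:
    """Reduce (x, y) so that 1:2, 2:4, and 3:6 all map to (1, 2)."""
    d = gcd(x, y)
    return (x // d, y // d)

assert canonical_ratio(1, 2) == canonical_ratio(2, 4) == canonical_ratio(3, 6)
```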
The arithmetic circuit 11 associates the corresponding label with each chloroplast density image generated in Step S03 to generate training data D13 (S04). In addition, the arithmetic circuit 11 stores the generated training data D13 in the storage device 15.
The arithmetic circuit 11 generates a discriminant model M using training data D13 generated in step S04 (S05). Specifically, the arithmetic circuit 11 learns a relationship between an image indicating the density of chloroplasts included in the training data D13 and a plant growth state to generate the discriminant model M. In addition, the arithmetic circuit 11 stores the generated discriminant model M in the storage device 15.
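The disclosure does not fix a specific learning algorithm for step S05. As one hedged possibility, the sketch below trains a random-forest classifier on flattened chloroplast density images; the choice of scikit-learn, the variable names, and the assumption that `density_images` and `labels` were produced in steps S03 and S04 are all illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Training data D13: chloroplast density images with growth-state labels.
# `density_images` is assumed to be a list of equally sized 2-D arrays and
# `labels` the corresponding strings such as "healthy" or "early stage".
X = np.stack([img.ravel() for img in density_images])
y = np.array(labels)

# Step S05: learn the relationship between density images and growth states.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)
```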
In this way, the generation device 1 can generate the discriminant model M for discriminating the growth state of the leaf of the plant from the image of the leaf of the plant.
With reference to the flowchart shown in
The arithmetic circuit 21 first acquires photographed data on a leaf of a plant for determining a growth state (S11). The arithmetic circuit 21 stores the acquired photographed data D21 in the storage device 25.
The arithmetic circuit 21 generates light intensity data D22 using photographed data D21 acquired in step S11 (S12). The generation of the light intensity data D22 can be executed by processing similar to step S02 described above with reference to
The arithmetic circuit 21 generates a chloroplast density image using light intensity data D22 generated in step S12 (S13). The generation of the chloroplast density image can be executed by the same processing as step S03 described above with reference to
The arithmetic circuit 21 discriminates the growth state of the target plant (S14) using the discrimination data D23 generated in step S13 as an input of the discriminant model M.
The arithmetic circuit 21 outputs the determination result in step S14 to the output device 23 (S15).
In this way, the discrimination device 2 can discriminate the growth state of the leaf of the plant from the image of the leaf of the plant using the discriminant model M.
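Putting steps S11 to S15 together, a minimal inference sketch might look as follows; it reuses the hypothetical `chloroplast_density` function and trained `model` from the earlier sketches, and the file name is illustrative.

```python
import numpy as np
from PIL import Image

# Step S11: acquire photographed data D21 (hypothetical file name).
photo = np.asarray(Image.open("target_leaf.png").convert("RGB"))

# Steps S12-S13: the same preprocessing as in model generation,
# producing the discrimination data D23 (a chloroplast density image).
d23 = chloroplast_density(photo, x=1.0, y=1.0)

# Step S14: input D23 to the discriminant model M.
result = model.predict(d23.ravel().reshape(1, -1))[0]

# Step S15: output the discrimination result.
print(f"Discriminated growth state: {result}")
```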
The optimum values of x and y used in Expression (1.3) can be determined by feeding back and optimizing the discrimination accuracy obtained at that time. For example, the determination of the values of x and y used in Expression (1.3) may be executed in the generation device 1. At this time, the arithmetic circuit 11 can determine the values of x and y using a method used in a general optimization problem. The data used for determining the values of x and y is, for example, data associated with “photographed data” labeled with “healthy”, “early stage of disease”, “middle stage of disease”, and “late stage of disease” by an expert.
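Since only the ratio of x to y matters, the optimization can sweep the single quantity x/(x+y) over labeled data, as in this grid-search sketch; the callback, step count, and function name are assumptions, and more elaborate optimizers could equally be used.

```python
import numpy as np

def optimize_ratio(photos, labels, evaluate, steps=11):
    """Sweep the ratio x/(x+y) from 0 to 1 and keep the value giving the
    best discrimination accuracy. `evaluate(density_images, labels)` is an
    assumed callback that trains and scores a discriminant model."""
    best_ratio, best_acc = None, -1.0
    for ratio in np.linspace(0.0, 1.0, steps):
        x, y = ratio, 1.0 - ratio        # only the ratio x:y matters
        images = [chloroplast_density(p, x=x, y=y) for p in photos]
        acc = evaluate(images, labels)
        if acc > best_acc:
            best_ratio, best_acc = ratio, acc
    return best_ratio, best_acc
```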
A generation device 1A according to a second exemplary embodiment will be described with reference to
As shown in
The clustering program P3 is a computer program executed by the arithmetic circuit 11. By executing the clustering program P3, clustering processing is executed in the generation device 1A. The clustering program P3 generates a cluster by using hard clustering such as hierarchical clustering and non-hierarchical clustering or soft clustering, for example.
As shown in
The clustering result data D32 is a result of clustering the clustering data D31. For example, as shown in
The training data D13 is data generated by labeling each of the clusters A to E of the clustering result data D32 by the user. The training data D13 is used for generating the discriminant model M.
Processing of clustering and generating the discriminant model M executed by the generation device 1A will be described with reference to a flowchart shown in
First, the arithmetic circuit 11 acquires photographed data D11 of leaves of a plurality of plants (S01). The arithmetic circuit 11 stores the acquired plurality of pieces of photographed data D11 in the storage device 15. It should be noted that, in the generation device 1A, since a label is attached after clustering, no label is attached in step S01.
The arithmetic circuit 11 generates light intensity data D12 using photographed data D11 acquired in step S01 (S02). The generation method for the light intensity data D12 is similar to that of the generation device 1 according to the first exemplary embodiment. In addition, the arithmetic circuit 11 stores the generated light intensity data D12 in the storage device 15.
The arithmetic circuit 11 generates a chloroplast density image using the light intensity data D12 generated in step S02 (S03). The generation method for the chloroplast density image is similar to that of the generation device 1 according to the first exemplary embodiment. In addition, the arithmetic circuit 11 stores the generated plurality of chloroplast density images in the storage device 15 as the clustering data D31.
The arithmetic circuit 11 performs clustering by unsupervised learning using the clustering data D31 stored in step S03 (S21). The arithmetic circuit 11 stores the result of clustering in the storage device 15 as clustering result data D32.
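As one concrete possibility for step S21 (the disclosure permits hierarchical, non-hierarchical, or soft clustering), the sketch below applies k-means to flattened chloroplast density images; k-means itself and the choice of five clusters (matching clusters A to E) are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Clustering data D31: chloroplast density images, flattened to vectors.
X = np.stack([img.ravel() for img in density_images])

# Step S21: unsupervised clustering into five clusters (A to E).
kmeans = KMeans(n_clusters=5, random_state=0, n_init=10)
cluster_ids = kmeans.fit_predict(X)   # clustering result data D32
```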
The arithmetic circuit 11 receives the label of each cluster obtained in step S21 (S22). For example, the label of each cluster is input by the user through the input device 12.
The arithmetic circuit 11 associates the corresponding label received in step S22 with each chloroplast density image of the clustering data D31, and generates the training data D13 (S23). In addition, the arithmetic circuit 11 stores the generated training data D13 in the storage device 15.
The arithmetic circuit 11 generates a discriminant model M using the training data D13 generated in step S23 (S05). The generation method for the discriminant model M is similar to that of the generation device 1 according to the first exemplary embodiment. In addition, the arithmetic circuit 11 stores the generated discriminant model M in the storage device 15.
In this manner, the generation device 1A performs clustering from the image of the leaf of the plant. In addition, the generation device 1A can generate a discriminant model M for discriminating the growth state of a leaf of a plant using a result of clustering.
It should be noted that, in the above description, an example has been described in which the generation device 1A executes all of the generation of the chloroplast density image, the clustering, and the generation of the discriminant model M. However, some pieces of processing may be executed by another information processing device. In addition, each piece of processing may be executed by different information processing devices. For example, clustering may be performed using a chloroplast density image generated by another information processing device. In addition, clustering and generation of the discriminant model M may be executed by different information processing devices.
The discrimination by the discriminant model M generated as described above in the second exemplary embodiment is executed similarly to the description described above in the first exemplary embodiment. It should be noted that in
In the first exemplary embodiment described above, an example of discriminating whether a plant is healthy or has a disease in which yellowing has occurred has been described. In addition, discrimination has been performed using a discriminant model trained with labels indicating whether the disease is in the early stage, the middle stage, or the late stage. However, as a matter of course, the discriminant model may use training data to which labels other than these are attached. For example, the discriminant model may be generated using training data with labels such as "nutritional deficiency" or "overwatering" in addition to, or instead of, the above labels.
In the above example, the orientation of the leaf in the photographed data is not mentioned. On the other hand, there are plants in which the orientation of the leaf vein is easily recognized, such as bamboo leaves. In such a case, the discrimination accuracy can be improved by aligning the orientation of the leaf to a predetermined orientation in both the training data and the discrimination data.
In the second modification, the orientation of the leaf is aligned for plants in which the orientation of the leaf vein is easily recognized. However, for leaves in which the direction of the leaf vein is not aligned, the number of pieces of training data may be increased by rotating one piece of photographed data. By learning from image data obtained by rotating one piece of photographed data in this manner, discrimination can be performed without depending on the photographing orientation of the leaf, which makes it easier to photograph data for discrimination. Furthermore, even if a gradation appears in the image, learning can be performed without depending on the directionality of the gradation. Therefore, over-training caused by gradation in the training data can be prevented, and the accuracy of discrimination can be improved.
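A minimal sketch of this rotation augmentation, assuming Pillow and keeping the original canvas size (so rotated corners are cropped and filled), is shown below; the file name is illustrative.

```python
from PIL import Image

def augment_by_rotation(img: Image.Image) -> list[Image.Image]:
    """Rotate one piece of photographed data in 45-degree increments,
    turning a single image into eight training images."""
    return [img.rotate(angle) for angle in range(0, 360, 45)]

# Example: one image becomes eight.
rotations = augment_by_rotation(Image.open("leaf_0001.png"))
assert len(rotations) == 8
```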
In the above example, the photographed data D11 has been described as being photographed using the "transmission-type optical system having substantially uniform luminance". However, the photographing optical system used for photographing the photographed data D11 is not limited to a transmission-type optical system. Specifically, it suffices that the photographed data D11 is data in which a yellowing portion, a healthy portion, and a leaf-vein portion can be separated. In addition, the photographing optical system used for the photographed data D11 may be any photographing optical system capable of photographing each sample under a uniform condition. For example, in addition to a transmission-type optical system that is portable outdoors in the field, a photographing optical system arranged in a uniform indoor photographing environment may be used.
In the above-described example, an example of discriminating the state of a plant by machine learning has been described, but generative AI can also be implemented using, for example, generative adversarial networks based on the same training data. Specifically, an RGB image of a leaf is restored in a pseudo manner by training on chloroplast density images, generating a non-existent chloroplast density image under an arbitrary condition, and coloring the portions from those having high pixel values to those having low pixel values with a gradient from yellow to green, for example. Such generative AI can generate a desired conditional image, such as a virtual "healthy" leaf image or a virtual "early stage of disease + middle stage of nutritional deficiency" leaf image. By using such generative AI, for example, in the case of citrus greening disease, it is possible to predict what kind of yellowing aspect citrus fruits in an area where the disease has not yet spread would exhibit if affected with the disease, which can be useful for early detection.
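A full GAN is beyond a short sketch, but the final coloring step described above can be illustrated: the sketch below maps high pixel values of a (generated) chloroplast density image toward yellow and low values toward green. The specific RGB values and the linear blend are assumptions for illustration.

```python
import numpy as np

def colorize_density(density):
    """Pseudo-restore an RGB leaf image from a chloroplast density image:
    high pixel values are rendered yellow, low values green, with a
    linear blend in between (an illustrative assumption)."""
    d = density.astype(np.float64)
    t = (d - d.min()) / (d.max() - d.min() + 1e-9)   # 0 = low, 1 = high
    green = np.array([60.0, 160.0, 60.0])    # assumed "healthy" green
    yellow = np.array([230.0, 220.0, 60.0])  # assumed "yellowing" yellow
    rgb = (1.0 - t)[..., None] * green + t[..., None] * yellow
    return rgb.astype(np.uint8)
```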
An experimental result of the generation of the above-described discriminant model M will be described below. In the experiment, 841 pieces of photographed data including healthy plant leaves and diseased plant leaves were prepared. Each piece of photographed data is an image photographed using a transmission-type optical system incorporating a surface light source having luminance substantially uniform with respect to the surface. The leaves were those of citrus plants such as shikuwasa, and the disease was citrus greening disease. As shown in
As shown in
Specifically, generation and verification of the discriminant model were performed using images obtained from various types of processing or calculation. Among them, in addition to the results for the photographed data itself (Table 3.1), results with high evaluation accuracy are shown here. Specifically, shown are the results of generating and verifying the discriminant model using the image calculated from the R image and the B image (R image − B image, Table 3.2), the image calculated from the G image and the B image (G image − B image, Table 3.3), the image obtained by (G image + R image)/2 − B image (Table 3.4), and the images obtained by rotating the image in Table 3.3 in increments of 45° to increase the number of images eightfold (Table 3.5).
In the following tables, “early stage of disease” is simply referred to as “early stage”, “middle stage of disease” is simply referred to as “middle stage”, and “late stage of disease” is simply referred to as “late stage”.
In addition, in the following description, the "correct answer rate" is the rate of correct classification into each individual label (healthy, early stage, middle stage, and late stage of disease). The "disease damage leaf correct answer rate" is the rate of correct classification as to whether a leaf is healthy or diseased. For example, the boundary between the early stage and the middle stage of disease is ambiguous, as is the boundary between the middle stage and the late stage. Therefore, the early to late stages of disease are regarded as a single discrimination result of "disease", and a classification into any one of them is counted as a correct answer. Hereinafter, for simplicity of description, "the accuracy of the evaluation result is improved" means that both the "disease damage leaf correct answer rate" and the "correct answer rate" tend to increase, centering on the "disease damage leaf correct answer rate".
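In other words, the "disease damage leaf correct answer rate" collapses the three disease-stage labels into a single "disease" class before scoring. A small sketch of both metrics, with assumed label strings, follows:

```python
DISEASE_LABELS = {"early stage", "middle stage", "late stage"}

def correct_answer_rate(true_labels, predicted_labels):
    """Per-label accuracy over all four classes."""
    hits = sum(t == p for t, p in zip(true_labels, predicted_labels))
    return hits / len(true_labels)

def disease_damage_leaf_rate(true_labels, predicted_labels):
    """Binary accuracy: any disease stage counts simply as 'disease'."""
    hits = sum(
        (t in DISEASE_LABELS) == (p in DISEASE_LABELS)
        for t, p in zip(true_labels, predicted_labels)
    )
    return hits / len(true_labels)
```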
Furthermore, the following results indicate the sum of the results of the first to ninth trainings and evaluations performed in the combination shown in
Table 3.2 shows an example in which training and evaluation are performed using an image obtained by applying the following Expression (2.1) for each pixel using the R image and the B image obtained from the photographed data.
R image−B image (2.1)
The accuracy of the evaluation result is improved as compared with the case of the photographed data in Table 3.1.
Table 3.3 shows an example in which training and evaluation are performed using an image obtained by applying the following Expression (2.2) for each pixel using the G image and the B image obtained from the photographed data.
G image−B image (2.2)
The accuracy of the evaluation result is improved as compared with the case of the photographed data in Table 3.1.
Table 3.4 shows an example in which training and evaluation are performed using images obtained by applying the following Expression (2.3) for each pixel using the R image, the G image, and the B image obtained from the photographed data.
(G image+R image)/2−B image (2.3)
Expression (2.3) corresponds to the case where the values of x and y in the above Expression (1.3) are (1, 1).
The accuracy of the evaluation result is improved as compared with the case of the photographed data in Table 3.1.
Table 3.5 shows an example in which training and evaluation were performed using the images obtained by rotating the image obtained by the above-described Expression (2.2) in increments of 45°, thereby increasing the number of images eightfold.
The accuracy of the evaluation result is improved as compared with the case of the photographed data in Table 3.1. In addition, it can be seen that the evaluation results shown in Table 3.5 have higher accuracy than all the other evaluation results in Tables 3.2 to 3.4 described above.
In Experimental Example 2, a hue (H) image, a saturation (S) image, a brightness (V) image, an R image, a G image, and a B image were used. Also in Experimental Example 2, in the group shown in
Table 4 shows an example in which training and evaluation were performed on an image obtained by converting the photographed data of a color image into a hue image. The accuracy of the evaluation result is lower than that in the case of Table 3.5. In addition, the disease damage leaf correct answer rate is lower overall, even compared with the result of Table 3.1, which is for the original images.
In the H image, the information on dead parts of the leaf is treated equivalently to the other yellowing portions and healthy portions, which is considered to have degraded the evaluation result. Accordingly, although yellowing is a difference in tint, the accuracy of discrimination is not increased by a discriminant model using an H image, which expresses differences in tint. In contrast, it can also be seen that a discriminant model using the image obtained by Expression (1.3) is useful.
In Experimental Example 3, the transition of the discrimination accuracy when the values of x and y in Expression (1.3) were changed was evaluated. Experimental Example 3 gives the discrimination accuracy when images obtained by Expression (1.3) are generated from the images used in Experimental Example 1 and a discriminant model is generated from them. Table 5 shows the results of Experimental Example 3, giving representative values of the disease damage leaf correct answer rate when the ratio of R in R:G, that is, x/(x+y), takes each value. The representative value of the disease damage leaf correct answer rate is the sum of the disease damage leaf correct answer rates.
In addition, the results of Table 5 are shown in
(1) A generation method of the present disclosure is a generation method for a discriminant model for discriminating a growth state of a plant executed by a computer including an arithmetic circuit configured to access a storage device, the generation method including:
(2) In the generation method according to (1), the chloroplast density image may be an image obtained by the following Expression (1) for each corresponding pixel value using two or more of an R image, a G image, and a B image obtained from an image of a leaf of a plant:
(Expression for blending R image and G image with any weighting)−(Expression using B image) (1),
(3) In the generation method according to (2), the Expression (1) may be the following Expression (2).
(4) In the generation method according to any one of (1) to (3), the growth state of the plant may include at least one selected from a group including a healthy state, a disease damage affected state, a nutritional deficiency or excess state, a poor growth environment state, and an insect-damage state.
(5) In the generation method according to any one of (1) to (4), the growth state of the plant may include a level selected from among levels of different degrees.
(6) The generation method according to (3) may include a step of optimizing the values of x and y in the Expression (2) to a combination having the highest discrimination accuracy.
(7) In the generation method according to any one of (1) to (6), a transmission-type optical system incorporating a surface light source having luminance substantially uniform with respect to a surface may be used to acquire an image of a leaf of the plant.
(8) In the generation method according to any one of (1) to (7), the machine learning may include at least one of supervised learning and unsupervised learning.
(9) In the generation method according to (1) to (8), the storage device may store each of the chloroplast density images in association with a label indicating a state of the plant, and
(10) In the generation method according to (1) to (9), the arithmetic circuit may
(11) A discrimination method of the present disclosure is a discrimination method for discriminating a state of a plant executed by a computer including an arithmetic circuit configured to access a storage device, the discrimination method including:
(12) A computer program of the present disclosure executes the generation method according to any one of (1) to (10).
(13) The computer program of the present disclosure executes the discrimination method according to (11).
(14) A generation device of the present disclosure is a generation device including an arithmetic circuit configured to access a storage device, the generation device generating a discriminant model for discriminating a state of a plant, in which the storage device stores a chloroplast density image, which is an image reflecting a difference between the absorption spectra of chlorophyll and carotenoids contained in the plant, obtained from a plurality of images of leaves of plants for training, and the arithmetic circuit reads the chloroplast density image from the storage device and generates a discriminant model of a growth state of a plant using the chloroplast density image as training data for machine learning.