Embodiments of this invention pertain to the imaging of fruiting bodies, such as corn ears, on a live plant to detect characteristics that may be used for plant phenotyping and for automated pollination of crops such as maize. Embodiments of this invention include dual side applicators and on-board real-time graphics processing that allow multiple plant fruiting bodies on a single plant to be automatically pollinated in one pass.
Plant breeders, seed producers and grain producers have a need to determine the developmental phase of plant fruits, measure their attributes, or fertilize them with pollen for seed production. For crops grown in rows, the upper leaves of the plant may form a canopy, obscuring the fruit or flowers from aerial viewing. Even in the absence of a plant canopy, the phenotyping and artificial pollination of plants can be time consuming and prone to human error. An additional complication is that a single plant may have multiple fruiting bodies, each of which must be pollinated.
Thus, there is a need for automated phenotyping that provides automated imaging of plant fruit or flowers not easily seen from an aerial view in order to obtain an objective and unbiased measurement of aspects of the plant, such as the height, size and location, as well as to apply pollen to the one or more fruiting bodies present on different portions of the plant.
Embodiments described herein involve an imaging system for identifying the location and/or other phenotypic characteristics of the plant fruit or flowers. The imaging system may assess yield, yield potential (quantity), disease and overall health. In some embodiments, the imaging system is designed to account for image distortion and poor lighting as the imaging system is transported between rows of plants. In some embodiments, the image and location of the plant flower or fruit, such as an ear in the process of silking, may be utilized to direct automated pollination of the plants. In some embodiments the plant will have two or more plant fruiting bodies, each of which may be pollinated in one pass by automated imaging and pollinating units on each side of the row of plants.
The disclosure can be more fully understood from the following detailed description and the accompanying drawings, which form a part of this application.
Embodiments described herein involve an imaging system for identifying the location and/or other phenotypic characteristics of the plant fruit or flowers, such as the fruiting bodies of hybrid cereal crops. Hybrid cereal crops include, but are not limited to, wheat, maize, rice, barley, oats, rye and sorghum. In one embodiment, the imaging system is transported between rows of cereal crop plants. For example, in the case of corn, corn is typically planted in rows spaced 15 to 30 inches from adjoining rows, although greater or lesser corn row spacing is also possible. Proper row spacing allows plants room to explore for nutrients and minimizes the adverse effects of competition from neighboring plants. In Iowa, and in most regions of the midwest, 20 inches and 30 inches are the most common row spacing configurations.
Accordingly, with typical spacing, an imaging system transported between the rows would be about 15 inches from the row of corn plants on each side, which tends to result in a limited field of view when a standard camera lens is used. Additional difficulties for imaging corn ears arise as a result of low or inconsistent lighting conditions that can be caused by clouds, time of day or night, the canopy formed by the uppermost leaves of the plant, by other leaves that obscure the camera's view of the corn ear and its silks, by the need to image multiple ears of corn, and by movement of the camera as it is transported between the rows.
In one embodiment, a semi-hemispherical lens is used to provide an adequate field of view to identify one or more fruiting bodies on a plant. However, this lens causes significant distortion of the image, which makes it especially difficult to determine the height and location of the fruiting bodies. To overcome this distortion, the image is flattened, followed by object recognition within the image. In order to use such a flattened image to distinguish the plant fruiting body from the leaves, stem and other plant parts, and to identify corn ears, such image flattening and recognition must occur in real time. Accordingly, in some embodiments, an on-board image processing device is utilized for immediate recognition and location of the plant fruiting body.
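The flattening step described above can be sketched with the radial (Brown-Conrady) distortion model of the kind estimated during camera calibration. This is a minimal illustration, not the actual camera model used; the coefficients k1 and k2 are illustrative values, and a real system would apply the full calibrated intrinsic matrix and distortion coefficients to whole images rather than single points:

```python
def distort(x, y, k1, k2):
    """Apply a radial (Brown-Conrady) distortion model to a
    normalized image point (x, y), as a wide-angle lens would."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

def undistort(xd, yd, k1, k2, iters=20):
    """Invert the radial model by fixed-point iteration; this is the
    per-point operation underlying image flattening."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / scale, yd / scale
    return x, y

# Round trip: distorting a point and then undistorting it should
# recover the original normalized coordinates.
xd, yd = distort(0.3, -0.2, k1=-0.25, k2=0.05)
x, y = undistort(xd, yd, k1=-0.25, k2=0.05)
```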
Such image processing may be further complicated by the fact that plant fruiting bodies may be obscured by leaves or other plant parts. In some cases, the plant fruiting bodies may not be visible from one side of the row. Accordingly, a dual camera system has been developed, wherein a boom is used to position a camera on each side of a row (see
One embodiment is a multi-row camera system. Each row will have a left and a right camera, and there may further be a plurality of such camera systems across several rows. The camera may be mounted on a transport device that fits between the rows, or may be suspended from a boom (see
The imaging system may further assess plant characteristics such as yield, yield potential (quantity), disease and overall health. The image and location of the plant flower or fruit, such as an ear in the process of silking, may be utilized to direct automated pollination of the plants. In this embodiment, the location information from the imaging device would be utilized to direct a pollen application device to deliver pollen to the corn ear silks. The imaging permits a precise application of pollen that results in less waste and a more efficient pollen application that leads to improved seed set.
For example, in corn plants with a second ear, the second ear is commonly located a few nodes from the first ear, oriented at a rotational axis of 90 to 180 degrees on the stalk and positioned lower on the plant, and therefore deeper in the canopy where pollen may not adequately shed. While dual ear corn, when it does occur in hybrid grain production, often does not result in a significant grain yield increase, some inbred varieties with proper spacing and growing conditions may be managed in a way to optimize the production of a second ear. In the past this has not been done in the regular course of seed production due to the difficulty of obtaining sufficient seed yield on the second ear. However, this invention, by assuring that the second ear receives sufficient pollen, serves as a potential enabler of dual ear seed production. This can be of value in seed production, especially when seed quantities are low, such as when inbred breeder seed is scarce and every seed is needed for plant propagation and seed multiplication.
To help achieve maximum seed production for dual ear corn plants, the dual side imaging system was developed, as illustrated in
Images are captured with a semi-hemispherical lens (14) as shown in
The images may be captured at a rate suitable for the speed of the activity. In the image capture device described in more detail below, image capture rates of up to 30 frames per second were achieved using an NVIDIA graphics card. One graphics card per imaging device was used, although it is also possible to feed the images from two or more imaging devices into a single graphics card, which may be preferable when a 360-degree view of an individual plant or row of plants is desired. Positional data associated with the images from the dual cameras on each side of a row may be used to construct a series of photos of the plant that represent a nearly 360-degree view of the individual plant or row of plants, and the graphics card and data structure may be optimized for this task. Following image capture, raw hemi-spherical images are flattened (e.g., using OpenCV software to determine the camera's intrinsic matrix and distortion coefficients model) and, in embodiments with an inertial measurement unit (IMU), adjusted based on camera angle, which can change as the device moves across land. In some embodiments, such as a boom mounted system (see
In some embodiments a laser distance sensor (or ultrasonic sensor, lidar, multidimensional camera or radar) may be used to detect distance to stalks, and optionally, to determine distance to ground. The latter may be particularly useful on boom mounted systems.
When objects were measured in a non-IMU embodiment, the video frame extracted from the video was undistorted from a hemi-spherical view to a flattened view using an undistortion matrix (camera model) for that particular sensor. An object detection model was then used to identify an object of interest within the video frame. The pixel coordinates of the detection's centroid, or the bounds of that detection, were recorded. Measurement of the object height used a combination of the pixel coordinates and the camera's intrinsic matrix. The center of the collected image corresponded to the mounting height of the camera. Measuring objects away from the center of the camera view required adding (for objects above center) or subtracting (for objects below center) a calibrated distance, which was calculated from the intrinsic matrix associated with the specific camera, with the object's distance from the camera (or depth) used as a multiplier. This process was done for each frame of a video.
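The height computation described above follows standard pinhole-camera geometry. The sketch below is illustrative only: the focal length, principal point and mounting height are assumed values, not parameters from the actual device:

```python
def object_height(v_pixel, cam_height_in, depth_in, fy, cy):
    """Estimate object height in inches from a detection's pixel row.
    v_pixel: detection centroid row (pixels, 0 at top of image).
    cam_height_in: camera mounting height; the image center row
    corresponds to this height.
    depth_in: assumed distance from camera to object (row spacing / 2).
    fy, cy: vertical focal length and principal-point row from the
    camera's intrinsic matrix."""
    # Image rows increase downward, so a smaller v_pixel means the
    # object sits above the camera center.
    offset_px = cy - v_pixel
    # The calibrated distance per pixel scales with the object's depth,
    # which acts as a multiplier, as described above.
    return cam_height_in + offset_px * depth_in / fy

# Illustrative values: camera mounted at 36 in, corn row 15 in away,
# fy = 600 px, principal point row cy = 360.
h = object_height(v_pixel=240.0, cam_height_in=36.0,
                  depth_in=15.0, fy=600.0, cy=360.0)
```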
When objects were measured in an embodiment comprising an IMU, which is useful when there would be significant variation from a horizontal plane, the video frame extracted from the video was undistorted from a hemi-spherical view to a flattened view using an undistortion matrix for that particular sensor and/or camera model. The IMU was used to correct for variable camera angles encountered when operating the system by measuring the camera orientation in space relative to an artificial horizon. Pitch, roll and yaw measurements from the IMU were used in Euler angle equations to warp the perspective of the image back to a nominally positioned camera, as if it were level to the horizon and perpendicular to the target object. An object detection model was used to identify an object of interest. The pixel coordinates of the detection's centroid, or the bounds of that detection, were recorded, and a flattened image matrix model was used to convert pixel coordinates to real-world measurements. This process was done for each frame of the video. It may be done either during or after video collection for determining ear or tiller height, potential yield or other plant characteristics, but must be done during video collection for use in directing automated pollination.
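One common way to realize the IMU-based perspective correction above is a pixel-space homography H = K·R·K⁻¹ built from the Euler angles. This is a sketch under assumptions: the yaw-pitch-roll rotation order and the intrinsic values are illustrative and would be matched to the actual IMU convention and camera calibration:

```python
import math

def matmul(a, b):
    """Multiply two small matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def rotation(pitch, roll, yaw):
    """3x3 rotation from IMU pitch/roll/yaw (radians), composed in
    yaw-pitch-roll order (one common convention; the real axis order
    depends on the IMU mounting)."""
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    cy, sy = math.cos(yaw), math.sin(yaw)
    rz = [[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]]
    ry = [[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]]
    rx = [[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]]
    return matmul(matmul(rz, ry), rx)

def leveling_homography(fx, fy, cx, cy, pitch, roll, yaw):
    """Homography H = K R K^-1 that warps a tilted frame back to a
    nominally level camera (applied to homogeneous pixel coords)."""
    K = [[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]]
    # K is upper triangular, so its inverse has a closed form.
    Kinv = [[1.0 / fx, 0.0, -cx / fx],
            [0.0, 1.0 / fy, -cy / fy],
            [0.0, 0.0, 1.0]]
    return matmul(matmul(K, rotation(pitch, roll, yaw)), Kinv)

# With zero pitch, roll and yaw the warp must be the identity.
H = leveling_homography(600.0, 600.0, 320.0, 240.0, 0.0, 0.0, 0.0)
```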
In other embodiments, cameras or IMUs that are associated with another transport device, such as a robotic vehicle (such as those produced by Boston Dynamics or NAIO Technologies), a tractor or a drone, can be used through an application program interface (API) rather than adding additional camera sensors. Onboard computation may also be used for the processing of imagery through an API, obviating the need to add additional hardware resources.
A series of GNSS points were collected, with each point representing the geographic coordinates of where the image was taken by the imaging system. In one embodiment, using the GPS described herein, several images were tagged with the same GPS position, since the GPS system recorded about 10 positions per second while the imaging system captured about 30 frames per second. A box image was created with a series of boxes, with each box representing a 17-foot-long row of corn with a width of 30 inches, and a point representing the location of the camera when each image was taken.
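The tagging described above, where a roughly 10 Hz GPS stream covers a roughly 30 fps image stream, can be sketched as assigning each frame the most recent fix at or before its timestamp. This is an illustrative approach, not necessarily the exact pairing logic of the actual system:

```python
def tag_frames(frame_times, gps_fixes):
    """Tag each video frame with the most recent GPS fix at or before
    its timestamp. frame_times: sorted frame timestamps (seconds).
    gps_fixes: list of (t, lat, lon) tuples sorted by time. With ~10
    fixes/s and ~30 frames/s, about three consecutive frames end up
    sharing one fix, as described above."""
    tags, i = [], 0
    for t in frame_times:
        # Advance to the latest fix whose timestamp is <= t.
        while i + 1 < len(gps_fixes) and gps_fixes[i + 1][0] <= t:
            i += 1
        tags.append(gps_fixes[i][1:])  # (lat, lon)
    return tags

# Five frames at ~30 fps spanning two 10 Hz fixes.
frames = [0.0, 0.033, 0.067, 0.1, 0.133]
fixes = [(0.0, 41.0, -93.0), (0.1, 41.0001, -93.0001)]
tags = tag_frames(frames, fixes)
```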
Natural lighting was utilized. However, artificial lighting may also be utilized to assist in non-daylight hours when phenotyping and/or pollination is needed. Ultraviolet or infrared lighting or thermal imaging may be utilized to enhance illumination of the corn silks, flowers, or other plant parts. While the camera may have some level of automatic gain and exposure to enhance imaging in low light conditions, this can also result in motion blur. To remedy this, the exposure can be limited to a threshold value and then gain may be used, or an external sensor can be used to adjust exposure and lighting.
Identifying Ears and Silks with Trained Models
Videos were collected in a number of corn fields during different physiological growth stages and environmental lighting conditions. A training set was created from the videos, and flattened and corrected images were generated.
Approximately 18,000 images were labeled by tracing a square polygon around pixels associated with corn ear objects, as is shown in
Images taken at about 30 frames per second will show the same ear across several images, so the system tracks the continuity of the ear (or other plant reproductive part) throughout the various images. One embodiment that may be utilized to achieve this continuity of image is to locate the plant reproductive part relative to the center point of the image.
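One plausible way to maintain this frame-to-frame continuity is to match each detection to the nearest unclaimed detection from the previous frame, since an ear moves only a few pixels between consecutive frames at 30 fps. This greedy centroid matcher is an illustrative sketch, not the system's actual tracking logic:

```python
def match_detections(prev, curr, max_dist):
    """Greedy nearest-centroid matching to track the same ear across
    consecutive frames. prev/curr: lists of (x, y) detection centroids.
    Returns (prev_index, curr_index) pairs for detections whose
    centroids moved less than max_dist pixels between frames."""
    pairs, used = [], set()
    for j, (cx, cy) in enumerate(curr):
        best, best_d = None, max_dist
        for i, (px, py) in enumerate(prev):
            if i in used:
                continue  # each previous detection matches at most once
            d = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            pairs.append((best, j))
            used.add(best)
    return pairs

# The first ear moved ~11 px (same ear); the second detection is far
# from everything (a new object entering the frame).
pairs = match_detections(prev=[(100, 200), (400, 210)],
                         curr=[(110, 205), (600, 50)], max_dist=50)
```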
Images were taken of fixed height objects in order to correlate the height of objects in the raw image with those in a flattened and corrected image.
The application of the flattening and correction noted above, as well as accounting for the elevation of the camera when there was vertical displacement, resulted in highly accurate measurements with an average error of only 0.65 inches. This was significantly more accurate than when a non-specific, generic camera intrinsic matrix and distortion coefficients model was used: the average error of the measurements using the generic intrinsic matrix was 6.23 inches on the same objects.
In order to measure the height of a detected object the distance between the camera and the object of interest must be known. For the embodiments designed for corn, a 15-inch depth of field was used based on a standard plant row spacing of 30 inches. This distance assumption would be adjusted for the plant row spacing used. Some camera movement also occurred since the camera was not always positioned in the center of the alley equally between the two rows. Modification of this distance assumption also requires tuning the camera intrinsic matrix and distortion coefficients model to the specific desired distance.
In other embodiments, a system for camera stabilization such as camera gimbals or gyroscopic stabilizers may be added to maintain a stable camera position in 3D space while the transport system moves through the scene. The camera drifting off center of the inter-row space, or varying in its angles of view, may be caused either by the irregular soil surface the camera is being moved along, or by other leaning or drifting of the apparatus. A gimbal or related stabilizer could be used to alleviate such disturbance of the camera position.
Various options exist for an automated pollen applicator. For example, once the ear, silk or plant fruiting body is identified and its location determined, a robotic arm can be used to direct a pollinating nozzle or spray tube.
One embodiment of a pollen applicator was designed as shown in
Determining if/when to Spray Pollen
A second camera may then be used to direct the pollen applicator more specifically to the plant reproductive part, and may also be used to confirm application of the pollen to the silk. The same type of camera, image processing and image recognition software may be used, or alternately, a standard camera lens with less curvature and less need for image processing may be used. To account for the latency between the lead set of cameras and the pollen applicator, the lead set of cameras may be physically positioned in front of the pollen applicator by some distance to allow time for image processing. A lead camera distance of about 3 feet will allow for sufficient latency time when the machine is traveling at 5 miles per hour. An ethernet or USB camera may be used to avoid signal delay. Colorant and/or fluorescent dye may be added to the pollen for verification and to enhance image verification of pollination.
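The time budget implied by the 3-foot lead distance at 5 miles per hour works out to roughly 0.4 seconds for image processing, as the following arithmetic shows:

```python
def lead_time_s(lead_ft, speed_mph):
    """Time between the lead camera seeing a silk and the applicator
    reaching it, given the lead distance and ground speed."""
    speed_fps = speed_mph * 5280.0 / 3600.0  # mph -> feet per second
    return lead_ft / speed_fps

# 3 ft lead at 5 mph (~7.33 ft/s) leaves ~0.41 s of processing time.
t = lead_time_s(3.0, 5.0)
```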
In addition, the device and methods described herein may be used to phenotype or characterize plants. For example, with corn, a GPS heat map of silk density and distribution may be generated, corn silks may be counted, 5%, 50%, 75% and/or 100% flowering time may be estimated, stem size can be measured or lodging resistance estimated, ear size and diameter may be measured and grain yield can be estimated. The device and methods can also be utilized to count total primary ears, total secondary ears, and/or total ears. Likewise for wheat, the total number of primary spikes, total number of tillers, and/or total number of wheat seed heads may be counted.
While the invention has been particularly shown and described with reference to a preferred embodiment and various alternate embodiments, it will be understood by persons skilled in the relevant art that various changes in form and details can be made therein without departing from the spirit and scope of the invention. For instance, while particular examples may illustrate the methods and embodiments described herein using corn, the principles in these examples may be applied to any plant. Therefore, it will be appreciated that the scope of this invention is encompassed by the embodiments of the inventions recited herein and in the specification rather than the specific examples that are exemplified below.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US22/70890 | 3/1/2022 | WO |
Number | Date | Country | |
---|---|---|---|
63172197 | Apr 2021 | US |