A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever. The following notice applies to the software and data as described below and in the drawings that form a part of this document: Copyright Raven Industries, Inc. of Sioux Falls, S. Dak., USA. All Rights Reserved.
This document pertains generally, but not by way of limitation, to the determination of droplet characteristics for sprayed application of agricultural products (fluid or gaseous).
Agricultural sprayers are used to distribute agricultural products, such as fertilizers, insecticides, herbicides and fungicides, to crops. Agricultural sprayers include one or more sprayer booms that are long enough (e.g., 60 feet to 150 feet) to spray multiple rows of crops in a single pass. Each of the sprayer booms includes multiple sprayer nozzles to distribute an agricultural product in a spray pattern.
Sprayer nozzles are ubiquitous tools used to perform agricultural processes such as applying pesticides, other chemical treatments, or nutrients to crops. A problem with current agricultural sprayer nozzles is that only a small percentage of the spray sticks to the crop as intended (Massinon et al., 2017). Due to plants' hydrophobic properties, most sprayed droplets bounce off plants, land on the ground, and become part of the runoff that flows to streams and rivers, potentially causing pollution. Damak et al. (2016) related the problem of droplets bouncing off plants to the sizes and velocities of the droplets sprayed by existing nozzles.
Current techniques to measure the properties of sprayed droplets are manual, imprecise, or expensive. For example, a simple estimation method is to spray colored water onto a white sheet, or a “patternator”, then process the image of the patterns to arrive at a rough estimation of droplet sizes. Improved measurements can be obtained by replacing the white sheet with a glass plate coated with silicone oil, referred to as an immersion sampling method. However, immersion sampling is limited to droplets large enough to break the surface tension of the oil layer, and is less effective for detecting fine droplets sprayed from agricultural nozzles. More advanced techniques that use laser analyzers, such as a Fraunhofer diffraction method and a laser Doppler method, require complicated and delicate experiment set-ups, expertise with laser technologies, and are expensive to acquire. Accordingly, these techniques are difficult to implement in a rugged agricultural environment. Additionally, some of these techniques implement the subtraction of a background to detect droplets. For instance, a consistently colored background (e.g., green, white or the like) is provided. Droplets cause color variations relative to the background, and by virtually subtracting the background from the varied colors (e.g., with image processing) droplets are detected. The maintenance of a consistent background relative to a spray pattern is difficult in an agricultural environment. For instance, soil, crops, fences, the sky, bodies of water, livestock, implements and vehicles have varied colors that frustrate the maintenance of a consistent color for droplet detection.
Accordingly, these techniques depend on consistent and predictable environmental factors that are difficult to achieve in an agricultural environment. Characteristics including consistent colors, consistent and predictable lighting, solid mounts and minimal vibrations for instruments are preferred for droplet detection. Accordingly, sensing and analyzing droplets (e.g., location, size, shape, velocity or the like) in precision agriculture remain outstanding problems.
In one example, enhanced nozzles are warranted to produce droplet sizes, velocities, spray patterns or the like (herein spray characteristics) that are effective for a variety of conditions (e.g., adhesion to plants, overcoming the effect of wind or the like). Additionally, improvement in droplet monitoring (e.g., spray pattern, droplet profile, droplet vectors such as velocity or the like) for control of sprayer output and confirmation of delivery in agricultural environments is specified to implement control of enhanced nozzles as well as existing nozzles and associated sprayers. The present subject matter provides enhanced monitoring of sprayer output including identification of droplets, detection of droplet characteristics or the like.
The present inventors have recognized, among other things, that a problem to be solved can include enhancing detection and monitoring of spray droplets in agricultural environments, and detecting and addressing inconsistencies in the application of agricultural products in a spray pattern. The droplet characteristics of droplets (in some places referred to as droplet profile(s) having droplet characteristics) distributed in a spray pattern are determined by the present subject matter and the information is optionally used to enhance sprayer performance including, but not limited to, improved coverage of crops targeted by the sprayer.
In one example, agricultural product is applied with varied droplet sizes to address a first issue, such as spray drift. An operator may adjust spray tips or other sprayer settings to generate spray droplets that are larger than otherwise specified. The larger spray droplets have increased mass and are less affected by wind that may otherwise promote spray drift. The large spray droplets provide decreased coverage in some circumstances. For instance, the large droplets deflect from leaves, stalks or the like of targets (e.g., crops, weeds or the like). In contrast, smaller droplets provide a corresponding finer spray, and the droplets adhere to leaves or stalks more readily than larger droplets.
The present subject matter may optionally provide an artificial intelligence (AI)-enabled framework capable of processing images obtained of droplets, detecting and tracking all droplets appearing across image frames, and measuring the droplets' characteristics or profiles, such as geometric and dynamic data. The present subject matter further provides for the integration of deep-learning techniques into an image processing algorithm which enables precise and reliable measurement of droplet size and velocities (herein droplet characteristics, droplet profiles or the like). In addition, the deep-learning framework produces consistent results under a variety of uncertain imaging conditions including detection and monitoring of droplets in dynamic agricultural environments (in contrast to a predictable lab setting).
The present subject matter automates repetitive learning and discovery through analysis of high-speed video data of droplets, and can perform frequent, high-volume, computerized measurement and calculation tasks (in some places referred to as determination) reliably and without fatigue and labor cost. The approach adapts through progressive learning algorithms in which the AI system identifies structures and regularities in droplet imagery data to classify and predict the droplet characteristics, and the system adapts when given new droplet data. The deep-learning capability of the present subject matter can analyze larger and deeper droplet imagery data resulting in a precise and robust determination method, to determine droplet characteristics (or profiles) such as one or more of droplet sizes and velocities (e.g., through measurement, calculation, reference to previously identified structures and regularities, combinations of the same or the like). The determined droplet characteristics from the present subject matter are optionally incorporated into control of sprayer nozzles (e.g., automatic control) to provide for improved application of sprayed fluids to the target including but not limited to, adherence of the fluid, breadth of coverage, increased droplet size to mitigate the effect of wind based spray dispersion or the like.
This overview is intended to provide an overview of subject matter of the present patent application. It is not intended to provide an exclusive or exhaustive explanation of the invention. The detailed description is included to provide further information about the present patent application.
In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments discussed in the present document.
The following detailed description of the present subject matter refers to subject matter in the accompanying drawings which show, by way of illustration, specific aspects and embodiments in which the present subject matter may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the present subject matter. References to “an”, “one”, or “various” embodiments in this disclosure are not necessarily to the same embodiment, and such references contemplate more than one embodiment. The following detailed description is demonstrative and not to be taken in a limiting sense. The scope of the present subject matter is defined by the appended claims, along with the full scope of legal equivalents to which such claims are entitled.
As will be described herein, an agricultural product is provided in a localized manner to each of the product dispensers whether boom sections or nozzles to provide individualized control of application of the agricultural product. For instance, monitoring of droplet characteristics within spray patterns of one or more nozzles, as described herein, facilitates feedback control of droplets, for instance to achieve specified droplet sizes; directions or speeds (e.g., velocity as well as other kinematics); or the like.
Referring now to
The carrier fluid is distributed from the carrier reservoir by way of a header 105 coupled with one or more boom tubes 106. The boom tubes 106 extend along the sprayer booms 102 as shown in
An example localized product injection system 112 is also shown schematically in two different formats in
In one example, shown in
In other examples, the sprayer 100 includes a premixed agricultural product, for instance a carrier fluid having an injection product (or products) mixed therein. The agricultural product (e.g., carrier fluid and injection product) is distributed through the boom tubes 118 to the product dispensers, for instance without downstream injection control. In this example, each of the nozzle assemblies 110 includes one or more modulating elements configured to control flow rate of the agricultural product. Optionally, the nozzle assemblies 110 are independently controlled or controlled as a group, for instance with the system 501 shown in
The second format of the localized product injection system 112 is also shown in the
As further shown in
The nozzle assemblies 110 include one or more modulating control elements that control flow rate, pressure (e.g., pressure drop), droplet size, spray pattern or the like. Droplet characteristics, such as droplet size, spray pattern or the like are controlled with spray modulating actuators (also referred to as modulating control elements) of nozzle assemblies 110. For instance, control of agricultural product flow rate and droplet characteristics are optionally conducted with a control valve (e.g., a PWM valve) as a pre-orifice, modulating nozzle assembly or the like. These features adjust flow rate and pressure of the fluid at the nozzle and thereby also control droplet characteristics. In other examples, one or more other modulating control elements are included with nozzle assemblies 110. Some examples are shown in
Specified droplet characteristics of the injection product are achieved at the product dispensers 107 according to an individualized droplet characteristics determination (e.g., with an automated controller discussed herein) for one or more (optionally, each) of the nozzle assemblies 110, with injection interfaces 120 being an optional component of the nozzle assemblies 110. As discussed herein, the sprayer 100 shown in
As discussed herein, in one example a droplet characteristic assessment system (see
Referring again to
In various examples, the agricultural product control module 500 receives an input from a droplet identification (ID) and assessment module 520, and the agricultural product control module uses the input from the droplet identification and assessment module 520 in a closed loop system to control droplet characteristics in a feedback system. In one example, the droplet identification and assessment module 520 includes at least a portion of the system for detection and tracking of spray droplets of
In some examples, the agricultural product control module 500 conducts control of one or more of the modulating elements of the nozzle assembly 110 (or assemblies 110) to address deviations in droplet characteristics. For instance, one or more of the modulating elements in
In another example, the droplet characteristic control module 504 uses the input from the droplet identification and assessment module 520 to designate one or more of the injection interfaces 120 (an optional component of nozzle assemblies 110) for adjustment of the injection flow rate of injection product (e.g., on, off, and graduated flow rates of the injection product). The droplet characteristic control module 504 selects one or more of the injection interfaces 120 for individualized control of the injection interfaces 120 to achieve desired droplet characteristics (e.g., a change in droplet characteristics) of the injection product in the carrier fluid. The rate control module 502 determines the corresponding rate for each of these selected injection interfaces 120, for instance in cooperation with the field computer interface 506 and a field computer 508 as described herein. Injection of an agricultural product or additive is an option, and some nozzle assemblies 110 may not include injection.
As further shown in
As further shown in
As shown in
In another example, the agricultural product control module 500 includes a field computer interface 506. As shown, the field computer interface 506 provides an interface for coupling with a field computer 508 (part of the sprayer 100, with a leading vehicle such as a tractor, or a standalone device) and the field computer 508 includes a field map 510. As will be described herein the field map 510 includes a series of prescriptions of agricultural products including specified droplet characteristics, seed types, irrigation or the like for various zones. The differing prescriptions for each of the zones are determined through analysis of the field terrain, yields from previous crops, environmental conditions or the like. In some examples, specified droplet characteristics may be provided by field map prescriptions.
The field map 510 provides a plurality of prescriptions for an agricultural product or agricultural products throughout the field (e.g., in one or more of the zones of the field). The field computer 508 communicates with the agricultural product control module 500, which uses the field map 510 and its associated zone-based prescriptions to independently specify the flow rate of an agricultural product, optionally an injection product for each of the injection interfaces 120 (an optional component of nozzle assemblies 110), and to further specify droplet characteristics for spray patterns emanating from the nozzle assemblies 110.
Further, with GPS systems, mathematical representations of the product dispensers 107 (e.g., the boom sections 108 or nozzle assemblies 110) along the sprayer booms 102, or the like, the location of each of the product dispensers 107 of the sprayer 100 is continuously determined on the field map. As one or more product dispensers 107 of the sprayer 100 are within a zone or are poised to enter a zone, the agricultural product control module 500 selects the corresponding nozzle assemblies 110 for adjustment of agricultural product flow rate (optionally injection product) and associated droplet characteristics based on the field map prescription for the zone.
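The zone-based selection above can be sketched as follows. The rectangular zone representation, the coordinate values, and the dispensers_in_zone helper are hypothetical illustrations, not the actual field-map logic of the system.

```python
def dispensers_in_zone(dispenser_positions, zone):
    """Select indices of product dispensers whose mapped (x, y) field
    positions fall inside a rectangular prescription zone.

    dispenser_positions: list of (x, y) field-map coordinates, one per
    dispenser along the boom (hypothetical representation).
    zone: (x_min, y_min, x_max, y_max) bounds of the zone.
    """
    x_min, y_min, x_max, y_max = zone
    return [i for i, (x, y) in enumerate(dispenser_positions)
            if x_min <= x <= x_max and y_min <= y <= y_max]

# Dispensers spaced along a boom that crosses a zone boundary at x = 50.
positions = [(48.0, 10.0), (49.5, 10.0), (51.0, 10.0), (52.5, 10.0)]
inside = dispensers_in_zone(positions, (50.0, 0.0, 100.0, 100.0))
```

In this sketch, only the dispensers already inside the zone are selected; a practical controller would also look ahead to dispensers poised to enter the zone, as the passage above describes.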
The sprayer 100 described herein is able to instantaneously deliver accurate droplet characteristics of the injection product to the carrier system at one or more of the injection interfaces 120 (an optional component of nozzle assemblies 110) by way of individualized control of each of those injection interfaces 120, including a modulating nozzle assembly. In some examples, the present modulating elements provide for near instantaneous control of the sprayed agricultural product including one or more of flow rate, droplet characteristics or the like, using the system described herein and the associated modulating element(s) provided with corresponding nozzle assemblies 110.
In another example, the modulating nozzle assembly 110 includes an injection assembly configured to administer one or more injection products (e.g., additives) to a carrier fluid provided through the boom tube 108. The injection assembly includes an injection line 308 in communication with the modulating nozzle assembly 110 at an injection port 310. An injection control valve controls the flow of the injection product to correspondingly control the concentration of the injection product in the carrier fluid, the resulting composition and concentration of the agricultural product or the like. As shown in
In this example the modulating nozzle assembly 110 includes one or more modulating elements configured to control one or more of flow rate, droplet size, spray pattern or the like (e.g., the actual spray profile). The nozzle assembly 110 may include a plurality of these elements such as, but not limited to, the orifice (e.g., the modulating spray tip 312), modulating accumulator 314, gas inductor 316, or the control valve 302. Optionally, where two or more spray modulating elements are included with the modulating nozzle assembly 110 these actuators cooperate with each other to control droplet size or alternatively control a different droplet characteristic (e.g., spray pattern, droplet kinematics or the like). For instance, the control valve 302 is operated to control droplet size while the modulating spray tip 312 is operated to control the spray pattern.
A nozzle assembly controller (such as agricultural product control module 500) provides one or more control instructions to the corresponding spray modulating elements (e.g., motors, stepper motors or the like that control the movable features of the spray modulating elements that affect nozzle performance including droplet size, pattern or the like). As discussed herein, spray modulating elements include features associated with the nozzle assemblies that provide control of one or more characteristics of the actual spray profile (e.g., droplet size, droplet kinematics, spray pattern or the like). Modulation, as used herein, includes, but is not limited to, controlled movement, position, operation or the like of features associated with the spray modulating elements to control characteristics of the actual spray profile including changing characteristics, maintaining characteristics (e.g., while other characteristics, like pressure or flow rate, change) or the like. In one example, modulation or modulated control includes, but is not limited to, graduated control of nozzle fittings, orifice plates, gas inductors, accumulators or the like between open and closed positions as well as intermediate positions to control aspects of actual spray profiles from the nozzle assemblies. In another example, modulation may include the reciprocating operation of a pre-orifice type actuator such as a flow rate control valve using pulse width modulation control (e.g., duty cycle, duty cycle frequency or the like).
Referring again to the control valve 302 of the modulating nozzle assembly 110, the valve operator 306 includes one or more of a solenoid operated poppet, butterfly valve, needle valve or the like. In some examples, the control valve provides flow rate control and is also an example of a spray modulation actuator. For instance, the control valve 302 is operated as a pre-orifice to the nozzle assembly 110 and the spray tip 304 to control pressure (e.g., control droplet size) through variation in the valve opening, duty cycle, duty cycle frequency (number of open and close instances per unit time) or the like. For example, increasing one or more of the opening of the control valve 302, duty cycle (percent open), duty cycle frequency or the like decreases pressure drop across the valve 302 and thereby decreases the droplet size from the spray tip, making the spray pattern droplets finer. Conversely, decreasing one or more of the opening of the control valve 302, the duty cycle, duty cycle frequency or the like increases pressure drop across the valve 302 and thereby increases the droplet size from the spray tip, making the spray pattern droplets coarser.
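The qualitative duty-cycle relationship above can be captured in a toy monotonic model. The coarseness_index function and its 100/x form are hypothetical, chosen only to illustrate the direction of the effect (lower duty cycle, higher pressure drop, coarser droplets), not a calibrated valve characteristic.

```python
def coarseness_index(duty_cycle_pct):
    """Toy model of the pre-orifice effect: a lower duty cycle (valve
    closed for a greater fraction of each cycle) increases pressure drop
    across the control valve and coarsens droplets at the spray tip.
    Illustrative only; not a calibrated relation."""
    if not 0.0 < duty_cycle_pct <= 100.0:
        raise ValueError("duty cycle must be in (0, 100] percent")
    return 100.0 / duty_cycle_pct

# Decreasing the duty cycle yields a higher (coarser) index.
assert coarseness_index(40.0) > coarseness_index(80.0)
```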
Another example of a pre-orifice actuated as a spray modulation actuator includes one or more orifice plates 307 shown in
Other example modulating spray actuators include, but are not limited to, a modulating spray tip 312, a modulating accumulator 314, and a gas inductor 316, in addition to or as an alternative to the control valve 302. As shown in
The gas inductor 316 is another example of a modulating spray actuator of the modulating nozzle assembly 110. In the example shown in
The induction valves 322 include valves configured to control a flow rate of gas to the agricultural product prior to application. In an example, the induction valves 322 include needle valves and associated actuators that move the valve needles between seated and unseated configurations to meter the introduction of gas to the agricultural product.
In operation, the induction valves 322 are controlled to increase, decrease or maintain a flow rate of gas to the agricultural product. An increased flow rate of gas to the agricultural product increases droplet size in the sprayed agricultural product (e.g., the droplets are coarser). Conversely, a decreased flow rate of gas to the agricultural product decreases droplet size and provides a finer sprayed agricultural product. Optionally a gas quality sensor, humidity sensor or composition sensor is included with the gas inductor to assess the input gas (e.g., atmosphere) prior to or during induction.
The modulating nozzle assembly 110 shown in
In the example shown in
As further shown in
The example nozzle assembly controller (such as agricultural product control module 500) includes one or more algorithms, feedback controllers, operator or manufacturer specified parameter changes or the like that are implemented based on a combination of monitored sprayer profile deviations and the available sprayer modulating elements for the respective modulating nozzle assembly 110. Examples of parameter changes made based on sprayer profile deviations and example nozzle assembly setups are shown in
In one example, the magnitude of spray modulation actuator changes, priority of modulation changes or the like (e.g., to orifice size, gas induction flow rate, pre-orifice size, accumulator volume or the like) are made according to one or more of operator or manufacturer preferences, algorithm based outputs, magnitude of deviation from the specified spray profile, type of deviation (or deviations) determined as part of the spray profile deviation or the like. In other examples, magnitude, priority or the like of the actuator control instructions are determined according to apportionment (e.g., weighting) between available spray modulating elements of respective modulating nozzle assemblies. For example, as shown in various permutations of spray profile deviations with modulating nozzle assemblies 110 having two or more spray modulating elements (e.g., columns 4 to 6 and 6′), a priority of control is provided with green (first hatching) having a higher priority relative to yellow (second hatching), and yellow having a higher priority than blue (no hatching in the arrow). For example, with the modulating nozzle assembly 110 shown in column 4, having the gas inductor 316 and the modulating spray tip 312, priority is optionally set in this permutation with the gas inductor 316 control priority greater than the modulating spray tip 312 control. Accordingly, one or both of priority of modulation, magnitude of modulation or initial and secondary modulation is prioritized for the gas inductor 316 first and the modulating spray tip 312 second.
In another example, in column 6′, having a modulating nozzle assembly 110 with pre-orifice control (e.g., the control valve 302 or pump actuated to control droplet and spray characteristics), the gas inductor 316, the modulating spray tip 312 and the modulating accumulator 314, the nozzle assembly controller of the sprayer nozzle assembly control system prioritizes the actuators. Injection interfaces 120 are an optional component of the nozzle assemblies 110. For instance, the pre-orifice control is prioritized higher than the gas inductor 316, the gas inductor 316 is prioritized higher than each of the modulating spray tip 312 and the modulating accumulator 314, and the spray tip and accumulator have equal, matched or identical priorities. Accordingly, one or both of priority of modulation, magnitude of modulation or initial, secondary and tertiary (or greater) modulation is prioritized for the pre-orifice, the gas inductor 316, and the accumulator 314 and spray tip 312, in that order.
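The weighted apportionment described above can be sketched as follows. The weight values and the apportion_adjustment helper are hypothetical, illustrating only how a single droplet-characteristic correction might be split across available modulating elements in priority order.

```python
def apportion_adjustment(total_adjustment, weights):
    """Split one droplet-characteristic correction across the available
    spray modulating elements in proportion to their priority weights
    (weights are hypothetical example values)."""
    total_w = sum(weights.values())
    return {name: total_adjustment * w / total_w
            for name, w in weights.items()}

# Hypothetical weights mirroring the column-6' ordering: pre-orifice
# highest, then gas inductor, with spray tip and accumulator equal.
shares = apportion_adjustment(1.0, {
    "pre_orifice": 4.0,
    "gas_inductor": 3.0,
    "modulating_spray_tip": 1.5,
    "modulating_accumulator": 1.5,
})
```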
Accordingly, the field map 510 has various prescriptions (specified flow rates, specified spray profiles including droplet characteristics, or the like), and the sprayer may include a plurality of product dispensers 107 with individually controlled injection interfaces 120. Injection interfaces 120 are an optional component of the nozzle assemblies 110. The system 501 includes sensors for observing spray 525 and nozzle assemblies 110 having one or more spray modulating elements.
The droplet characteristics of sprayed droplets provided by nozzle assemblies of an agricultural sprayer are determined by the present subject matter and the information is used to enhance sprayer performance including, but not limited to, improved coverage of crops targeted by the sprayer. An artificial intelligence (AI)-enabled framework may optionally be provided that is capable of processing images obtained of droplets, detecting and tracking all droplets appearing across image frames, and measuring the droplets' geometric and dynamic data, such as droplet size and velocities. The determined droplet characteristics are optionally used to control sprayer nozzles to provide for improved application of sprayed fluids to the targeted agricultural product including but not limited to, adherence of the fluid, breadth of coverage, increased droplet size to mitigate the effect of wind based spray dispersion or the like.
In various embodiments, the present system uses one or more sensors, such as a camera, infrared sensor or the like (collectively referred to as sensors or cameras), to obtain images of sprayed agricultural product including images of droplets from actual sprayed agricultural products in working environments including fields. The images are processed using a series of filters to isolate the droplets, in various embodiments. In examples, the system and method identifies the perimeter of the droplet and the center of gravity of the droplet, and uses the identification information to track droplets in successive image frames, and determine one or more of droplet velocity and direction (e.g., examples of droplet kinematics). In some embodiments, the system uses multiple size categories from fine to coarse to distinguish droplets.
According to various embodiments, the present system optionally applies artificial intelligence to provide for detection and tracking of droplets between image frames. The system applies a procedure to flip an image or change an angle of an image to augment a droplet dataset, in an embodiment. The algorithm is validated using a separate droplet dataset, in various examples, and the validated algorithm is tested using successive image frames to detect droplets using a testing data set to produce a desired accuracy level. The present system may detect droplets in successive frames at an accuracy level of approximately 90 percent (in other examples, accuracy levels are 60, 70, 80 percent or more). Various examples of the present system use a Kalman filter to track droplets among image frames. In some embodiments, the present system includes multiple stages of operation, including but not limited to prediction of droplet position, updating of the prediction of droplet position data, filtering of the data using an optimal or sub-optimal filter, and tracking of droplet position between successive image frames using the filtered data. The present system applies a frame label to droplets of interest in a current image frame to assist in droplet detection and tracking using AI, in various embodiments. Using the frame label or identification (ID), the present system can distinguish droplets that appear identical or nearly identical in multiple image frames, in some examples.
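The prediction and update stages described above can be sketched with a minimal scalar Kalman filter. The single-coordinate state, the scalar uncertainty, and the noise constants are simplifying assumptions; a practical tracker would filter the x and y positions (and possibly size) jointly with full covariance matrices.

```python
class DropletKalman1D:
    """Minimal constant-velocity Kalman filter for one coordinate of a
    tracked droplet (a sketch; constants are illustrative)."""

    def __init__(self, x0, v0=0.0, p=1.0, q=0.01, r=0.25):
        self.x, self.v = x0, v0   # state: position and velocity (pixels)
        self.p = p                # scalar position uncertainty (simplified)
        self.q, self.r = q, r     # process / measurement noise

    def predict(self, dt=1.0):
        """Prediction stage: advance position by the motion model."""
        self.x += self.v * dt
        self.p += self.q
        return self.x

    def update(self, z):
        """Update stage: blend the measured position z into the state."""
        k = self.p / (self.p + self.r)   # Kalman gain
        innovation = z - self.x
        self.x += k * innovation
        self.v += k * innovation         # crude velocity correction
        self.p *= (1.0 - k)
        return self.x

# Track a droplet drifting roughly 2 px per frame across four frames.
kf = DropletKalman1D(x0=0.0, v0=2.0)
for z in [2.1, 4.0, 5.9, 8.2]:   # measured positions in successive frames
    kf.predict(dt=1.0)
    kf.update(z)
```

When a measurement is missing in a frame, only predict() would run, which corresponds to the system's guess of the droplet's location in a successive frame as described below in the source text.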
After each stage in the droplet identification and tracking process, the present system refines data used and stored in one or more memories accessible by the system, in some examples. The system can scan the entire image frame or some portions of the image frame, including using regions of interest (RoI) identified by the system in which the probability of droplet location is relatively high compared to other regions of the image frame. The system can limit focus or search for droplets to the RoI, in various embodiments. In some embodiments, the present system applies noise reduction as an image processing function, such as using filters to remove objects that are less likely to be droplets from the image frame. In one embodiment, a Gaussian filter is applied to the image frame using the present system. The system draws a bounding box or border around each droplet of interest detected, in various examples, and inputs the bounding box data including coordinates, perimeter and area of the bounding box to a fully connected neural network to locate and track the droplet of interest. In some examples, the present system fits an ellipse to the detected droplet.
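The bounding-box step can be sketched as below. The pixel-list input and the returned fields are hypothetical stand-ins for whatever representation the detector actually produces before the coordinates, perimeter and area are passed to the neural network.

```python
def bounding_box(pixels):
    """Axis-aligned bounding box around a detected droplet's pixel
    coordinates, with the area and perimeter of the box that would be
    fed downstream (representation is illustrative)."""
    xs = [x for x, _ in pixels]
    ys = [y for _, y in pixels]
    x0, y0 = min(xs), min(ys)
    w = max(xs) - x0 + 1
    h = max(ys) - y0 + 1
    return {"x": x0, "y": y0, "w": w, "h": h,
            "area": w * h, "perimeter": 2 * (w + h)}

# A small diagonal cluster of droplet pixels.
box = bounding_box([(2, 3), (4, 5), (3, 4)])
```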
Various examples of the present systems and methods index and track droplets once identified, using equations provided to compare a difference in parameter sets that are extracted and stored for each droplet of interest. The difference in parameter sets for droplets is compared to a programmable threshold to determine a match of droplets and track the droplets across image frames, in various examples. This difference in parameter sets may be a unitless value, in some examples. In some embodiments, a linear distance between droplets is calculated and used for droplet identification and tracking. In various embodiments, a cost function is used that combines a parameter set difference and a linear distance using a weighted combination, such as an equally weighted combination, to enhance identification and tracking of droplets among image frames. Other characteristics (or numbers of characteristics) can be used without departing from the scope of the present subject matter. In some examples, a droplet can split into two droplets in a subsequent image frame, and the present system can identify and track the two droplets using the methods provided for single droplet tracking.
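An equally weighted combination of a unitless parameter-set difference and a linear distance, as described above, might look like the sketch below. The specific normalization, the parameter tuple, and the example coordinate values are hypothetical choices for illustration.

```python
import math

def match_cost(d1, d2, w_param=0.5, w_dist=0.5):
    """Weighted combination (equal weights by default) of a unitless
    parameter-set difference and the linear distance between droplet
    centers in consecutive frames. Normalization is illustrative."""
    param_diff = sum(abs(a - b) / (abs(a) + abs(b) + 1e-9)
                     for a, b in zip(d1["params"], d2["params"]))
    dist = math.hypot(d1["cx"] - d2["cx"], d1["cy"] - d2["cy"])
    return w_param * param_diff + w_dist * dist

# Hypothetical droplets: center coordinates plus a geometric parameter set
# (e.g., area and major-axis length).
a = {"cx": 10.0, "cy": 20.0, "params": (8.0, 5.0)}
b = {"cx": 12.0, "cy": 20.5, "params": (8.2, 5.1)}
cost = match_cost(a, b)
```

An identical droplet pair yields zero cost, so minimizing this value across candidates in the next frame selects the best match.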
The system can match droplets in image frames by generating a vector of position, geometric properties, and motion of identified droplets, and comparing vectors of identified droplets using a cost function, in various embodiments. The system can build the cost function and match droplets by minimizing a cost or difference between droplet data, in some embodiments. In various examples, each droplet detected has an opportunity to be matched in a programmable number of subsequent image frames; otherwise the droplet is discarded. If a droplet is detected close to a boundary of an image frame, the droplet has potential to be a new droplet, or to be removed, in various examples. Some embodiments of the present system use one or more Kalman filters to predict motion of the droplets. In some examples, if the droplet is not matched in successive frames using a cost function, the system will make a guess or prediction of the location of the droplet in a successive image frame. If no match of the droplet is obtained using the cost function and the Kalman filters, the system may remove the droplet data from a buffer or memory and stop tracking the droplet. In various examples, the system tracks pixels per distance (e.g., pixels per inch) of droplet displacement between image frames, and uses the known frame rate of the image sensor to compute velocity of a droplet.
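The velocity computation described above can be sketched as follows (an illustrative example; the function name and calibration parameters are hypothetical, not part of the present subject matter):

```python
def droplet_velocity(px_displacement, pixels_per_inch, fps):
    """Convert a droplet's pixel displacement between two consecutive
    image frames into a physical velocity, using the sensor's
    pixels-per-inch calibration and its known frame rate."""
    inches_moved = px_displacement / pixels_per_inch
    return inches_moved * fps  # inches per second
```

For instance, a droplet displaced 100 pixels between frames, with a calibration of 50 pixels per inch and a 30 frames-per-second sensor, moves 60 inches per second.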
In various examples, the present systems utilize a cost function and a predictive function for droplet matching and tracking. In one example, prediction of droplet motion can be combined with or built into the cost function. In various examples, the cost function uses a weighted combination of droplet characteristics to identify droplets of interest, and can include a predictive component for the identification calculation. In some examples, to track or match a droplet in a next image frame, droplet data (or characteristics) of the droplet are compared to all or some of the droplets in the next image frame, and the droplet producing the lowest cost in the comparison is deemed to be the matched droplet. A predictive function is applied in various examples to locate and match the droplet. In one example, a programmable threshold for the cost function is provided such that a droplet will not be identified and tracked if the cost function does not produce a result within the threshold. In this example, the system can search for the droplet in a predetermined number of subsequent image frames, and if the droplet is not matched within the predetermined number of frames, the droplet data will be dropped from the buffer. The system compares an identified droplet from a first frame of video to an array of droplets in a second frame, selects the droplet from the array that has the lowest cost function value to identify the droplet, and, once identified, counts pixels of displacement and, based on the displacement, produces a vector (using motion and direction of motion) used to identify and match the droplet.
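The threshold-gated, lowest-cost matching described above can be sketched as follows (a minimal illustration; the cost function and threshold value are placeholders for the weighted combination described herein):

```python
def match_droplet(droplet, candidates, cost_fn, threshold):
    """Return the candidate with the lowest cost against `droplet`,
    or None if no candidate's cost falls within the programmable
    threshold (in which case the droplet remains unmatched)."""
    best, best_cost = None, float("inf")
    for cand in candidates:
        c = cost_fn(droplet, cand)
        if c < best_cost:
            best, best_cost = cand, c
    return best if best_cost <= threshold else None
```

An unmatched droplet would then be retried in a predetermined number of subsequent frames before being dropped from the buffer.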
In various embodiments, the system labels droplets and uses the labels (or unique identifiers) to assist in training the AI component for droplet identification in one or more image frames. The system processes the droplets in a loop using an index or buffer of droplet data including the droplet label, in various examples.
As discussed herein, in one example a droplet characteristic assessment system 200 (see
According to various embodiments, the frame extractor module 204 is configured to extract frames from an input video (including successive still camera images) at a fixed frame rate as defined by a user. The droplet shape and size extraction module 206 is configured to extract geometric information about detected droplets in a current frame provided by the frame extractor module, in an example. In some examples, the droplet tracking module 208 is configured to track droplets by matching the droplets between frames based upon information obtained from a new frame which has been processed by the droplet shape and size extraction module 206. The data log module 210 is configured to store a list of droplets and information associated with the droplets, including one or more of: position in a current frame, position in a previous frame, shape and size of each droplet in the current frame, parameters of a dynamic model associated with each droplet, a numerical value that counts a number of times each droplet has been matched continuously with previous frames referred to as success value, a numerical value that counts a number of times each droplet has not been matched with previous frames referred to as loss value, and a unique identifier for each droplet.
In some examples, the droplet shape and size extraction module 206 is configured to output droplet shape information, and the droplet shape and size extraction module 206 includes a frame noise suppression module using a Gaussian filter, an object detection module configured to determine a detected droplet's confidence score and corresponding bounding box coordinates by executing a trained deep neural network model, a refinement module configured to select droplets having a confidence value above a user-defined threshold, a least-squares-based ellipse fitting module configured to provide information associated with a fitted ellipse for the selected droplets, and an extraction module configured to extract information by fitting an ellipse to detected droplets and transmit the information to the droplet tracking module.
In various examples, the droplet shape and size extraction module 206 includes a neural network configured to refine droplet image data. In these examples, the droplet shape and size extraction module 206 includes a feature extraction module including a deep neural network with convolution operations that learns to recognize important features in an image to allow the deep neural network to distinguish droplets, a region proposal network including a second neural network that receives an output from the feature extraction module and identifies and outputs proposed regions in a feature map that contain an object, a region of interest (RoI) pooling module configured to make all of the proposed regions a same size, and an inference network including connected layers that take the reshaped proposed regions as an input and output the droplets and their corresponding bounding boxes, and further outputs a confidence score for each of the detected droplets in the image, wherein the confidence score is a numerical value within a range of 0 to 1 that is configured to filter out detections having confidence scores below a user-defined threshold, which are discarded as not being droplets.
According to various embodiments, accuracy of the trained model is evaluated using images in a set Ω as defined in an equation:

accuracy=(1/n(Ω)) Σx∈Ω [TDx/(TDx+MDx+FDx)]
In this example, a total number of images in the set Ω is represented using n(Ω), MDx is a total number of droplets missed by the trained model for an image x when compared to its ground truth, FDx is a total number of incorrect detections made by the trained model for the image when compared to its ground truth, and TDx is a number of true detections, computed by subtracting the total number of incorrect detections for the image from the total number of droplets detected by the trained model for the image. The ground truth includes manually labeled data used for network training, including a bounding box around each droplet in the image.
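One plausible reading of this per-image metric, consistent with the description (accuracy of 1 on an exact match, with every missed and incorrect detection penalized), can be sketched as:

```python
def image_accuracy(detections, false_detections, missed):
    """Per-image accuracy: TD / (TD + MD + FD), where true detections
    TD = all detections minus incorrect detections."""
    td = detections - false_detections
    return td / (td + missed + false_detections)

def set_accuracy(per_image_stats):
    """Average the per-image accuracy over an evaluation set (the set
    of images referred to as Omega in the text)."""
    return sum(image_accuracy(*s) for s in per_image_stats) / len(per_image_stats)
```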
In some embodiments, the accuracy of the trained model is configured to be used to guide a training process of the trained model, and wherein the training process of the model is stopped when the accuracy is greater than or equal to 90%. The training process is not limited to real data of droplets, and the training process is configured to use generated images of droplets using generative models during the training of the neural network model used for droplet detection, in various examples. The droplet tracking module 208 can include a multi-step process comprising droplet matching and sharing of outputs with the data log module. The droplet matching can include computing a cost between new droplets and past droplets that are stored in the data log module, applying a Hungarian algorithm to determine a unique match between new droplets and past droplets that are stored in data log module 210, and providing refinement and grouping of droplets that are matched between past droplets and new droplets obtained by applying the Hungarian algorithm.
In various examples, the refinement and grouping of droplets that are matched between past droplets and new droplets includes, for each match between new droplets and past droplets: if the cost is below a user-defined threshold, accept each pair and update information for matched droplets based upon the information from the new droplet, and if the cost is above the user-defined threshold then: for unmatched droplets stored in the data log module 210: update information for unmatched droplets using a Kalman filter for predicting a position of the unmatched droplets in a next frame, and for unmatched droplets detected in a current frame, append each of the unmatched droplets detected in the current frame into the data log if a number of detected droplets in the current frame is greater than the number of droplets in the data log. In some examples, updating information for matched droplets includes copying current location information of a past droplet into past location information of the past droplet, including replacing the current location information of a past droplet with location information of a new droplet, updating shape and size of each droplet of the past droplet with the shape and size information of the new droplet, updating parameters of a dynamic model stored for the past droplet, increasing a first numerical value by one that counts a number of times the past droplet has been matched continuously in previous frames, and updating a second numerical value to zero that counts a number of times the past droplet has not been matched continuously in previous frames.
According to various examples, updating information for unmatched droplets includes the following steps for each droplet pair of a new droplet and a past droplet: copying current location information of the past droplet into past location information of the past droplet, and replacing the current location information with a Kalman filter prediction of the location of each droplet in the next frame, updating shape and size of each droplet of the past droplet with the shape and size information of the new droplet, updating a numerical value to zero that counts a number of times the past droplet has been matched continuously in previous frames, and updating a loss value with an increment of one. Some examples include appending information into the data log module associated with each droplet, including: position in the current frame is stored based upon the information extracted from the droplet shape and size extraction module, position in a previous frame is left empty, shape and size of each droplet in the current frame is stored based upon the information extracted from the droplet shape and size extraction module, parameters of a dynamic model associated with each droplet use the dynamic model, a first numerical value that counts a number of times each droplet has been matched continuously with previous frames referred to as a success value is set to zero, a second numerical value that counts a number of times each droplet has not been matched with previous frames referred to as a loss value is set to zero, and a unique identifier for each droplet is added.
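The data-log bookkeeping for matched and unmatched droplets described above can be sketched as follows (the dictionary keys are hypothetical names for the stored fields, not identifiers from the present subject matter):

```python
def mark_matched(rec, new):
    """Update a data-log record for a matched droplet: shift current
    position into past position, copy in the new detection's data,
    increment the success counter, and reset the loss counter."""
    rec["prev_pos"] = rec["pos"]
    rec["pos"] = new["pos"]
    rec["shape"] = new["shape"]
    rec["success"] += 1
    rec["loss"] = 0

def mark_unmatched(rec, predicted_pos):
    """Update an unmatched record: store the Kalman-predicted position,
    reset the success counter, and increment the loss counter."""
    rec["prev_pos"] = rec["pos"]
    rec["pos"] = predicted_pos
    rec["success"] = 0
    rec["loss"] += 1
```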
In some examples, tracking updates values stored in the data log module 210, including determining whether a droplet's current position is outside an image frame and, if so, removing the droplet and its associated characteristics from the data log. The system is configured to repetitively detect and track droplets for image frames extracted by a frame extractor, in some examples.
In various examples, the system further includes a nozzle controller, such as control module 500, configured to control function of the nozzle (or nozzle assembly 110) to implement feedback control of nozzle output based on the information associated with the droplet. At least a portion of the system is incorporated into an agricultural machine, in various embodiments.
In one example, the image is sharpened to increase the contrast level between the pixels of interest that represent droplets and the background pixels, and the image is altered using the Gaussian filter to suppress noise. This Gaussian filter is controlled to prevent the loss of information that may include droplet data, while noisy pixels are blurred, which significantly lowers noise contrast. A range of red, green and blue (RGB) values is used corresponding to a typical droplet's color, and pixels with RGB values outside this range are filtered out and pixels near the frame boundary are cropped off. The results are depicted in
The present system and method connects black pixels that are close to each other to form contours that define regions of interest (ROI), thus defining regions of the image frame most likely to represent droplets. For example,
The shapes of droplets can be irregular, thus it is not appropriate to define droplet size and position based directly on the contours of the droplets detected in the image frame. The present subject matter provides for outlining the droplet contours using a standard shape such as an ellipse that best fits the droplets' shapes. After all the noise has been filtered out, and ROI are defined like those depicted in
Prior computer vision techniques fail to detect and track droplets in a number of situations, such as when droplets have similar color and brightness to the background, which makes it difficult to detect and track the droplets since the pixels along the boundary have a low level of contrast. In addition, the droplets may have irregular shapes (not just circles and ellipses) and their shapes may change between the current image frame and a subsequent image frame. Because the droplets are more or less the same size and shape, it is difficult to differentiate a particular droplet from a neighboring droplet between two image frames, and the present subject matter uses an identification method to track each particular droplet. The droplets are dynamic with substantially different positions between image frames, and droplets can be of different sizes depending upon how the image is recorded. For example, if the image is taken from a sensor that is relatively close to the droplets, then droplets appear larger and have more distinct features. However, this is not the case when the video image is recorded from a relatively far distance, where all droplets appear smaller and appear to have a similar shape.
The previously mentioned challenges with droplet identification and tracking may occur ubiquitously in all droplet images because of the nature of crop spray systems and the camera/sensor positioning. Classical computer vision techniques as discussed above can be fine-tuned for a particular image capturing system with specific properties to achieve best performance, but the performance can drastically decrease as the image properties vary during the recording process. Thus, these classical computer vision techniques are not adaptive enough to be robust to uncertainties encountered in a crop spraying system.
The present subject matter overcomes these difficulties by providing for droplet detection and tracking using a data-driven method optionally using artificial intelligence (AI). Specifically, the present system may use AI to automate repetitive learning and discovery using image frames from high-speed videos of droplets, and the present system can perform frequent, high-volume, computerized measurement tasks reliably and without additional labor cost. The present approach may adapt through progressive learning algorithms in which AI identifies structures and regularities in droplet imagery data to classify and predict the droplet characteristics, and adapts when given new droplet data. The deep-learning capability of the present system can analyze larger and deeper droplet imagery data, resulting in a precise and robust measurement method. Overall, the present system provides for adaptability in the presence of uncertainties and condition variation.
The present system provides for dividing an input image frame into one or more sections, in various embodiments, and formulates and executes a classifier to determine if a droplet is present in each section. In one example, the image frame is divided into N equal sections when the scale is 1. In this example, when the scale is 2, the image frame is divided into 2N equal sections. The present system uses a classifier for each divided section, in various examples, and the time to execute the classifier in each section is t seconds. Each section may contain droplets of multiple sizes in the image frame, and to account for the multiple droplets the present system can use multiple scales 1, 2 . . . n which directly control a total number of sections where the classifier is executed. As n increases, the total time (N+2N+ . . . +nN)t it takes to classify droplets in those sections would also significantly increase. This approach depends upon the user to choose the n value and can be comparatively slower based upon the chosen n value.
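The section count and classification time described above can be expressed directly (a minimal sketch; note the closed form N·t·n(n+1)/2):

```python
def total_sections(N, n):
    """Total sections classified across scales 1..n, where scale k
    divides the frame into k*N equal sections:
    N + 2N + ... + nN = N*n*(n+1)/2."""
    return sum(k * N for k in range(1, n + 1))

def total_classify_time(N, n, t):
    """Total time (N + 2N + ... + nN) * t when the classifier takes
    t seconds per section."""
    return total_sections(N, n) * t
```

The quadratic growth in n is why this approach slows significantly as more scales are added.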
Some methods use a selective search algorithm to determine sections of the image frame, to decrease the number of sections compared to manually selecting sections as discussed above. These image frame sections (also known as proposals) are selected such that there is a high probability of having droplets in each selected image frame section. Instead of using all regions in an image, the system then limits the use of a classifier to only the selected sections. A limitation with this approach is that the object detection performance is directly dependent upon the ability of the selective search algorithm to detect regions containing droplets and the ability of the classifier to correctly detect droplets in those regions. In this situation, regardless of the effectiveness of the classifier, a droplet may not be detected if the selective search algorithm misses a region containing the droplet. As an example,
An advanced object detector may be used instead of using a selective search algorithm, in various examples. Two existing object detection methods include a Faster region-based convolutional neural network (Faster-RCNN) and a Single Shot Detector (SSD). In various embodiments, the present subject matter employs a Faster-RCNN object detector for increased accuracy of object detection.
Faster-RCNN 554 has a base network that performs feature extraction known as a backbone of the network, in various examples. The base network can be replaced by another backbone that fits better with the particular task. An example of a backbone is a residual network (ResNet), which is employed in the present subject matter as a template for the base network of the Faster-RCNN, as ResNet allows for training much deeper networks (with many more layers) which otherwise could be prone to a vanishing gradient. In various examples, an inference network 556 provides an output 558 using droplet probability and bounding boxes. In one example, the present subject matter implements a ResNet with 50 layers. Other numbers of layers may be used without departing from the scope of the present subject matter.
In various embodiments, each training image has a ground-truth label (bounding box). A bounding box includes height, width and center hg, wg, xg, and yg, and an anchor box includes a height, width and center ha, wa, xa, and ya. The present subject matter defines p* as a target probability of having an object (a droplet or something that appears standing out from the background) in the region covered by a bounding box described by t*=[tx*, ty*, tw*, th*], where tx*=(xg−xa)/wa, ty*=(yg−ya)/ha, tw*=log(wg/wa), th*=log(hg/ha). For each pair of a ground truth label and an anchor box, the present system computes a ratio of the area of intersection between the ground truth label and the anchor box over the total area covered by both the ground truth label and the anchor box. For any pair, if this ratio is greater than 0.7, the present system assigns p*=1, in an embodiment. Similarly, the present system assigns p*=−1 if the ratio is less than 0.3, in an embodiment. In all other cases, the present subject matter assigns p*=0, in various embodiments.
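The target encoding and the p* assignment above can be sketched as follows (box tuples are ordered (center x, center y, width, height); the function names are illustrative):

```python
import math

def encode_target(gt, anchor):
    """Regression targets t* = [tx*, ty*, tw*, th*] for a ground-truth
    box (xg, yg, wg, hg) relative to an anchor (xa, ya, wa, ha)."""
    xg, yg, wg, hg = gt
    xa, ya, wa, ha = anchor
    return [(xg - xa) / wa, (yg - ya) / ha,
            math.log(wg / wa), math.log(hg / ha)]

def label_from_overlap(ratio):
    """p* assignment from the intersection/total-area ratio, per the
    0.7 and 0.3 thresholds described in the text."""
    if ratio > 0.7:
        return 1
    if ratio < 0.3:
        return -1
    return 0
```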
For a given input feature map, the region proposal network of the present system produces a set of numbers pi and ti. The parameters of the neural network are tuned by the present system via an optimization scheme until the neural network produces pi and ti values that converge to the target values pi* and ti*. The optimization scheme aims to minimize a total region proposal loss, formulated (following the standard Faster-RCNN region proposal loss) as

L({pi},{ti})=Σi Lcls(pi,pi*)+λ Σi:pi*=1 Lreg(ti,ti*)

Lcls is the classification log loss, Lreg is the regression loss (applied only to anchors with pi*=1), and λ is a weight balancing the two terms.
The present subject matter performs Region-of-Interest (RoI) pooling, in various embodiments. From the above region proposal, numerous pi and ti are generated, where pi is the probability of having an object in the region covered by a bounding box described by ti. In various examples, not all regions are of the same size (height and width). RoI pooling is used to resize each region obtained from the region proposal to have a fixed size, in various embodiments. The outputs of the RoI pooling process are regions of the same size that may contain droplets or other objects, for example things in the background.
As shown in
With a target probability ui*=1 if region i contains a droplet indicated from the ground truth and ui*=0 if that region does not contain a droplet, v* is obtained from the ground truth. In various examples, Lcls is the classification log loss and Lreg is the regression loss.
In order to compute the performance of the system, the trained network was used on a set of evaluation images. Each evaluation image has a ground truth, in which every droplet in each evaluation image is manually labeled with a bounding box. Hence, the ground truth represents the best possible performance resulting from careful inspection by a human operator. Each image was passed through the trained model and the results were compared with the ground truth, storing the following information for each image: (1) a total number of droplets missed by the trained model (MD) for a given image when compared to its ground truth, (2) a total number of incorrect detections (FD) made by the trained model for a given image when compared to its ground truth, which can occur when the trained model detects a droplet but the ground truth does not consider it as a droplet, (3) a total number of detections by the trained model for the given image (DT), since not all detections contain droplets, for example FD consists of detections counted in DT that are not droplets when compared to the ground truth, (4) true detections (TD), computed by subtracting the total number of incorrect detections from the total number of detections for that image (TD=DT−FD), and (5) accuracy, computed as accuracy=TD/(TD+MD+FD).
With this set of metrics, the analysis penalizes every missed detection and every wrong detection. As described, the value of accuracy will be 1 if the droplet detector result exactly matches the ground truth. With multiple images, overall accuracy is the average accuracy over all the experimental images. The accuracy defined here will be used as a performance metric to quantify the ability of the model to detect droplets. With the performance metrics defined, the training process and droplet detection results can be analyzed.
The present subject matter performs training by collecting images containing droplets and creating a database with all the collected images. The present system implements image augmentation to increase the size of the database. In addition to increasing the size of training data, augmentation can yield better classification results. For every image, the system labels the regions and provides the ground truth. The system divides the database into three different sets: train, dev and test. Every image includes the ground truth for that image. The train dataset contains images for training the object detector. The Faster-RCNN model of the present system learns from images within the train dataset during the training process. The dev dataset is generally used to evaluate the performance of a model, in various examples. The model occasionally sees these images during the training process. The test dataset contains images which the model has never seen before. These images inside the test dataset are used to compute the accuracy of the trained model. In one example, the train dataset has 470 images, the test dataset has 44 images and the dev dataset has 47 images. The 470 images in the train dataset contain approximately 4907 droplets, in an example. Similarly, the dev dataset contains 630 droplets, while the test dataset contains 600 droplets, in an example.
In various embodiments, anchor boxes are a set of predefined bounding boxes and are used for object detection in a Faster-RCNN analysis. Anchor boxes can be bounding boxes where the network initially predicts the probability that a set of pixels are a droplet or background objects, in various examples. The present algorithm refines each anchor box as it learns the location of droplets from the ground truth. Based upon the correction made using the anchor box, the network reports the coordinates of the bounding box enclosing the droplet. Hence, an anchor box can be thought of as an initial guess for the bounding box.
In various examples, the size of an anchor box is defined in terms of an aspect ratio (width/height), r, a scale factor, s, and a base anchor height and width as: the height of an anchor box=(height of base anchor)*(s/√r) and the width of an anchor box=(width of base anchor)*(s√r), so that the resulting width-to-height ratio equals r. The value of 256 is chosen for both the height and the width of the base anchor, in an example. Using input training data, the present system was used to extract every bounding box for that data, and the result is plotted in
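Assuming the conventional parameterization in which the aspect ratio r = width/height is realized by scaling the base height by s/√r and the base width by s·√r, the anchor sizing can be sketched as:

```python
import math

def anchor_size(base, s, r):
    """Return (height, width) of an anchor built from a square base
    anchor of side `base`, scale factor s, and aspect ratio
    r = width/height; the returned width/height ratio equals r."""
    return base * s / math.sqrt(r), base * s * math.sqrt(r)
```

For a 256-pixel base anchor with s=1 and r=4, this gives a 128 by 512 anchor.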
To compute accuracy, the trained model is used to detect droplets for each image stored inside the test folder. For every detection made by the model, a confidence value (score) associated with the detection is calculated, in one example. Detections with a confidence level below a confidence threshold, which is set to 0.9 in an example to minimize FD, are discarded. The value of the confidence threshold can be varied in various embodiments. A high confidence threshold will lead to droplet detections with high confidence, but fewer droplets are detected and valid droplet detections may be discarded. In contrast, a low confidence threshold will lead to more droplets detected but also more false detections. The accuracy of the trained Faster-RCNN model when used to detect droplets with images the program has never seen is listed in the following table:
Accuracy computed for Faster-RCNN model for the test folder
The trained Faster-RCNN model produces an overall accuracy of approximately 90%. Additionally, the total number of missed detections, MD, is greater than the total number of false detections, FD, given the high value of the confidence threshold. As an example,
The deep-learning algorithm detects droplets in a given image frame, in various examples. The detection algorithm considers images as independent and neglects the information implied between images, namely the dynamics of each droplet governed by physical laws, and is used to measure droplet size. The present system further relates droplets between frames, inferring dynamics information of each droplet, labelling each droplet, and then monitoring motion of the droplet between successive images. The present tracking algorithm is used when measuring droplet velocities. The problem of relating droplets between frames depicted in
Various embodiments of the present system execute a correspondence matching module. In order to match a droplet in multiple image frames, each droplet is described using a vector (or a descriptor). Vectors can be compared in successive images in order to establish a similarity relationship with vectors, for droplet tracking. In one embodiment, a descriptor is described using the equation:
descriptor=[Cx,Cy,ρ,A,E,T]
In this embodiment, Cx and Cy are the x- and y-coordinates of a droplet center, ρ is the perimeter of a droplet, A is the area of a droplet appearing in an image, E is the eccentricity of a droplet, and T=(4πA)/ρ² is the thinness ratio. To extract descriptor information for each droplet as defined above, the present system executes the following steps: extracting each droplet detected by the trained model, and transforming rectangular boxes into best-fit ellipses; converting each of the extracted images to grayscale, where each pixel has a value between 0 (black) and 255 (white); applying the Sobel-Feldman operator to recreate droplet images emphasizing the boundaries between droplets and the background, where for example, the second column in
Applying the above-discussed method provides a set of descriptors, as defined above, associated with ellipses fitted to all droplets in an image frame as depicted in
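For a fitted ellipse with semi-axes a ≥ b, the geometric entries of the descriptor can be computed as follows (an illustrative sketch; Ramanujan's perimeter approximation is an assumption, since the exact perimeter computation is not specified herein):

```python
import math

def ellipse_descriptor(cx, cy, a, b):
    """Build [Cx, Cy, perimeter, area, eccentricity, thinness] for an
    ellipse centered at (cx, cy) with semi-axes a >= b > 0."""
    area = math.pi * a * b
    h = ((a - b) / (a + b)) ** 2
    # Ramanujan's approximation to the ellipse perimeter.
    perimeter = math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))
    eccentricity = math.sqrt(1 - (b / a) ** 2)
    thinness = 4 * math.pi * area / perimeter ** 2  # 1.0 for a circle
    return [cx, cy, perimeter, area, eccentricity, thinness]
```

For a circle (a = b), the eccentricity is 0 and the thinness ratio is exactly 1, as expected.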
In various embodiments, the present system matches a descriptor in a frame with the corresponding descriptor in the next frame, thereby tracking a droplet in successive image frames. Matching is achieved by developing a relationship between descriptors across consecutive frames. Given two descriptors p and q, the Bhattacharyya distance is computed as:

DB(p,q)=(1/4)ln((1/4)(σp²/σq²+σq²/σp²+2))+(1/4)((μp−μq)²/(σp²+σq²))
In this equation, σp is the variance of p descriptor, σq is the variance of q descriptor, μp is the mean of p descriptor and μq is the mean of q descriptor. In addition, the present system computes the Cartesian distance (DC) between two droplets centers whose locations in image are known, in an embodiment. In various embodiments, the system defines the cost value as a linear combination of DB(p, q) and DC(p, q):
cost(p,q)=0.5DB(p,q)+0.5DC(p,q)
In some examples, both DB and DC play a role in the matching decision as droplets appear more circular as they get smaller in the image. As a result, for relatively smaller droplets, it becomes more difficult to distinguish them by comparing descriptors. When descriptors are too similar to make unique distinction, various embodiments use the Cartesian distance to match droplets.
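The combined cost can be sketched as follows, using the standard Bhattacharyya distance between two univariate normal distributions (consistent with the means and variances defined above) and the Cartesian distance between droplet centers:

```python
import math

def bhattacharyya(mu_p, var_p, mu_q, var_q):
    """Bhattacharyya distance between two normal distributions
    summarizing descriptors p and q (standard univariate form)."""
    return (0.25 * math.log(0.25 * (var_p / var_q + var_q / var_p + 2))
            + 0.25 * (mu_p - mu_q) ** 2 / (var_p + var_q))

def cost(p_center, q_center, db):
    """Equally weighted cost(p, q) = 0.5*DB + 0.5*DC, where DC is the
    Cartesian distance between the two droplet centers."""
    dc = math.dist(p_center, q_center)
    return 0.5 * db + 0.5 * dc
```

Identical descriptors give DB = 0, so the Cartesian distance alone decides the match, which is the behavior described for small, near-circular droplets.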
In one embodiment where l is the number of droplets detected in a frame k, all droplet descriptors are in a set Rk={r1, r2, . . . , rl}. Various embodiments of the present system match the descriptors in set Rk to another set Rk+1={s1, s2, . . . , sm} containing m droplet descriptors from the next frame k+1. The system defines the cost matrix A with the (i, j) entry using the equation:
A[i,j]=cost(ri,sj), i=1,2, . . . ,l and j=1,2, . . . ,m
Each row of matrix A corresponds to a droplet descriptor from frame k, and each column corresponds to a droplet descriptor from frame k+1, in various embodiments. The system can use a Hungarian method (a combinatorial optimization algorithm commonly used to solve matching or assignment problems) to find a unique match for each row with a unique column, such that a match is concluded when the corresponding cost function is minimal. By providing a unique match for a droplet, the system prevents matching of a droplet in an image frame with more than one corresponding droplet image in a subsequent frame. This unique match comes at a cost: when l≠m, some droplets in frame k might not be present in frame k+1, or vice versa, and the algorithm returns false matches. To minimize these errors, in various embodiments the present system removes any match that implies a droplet is moving upward, because all droplets are sprayed downward under the nozzle pressure and the influence of gravity. In various embodiments, the system also removes any matches associated with a significant change in droplet location as compared to the remaining droplets in that frame, since droplets are moving with similar velocities and their displacement between consecutive frames should be similar. Thus, the present system minimizes errors in correspondence matching and effectively eliminates false matches caused by new droplets entering a video frame.
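The unique assignment and the downward-motion pruning can be sketched as follows (an exhaustive search stands in for the Hungarian algorithm here, which is adequate for small per-frame droplet counts; image y-coordinates are assumed to increase downward):

```python
from itertools import permutations

def unique_match(cost_matrix):
    """Return the column index assigned to each row that minimizes the
    total cost, with each column used at most once. Brute-force
    stand-in for the Hungarian algorithm; assumes rows <= columns."""
    rows, cols = len(cost_matrix), len(cost_matrix[0])
    best, best_cost = None, float("inf")
    for perm in permutations(range(cols), rows):
        c = sum(cost_matrix[i][perm[i]] for i in range(rows))
        if c < best_cost:
            best, best_cost = list(perm), c
    return best

def prune_upward(matches, centers_k, centers_k1):
    """Drop matches implying upward motion (y decreasing between frames),
    since droplets are sprayed downward under nozzle pressure and gravity."""
    return [(i, j) for i, j in matches
            if centers_k1[j][1] >= centers_k[i][1]]
```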
Another source of tracking errors is produced by new droplets that come into an image frame while others are leaving the frame. As a result, the correspondence matching algorithm discussed above (and shown in Algorithm 1 below) may leave some unmatched descriptors in both frame k and k+1. In one example, several droplets in Rk exit the frame k and thus do not appear in frame k+1 (Rk+1). The present subject matter removes these droplets and no longer tracks them in subsequent frames. In another example, several unmatched droplets in Rk still exist in frame k+1 but were not detected by the detection algorithm. The present subject matter runs a Kalman filter prediction and stores the predicted state of these droplets. The correspondence matching is then used to match these droplets in subsequent frames. In yet another example, some new droplets enter in Rk+1 and are not matched. The present system then assigns new IDs for the new droplets as they are detected for the first time. In a further example, some unmatched droplets in Rk+1 were present in the frame k but were not detected by the detection algorithm in frame k+1. The present system then runs a Kalman filter prediction and stores their predicted state. The correspondence matching matches these droplets in upcoming frames, in various embodiments.
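The four unmatched-descriptor cases above reduce to a small amount of per-frame bookkeeping. A minimal sketch, assuming a dict of tracks keyed by ID and a placeholder predict() standing in for the Kalman filter prediction:

```python
# Sketch of track bookkeeping: matched tracks adopt the new observation,
# unmatched tracks keep a predicted state, and unmatched detections get
# fresh IDs. The data layout and helper names are illustrative assumptions.
def update_tracks(tracks, detections, matches, next_id):
    """tracks: dict id -> descriptor; detections: descriptors from frame k+1;
    matches: (track_id, detection_index) pairs from correspondence matching."""
    matched_ids = {tid for tid, _ in matches}
    matched_dets = {j for _, j in matches}
    for tid, j in matches:
        tracks[tid] = detections[j]             # matched: adopt the new observation
    for tid in list(tracks):
        if tid not in matched_ids:
            tracks[tid] = predict(tracks[tid])  # missed detection: store predicted state
    for j, det in enumerate(detections):
        if j not in matched_dets:
            tracks[next_id] = det               # new droplet: assign a fresh ID
            next_id += 1
    return tracks, next_id

def predict(descriptor):
    """Placeholder for the Kalman filter prediction; droplets that have
    exited the frame would instead be removed from tracking."""
    return descriptor
```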
Thus, the Kalman filter enables state prediction for droplets that are not matched. To minimize errors and improve tracking performance, the present system formulates and implements a Kalman filter in addition to the correspondence matching as discussed above. The use of the Kalman filter is divided into a prediction phase and an update phase. In the prediction phase the Kalman filter predicts the future state of each detected droplet. Therefore, even if a droplet in a given frame k is not matched with any droplets in the next frame k+1, the present system can predict the state of the unmatched droplet, and then use the predicted state to match the droplet to a corresponding droplet in the next new frame k+2.
In one example, xk|k is a column vector containing the position and velocity of a droplet at frame k. In one example, the present system detects a droplet at frame k−1, such that xk−1|k−1 is known, but fails to detect this droplet at frame k, such that xk|k is not known. This unknown state is predicted via the following equation:

xk|k−1=Fxk−1|k−1+Buk+w
In this equation, w is the Gaussian process noise, F is the state-transition model matrix, and B is the control input matrix, both of which are defined as:
In this equation, Δt is the time interval between the two frames and uk is the gravitational acceleration. In one embodiment, the accuracy of the estimation is represented by an estimate covariance matrix Pk|k. In parallel with the state prediction, the present system predicts the estimate covariance matrix by the equation:
Pk|k−1=FPk−1|k−1Fᵀ+Q
In this equation, Q is the covariance of the process noise. This equation predicts the new position of the droplet based upon its previous state.
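The prediction phase can be sketched as follows. The document defines F and B, but the definitions are not reproduced in this text, so a standard constant-velocity model with gravity entering through B is assumed here, with an assumed state ordering [x, y, vx, vy]; the Δt value and SI gravity are likewise carried over as assumptions.

```python
# Sketch of the prediction phase: x_{k|k-1} = F x_{k-1|k-1} + B u_k (w has
# zero mean) and P_{k|k-1} = F P_{k-1|k-1} F^T + Q. F and B below are an
# assumed constant-velocity-plus-gravity form, not the document's exact matrices.
import numpy as np

dt = 0.0004          # time interval between frames (from the sensor fps)
g = 9.81             # gravitational acceleration u_k, assumed SI units

F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)           # state-transition model
B = np.array([0, 0.5 * dt**2, 0, dt], dtype=float)   # control input (gravity on y)

def kalman_predict(x, P, Q):
    """x: state estimate x_{k-1|k-1}; P: covariance P_{k-1|k-1}."""
    x_pred = F @ x + B * g          # predicted state x_{k|k-1}
    P_pred = F @ P @ F.T + Q        # predicted estimate covariance P_{k|k-1}
    return x_pred, P_pred
```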
In the update phase, the observation and probability information is incorporated into the state estimation to prepare for the next frame prediction with the following set of equations:
yk=zk−Hxk|k−1

Sk=HPk|k−1Hᵀ+R

Kk=Pk|k−1HᵀSk⁻¹

xk|k=xk|k−1+Kkyk

Pk|k=(I−KkH)Pk|k−1
In these equations, zk is the state measurement at frame k, and H is the observation model matrix defined as
This measurement transition matrix provides for tracking of measurements (droplet detections) in terms of the position but not the velocity of the droplet. In addition, yk is the prediction error (the pre-fit residual), R is the covariance matrix of the observation noise, Sk is the covariance of the pre-fit residual, and Kk is the Kalman gain.
After the Kalman filter is initialized, the inputs to the Kalman filter are xk−1|k−1 and Pk−1|k−1, while the outputs are xk|k and Pk|k. If the measurement zk is available, and hence the droplet is detected at frame k, then xk|k and Pk|k are computed by the above equations. If the measurement zk is not available because the droplet is not detected, then the system sets xk|k=xk|k−1 and Pk|k=Pk|k−1.
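The update phase above, including the pass-through when no measurement is available, can be sketched as follows; the position-only H and the state ordering [x, y, vx, vy] are assumptions consistent with the surrounding text.

```python
# Sketch of the update phase: innovation, innovation covariance, Kalman gain,
# state update, and covariance update, with the no-measurement pass-through.
import numpy as np

H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # observation model: position only

def kalman_update(x_pred, P_pred, z, R):
    """z: measured position z_k, or None if the droplet was not detected.
    Returns x_{k|k}, P_{k|k}."""
    if z is None:
        return x_pred, P_pred                    # x_{k|k}=x_{k|k-1}, P_{k|k}=P_{k|k-1}
    y = z - H @ x_pred                           # prediction error y_k
    S = H @ P_pred @ H.T + R                     # pre-fit residual covariance S_k
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain K_k
    x = x_pred + K @ y                           # updated state x_{k|k}
    P = (np.eye(4) - K @ H) @ P_pred             # updated covariance P_{k|k}
    return x, P
```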
In various embodiments of the Kalman filter implementation, the following parameters are used: the time interval between iterations is set to Δt=0.0004 seconds, which is computed from the total number of frames captured by the image sensor per second (fps); the initial covariance matrix P1|0 is a 4 by 4 zero matrix; R is set to 0.01I; and the noise covariance matrix Q is set as:
According to various embodiments, the present system provides a tracking framework for tracking droplets in successive image frames. As discussed above, the present system and method extracts the descriptor for each droplet and, using descriptor information, matches droplets among frames. The system can use a Kalman filter for state prediction and updates to mitigate the effect of missed and false droplet detections on droplet tracking. These steps provide a droplet tracking framework as shown in Algorithm 2 below. In this algorithm, the variables and the values they store include:
current_bbox: For every new frame, the system executes a trained model to perform droplet detection. The output from droplet detection includes bounding boxes (rectangles) for the detected droplets appearing in the image frame. The system stores the coordinates of two corners of the rectangle, top left and bottom right, to describe the rectangle. Thus, current_bbox is a two-dimensional array with four columns representing these four coordinates, and the number of rows equals the number of detected droplets in the given frame, in an embodiment.
current_descriptor: As discussed in the last section, the present system extracts a descriptor for each droplet for use in correspondence matching. For each row (each droplet) of current_bbox, the system executes a process such as illustrated in
current_tracker: This variable stores the information about the droplets being tracked by the present system, including: [y_min, x_min, y_max, x_max, p, A, E, T, xk|k, loss, tracking_id, success, flatten(Pk−1|k−1)], where the first four elements are the coordinates of the bounding box. tracking_id is the unique ID assigned to each droplet to count the total number of droplets. loss stores the number of times a particular droplet is not matched with a new droplet in a new frame as the present method moves forward in time; if the droplet is rematched with a droplet in a new frame, the value of loss is set back to zero. Thus, loss acts as a buffer when the correspondence matching fails: the present system does not immediately discard the droplet but predicts a new state of the droplet in subsequent frames as discussed above. success stores the number of times a droplet has been matched to a new droplet from a new frame. flatten(Pk−1|k−1) flattens the matrix Pk−1|k−1 into a one-dimensional row vector by appending each row to the end of the previous row. Thus, current_tracker contains the information needed to execute the Kalman filter prediction and update for all currently tracked droplets. Each row of current_tracker corresponds to a droplet, in an embodiment.
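One way to lay out a current_tracker row is sketched below; the field names follow the list above, while the concrete types (and the meanings of p, A, E, T, which are defined earlier in the document) are assumptions.

```python
# Illustrative layout of one current_tracker row as a dataclass; the real
# system stores these values as a row of a two-dimensional array.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class TrackedDroplet:
    y_min: float           # bounding box corners
    x_min: float
    y_max: float
    x_max: float
    p: float               # detection confidence (assumed meaning)
    A: float               # ellipse descriptors defined earlier in the document
    E: float
    T: float
    x_state: np.ndarray = field(default_factory=lambda: np.zeros(4))  # x_{k|k}
    loss: int = 0          # frames without a match (reset to 0 on rematch)
    tracking_id: int = -1  # unique ID, also used to count total droplets
    success: int = 0       # times matched to a droplet from a new frame
    P_flat: np.ndarray = field(default_factory=lambda: np.zeros(16))  # flatten(P_{k-1|k-1})
```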
The system and method of the present subject matter applies deep learning for droplet detection, detecting droplets and extracting their geometric characteristics. In various embodiments, the system uses least square fitting of an ellipse to each detected droplet, and geometric characteristics of detected droplets are extracted. In various embodiments, the object detector model of the present subject matter is trained until an accuracy of around 90% is achieved; thus, the ability to detect droplets appears to be limited by the training data available. The present subject matter further provides for tracking droplets among successive image frames. Smaller droplets in the frame (covering approximately 0.03% or less of the total image area) are relatively harder to track because such droplets are harder to detect. The present subject matter provides for user control over parameters that affect the performance of detection and tracking of the droplets, in various embodiments.
In alternative embodiments, the machine 1600 can operate as a standalone device or can be connected (e.g., networked) to other machines. In a networked deployment, the machine 1600 can operate in the capacity of a server machine, a client machine, or both in server-client network environments. In an example, the machine 1600 can act as a peer machine in a peer-to-peer (P2P) (or other distributed) network environment. The machine 1600 can be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), or other computer cluster configurations.
The machine (e.g., computer system) 1600 can include a hardware processor 1602 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 1604, a static memory (e.g., memory or storage for firmware, microcode, a basic input/output system (BIOS), unified extensible firmware interface (UEFI), etc.) 1606, and mass storage 1608 (e.g., hard drive, tape drive, flash storage, or other block devices) some or all of which can communicate with each other via an interlink (e.g., bus) 1630. The machine 1600 can further include a display unit 1610, an alphanumeric input device 1612 (e.g., a keyboard), and a user interface (UI) navigation device 1614 (e.g., a mouse). In an example, the display unit 1610, input device 1612 and UI navigation device 1614 can be a touch screen display. The machine 1600 can additionally include a storage device (e.g., drive unit) 1608, a network interface device 1620, and one or more sensors 1616, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor. The machine 1600 can include an output controller 1628, such as a serial (e.g., universal serial bus (USB)), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate or control one or more peripheral devices (e.g., a printer, card reader, etc.).
Registers of the processor 1602, the main memory 1604, the static memory 1606, or the mass storage 1608 can be, or include, a machine readable medium 1622 on which is stored one or more sets of data structures or instructions 1624 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions 1624 can also reside, completely or at least partially, within any of registers of the processor 1602, the main memory 1604, the static memory 1606, or the mass storage 1608 during execution thereof by the machine 1600. In an example, one or any combination of the hardware processor 1602, the main memory 1604, the static memory 1606, or the mass storage 1608 can constitute the machine readable media 1622. While the machine readable medium 1622 is illustrated as a single medium, the term “machine readable medium” can include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) configured to store the one or more instructions 1624.
The term “machine readable medium” can include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine 1600 and that cause the machine 1600 to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding or carrying data structures used by or associated with such instructions. Non-limiting machine-readable medium examples can include solid-state memories, optical media, magnetic media, and signals (e.g., radio frequency signals, other photon-based signals, sound signals, etc.). In an example, a non-transitory machine-readable medium comprises a machine-readable medium with a plurality of particles having invariant (e.g., rest) mass, and thus are compositions of matter. Accordingly, non-transitory machine-readable media are machine readable media that do not include transitory propagating signals. Specific examples of non-transitory machine-readable media can include non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
The instructions 1624 can be further transmitted or received over a communications network 1626 using a transmission medium via the network interface device 1620 utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communication networks can include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, IEEE 802.16 family of standards known as WiMAX®), IEEE 802.15.4 family of standards, peer-to-peer (P2P) networks, among others. In an example, the network interface device 1620 can include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network 1626. In an example, the network interface device 1620 can include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine 1600, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software. A transmission medium is a machine readable medium.
Example 1 is a system for detection and tracking of droplets sprayed from a nozzle, the system comprising: a sensor configured to observe droplets sprayed from the nozzle; a frame extractor module configured to obtain data from the sensor; a droplet shape and size extraction module configured to extract information from the data obtained by the frame extractor module; a droplet tracking module configured to track individual droplets using the information extracted by the droplet shape and size extraction module; and a data log module configured to store a list of droplets and information associated with the droplets.
In Example 2, the subject matter of Example 1 includes, wherein the frame extractor is configured to extract frames from an input video at a fixed frame rate as defined by a user.
In Example 3, the subject matter of Examples 1-2 includes, wherein the droplet shape and size extraction module is configured to extract geometric information about detected droplets in a current frame provided by the frame extractor module.
In Example 4, the subject matter of Examples 1-3 includes, wherein the droplet tracking module is configured to track droplets by matching the droplets between frames based upon information obtained from a new frame which has been processed by the droplet shape and size extraction module.
In Example 5, the subject matter of Examples 1-4 includes, wherein the data log module is configured to store a list of droplets and information associated with the droplets, including one or more of: position in a current frame, position in a previous frame, shape and size of each droplet in the current frame, parameters of a dynamic model associated with each droplet, a numerical value that counts a number of times each droplet has been matched continuously with previous frames referred to as success value, a numerical value that counts a number of times each droplet has not been matched with previous frames referred to as loss value, or a unique identifier for each droplet.
In Example 6, the subject matter of Example 3 includes, wherein the droplet shape and size extraction module is configured to output droplet shape information, the droplet shape and size extraction module including: a frame noise suppression module using a Gaussian filter; an object detection module configured to determine a detected droplet's confidence score and corresponding bounding box coordinates by executing a trained deep neural network model; a refinement module configured to select droplets having a confidence value above a user-defined threshold; a least square-based ellipse fitting module configured to provide information associated with a fitted ellipse for the selected droplets; and an extraction module configured to extract information by fitting an ellipse to detected droplets and transmit the information to the droplet tracking module.
In Example 7, the subject matter of Example 3 includes, wherein the droplet shape and size extraction module includes a neural network configured to refine droplet image data, the droplet shape and size extraction module including: a feature extraction module including a deep neural network with convolution operations that learns to recognize important features in an image to allow the deep neural network to distinguish droplets; a region proposal network including a second neural network that receives an output from the feature extraction module and identifies and outputs proposed regions in a feature map that contain an object; a region of interest (ROI) pooling module configured to make all of the proposed regions a same size; and an inference network including connected layers that take reshaped region proposal regions as an input and output the droplets and their corresponding bounding box, and further outputs a confidence score for each of the detected droplets in the image, wherein the confidence score is a numerical value within a range of 0 and 1 that is used to filter out detections having confidence scores below a user-defined threshold, which are discarded as not being droplets.
In Example 8, the subject matter of Example 6 includes, wherein accuracy of the trained model is evaluated using images in a set Ω as defined in an equation:
where a total number of images in the set Ω is represented using n(Ω), MDx is a total number of droplets missed by the trained model for an image when compared to its ground truth, FDx is a total number of incorrect detections made by the trained model for the image when compared to its ground truth, and TDx is a true detection count, computed by subtracting the total number of incorrect detections for the image from the total number of droplets detected by the trained model for the image.
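The accuracy equation itself is not reproduced in this text. One plausible form consistent with the variables described, averaging the per-image ratio TDx/(TDx+MDx+FDx) over the set Ω, is sketched below purely for illustration:

```python
# Illustrative sketch only: the actual equation is defined in the original
# document and may differ. Each image contributes TD_x / (TD_x + MD_x + FD_x),
# averaged over n(Ω) images, with TD_x = detected - FD_x per the claim text.
def model_accuracy(images):
    """images: list of (detected, missed, false_detections) tuples per image."""
    total = 0.0
    for detected, md, fd in images:
        td = detected - fd          # true detections
        total += td / (td + md + fd)
    return total / len(images)
```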
In Example 9, the subject matter of Example 8 includes, wherein the ground truth includes manually labeled data used for network training, including a bounding box around each droplet in the image.
In Example 10, the subject matter of Example 8 includes, wherein the accuracy of the trained model is configured to be used to guide a training process of the trained model, and wherein the training process of the model is stopped when the accuracy is greater than or equal to 90%.
In Example 11, the subject matter of Example 10 includes, wherein the training process is not limited to real data of droplets, and the training process is configured to use generated images of droplets using generative models during the training of the neural network model used for droplet detection.
In Example 12, the subject matter of Examples 1-11 includes, wherein the tracking module includes a multi-step process comprising droplet matching and sharing of outputs with the data log module.
In Example 13, the subject matter of Example 12 includes, wherein the droplet matching includes: computing a cost between new droplets and past droplets that are stored in the data log module; applying a Hungarian algorithm to determine a unique match between new droplets and past droplets that are stored in the data log module; and providing refinement and grouping of droplets that are matched between past droplets and new droplets obtained by applying the Hungarian algorithm.
In Example 14, the subject matter of Example 13 includes, wherein the refinement and grouping of droplets that are matched between past droplets and new droplets includes: for each match between new droplets and past droplets: if the cost is below a user-defined threshold, accept each pair and update information for matched droplets based upon the information from the new droplet; and if the cost is above the user-defined threshold then: for unmatched droplets stored in the data log module: update information for unmatched droplets using a Kalman filter for predicting a position of the unmatched droplets in a next frame; and for unmatched droplets detected in a current frame: append each of the unmatched droplets detected in the current frame into the data log if a number of detected droplets in the current frame is greater than the number of droplets in the data log.
In Example 15, the subject matter of Example 14 includes, wherein updating information for matched droplets includes: copy current location information of a past droplet into past location information of the past droplet, including replacing the current location information of a past droplet with location information of a new droplet; update shape and size of each droplet of the past droplet with the shape and size information of the new droplet; update parameters of a dynamic model stored for the past droplet; increase a first numerical value by one that counts a number of times the past droplet has been matched continuously in previous frames; and update a second numerical value to zero that counts a number of times the past droplet has not been matched continuously in previous frames.
In Example 16, the subject matter of Example 14 includes, wherein updating information for unmatched droplets includes the following steps for each droplet pair of a new droplet and a past droplet: copy current location information of the past droplet into past location information of the past droplet, and replace the current location information with a Kalman filter prediction location of each droplet in the next frame; update shape and size of each droplet of the past droplet with the shape and size information of the new droplet; update a numerical value to zero that counts a number of times the past droplet has been matched continuously in previous frames; and update a loss value with an increment of one.
In Example 17, the subject matter of Example 14 includes, wherein the appending includes appending information into the data log module associated with each droplet, including: position in the current frame is stored based upon the information extracted from droplet shape and size extraction module; position in a previous frame is left empty; shape and size of each droplet in the current frame is stored based upon the information extracted from droplet shape and size extraction module; parameters of a dynamic model associated with each droplet use the dynamic model; a first numerical value that counts a number of times each droplet has been matched continuously with previous frames referred to as success value is set to zero; a second numerical value that counts a number of times each droplet has not been matched with previous frames referred to as loss value is set to zero; and a unique identifier for each droplet is added.
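The refinement and update logic of Examples 14-17 can be sketched as follows; the dict layout, field names, threshold, and kalman_predict callable are illustrative assumptions rather than the claimed implementation.

```python
# Sketch of the refinement step: accept matches whose cost is under the
# threshold, Kalman-predict droplets left unmatched in the data log, and
# append genuinely new detections with fresh IDs.
def refine_matches(data_log, detections, matches, costs, threshold,
                   kalman_predict, next_id):
    accepted = [(t, j) for t, j in matches if costs[(t, j)] <= threshold]
    matched_ids = {t for t, _ in accepted}
    matched_dets = {j for _, j in accepted}
    n_log = len(data_log)
    for t, j in accepted:                     # matched droplets (Example 15)
        e = data_log[t]
        e["prev_pos"], e["pos"] = e["pos"], detections[j]["pos"]
        e["shape"] = detections[j]["shape"]
        e["success"] += 1
        e["loss"] = 0                         # rematch resets the loss buffer
    for t, e in data_log.items():             # unmatched in the log (Example 16)
        if t not in matched_ids:
            e["prev_pos"], e["pos"] = e["pos"], kalman_predict(e)
            e["success"] = 0
            e["loss"] += 1
    for j, det in enumerate(detections):      # new droplets (Example 17)
        if j not in matched_dets and len(detections) > n_log:
            data_log[next_id] = {"pos": det["pos"], "prev_pos": None,
                                 "shape": det["shape"], "success": 0, "loss": 0}
            next_id += 1
    return data_log, next_id
```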
In Example 18, the subject matter of Example 9 includes, wherein droplet tracking updates values stored in the data log module, including determining whether a droplet's current position is outside an image frame, and if so, removing the droplet and its associated characteristics from the data log.
In Example 19, the subject matter of Examples 1-18 includes, wherein the system is configured to repetitively detect and track droplets for image frames extracted by a frame extractor.
In Example 20, the subject matter of Examples 1-19 includes, further comprising a nozzle controller configured to control function of the nozzle to implement feedback control of nozzle output based on the information associated with the droplets.
In Example 21, the subject matter of Examples 1-20 includes, wherein at least a portion of the system is incorporated into an agricultural machine.
Example 22 is a method for detection and tracking of droplets sprayed from a nozzle, the method comprising: observing droplets sprayed from the nozzle using one or more sensors; obtaining data from the one or more sensors by examining a frame from the one or more sensors; extracting information from the data obtained from the one or more sensors regarding a droplet shape and size; tracking individual droplets using the information extracted regarding the droplet shape and size; and storing a list of droplets and information associated with the droplets in a data log.
In Example 23, the subject matter of Example 22 includes, further comprising repetitively detecting and tracking droplets from image frames.
In Example 24, the subject matter of Examples 22-23 includes, further comprising tracking droplets by matching the droplets between frames based upon information obtained from a new frame.
Example 25 is at least one machine-readable medium including instructions that, when executed by processing circuitry, cause the processing circuitry to perform operations to implement any of Examples 1-24.
Example 26 is an apparatus comprising means to implement any of Examples 1-24.
Example 27 is a system to implement any of Examples 1-24.
Example 28 is a method to implement any of Examples 1-24.
Each of the non-limiting aspects or examples described herein may stand on its own or may be combined in various permutations or combinations with one or more of the other examples.
The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments that can be practiced. These embodiments are also referred to herein as “examples.” Such examples can include elements in addition to those shown or described. However, the present inventors also contemplate examples in which only those elements shown or described are provided. Moreover, the present inventors also contemplate examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof), or with respect to other examples (or one or more aspects thereof) shown or described herein.
All publications, patents, and patent documents referred to in this document are incorporated by reference herein in their entirety, as though individually incorporated by reference. In the event of inconsistent usages between this document and those documents so incorporated by reference, the usage in the incorporated reference(s) should be considered supplementary to that of this document; for irreconcilable inconsistencies, the usage in this document controls.
In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.” In this document, the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Also, in the following claims, the terms “including” and “comprising” are open-ended, that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim are still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms “first,” “second,” and “third,” etc. are used merely as labels, and are not intended to impose numerical requirements on their objects.
The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) can be used in combination with each other. Other embodiments can be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is to allow the reader to quickly ascertain the nature of the technical disclosure and is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features can be grouped together to streamline the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter can lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. The scope of the embodiments should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
This patent application claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Patent Application 63/278,773, filed Nov. 12, 2021, and also claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Patent Application 63/314,042, filed Feb. 25, 2022, the disclosures of which are hereby incorporated by reference herein in their entirety. This patent application is related to U.S. Provisional Application No. 63/217,755 filed Jul. 1, 2021 and entitled SPRAYER NOZZLE MONITOR AND CONTROL SYSTEM AND METHODS FOR SAME, and is also related to U.S. Provisional Application No. 63/254,466 filed Oct. 11, 2021 and entitled SPRAYER NOZZLE MONITOR AND CONTROL SYSTEM AND METHODS FOR SAME, which are hereby incorporated herein by reference in their entirety.
Number | Date | Country
---|---|---
63278773 | Nov 2021 | US
63314042 | Feb 2022 | US