The present disclosure generally relates to agricultural vehicles and, more particularly, to systems and methods for identifying objects, such as crop rows and/or soil ridges, present within a field across which an agricultural vehicle is traveling.
Agricultural sprayers apply an agricultural substance (e.g., a pesticide) onto crops as the sprayer is traveling across a field. To facilitate such travel, sprayers are configured as self-propelled vehicles or implements towed behind an agricultural tractor or other suitable agricultural vehicle. A typical sprayer includes one or more boom assemblies on which a plurality of spaced apart nozzles is mounted. Each nozzle is configured to dispense or otherwise spray the agricultural substance onto underlying crops and/or weeds.
Typically, a sprayer includes an imaging device that captures data for use in guiding the sprayer across the field during the spraying operation. For example, in certain instances, the imaging device may correspond to a camera. As such, the camera may capture images for use in guiding the sprayer across the field. Moreover, these captured images may be displayed to the operator of the sprayer to allow the operator to visualize the objects (e.g., crop rows, soil ridges, obstacles, and the like) present within the field. However, when the imaging device corresponds to a transceiver-based sensor, such as a light detection and ranging (LIDAR) sensor, the data set captured by such sensor may be too large to transmit via the communicative links/protocols typically used by agricultural sprayers (e.g., CANBUS).
Accordingly, an improved system and method for identifying objects present within a field across which an agricultural vehicle is traveling would be welcomed in the technology. Specifically, a system and method for identifying objects present within a field across which an agricultural vehicle is traveling that allows data captured by a transceiver-based sensor to be analyzed and subsequently displayed to the operator would be welcomed in the technology.
Aspects and advantages of the technology will be set forth in part in the following description, or may be obvious from the description, or may be learned through practice of the technology.
In one aspect, the present subject matter is directed to a system for identifying objects present within a field across which an agricultural vehicle is traveling. The system includes a transceiver-based sensor configured to capture point cloud data associated with a portion of the field present within a field of view of the transceiver-based sensor as the agricultural vehicle travels across the field. Additionally, the system includes a display device and a controller communicatively coupled to the transceiver-based sensor and the display device. The controller, in turn, includes a processor and associated memory, with the memory storing instructions that, when implemented by the processor, configure the controller to analyze the captured point cloud data to create a sparse point cloud, the sparse point cloud identifying at least one of a crop row or a soil ridge located within the portion of the field present within the field of view of the transceiver-based sensor. Furthermore, the controller is configured to initiate display of an image associated with the sparse point cloud on the display device.
In another aspect, the present subject matter is directed to a method for identifying objects present within a field across which an agricultural vehicle is traveling. The method includes controlling, with one or more computing devices, an operation of the agricultural vehicle such that the agricultural vehicle travels across the field to perform an agricultural operation on the field. Additionally, the method includes receiving, with the one or more computing devices, captured point cloud data associated with a portion of the field as the agricultural vehicle travels across the field. Furthermore, the method includes analyzing, with the one or more computing devices, the captured point cloud data to create a sparse point cloud, the sparse point cloud identifying at least one of a crop row or a soil ridge present within the portion of the field. Moreover, the method includes initiating, with the one or more computing devices, display of an image associated with the sparse point cloud on a display device.
These and other features, aspects and advantages of the present technology will become better understood with reference to the following description and appended claims. The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the technology and, together with the description, serve to explain the principles of the technology.
A full and enabling disclosure of the present technology, including the best mode thereof, directed to one of ordinary skill in the art, is set forth in the specification, which makes reference to the appended figures, in which:
Repeat use of reference characters in the present specification and drawings is intended to represent the same or analogous features or elements of the present technology.
Reference now will be made in detail to embodiments of the invention, one or more examples of which are illustrated in the drawings. Each example is provided by way of explanation of the invention, not limitation of the invention. In fact, it will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the scope or spirit of the invention. For instance, features illustrated or described as part of one embodiment can be used with another embodiment to yield a still further embodiment. Thus, it is intended that the present invention covers such modifications and variations as come within the scope of the appended claims and their equivalents.
In general, the present subject matter is directed to a system and a method for identifying objects present within a field across which an agricultural vehicle is traveling. Specifically, in several embodiments, a controller of the disclosed system may be configured to control the operation of an agricultural vehicle (e.g., a sprayer) such that the vehicle travels across the field to perform an agricultural operation (e.g., a spraying operation) thereon. As the vehicle travels across the field, the controller may be configured to receive point cloud data (e.g., three-dimensional point cloud data) captured by one or more transceiver-based sensors (e.g., light detection and ranging (LIDAR) sensors) installed on the vehicle or an associated implement. The captured point cloud data may, in turn, be indicative of one or more objects present within, and/or characteristics of, the portion of the field present within the field(s) of view of the transceiver-based sensor(s).
In accordance with aspects of the present subject matter, the controller may be configured to initiate display of one or more images on a display device of the vehicle based on the captured point cloud data. More specifically, the point cloud data captured by the transceiver-based sensor(s) may be too large to transmit over the communicative links/protocols (e.g., the CANBUS) used by the sprayer. In this respect, the controller may be configured to analyze the captured point cloud data to create a sparse point cloud identifying one or more crop rows and/or soil ridges present within the field of view(s) of the transceiver-based sensor(s). The sparse point cloud may, in turn, be a simplified data set of the captured point cloud that can be transmitted over the sprayer's communicative links. As such, in one embodiment, the sparse point cloud may be a two-dimensional representation of the three-dimensional captured point cloud with associated metadata. In such an embodiment, the metadata may indicate the presence of crop rows and/or soil ridges within the field of view(s) of the transceiver-based sensor(s) and identify a characteristic(s) of such crop rows and/or soil ridges. Thereafter, the controller may be configured to initiate display of one or more images depicting or otherwise associated with the sparse point cloud. For example, when the sparse point cloud identifies the presence of a crop row, an image of a crop row stored within the controller's memory may be displayed on the display device.
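One way to picture the sparse point cloud described above is as a compact, fixed-size record per data point: two-dimensional coordinates plus metadata. The sketch below is purely illustrative — the field names, types, and the confidence field are assumptions, not structures taken from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class SparsePoint:
    """One entry of a hypothetical sparse point cloud: 2-D position plus metadata."""
    x: float             # lateral position relative to the vehicle (m)
    y: float             # forward position relative to the vehicle (m)
    object_type: str     # e.g., "crop_row" or "soil_ridge"
    height_m: float      # a characteristic of the identified object
    confidence: float    # certainty of the identification (0..1)

# Example: a crop row detected 4 m ahead and 0.5 m to the left of the vehicle
p = SparsePoint(x=-0.5, y=4.0, object_type="crop_row", height_m=0.3, confidence=0.9)
```

Because each record is only a handful of bytes, a full sparse cloud of a few hundred such points remains small enough for a low-bandwidth vehicle bus.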
Referring now to the drawings,
In the illustrated embodiment, the agricultural vehicle 10 is configured as a self-propelled agricultural sprayer. However, in alternative embodiments, the agricultural vehicle 10 may be configured as any other suitable agricultural vehicle that travels across a field relative to one or more crop rows or soil ridges within the field. For example, in some embodiments, the agricultural vehicle 10 may be configured as an agricultural tractor (with or without an associated agricultural implement, such as a towable sprayer, a seed-planting implement, or a tillage implement), an agricultural harvester, and/or the like.
As shown in
Additionally, the vehicle 10 may include a boom assembly 24 mounted on the frame 12. As shown, in one embodiment, the boom assembly 24 may include a center boom section 26 and a pair of wing boom sections 28, 30 extending outwardly from the center boom section 26 along a lateral direction 32. The lateral direction 32, in turn, extends perpendicular to the direction of travel 18. In general, a plurality of nozzles (not shown) mounted on the boom assembly 24 may be configured to dispense the agricultural fluid stored in the tank 22 onto the underlying plants and/or soil. However, in alternative embodiments, the boom assembly 24 may include any other suitable number and/or configuration of boom sections.
Referring particularly to
It should be further appreciated that the configuration of the vehicle 10 described above and shown in
In accordance with aspects of the present subject matter, one or more transceiver-based sensors 102 may be installed on the vehicle 10 and/or an associated implement (not shown). In general, the transceiver-based sensor(s) 102 may be configured to capture point cloud data depicting one or more crop rows and/or soil ridges present within an associated field(s) of view (indicated by dashed lines 104) as the vehicle 10 travels across the field to perform an operation (e.g., a spraying operation) thereon. As will be described below, a controller may be configured to analyze the captured point cloud data to identify the crop row(s) and/or soil ridge(s) present within the field(s) of view 104 of the sensor(s) 102.
The transceiver-based sensor(s) 102 may generally correspond to any suitable sensing device(s) configured to emit output signals for reflection off objects (e.g., the crop rows and/or soil ridges) within an associated field of view 104 and receive or sense the return signals. For example, in several embodiments, each transceiver-based sensor 102 may correspond to a light detection and ranging (LIDAR) sensor configured to emit light/laser output signals for reflection off the objects present within its field of view 104. In such an embodiment, each transceiver-based sensor 102 may receive the reflected return signals and generate point cloud data based on the received return signal(s). The point cloud data may, in turn, include a plurality of data points, with each point indicative of the distance between the sensor 102 and the object off which one of the return signals is reflected. However, in alternative embodiments, the transceiver-based sensor(s) 102 may correspond to a radio detection and ranging (RADAR) sensor(s), an ultrasonic sensor(s), or any other suitable type of transceiver-based sensor(s).
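The distance encoded in each LIDAR data point follows from the round-trip time of the emitted light signal: the distance is half the round-trip path, d = c·t/2. The conversion can be sketched as follows (the 40 ns return time is a made-up example value, not from the disclosure):

```python
C = 299_792_458.0  # speed of light in a vacuum (m/s)

def tof_to_distance(round_trip_s: float) -> float:
    """Distance to the reflecting object: half the round-trip path length."""
    return C * round_trip_s / 2.0

# A return signal received 40 ns after emission corresponds to roughly 6 m
d = tof_to_distance(40e-9)
```

Combining each such distance with the known emission angle of the output signal yields the three-dimensional coordinates of one point in the captured cloud.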
The transceiver-based sensor(s) 102 may be installed at any suitable location(s) that allow the transceiver-based sensor(s) 102 to capture point cloud data depicting one or more crop rows and/or soil ridges within the field. For example, in the illustrated embodiment, a transceiver-based sensor 102 is mounted on the roof of the cab 20. In such an embodiment, the transceiver-based sensor 102 has a field of view 104 directed at a portion of the field in front of the vehicle 10 relative to the direction of travel 18. As such, the transceiver-based sensor 102 is able to capture point cloud data depicting the one or more crop rows or soil ridges positioned in front of the vehicle 10. However, in alternative embodiments, the transceiver-based sensor(s) 102 may be installed at any other suitable location(s), such as on the boom assembly 24. Additionally, any other suitable number of transceiver-based sensors 102 may be installed on the vehicle 10 or an associated implement (not shown), such as two or more transceiver-based sensors 102.
Referring now to
As shown in
In addition, the controller 106 may also include various other suitable components, such as a communications circuit or module, a network interface, one or more input/output channels, a data/control bus and/or the like, to allow controller 106 to be communicatively coupled to any of the various other system components described herein (e.g., the engine 34, the transmission 36, and/or the transceiver-based sensor(s) 102). For instance, as shown in
The controller 106 may correspond to an existing controller(s) of the vehicle 10, itself, or the controller 106 may correspond to a separate processing device. For instance, in one embodiment, the controller 106 may form all or part of a separate plug-in module that may be installed in association with the vehicle 10 to allow for the disclosed systems to be implemented without requiring additional software to be uploaded onto existing control devices of the vehicle 10.
Moreover, the functions of the controller 106 may be performed by a single processor-based device or may be distributed across any number of processor-based devices, in which instance such devices may be considered to form part of the controller 106. For instance, the functions of the controller 106 may be distributed across multiple application-specific controllers, such as an engine controller, a transmission controller, an implement controller, and/or the like.
Furthermore, the system 100 may also include a user interface 114. More specifically, as will be described below, the user interface 114 may be configured to display images to the operator of the vehicle 10 associated with the crop rows and/or the soil ridges identified by analyzing the point cloud data captured by the transceiver-based sensor(s) 102. As such, the user interface 114 may include one or more display screens or display devices 116 (e.g., an LCD screen(s)) configured to display the images. In this respect, the user interface 114 may be communicatively coupled to the controller 106 via the communicative link 112 to permit the data associated with the images to be transmitted from the controller 106 to the user interface 114. In some embodiments, the user interface 114 may also include other feedback devices (not shown), such as speakers, warning lights, and/or the like, configured to provide additional feedback from the controller 106 to the operator. In addition, the user interface 114 may include one or more input devices (not shown), such as touchscreens, keypads, touchpads, knobs, buttons, sliders, switches, mice, microphones, and/or the like, which are configured to receive user inputs from the operator. In one embodiment, the user interface 114 may be mounted or otherwise positioned within the operator's cab 20 of the vehicle 10. However, in alternative embodiments, the user interface 114 may be mounted at any other suitable location.
In several embodiments, the controller 106 may be configured to control the operation of the agricultural vehicle 10 such that the vehicle 10 travels across the field to perform an agricultural operation on the field. Specifically, in one embodiment, the controller 106 may be configured to transmit control signals to one or more components (e.g., the engine 34 and/or the transmission 36) of the vehicle 10 (e.g., via the communicative link 112). The control signals may, in turn, instruct the component(s) of the vehicle 10 to operate such that the vehicle 10 travels across the field in the direction of travel 18 to perform an agricultural operation (e.g., a spraying operation) on the field.
Additionally, the controller 106 may be configured to create a sparse point cloud identifying one or more crop rows and/or soil ridges present within the field as the vehicle 10 travels across the field. More specifically, as described above, one or more transceiver-based sensors 102 (e.g., a LIDAR sensor(s)) may be supported or installed on the vehicle 10. Each transceiver-based sensor(s) 102 may, in turn, capture point cloud data associated with a portion of the field present within its field of view 104. For example, in one embodiment, the captured point cloud data may be three-dimensional data (e.g., each data point may have X, Y, and Z coordinates). In this respect, as the vehicle 10 travels across the field to perform the agricultural operation thereon, the controller 106 may be configured to receive the captured point cloud data from the transceiver-based sensor(s) 102 (e.g., via the communicative link 112). The controller 106 may be configured to process/analyze the received point cloud data to create the sparse point cloud identifying one or more crop rows and/or soil ridges present within the field(s) of view 104 of the transceiver-based sensor(s) 102. The sparse point cloud may, in turn, be a simplified data set or version of the captured point cloud that can be transmitted over the sprayer's communicative links. As such, in one embodiment, the sparse point cloud may be a two-dimensional representation of the three-dimensional captured point cloud (e.g., each data point may only have X and Y coordinates and associated metadata as described below). The controller 106 may be configured to use any suitable point cloud data or visual data processing techniques to create the sparse point cloud based on the received point cloud data.
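One plausible way to collapse a three-dimensional cloud into the two-dimensional sparse form described above is to bin points into an XY grid and keep a single representative per occupied cell, carrying a height statistic as metadata. This is a hedged sketch of one such reduction — the 0.25 m cell size and the choice of maximum height as the retained statistic are assumptions, not taken from the disclosure:

```python
from collections import defaultdict

def sparsify(points, cell=0.25):
    """Collapse a 3-D point cloud [(x, y, z), ...] into a 2-D sparse cloud.

    Points falling in the same XY grid cell merge into one output entry;
    the maximum z seen in the cell is kept as metadata.
    """
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(round(x / cell), round(y / cell))].append(z)
    return [
        {"x": ix * cell, "y": iy * cell, "max_z": max(zs)}
        for (ix, iy), zs in cells.items()
    ]

# Three raw points; the first two share a cell, so two sparse points remain
cloud = [(0.0, 1.0, 0.05), (0.02, 1.01, 0.31), (2.0, 1.0, 0.02)]
sparse = sparsify(cloud)
```

In this toy example a three-point cloud reduces to two entries; on a real cloud of tens of thousands of returns the reduction factor is what makes bus transmission feasible.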
Moreover, in several embodiments, when analyzing the captured point cloud to create the sparse point cloud, the controller 106 may be configured to determine additional information or metadata associated with the crop row(s) and/or the soil ridge(s) depicted in the captured point cloud. Specifically, in one embodiment, the controller 106 may be configured to process/analyze the received point cloud data to identify crop row(s) and/or the soil ridge(s) present within the field(s) of view of the transceiver-based sensor(s) 102 and determine the position of the identified crop row(s) and/or the soil ridge(s) relative to the vehicle 10. In some embodiments, the controller 106 may be configured to process/analyze the received point cloud data to determine one or more characteristics or parameters of the identified crop row(s) and/or the soil ridge(s). For example, when a crop row(s) is identified, the controller 106 may be configured to determine the height, volume, and/or canopy coverage of such row(s) and/or the distance/spacing between pairs of crop row(s). Similarly, when a soil ridge(s) is identified, the controller 106 may be configured to determine the height, width, residue coverage, and/or weed coverage of such soil ridge(s). In this respect, each data point of the sparse point cloud may include two-dimensional coordinates (e.g., X and Y coordinates) and associated metadata (e.g., whether the data point corresponds to a crop row or a soil ridge and one or more characteristics of the crop row/soil ridge, such as its height). However, in alternative embodiments, the controller 106 may be configured to determine any other suitable parameter(s)/characteristic(s) associated with the identified crop row(s) and/or the soil ridge(s) when creating the sparse point cloud.
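The height and spacing characteristics mentioned above could, for instance, be derived from simple geometry once the points belonging to each crop row have been grouped. The sketch below assumes clustering has already been done and shows only the per-row math; the flat ground plane at z = 0 and the example values are illustrative assumptions:

```python
def row_height(zs, ground_z=0.0):
    """Row height: top of the canopy above an assumed flat ground plane."""
    return max(zs) - ground_z

def row_spacing(row_centroids_x):
    """Lateral distances between adjacent crop rows, from their centroids."""
    xs = sorted(row_centroids_x)
    return [b - a for a, b in zip(xs, xs[1:])]

height = row_height([0.05, 0.28, 0.31])     # canopy tops out at 0.31 m
spacing = row_spacing([-0.76, 0.0, 0.76])   # three rows, two equal gaps
```

Values like these would then travel as the per-point metadata of the sparse point cloud rather than as raw geometry.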
In accordance with aspects of the present subject matter, the controller 106 may be configured to initiate display of one or more images associated with the sparse point cloud. In general, as described above, the transceiver-based sensor(s) 102 may be configured to capture point cloud data. Such point cloud data may generally be too large to be readily transmitted via CANBUS and other communicative links/protocols used by the sprayer 10. As such, the controller 106 may be configured to transmit the sparse point cloud to the display device(s) 116 (e.g., via the communicative link 112). Upon receipt of the transmitted data, the display device(s) 116 may be configured to display the image(s) associated with the crop row(s) and/or the soil ridge(s) identified by the sparse point cloud.
The displayed image(s) may be a simplified image(s) or image-like representation(s) associated with the identified crop row(s) and/or soil ridge(s). In several embodiments, the displayed image(s) may not be generated from the received point cloud data. That is, in such embodiments, the displayed image(s) are not rendered or modeled based on the received point cloud data. Instead, the controller 106 may be configured to initiate display of an image(s) (e.g., an image(s) from a library stored within its memory device(s) 110) depicting the identified crop row(s)/soil ridge(s) in a manner that allows the operator to easily identify the row(s)/ridge(s) displayed on the display device(s) 116. For example, when a crop row is identified within the received point cloud data, the controller 106 may be configured to initiate display of a simplified image (e.g., a pictogram) of a crop row on the display device(s) 116.
Displaying a simplified image based on the sparse point cloud (as opposed to a rendered three-dimensional surface(s) of the captured point cloud data or images/video captured by a separate camera) reduces the amount of bandwidth necessary to transmit data from the controller 106 for display on the display device(s) 116. More specifically, CANBUS and other communications protocols typically used by agricultural vehicles have limited bandwidth. These protocols may not allow a rendered three-dimensional surface(s) of the captured point cloud data (or images/video captured by a camera) to be transmitted from the controller 106 to the display device(s) 116 in real-time. Displaying a simplified image of the identified crop row(s) and/or soil ridge(s) based on the sparse point cloud may therefore allow the operator to better visualize the crop row(s) and/or soil ridge(s) present within the field (e.g., better than with bar graphs), while still allowing CANBUS and/or other low-bandwidth communication protocols to be used. As such, the displayed image(s) may include sufficient detail to allow the operator to easily recognize the displayed image(s) as either a crop row or a soil ridge, but not so much detail as to prevent transmission via CANBUS and/or other low-bandwidth communication protocols.
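To put rough numbers on the bandwidth argument, consider that a classical CAN frame carries at most 8 data bytes. The figures below (bit rate, per-frame wire overhead, point counts, and bytes per point) are illustrative assumptions, not values from the disclosure:

```python
CAN_PAYLOAD_BYTES = 8        # classical CAN frame data field
CAN_BITRATE_BPS = 250_000    # a common CANBUS bit rate (assumption)
FRAME_BITS = 130             # approx. bits on the wire per frame, incl. overhead

def frames_needed(payload_bytes: int) -> int:
    """Ceiling division: frames required to carry a payload."""
    return -(-payload_bytes // CAN_PAYLOAD_BYTES)

raw_bytes = 100_000 * 12     # 100k raw points x 12 bytes (three floats)
sparse_bytes = 300 * 8       # 300 sparse points x 8 bytes (2-D + metadata)

raw_time_s = frames_needed(raw_bytes) * FRAME_BITS / CAN_BITRATE_BPS
sparse_time_s = frames_needed(sparse_bytes) * FRAME_BITS / CAN_BITRATE_BPS
```

Under these assumptions a single raw cloud would monopolize the bus for over a minute, while the sparse cloud transmits in a fraction of a second — the gap that motivates the simplification.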
Furthermore, in several embodiments, the displayed image(s) may include the metadata/additional information associated with the identified crop row(s) and/or soil ridge(s) from the sparse point cloud. Specifically, in some embodiments, the displayed image(s) may depict the location(s) of the identified crop row(s) and/or soil ridge(s) relative to the vehicle 10. For example, in such embodiments, the displayed image(s) of the crop row(s) and/or soil ridge(s) may be in a first-person view in which the crop row(s) and/or soil ridge(s) are depicted as viewed by the transceiver-based sensor(s) 102. Alternatively, the displayed image(s) of the crop row(s) and/or soil ridge(s) may be in a third-person view in which a graphic of the vehicle 10 is overlaid on the displayed image(s) of the crop row(s) and/or soil ridge(s), thereby providing an indication of the location(s) of the identified crop row(s) and/or soil ridge(s) relative to the vehicle 10. Additionally, in some embodiments, the displayed image(s) may depict the determined characteristics of the identified crop row(s) and/or soil ridge(s). For example, the displayed image(s) may include a flag(s), a text box(es), a scale(s), and/or the like that indicates the determined characteristics of the identified crop row(s) and/or soil ridge(s).
Referring now to
Referring again to
Additionally, the controller 106 may be configured to determine a confidence value associated with the identification of each data point in the sparse point cloud as a crop row and/or a soil ridge. In general, the determined confidence value(s) may provide an indication (e.g., via a numerical value(s)) of the certainty or confidence that the identification of the crop row(s) and/or soil ridge(s) by the controller 106 is correct. For example, a high confidence value may indicate a high level of certainty in the identification of the crop row(s) and/or soil ridge(s), while a low confidence value may indicate a low level of certainty in such identification. As such, the controller 106 may be configured to use any suitable statistical analysis techniques to determine the confidence value of each identified crop row or soil ridge. Such confidence value(s) may be part of the metadata of the sparse point cloud.
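One simple stand-in for the statistical analysis mentioned above is an inlier ratio: the fraction of a cluster's points whose height is consistent with the expected crop-row or soil-ridge profile. The profile bounds below are invented for illustration and are not values from the disclosure:

```python
def identification_confidence(cluster_zs, min_z=0.05, max_z=0.5):
    """Confidence that a cluster is a crop row/soil ridge: the fraction of
    its points whose height falls within an assumed plausible profile."""
    inliers = sum(1 for z in cluster_zs if min_z <= z <= max_z)
    return inliers / len(cluster_zs)

# Three of the four points fit the profile, so confidence is 0.75
c = identification_confidence([0.1, 0.2, 0.3, 0.9])
```

A value like this would ride along as metadata on the corresponding sparse point cloud entries.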
Moreover, in several embodiments, the controller 106 may be configured to adjust one or more parameters of the displayed image(s), such as the texture, hue, saturation, and/or the like, based on the determined confidence value(s). Such image parameter adjustments may indicate the level of certainty in the identification of the crop row(s) and/or soil ridge(s) depicted in the image(s) by the controller 106. For example, in one embodiment, the controller 106 may be configured to adjust the texture of the displayed image such that any crop rows and/or soil ridges depicted therein that were identified with a low level of certainty are blurry or fuzzy. Conversely, in such an embodiment, any crop rows and/or soil ridges depicted in the displayed image that were identified with a high level of certainty may have a clear texture. However, in alternative embodiments, the controller 106 may be configured to adjust the parameters of the displayed image(s) based on the determined confidence value(s) in any other suitable manner.
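The confidence-driven adjustment of image parameters described above can be pictured as a simple mapping from confidence to blur and saturation. The linear mapping and the five-pixel maximum blur radius are illustrative assumptions, not design values from the disclosure:

```python
def display_params(confidence: float):
    """Map an identification confidence (0..1) to display parameters.

    Low confidence -> stronger blur and washed-out color, so uncertain
    identifications appear fuzzy; high confidence -> sharp and vivid.
    """
    confidence = max(0.0, min(1.0, confidence))
    return {
        "blur_px": round(5 * (1.0 - confidence)),  # fuzzy when uncertain
        "saturation": confidence,                  # vivid when certain
    }

sure = display_params(0.95)    # nearly sharp, fully saturated
unsure = display_params(0.2)   # noticeably blurred, desaturated
```

The display device would then apply these parameters to the library image of the identified crop row or soil ridge before presenting it to the operator.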
Referring now to
As shown in
Additionally, at (204), the method 200 may include receiving, with the one or more computing devices, captured point cloud data associated with a portion of the field as the agricultural vehicle travels across the field. For instance, as described above, the controller 106 may be configured to receive captured point cloud data associated with a portion of the field from one or more transceiver-based sensor(s) as the agricultural vehicle 10 travels across the field.
Moreover, as shown in
Furthermore, at (208), the method 200 may include initiating, with the one or more computing devices, display of an image associated with the sparse point cloud on the display device. For instance, as described above, the controller 106 may be configured to initiate display of an image associated with the sparse point cloud on the display device(s) 116 of the user interface 114.
It is to be understood that the steps of the method 200 are performed by the controller 106 upon loading and executing software code or instructions which are tangibly stored on a tangible computer readable medium, such as on a magnetic medium, e.g., a computer hard drive, an optical medium, e.g., an optical disc, solid-state memory, e.g., flash memory, or other storage media known in the art. Thus, any of the functionality performed by the controller 106 described herein, such as the method 200, is implemented in software code or instructions which are tangibly stored on a tangible computer readable medium. The controller 106 loads the software code or instructions via a direct interface with the computer readable medium or via a wired and/or wireless network. Upon loading and executing such software code or instructions by the controller 106, the controller 106 may perform any of the functionality of the controller 106 described herein, including any steps of the method 200 described herein.
The term “software code” or “code” used herein refers to any instructions or set of instructions that influence the operation of a computer or controller. They may exist in a computer-executable form, such as machine code, which is the set of instructions and data directly executed by a computer's central processing unit or by a controller, a human-understandable form, such as source code, which may be compiled in order to be executed by a computer's central processing unit or by a controller, or an intermediate form, such as object code, which is produced by a compiler. As used herein, the term “software code” or “code” also includes any human-understandable computer instructions or set of instructions, e.g., a script, that may be executed on the fly with the aid of an interpreter executed by a computer's central processing unit or by a controller.
This written description uses examples to disclose the technology, including the best mode, and also to enable any person skilled in the art to practice the technology, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the technology is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they include structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.
This application is based upon and claims the right of priority to U.S. Provisional Patent Application No. 63/037,690, filed on Jun. 11, 2020, the disclosure of which is hereby incorporated by reference herein in its entirety for all purposes.