The present application relates to systems and methods for performing maintenance functions on grassy terrain, such as applying a herbicide.
Grassy terrains, such as those in residential neighborhoods, require regular maintenance. Lawns are mowed, fertilized, weeded, aerated, and raked to keep them healthy. Manual weeding is time consuming and often goes undone. Herbicides are available that selectively impact broadleaf weeds over surrounding grass. However, much of the herbicide is wasted when spread indiscriminately across areas having both grass and weeds. Wasted herbicide is expensive and negatively impacts the environment.
In some embodiments, an autonomous weed treating device can treat weeds in a manner that reduces the environmental impact of herbicides.
In some embodiments, an autonomous weed treating device can treat weeds without requiring manual labor.
In some embodiments, an autonomous weed treating device may navigate about a grassy terrain (e.g., a lawn), using artificial intelligence to distinguish undesirable weeds from desirable grass, and apply a herbicide to the weeds.
In some embodiments, the device may be considered a weed eliminating robot available to retail consumers.
In some embodiments, an autonomous weed treating device may avoid the need for manually tuned variables and developer defined features associated with certain computer vision techniques.
In some embodiments, an autonomous weed treating device or robot may use deep learning to identify weeds as well as boundaries.
In some embodiments, an autonomous weed treating device or robot may use deep learning to identify weeds within a field of grass.
In some embodiments, an autonomous weed treating device may use a deep learning algorithm allowing a model to recognize a wide variety of weed types under a variety of illuminations, orientations and surrounding grass types.
Some embodiments may use deep learning to recognize grass health, different kinds of grass, and/or mushrooms.
Referring now to
Device 10 may be configured to operate under battery power autonomously to navigate a grassy terrain within a boundary, identify the presence of a weed, and treat the weed with a herbicide. Device 10 may be configured to navigate systematically in rows or at different non-180 degree angles after a boundary is reached. In other embodiments, device 10 may be configured to perform other yard maintenance operations autonomously, such as cutting grass, fertilizing grass, watering grass, collecting and/or mulching leaves, etc.
Device 10 comprises a reservoir having a fill cap 16, a bumper 18 on a front portion 20 of device 10, and a handle 15 disposed on a rear portion of device 10. Handle 15 may instead be disposed on a front portion or other portions of device 10, and in some embodiments at least two handles may be coupled to housing 14. One or more hand-holds may be formed as recesses in housing 14 or chassis 12 in place of, or in addition to, handle 15. While device 10 may move in a variety of directions, device 10 may move in the direction of front portion 20 during scanning for weeds and treatment of weeds.
In alternative embodiments, rear rotating member or members 30 may be driven and front rotating member or members 24 may be freely rotating, or more than two wheels may be driven.
Processing circuit 44 may also be configured to control a drive mechanism to drive rotating members 24 and/or 30 to move chassis 12 along the grassy terrain.
Front rotating members 24a, 24b may be coupled to chassis 12 by motors and/or drive mechanisms. Rear rotating member 30 may be coupled to chassis 12 by a pivot mechanism 50. According to another aspect, chassis 12 and rotating members 24a, 24b and 30 are configured to provide front portion 20 of the chassis at a first distance 52 from grassy terrain 34 which is higher than a second distance 54 from grassy terrain 34 of rear portion 22 of chassis 12, for example when device 10 is disposed on terrain which is substantially level or horizontal. First distance 52 may be at least about two inches, at least about three inches, and/or less than about six inches, less than about ten inches, or about 4.8 inches in different embodiments. Second distance 54 may be at least about one inch, less than about five inches, less than about eight inches, or other lengths in different embodiments.
According to another aspect, chassis 12 may have a bottom surface having a plane which is non-parallel to grassy terrain 34, at least over a portion 48 of the bottom surface.
According to another aspect, the one or more front rotating members 24a, 24b may be larger than rear rotating member 30, such as having a diameter at least 25 percent larger, 50 percent larger, etc. than rear rotating member 30.
Referring now to
A location circuit may be provided for performing navigation functions, such as receiving signals from global positioning system satellites, cellular network towers, Wi-Fi routers, or other devices. The location circuit may be configured to determine a location of the autonomous device and/or provide the location to processing circuit 44. Processing circuit 44 and/or the location circuit may be configured to navigate the autonomous device across a grassy terrain based on a program stored in a memory circuit.
The location circuit may comprise one or more of a Global Positioning System (GPS) receiver and processor, odometry devices (e.g., wheel motor encoders), an inertial measurement unit (IMU), a magnetometer, or other sensors. The inertial measurement unit may be configured to measure and report the device's force, angular rate, and/or orientation and may use one or more of accelerometers, gyroscopes, and magnetometers. The location circuit may further use a filter such as an Extended Kalman Filter (EKF) to predict location, orientation, and/or movement information on a global coordinate frame and/or locally.
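By way of a non-limiting illustration, the sensor fusion described above might be sketched in Python as follows; the class name, noise values, and simplified Kalman update are assumptions for illustration, whereas a full EKF would carry a nonlinear motion model and its Jacobian through the predict step.

```python
import numpy as np

# Minimal fused localizer: wheel-encoder dead reckoning corrected by GPS.
# State is [x, y, heading]; values in Q and R are illustrative assumptions.
class SimpleLocalizer:
    def __init__(self):
        self.x = np.zeros(3)                    # position (m), heading (rad)
        self.P = np.eye(3)                      # state covariance
        self.Q = np.diag([0.01, 0.01, 0.005])   # process noise (assumed)
        self.R = np.diag([1.0, 1.0])            # GPS noise, ~1 m std (assumed)

    def predict(self, dist, dtheta):
        """Dead-reckoning step from odometry (distance and heading change)."""
        th = self.x[2]
        F = np.array([[1.0, 0.0, -dist * np.sin(th)],
                      [0.0, 1.0,  dist * np.cos(th)],
                      [0.0, 0.0,  1.0]])        # motion-model Jacobian
        self.x += np.array([dist * np.cos(th), dist * np.sin(th), dtheta])
        self.P = F @ self.P @ F.T + self.Q

    def update_gps(self, gps_xy):
        """Correct accumulated drift with a GPS fix in local coordinates."""
        H = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])         # GPS observes x and y only
        innovation = np.asarray(gps_xy, dtype=float) - H @ self.x
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ innovation
        self.P = (np.eye(3) - K @ H) @ self.P
```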
The memory circuit may be in communication with processing circuit 44 and/or may be a part of processing circuit 44. The memory circuit may comprise a tangible computer readable medium comprising any type of computer readable storage. The term tangible computer readable medium excludes propagating signals. The memory circuit may store algorithms to implement processes described herein using coded instructions (e.g., computer readable instructions) stored on a non-transitory computer readable medium such as a hard disk drive, a flash memory, a read-only memory, a cache, a random-access memory and/or any other storage media in which information is stored for any duration (e.g., for extended time periods, permanently, brief instances, for temporarily buffering, and/or for caching of the information). The memory circuit may include any desired type of volatile and/or non-volatile memory such as, for example, static random access memory (SRAM), dynamic random access memory (DRAM), flash memory, read-only memory (ROM), etc.
In an exemplary embodiment, processing circuit 44 comprises a printed circuit board and assembly (PCBA) 60 and a microprocessor 62 or multiple microprocessors which may be configured to execute machine learning algorithms as well as the other functions described herein. A location circuit comprising a GPS receiver may also be disposed on the PCBA 60. A GPS antenna 66 may be coupled (e.g., adhered) to a pocket on a battery compartment to provide clearance from other electronics and height for good reception of GPS signals. In one embodiment, the battery compartment may comprise a flat metal surface (or a flat plastic surface supporting a sheet of metal) to act as a ground plane. Battery compartment 64 comprises one or more batteries configured to power processing circuit 44, drive motors 68, camera 36, pump 42, and/or other components of device 10. Camera 36 may provide a top-down view of the grassy terrain beneath device 10 and weeds therein. The battery may be rechargeable using charging port 70, which may be coupled to an external power source, to a solar charging unit, or to other power sources.
First and second rotating members 24a, 24b are separated by a wheelbase 72 of at least about 10 inches, at least about 15 inches, less than about 24 inches, or other lengths. A second wheelbase 74 may be defined between forward rotating members 24a, 24b and rearward rotating member 30. Wheelbases 72 and/or 74 may be wide/long enough to provide stability on hills and bumps.
Bumper 18 may be coupled to one or more bumper switches 86 (e.g., touch sensors) for detecting contact of bumper 18 with an object, such as a fence, tree, brick, or building. Bumper 18 may be coupled to device 10 with couplings or fasteners, such as snap fits to constrain bumper 18 horizontally into chassis 12. Guides may be provided to constrain bumper 18, allowing bumper 18 to slide forward and back, with springs around these guides. The guides may be configured to activate bumper switches 86, which may comprise limit switches mounted to and/or in communication with PCBA 60, when either side of bumper 18 is compressed. If most of the contact force is on the left side, for example, only the left limit switch will be compressed. If contact force is generally in the center of bumper 18, both switches will be compressed. Processing circuit 44 may be configured to receive signals from the switches and cause device 10 to react accordingly, differently depending on which of the left switch, the right switch, or both are compressed.
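As a non-limiting sketch of the reaction logic described above, the following Python fragment maps the switch pattern to a maneuver; the maneuver names are illustrative assumptions rather than a required control scheme.

```python
def react_to_bumper(left_pressed: bool, right_pressed: bool) -> str:
    """Choose an avoidance maneuver from the bumper-switch pattern."""
    if left_pressed and right_pressed:
        return "reverse_then_turn"        # contact near the center of bumper
    if left_pressed:
        return "reverse_then_turn_right"  # obstacle on the left side
    if right_pressed:
        return "reverse_then_turn_left"   # obstacle on the right side
    return "continue"                     # no contact detected
```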
Referring now to
Also shown in
In some embodiments, the camera may be disposed at least about three inches and/or less than about ten inches from the grassy terrain. In some embodiments, the dispenser (e.g., sprayer) may be disposed at least about three inches and less than about ten inches from the grassy terrain.
In a further example, the drive shaft 104 of motor 102 may be eccentric to a housing of motor 102 to provide additional vertical displacement for front distance 52.
In some embodiments, the device may be configured to navigate itself within an area or region within a boundary, geofence, or virtual border. The boundary can be programmed into memory of device 10 in any of a number of ways. In one example, a smartphone or other handheld computing device may operate an application downloaded from an application store as a companion app for device 10. The handheld computing device may communicate wirelessly with a network interface circuit of device 10. The application may display a map of a user's location on the screen (e.g., a satellite image of the area) and receive from the user a tracing or drawing of a boundary or perimeter to follow using drawing tools. The application may use image processing on the satellite image to propose a boundary and then allow the user to modify the boundary to simplify the task for the user. GPS points or coordinates for the boundary can be transmitted from the handheld computing device to device 10. Device 10 may be configured to apply preprocessing to the data points to generate a workable geofence. With a geofence established and stored in memory, the processing circuit of device 10 may be configured to control the device to navigate within the boundary defined by the boundary coordinates using real time location data from the location circuit (e.g., from a sensor fusion of the different location devices). If device 10 crosses the geofence, the processing circuit may be configured to find the closest location on the geofence and use point-to-point navigation to return the device to that point. Device 10 may then resume navigating inside the geofence.
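A minimal Python sketch of the geofence checks described above is shown below; the ray-casting containment test and closest-point search over boundary segments are one possible approach, and the function names are illustrative.

```python
import math

def inside_geofence(p, fence):
    """Ray-casting point-in-polygon test; fence is a list of (x, y) vertices."""
    x, y = p
    inside = False
    for i in range(len(fence)):
        (x1, y1), (x2, y2) = fence[i], fence[(i + 1) % len(fence)]
        if (y1 > y) != (y2 > y):                      # edge crosses the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def closest_fence_point(p, fence):
    """Nearest point on any fence segment, for point-to-point return."""
    px, py = p
    best, best_d = None, float("inf")
    for i in range(len(fence)):
        (ax, ay), (bx, by) = fence[i], fence[(i + 1) % len(fence)]
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            t = 0.0                                   # degenerate segment
        else:
            t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy)
                                  / (dx * dx + dy * dy)))
        cx, cy = ax + t * dx, ay + t * dy             # projection onto segment
        d = math.hypot(px - cx, py - cy)
        if d < best_d:
            best, best_d = (cx, cy), d
    return best
```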
In some embodiments, the processing circuit may be configured to use the camera to detect a lack of movement while the wheel motors are moving. This condition may indicate device 10 is stuck. Alternatively, other sensors such as GPS, magnetometer, IMU, etc. may be used together or independently to determine a stuck condition. An alert can be generated by the processing circuit for display on the user interface and/or for transmission to an application on a smartphone to alert the user to the stuck condition.
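A minimal sketch of camera-based stuck detection, assuming grayscale frames and an illustrative motion threshold:

```python
import numpy as np

def appears_stuck(prev_frame, curr_frame, wheels_commanded,
                  motion_threshold=2.0):
    """Flag a stuck condition: wheels driving but the camera view is static.

    Frames are grayscale numpy arrays; motion_threshold is an assumed
    tuning value in mean absolute pixel-intensity change.
    """
    if not wheels_commanded:
        return False
    change = np.mean(np.abs(curr_frame.astype(float) - prev_frame.astype(float)))
    return change < motion_threshold
```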
In some embodiments, the smartphone app may further be configured to push firmware updates over the air to device 10, receive logs on use and errors, etc.
In some embodiments, an autonomous weed treating device may be configured to acquire images of a grassy terrain, process the images to identify a weed or distinguish among a weed, grass, and an operating boundary, and control the dispenser to dispense a substance on the weed. The processing circuit of the device may be configured to use machine learning, for example deep learning, to process the images. Deep learning may comprise using a neural network algorithm that is many layers deep. First layers may learn simple gradients and lines, and as the layers get deeper, they recognize more complex features of an image. A final layer is then able to distinguish whether there is a weed (or another feature) in the image. Deep learning may comprise using at least five layers, at least fifteen layers, at least thirty layers, etc. In one embodiment, the neural network may be 53 layers deep.
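For illustration only, a small convolutional network in the spirit of the layered feature extraction described above might be sketched as follows; PyTorch is assumed, and the layer sizes, depth, and three-class output are illustrative rather than the 53-layer network of the embodiment.

```python
import torch.nn as nn

# Early layers respond to simple gradients and lines; deeper layers compose
# them into more complex features; the final layer scores each class.
class TinyWeedNet(nn.Module):
    def __init__(self, num_classes: int = 3):   # grass, weed, boundary
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # edges
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # textures
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # shapes
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))
```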
In some embodiments, a computing system may be configured to use deep learning via a convolutional neural network. In one example, the computing system may be configured to use one or more steps of the MobileNetV2 architecture described in M. Sandler et al., "MobileNetV2: Inverted Residuals and Linear Bottlenecks," arXiv:1801.04381 [cs.CV], 21 Mar. 2019; see also Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 4510-4520. The computing system (e.g., laptop, cloud server, desktop computer, etc.) may be configured to train a model to be able to classify images as belonging to one or more classes, such as: grass, weeds, boundary, etc. A model may be a program that has been trained on a set of data to recognize patterns. In some embodiments, representative images are classified into one of three categories: grass, weeds, or boundary. In various embodiments, three categories may be used, more than three categories (mushrooms, healthy grass, brown grass, unhealthy grass, specific types of weeds, etc.), less than three categories, etc. The computing system may be configured to label a set of images and then to use supervised learning to automatically tune a set of weights. The processing circuit of the autonomous device may be configured to use the weights paired with a deep learning model architecture to make predictions about what classes an image belongs to as the images are obtained from the camera.
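A hedged sketch of how such a model might be adapted for the three classes using the torchvision implementation of MobileNetV2 (the torchvision 0.13+ weights API is assumed):

```python
import torch.nn as nn
from torchvision import models

# Start from ImageNet-pretrained MobileNetV2 and replace the classifier
# head with a three-class output (grass, weeds, boundary).
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
for p in model.features.parameters():
    p.requires_grad = False          # freeze the pretrained feature extractor
model.classifier[1] = nn.Linear(model.last_channel, 3)
```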
In some embodiments, the processing circuit may be configured to use one or more artificial neural networks with representation learning. Learning may be supervised, semi supervised or unsupervised. In some embodiments, one or more open-source algorithms may be used. In some embodiments, a plurality of machine learning algorithms may be used, for example by concatenation, interweaving, etc.
In some embodiments, images may be processed by generating a plurality of layers to progressively extract higher-level features in the images.
In some embodiments, the processing circuit is configured to compare grass to weeds (e.g., dandelions, crabgrass, Creeping Charlie, clover, etc.) in a grassy terrain. An image can be classified as containing grass when the processing circuit identifies the image as corresponding to any type of grass, whether cool-season grasses (e.g., Kentucky Bluegrass, fescue, rye) or warm-season grasses (e.g., Centipede grass, Bermuda grass, Zoysia grass, St. Augustine Grass, etc.).
One class that may be identified may be a boundary between a grassy terrain and a neighboring region, such as dirt, concrete, a garden, etc.
After the processing circuit determines the presence of one or more classes, such as a weed or a boundary, the processing circuit may be configured to control one or more actuators in response. For example, in response to identifying a boundary, the processing circuit may be configured to drive the rotating members in a reverse direction for a predetermined period of time or until the camera images show a return to a grassy terrain. The processing circuit may then be configured to drive the rotating members to turn a direction of travel of the chassis to an angle relative to the direction of travel the device had when traveling in the reverse direction. In some embodiments, upon detecting a boundary, the processing circuit may be configured to change the direction of travel by an angle of less than 180 degrees (or in some cases 180 degrees). In some embodiments, the angle may be pseudorandomly selected by the processing circuit. In other embodiments, the angle may be deliberately selected.
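As a non-limiting sketch of the reverse-then-turn behavior, where robot.drive and robot.turn are hypothetical motor-control calls and the timing and angle range are assumptions:

```python
import random

def on_boundary_detected(robot, reverse_seconds: float = 2.0):
    """Back off a detected boundary, then turn to a new heading."""
    robot.drive(speed=-0.2, duration=reverse_seconds)  # reverse off boundary
    angle = random.uniform(60, 150)                    # pseudorandom, < 180
    if random.random() < 0.5:
        angle = -angle                                 # turn left or right
    robot.turn(degrees=angle)
    robot.drive(speed=0.2)                             # resume forward travel
```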
Referring now to
In some embodiments, the computing system is configured to build a model based on sample data (e.g., training data, a training corpus, etc.) comprising the images which are acquired and sorted. When programmed into an autonomous weed treating device, the model may be used by processing circuit 44 to make predictions or decisions about the content of a newly acquired image. The computing system may be configured to use one or more of a convolutional neural network, a Bayesian network, a nearest neighbor algorithm, reinforcement learning, or other algorithms to teach the model.
The training can be done iteratively. The computing system may be configured at each iteration to calculate the error and the iterations may continue until the error is no longer decreasing, indicating the model has completed the training with the given inputs. In some embodiments, additional images can be added and the method can continue iterations with the additional images for improved training of the model. In some embodiments, other results may be used as an indication to stop the training, such as an error of sufficiently small size, an error that is decreasing below a predetermined rate, or other results. At a block 512, the computing system may be configured to quantize and compile the model to run on embedded hardware in the autonomous device described herein.
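One possible form of the iterative training with a stop-when-the-error-plateaus criterion, sketched in PyTorch-style Python; the patience value and improvement threshold are illustrative assumptions:

```python
def train_until_plateau(model, loss_fn, optimizer, loader,
                        max_epochs=100, patience=3):
    """Iterate until the training error stops decreasing."""
    best, stale = float("inf"), 0
    for epoch in range(max_epochs):
        total = 0.0
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()                  # backpropagate the error
            optimizer.step()                 # tune the weights
            total += loss.item()
        if total < best - 1e-4:              # error still decreasing
            best, stale = total, 0
        else:
            stale += 1
            if stale >= patience:            # no longer decreasing: stop
                break
    return model
```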
In some embodiments, an autonomous weed treating device may be configured to use a machine learning model generated using one or more of the steps described above with reference to
In some embodiments, a computing system may comprise a machine-learned deep learning neural network configured to receive images from a camera and to process the images to classify the images as belonging to a weed class, a grass class, and/or a boundary class.
The steps in
In some embodiments, unsupervised learning may be used by autonomous device 10, which would allow device 10 to continue learning while in operation. In another embodiment, device 10 may be configured to detect a type of grass being seen in the images and select one of a plurality of models or weights from a memory corresponding to the type of grass detected. In another alternative, images may be collected by a robot while in use treating weeds, and a user may be allowed to update the model to make a customized model using images from the user's yard.
Referring now to
In some embodiments, upon detecting a boundary, the processing circuit may be configured to store coordinates for the boundary point or segment in a memory. The boundary coordinates may be used to create, edit, or update a boundary being used by the device as a limit to the region of travel of the device.
In some embodiments, upon detecting a weed, the processing circuit may be configured to store coordinates for the location of the weed in a memory. The coordinates may be used to generate a map image showing locations of weeds that were treated for presentation to a user on a display screen, for example, on a user's smartphone or other handheld computing device.
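A minimal sketch of generating such a map from stored coordinates, using matplotlib; the function and argument names are illustrative:

```python
import matplotlib.pyplot as plt

def plot_treated_weeds(weed_xy, fence_xy):
    """Render a simple map of the geofence and treated weed locations."""
    fx, fy = zip(*(fence_xy + fence_xy[:1]))   # close the boundary loop
    plt.plot(fx, fy, "g-", label="boundary")
    wx, wy = zip(*weed_xy)
    plt.scatter(wx, wy, c="red", marker="x", label="treated weeds")
    plt.legend()
    plt.axis("equal")
    plt.show()
```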
In some embodiments, a system or method may comprise training a machine learning prediction model using deep learning based on images in predefined classes comprising a grass image, a weed image and a boundary image. The system or method may further comprise applying the machine learning prediction model to predict a classification of a new image as comprising grass, a weed and/or a boundary.
In some embodiments, a computer-implemented method of training a neural network for weed detection may comprise collecting a set of digital images from a database, creating a first training set of images by sorting the images into a category comprising a weed, creating a second training set of images by sorting the images into a category comprising a grassy area, and training the neural network using deep learning and using the first and second sorted images to create a trained model. In some embodiments, an autonomous weed treating device is configured to use the trained model to classify images obtained by a camera of the autonomous weed treating device as it traverses a grassy terrain.
In some embodiments, a system or method may comprise processing acquired digital images of a grassy terrain with a deep learning neural network configured to detect a presence of a weed in the acquired digital images and to trigger a dispenser in response to the detection of the weed. In some embodiments, the deep learning neural network is trained using a plurality of training image files which have been classified.
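A compact, non-limiting sketch of this detect-and-dispense cycle, in which camera, model, and dispenser are hypothetical device interfaces:

```python
def treatment_step(camera, model, dispenser, weed_class=1):
    """One perception-action cycle: classify the current frame and spray
    when the weed class is predicted (hypothetical interfaces)."""
    frame = camera.capture()
    predicted = model.predict(frame)   # returns a class index
    if predicted == weed_class:
        dispenser.spray()
```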
In one embodiment, an autonomous yard maintenance device for maintaining a grassy terrain beneath the device comprises a body comprising a chassis, at least one front rotating member coupled to the chassis and at least one rear rotating member coupled to the chassis, wherein at least one of the front and rear rotating members is driven by a motor having a drive shaft disposed eccentric to a center of the wheel. The device further comprises an actuator configured to perform a yard maintenance operation on the grassy terrain and a processing circuit. The processing circuit may be configured to drive the rotating members to move the chassis along the grassy terrain and control the actuator to perform the yard maintenance operation on the grassy terrain.
In some embodiments, the yard maintenance operation is selected from the group comprising dispensing a herbicide, cutting grass, fertilizing grass and watering grass.
In some embodiments the chassis may have a bottom surface having a front portion opposite a rear portion. In some embodiments, the chassis and rotating members may be configured to provide the front portion of the chassis at a first distance to the grassy terrain higher than a second distance to the grassy terrain of the rear portion of the chassis.
In some embodiments, the device may comprise a camera coupled to the body and configured to acquire images of the grassy terrain, the processing circuit configured to process the images to identify a weed, wherein the actuator comprises a herbicide sprayer.
In some embodiments, the camera may be disposed between about two inches and about ten inches from the grassy terrain.
In some embodiments, the sprayer may be disposed between about two inches and about ten inches from the grassy terrain.
In some embodiments, the drive shaft of the motor may be eccentric to the motor housing.
In some embodiments, a pinion gear may be coupled to the drive shaft and configured to drive an internal ring gear to drive the rotating member.
In some embodiments, the chassis may have a bottom surface, wherein the motor is disposed at least partially above the bottom surface of the chassis.
In some embodiments, the motor may be disposed near a top of the wheel.
In some embodiments, either of the front and rear rotating members may comprise a pivoting caster.
In some embodiments, the front rotating member may have a diameter at least fifty percent larger than the rear rotating member.
In some embodiments, the camera may be disposed on the front portion of the chassis and the dispenser may be disposed rearward of the camera.
In some embodiments, the device may be configured to be lifted and moved to a new location by a human person.
In some embodiments, a handle may be disposed on the body configured to be used by the human person to lift and move the device.
In some embodiments, a bumper may be disposed at a front portion of the chassis and at least one bumper switch may be configured to detect contact of the bumper with an object, the processing circuit configured to receive a signal from the bumper switch indicating contact with the object.
In some embodiments, a processing circuit of the device may be coupled to or comprise an inductive sensor configured to detect signals transmitted by a wire (e.g., low voltage wire) set up for robotic lawnmowers, wireless dog fences, etc. The processing circuit may be configured to use the detected signal in its navigation calculations, for example, to identify a border of an area to be treated.
Certain embodiments described herein can omit one or more of the method steps and/or perform the steps in a different order than the order listed. For example, some steps need not be performed in certain embodiments. As a further example, certain steps can be performed in a different temporal order, including simultaneously, than listed above.
While the embodiments have been described with reference to certain details, it will be understood by those skilled in the art that various changes can be made and equivalents can be substituted without departing from the scope described herein. In addition, many modifications can be made to adapt a particular situation or material to the teachings without departing from its scope. Therefore, it is intended that the teachings herein not be limited to the particular embodiments disclosed, but rather include additional embodiments falling within the scope of the appended claims.
The present application claims the benefit of U.S. Provisional Application No. 63/393,118 filed Jul. 28, 2022, which is incorporated herein by reference in its entirety.